Monday, January 31, 2011

Oracle Internet Directory 11g Management


  • Stop all the processes using the command /oracle/Oracle/Middleware/asinst_1/bin/opmnctl stopall
  • Checking Input for Schema and Data Consistency Violations and Generating the Input Files for SQL*Loader
On UNIX, the bulkload tool usually resides in $ORACLE_HOME/ldap/bin. On Microsoft Windows, this tool usually resides in ORACLE_HOME\ldap\bin.

./Oracle_IDM1/ldap/bin/bulkload
  • Check the input file and generate files for the SQL*Loader by typing:
bulkload connect="connect_string" \
   check="TRUE" generate="TRUE" file="full_path_to_ldif-file_name"

  • All check-related errors are reported as command line output. All schema violations are reported in ORACLE_INSTANCE/diagnostics/logs/OID/tools/bulkload.log. All bad entries are logged in ORACLE_INSTANCE/OID/load/badentry.ldif.
If there are duplicate entries, their DNs are logged in ORACLE_INSTANCE/diagnostics/logs/OID/tools/duplicateDN.log. This is for information purposes only; the bulkload tool does not generate duplicate data for duplicate entries, it simply ignores them.
  • To load the data, run:
bulkload connect="connect_string" load="TRUE"
The connect string is "OIDDB".
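
For example, putting the check, generate, and load phases together against this instance might look like the following (a sketch; /tmp/users.ldif is just an illustrative file name, and the commands assume the instance layout shown above):

./Oracle_IDM1/ldap/bin/bulkload connect="OIDDB" check="TRUE" generate="TRUE" file="/tmp/users.ldif"
./Oracle_IDM1/ldap/bin/bulkload connect="OIDDB" load="TRUE"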

Deleting Entries or Attributes of Entries by Using bulkdelete
 bulkdelete is useful for deleting the attributes of a large number of entries in an existing directory. 
 
bulkdelete can delete entries specified under a naming context. By default, it deletes entries completely. It removes all traces of an entry from the database. If you use the option cleandb FALSE, bulkdelete turns all entries into tombstone entries instead of deleting them completely.

bulkdelete connect=OIDDB cleandb=TRUE verbose=TRUE basedn="dc=sjc, dc=org"
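
If you only want the entries turned into tombstones rather than removed completely, the same command can be run with cleandb set to FALSE (a sketch based on the description above, using the same example base DN):

bulkdelete connect=OIDDB cleandb=FALSE verbose=TRUE basedn="dc=sjc, dc=org"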
 


HOW TO DELETE AND LOAD OID (Oracle Internet Directory) DATA


Before starting, make sure ORACLE_HOME, ORACLE_SID and PATH are all set.

         For example (MINDTELLIGENTAPP1):

ORACLE_HOME=/oracle/oraHome_infra_101200
ORACLE_SID=infra
PATH=/oracle/oraHome_infra_101200/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/oracle/bin
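
In a Bourne-style shell, these can be set and exported before running the tools, for example (a sketch; adjust the paths for your own installation):

export ORACLE_HOME=/oracle/oraHome_infra_101200
export ORACLE_SID=infra
export PATH=$ORACLE_HOME/bin:$PATH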

1.         Get 'orcldefaultSubscriber' from the production server, to be used later (optional)

ldapsearch -h mtilawsapp1 -p 399 -D cn=orcladmin -w infra1prod -b "cn=common, cn=products, cn=oracleContext" -s base "objectclass=*" orcldefaultSubscriber

cn=Common,cn=Products,cn=OracleContext
orcldefaultsubscriber=dc=mti,dc=org

2.         Run the following three commands to check whether the existing data is present (optional)

$ORACLE_HOME/bin/ldapsearch -h mindtelligentapp1.mti.org -p 3060 -b "cn=users, dc=mindtelligentapp1,dc=com" -s base "objectclass=*"

$ORACLE_HOME/bin/ldapsearch -h mindtelligentapp1 -p 3060 -b "cn=groups, dc=mindtelligentapp1,dc=com"  -s base "objectclass=*"

$ORACLE_HOME/bin/ldapsearch -h mindtelligentapp1 -p 3060 -b "cn=groups, cn=OracleContext,dc=mindtelligentapp1,dc=com" -s base "objectclass=*"

3.         Stop the OID daemon before running bulkdelete

$ORACLE_HOME/opmn/bin/opmnctl stopproc ias-component=OID

4.         Run bulkdelete to remove the existing data in OID

$ORACLE_HOME/ldap/bin/bulkdelete.sh -connect infra -base "cn=users, dc=mti,dc=org"

$ORACLE_HOME/ldap/bin/bulkdelete.sh -connect infra -base "cn=groups, dc=mti,dc=org"

$ORACLE_HOME/ldap/bin/bulkdelete.sh -connect infra -base "cn=groups, cn=OracleContext, dc=mti,dc=org"

5.         Repeat the following command to remove any duplicate user records:

$ORACLE_HOME/ldap/bin/bulkdelete.sh -connect infra -base "dc=org"


6.         To run the bulkload utility, set the directory server mode to read/modify:

  • Start the OID daemon to check whether the data has been removed:

$ORACLE_HOME/opmn/bin/opmnctl startproc ias-component=OID

From Oracle Directory Manager, navigate to the server entry (the main node under the Oracle Internet Directory Servers), and change the Server Mode attribute from Read/Write to Read/Modify from the drop-down list.

7.         If you prefer to use the LDAP command-line utilities, use the ldapmodify command:

$ORACLE_HOME/bin/ldapmodify -h mindtelligentapp1.mti.org -p 3060 -D cn=orcladmin -w welcome1 -v -f rm.ldif

where rm.ldif is a file you create, with the following contents:

dn:
changetype: modify
replace: orclservermode
orclservermode: rm
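
To confirm that the mode change took effect, the same entry that rm.ldif modifies can be queried for orclservermode (a sketch; host, port, and credentials follow the earlier examples, and it assumes orclservermode is readable on that entry):

$ORACLE_HOME/bin/ldapsearch -h mindtelligentapp1.mti.org -p 3060 -D cn=orcladmin -w welcome1 -b "" -s base "objectclass=*" orclservermode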

8.         Load users into the test Oracle Internet Directory by using the bulkload utility to load the LDIF file generated from the production system.  When invoking the bulkload utility, be sure to specify the absolute path of the LDIF file, even if it is in the current directory.

$ORACLE_HOME/ldap/bin/bulkload.sh -connect infra -check -generate -restore -load -append /tmp/oidexp012709.ldif


The response looks similar to the following output:

Verifying node "orcl"
-----------------------------
This tool can only be executed if you know database user password
for OiD on orcl
Enter OiD password ::

9.         Provide the password for the schema used by Oracle Internet Directory. This defaults to the password assigned for the ias_admin administrator during installation.

This command loads all the users, provided there is no error reported in the check mode on the exported LDIF file.

10.       Start the directory server with the following command:

$ORACLE_HOME/opmn/bin/opmnctl startproc ias-component=OID

11.       Change the orcldefaultsubscriber and orclsubscribersearchbase (Note: this is a one-time change; it only needs to be done the first time a new OID is installed):

This change allows us to point to the correct realm for searching.

a)         Back up the current information

$ORACLE_HOME/bin/ldapsearch -p 3060 -D cn=orcladmin -w welcome1 -L -s base -b "cn=Common,cn=Products,cn=OracleContext" "objectclass=*" > /tmp/backup_common_DEV_22Sep09.txt

b)         Create an LDIF file called modify_common_dev.ldif with the following contents:

dn: cn=Common,cn=Products,cn=OracleContext
changetype: modify
replace: orcldefaultsubscriber
orcldefaultsubscriber: dc=mti,dc=org

dn: cn=Common,cn=Products,cn=OracleContext
changetype: modify
replace: orclsubscribersearchbase
orclsubscribersearchbase: dc=org

            c)         Apply the changes

$ORACLE_HOME/bin/ldapmodify -p 3060 -D cn=orcladmin -w welcome1 -v -f /tmp/modify_common_dev.ldif



            d)         Verify:

$ORACLE_HOME/bin/ldapsearch -h mindtelligentapp1 -p 3060 -D cn=orcladmin -w welcome1 -b "cn=common, cn=products, cn=oracleContext" -s base "objectclass=*" orcldefaultSubscriber


$ORACLE_HOME/bin/ldapsearch -L -h mindtelligentapp1 -p 3060 -D cn=orcladmin -w welcome1 -b "cn=common, cn=products, cn=OracleContext, dc=mti,dc=org" -s base "objectclass=*" orclCommonUserSearchBase orclCommonGroupSearchBase orclCommonNicknameattribute

For questions, comments and feedback,  please contact:
 Harvinder Singh Saluja

Configure ViaServ or any Type 4 JDBC Driver with Oracle SOA 11g 11.1.1.3

For a large County Administration Agency in California, the MindTelligent team was assigned the task of integrating LEA/CJIS applications using Oracle 11g 11.1.1.3 SOA technology. One of the significant tasks was to configure the SOA servers for seamless integration of the CA Datacom database with Oracle and SQL Server data stores.


This section discusses the steps to configure ViaServ or any Type 4 JDBC driver with Oracle SOA 11g 11.1.1.3.


Weblogic 10.3.3 ships with several drivers that can be used "out of the box", with no configuration required. The drivers shipped with Weblogic include Adabas for z/OS, CICS/TS for z/OS, Cloudscape, DB2, DB2 for i5/OS, DB2 for z/OS, Derby, EnterpriseDB, FirstSQL, IMS/DB for z/OS, IMS/TM for z/OS, Informix, Ingres, MS SQL Server, MaxDB, MySQL, Oracle, PointBase, PostgreSQL, Progress, Sybase, and VSAM for z/OS.

For this specific client I had to configure a Type 4 JDBC driver that is not pre-configured with Weblogic. The following section discusses the steps required to configure and seamlessly use the driver with the Weblogic and SOA managed servers.




  • Copy Viajdbc.jar to the directory /oracle/Oracle/Middleware/wlserver_10.3/server/lib/ViaJdbc.jar
  • Change the WEBLOGIC_CLASSPATH variable in the commEnv.sh file in the /oracle/Oracle/Middleware/wlserver_10.3/common/bin directory to reflect the path of the driver jar file (see the sketch after this list).
  • For the Driver to be visible from SOA targets, it is imperative that we copy the Viaserv.jar file to the directory /oracle/Oracle/Middleware/Oracle_SOA1/soa/modules/oracle.soa.ext_11.1.1.3
  • Restart the admin and managed servers 
  • Create the Data Source.
  • Create a Connection Pool. 
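
As an illustration, the classpath change in commEnv.sh on Linux might look like the following (a sketch; the exact contents of your commEnv.sh may differ, and the jar path is the one used above):

WEBLOGIC_CLASSPATH="${WEBLOGIC_CLASSPATH}:/oracle/Oracle/Middleware/wlserver_10.3/server/lib/ViaJdbc.jar"
export WEBLOGIC_CLASSPATH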
For questions, comments and feedback, please contact: Harvinder Singh Saluja

Weblogic SOA Domain Configuration to access remote Web Services and SOA Artifacts

This section covers the configuration of a Weblogic SOA domain to access Web Services deployed outside the firewall.

For SOA composites deployed on Oracle 11g version 11.1.1.3 to consume web services, the following changes need to be made to the setSOADomainEnv.sh file on Linux or the setSOADomainEnv.cmd file on Windows. This section discusses Linux deployments only.

The setSOADomainEnv.sh can be found in the directory $WL_HOME/user_projects/domains/base_domain/bin.

The EXTRA_JAVA_PROPERTIES variable needs to be modified as follows:

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dhttps.proxySet=true -Dhttps.proxyHost=localdomain.com -Dhttps.proxyPort=443 -Dhttp.nonProxyHosts=localhost.localdomain|127.0.0.1|192.168.1.7"

export EXTRA_JAVA_PROPERTIES

https.proxyHost: your proxy host
https.proxyPort: your proxy port; 443 is the default port for HTTPS
http.nonProxyHosts: your non-proxy hosts; entries in the list are separated with "|"



After the changes have been made, restart the Weblogic Server and SOA Server.
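
For example, assuming the default base_domain layout and a managed server named soa_server1, the standard domain scripts can be used for the restart (a sketch; adjust domain and server names for your environment):

$WL_HOME/user_projects/domains/base_domain/bin/stopManagedWebLogic.sh soa_server1
$WL_HOME/user_projects/domains/base_domain/bin/stopWebLogic.sh
$WL_HOME/user_projects/domains/base_domain/bin/startWebLogic.sh
$WL_HOME/user_projects/domains/base_domain/bin/startManagedWebLogic.sh soa_server1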

For questions, comments and feedback,  please contact:
 Harvinder Singh Saluja

Friday, January 14, 2011

Weblogic 10.3.5: Configure JMS Servers on Weblogic Cluster


JMS servers are environment-related configuration entities that act as management containers for the queues and topics in JMS modules that are targeted to them. A JMS server's primary responsibility for its destinations is to maintain information on what persistent store is used for any persistent messages that arrive on the destinations, and to maintain the states of durable subscribers created on the destinations. JMS servers also manage message paging on destinations, and, optionally, can manage message and/or byte thresholds, as well as server-level quota for its targeted destinations. As a container for targeted destinations, any configuration or run-time changes to a JMS server can affect all the destinations that it hosts.
The main steps for configuring a JMS server are:
  1. For storing persistent messages, you can simply use the host server's default persistent file store, which requires no configuration on your part. However, you can also create a dedicated file-based store or JDBC store for JMS. Note: User-defined stores can be configured before creating a JMS server or they can be configured on the fly as part of the JMS server configuration process.
    • Create file stores. A file store maintains subsystem data, such as persistent JMS messages and durable subscribers, in a group of files in a directory.
    • Create JDBC stores. A JDBC store maintains subsystem data, such as persistent JMS messages and durable subscribers, in a database.
  2. Create JMS servers. After creating a basic JMS server, you can define a number of optional properties:
    • Configure general JMS server properties
    Optional general JMS server properties include selecting a user-defined persistent store, changing message paging defaults, specifying a template to use when your applications create temporary destinations, and specifying expired message scanning parameters.
    • Configure JMS server thresholds and quota
    Define upper and lower byte and message thresholds for destinations in JMS modules targeted to this JMS server, specifying a maximum size allowed for messages and the number of messages and/or bytes available to a JMS server, and selecting a blocking policy to determine whether the JMS server delivers smaller messages before larger ones when a destination has exceeded its maximum number of messages.
    • Configure JMS server message log rotation
    Define message logging properties, such as changing the default name of its log file, as well as configuring criteria for moving (rotating) old log messages to a separate file. A JMS server's log file contains the basic events that a JMS message traverses through, such as message production, consumption, and removal.
  3. If you skipped the targeting step when you created a JMS server, or want to modify the current targeting parameters, you can do so at any time. You can target a JMS server to a different standalone WebLogic Server instance or migratable target server. Migratable targets define a set of WebLogic Server instances in a cluster that can potentially host a pinned service, such as a JMS server. Note: In a clustered server environment, a recommended best practice is to target the JMS server to a migratable target, so that a member server will not be a single point of failure. A JMS server can also be configured to automatically migrate from an unhealthy server instance to a healthy server instance, with the help of the WLS health monitoring services. See Configure migratable targets for JMS-related services.
  4. In the event that you need to troubleshoot destinations targeted to a JMS server, you can temporarily pause all message production, insertion (in-flight messages), and consumption operations on all such destinations. Destinations can be paused either on a server restart or at runtime.
    • Pause JMS server message operations on server restart 
    • Pause JMS server message operations at runtime  
  5. Optionally, create JMS Session Pools, which enable your applications to process messages concurrently, and Connection Consumers (queues or topics) that retrieve server sessions and process messages. Note: JMS session pool and connection consumer configuration objects were deprecated in release 9.0 of WebLogic Server. They are not a required part of the J2EE specification, do not support JTA user transactions, and are largely superseded by message-driven beans (MDBs), which are a required part of J2EE.
    • Create JMS session pools
    • Create JMS connection consumers

To create a JMS server

      1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
      2. In the Administration Console, expand Services > Messaging and select JMS Servers.
      3. In the Summary of JMS Servers page, click New. Note: Once you create a JMS server, you cannot rename it. Instead, you must delete it and create another one that uses the new name.
      4. On the Create a JMS Server page:
        1. In Name, enter a name for the JMS server.
        2. In Persistent Store, select a pre-configured custom file or JDBC store that will be used by the JMS server or click the Create a New Store button to create a store on the fly. If you leave this field set to none, then the JMS server will use the default file store that is automatically configured on each targeted server instance. For more information about configuring stores, see Configure custom persistent stores. Note: When a JMS server is targeted to a migratable target, it cannot use the default store, so a custom store must be configured and targeted to the same migratable target.
        3. Click Next to proceed to the targeting page.
      5. On the Select Targets page, select the server instance or migratable server target on which to deploy the JMS server. Migratable targets define a set of WebLogic Server instances in a cluster that can potentially host a pinned service, such as a JMS server. Note: In a clustered server environment, a recommended best practice is to target a JMS server to a migratable target, so that a member server will not be a single point of failure. A JMS server can also be automatically migrated from an unhealthy server instance to a healthy server instance, with the help of the server health monitoring services.
      6. Click Finish.
      7. On the Summary of JMS Servers page, click the new JMS server to open it. Note that there are many optional parameters that can be set on the JMS server configuration tabs, including General configuration parameters, Thresholds and Quotas, Logging, and Server Session Pools.
      8. After modifying any values, click Save.
      9. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
       


      Enter the Name and choose the persistent store.




      Choose the soa_server1 (migratable) option


       View Summary of Servers





      Create JMS Server on Second Managed Server (Migratable)




Enter the Name and choose a Persistent Store different from the first JMS Server



Summary of JMS Servers shows the new JMS Server



