Tuesday 27 June 2017

Standard Oracle Apps DBA technical questions and answers ... :)


Describe the architecture of Oracle Applications?

Oracle Applications follow a 3 Tier architecture:

- Database Tier
- Middle Tier
- Desktop Tier

- The database tier contains the RDBMS, known as the Oracle database server.

- The database tier stores all the data needed by Oracle Applications.

- The middle tier, or application tier, hosts the various servers and their listeners, such as the web server, forms server, reports server, and admin server.

- The database server communicates with the middle tier; it does not communicate directly with the desktop tier.

- The desktop tier consists of the client desktops from which users access Oracle Applications through a web browser.

How do you delete a responsibility from oracle applications?

- A responsibility is a level of authority given to an Oracle Applications user.

- It is a collection of requests, forms, and menus attached to a user.

- Because responsibility records are used for monitoring and security purposes, responsibilities cannot be deleted.

- However, a responsibility can be disabled by assigning an end date to its effective period.

- This end-dating can be done from either the front end (System Administrator responsibility) or the back end.

What are the components of Oracle Applications technology layer?

Oracle Applications technology layer comprises the following products:

- Oracle Applications DBA (AD)
- Oracle Application Object Library (FND)
- Oracle Common Modules (AK)
- Oracle Applications Utilities (AU)
- Oracle Alert (ALR)
- Oracle Workflow (WF)
- Oracle Applications Framework (FWK)
- Oracle XML Publisher (XML)

How many Oracle Homes are there in Oracle Applications and what is the importance of each one of them?

Oracle Applications uses three Oracle Homes:

- An Oracle Home is the environment in which an Oracle product is installed and runs.

- The database Oracle Home on the database tier serves as the Oracle Home for the Oracle database.

- The 8.0.6 Oracle Home on the application tier is the technology stack Oracle Home, used by Forms, Reports, and Discoverer.

- The iAS Oracle Home, also on the application tier, is used by the Oracle HTTP Server (web listener).

- Using multiple Oracle Homes allows different releases of the same products to run simultaneously (see the example below).
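
For example, on an 11i application tier the two technology-stack homes can usually be checked after sourcing the APPS environment file; the file name and the meaning of the variables below are typical for 11i but may differ on your install:

# Source the applications environment file (name varies per install)
. $APPL_TOP/APPSORA.env

# 8.0.6 technology stack Oracle Home (Forms/Reports/Discoverer)
echo $ORACLE_HOME

# iAS Oracle Home (Oracle HTTP Server)
echo $IAS_ORACLE_HOME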

Can I enable real application clusters in the database tier along with oracle applications?

- Yes, Real Application Clusters (RAC) can be configured with Oracle Applications.

- RAC allows multiple instances to mount and open a single database.

- RAC requires shared disk storage and a dedicated network interconnect.

- If one node fails, database services remain available on the surviving nodes.

- In a RAC configuration, more than one Oracle instance runs at a time and the data files are stored in a central, shared location accessible to all instances (a quick status check is sketched below).
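
As a rough sketch, the status of a RAC-enabled database tier can be checked with srvctl; the database and instance names below are placeholders:

srvctl status database -d PROD
srvctl status instance -d PROD -i PROD1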

Where are Apache log files and configuration files stored and list each of them?

- Apache log files are stored in $IAS_ORACLE_HOME/Apache/Apache/logs (a quick check is shown below).

- The log files are error_log, error_log_pls, access_log, and access_log_pls.

- Apache configuration files are stored in the $IAS_ORACLE_HOME/Apache/Apache/conf directory.

- The main configuration files are httpd.conf, apps.conf, httpd_pls.conf, oprocmgr.conf, and oracle_apache.conf.
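
For example, a quick way to review the web server logs during troubleshooting, using the paths listed above:

cd $IAS_ORACLE_HOME/Apache/Apache/logs
ls -lrt
tail -f error_log access_log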

How to recreate a DBC file and where is it located?

- Running Autoconfig will recreate the DBC file.

- The DBC file is a database connect descriptor file that stores the database connection information used by the application tier to connect to the database.

- The file is located in the $FND_SECURE directory.

- You can also recreate the DBC file by running the adgendbc.sh script (see the sketch below).

- This script is found under the $COMMON_TOP/admin/install directory.
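
A hedged sketch of regenerating and verifying the DBC file; the AutoConfig script location below is typical for 11i, and the context name is a placeholder:

# Run AutoConfig on the application tier (recreates the DBC file among other things)
$COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh

# Verify the regenerated file
ls -l $FND_SECURE/*.dbc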

List out the important configuration files in APPL_TOP?

Important configuration files available in the APPL_TOP are:

- APPLSYS.env / APPSORA.env
- adovars.env
- <SID>.xml (the applications context file)
- adconfig.txt
- adjareas.txt
- topfile.txt
- appsweb.cfg
- <hostname>_<SID>.dbc
- adpltfrm.txt
- adjborg.txt
- adjborg2.txt

Which utility is used for changing the passwords of application users, and how is it run?

- The password of the application users can be changed using the FNDCPASS utility.

- You must know the SYSTEM and APPS passwords to run FNDCPASS.

FNDCPASS is run as follows:

- FNDCPASS apps/apps 0 Y system/manager SYSTEM APPLSYS WELCOME
- FNDCPASS apps/apps 0 Y system/manager ORACLE GL GL1
- FNDCPASS apps/apps 0 Y system/manager USER VISION WELCOME

Which scripts do you use for Apps cloning?

- Rapid Clone uses two main scripts: adpreclone.pl prepares the source system and adcfgclone.pl configures the target system.

- On the application tier, adpreclone.pl is located in the $COMMON_TOP/admin/scripts/<context_name> directory, while adcfgclone.pl is run on the target from the clone/bin directories.

- adpreclone.pl collects information about the source database and application system.

- It also creates generic templates of files that contain source-specific hardcoded values (example invocations are shown below).
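
A typical Rapid Clone sequence looks roughly like the following; the tier arguments are standard, the context names are placeholders, and the exact steps should always be taken from the cloning note for your release:

# On the source system
perl $ORACLE_HOME/appsutil/scripts/<context_name>/adpreclone.pl dbTier
perl $COMMON_TOP/admin/scripts/<context_name>/adpreclone.pl appsTier

# Copy the database and application tier file systems to the target, then on the target:
perl $ORACLE_HOME/appsutil/clone/bin/adcfgclone.pl dbTier
perl $COMMON_TOP/clone/bin/adcfgclone.pl appsTier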

What is a patch and name some different types of patches?

- A patch is a collection of files and instructions that fixes a bug or delivers new functionality.

There are 4 different types of patches:

- One-off patch:

- This is the simplest type of patch. It is created to resolve a single specific problem.

- Mini pack patch:

- This is a collection of one-off patches related to a particular module or product.
- The mini pack version of a module is denoted by an alphabetic character (for example, 11i.AD.I).

- Family pack patch:

- This is a collection of mini packs for one product family.
- The family pack version is also denoted by an alphabetic character.

- Maintenance pack patch:

- This is a collection of family packs.
- Oracle Applications Release 11.5.10 is an example of a maintenance pack.

List out APPS 11i post installation steps?

- Shutdown all Oracle Application Processes.

- Apply Inter-Operability Patch(3830807).

- Apply Discoverer Patch(3170128).

- Relink all Oracle Applications executables.

- Finally, Restart Oracle Application Processes.

What are the contents of patch and drivers present in patching?

Contents of patch:

- Readme files
- Driver files
- Metadata files
- Replacement files

Drivers present in patching:

C driver - It copies all the files and relinks the executables.

D driver - It is responsible for running the SQL scripts that update the database.

G driver - It is responsible for generating forms, reports, and message files.

U driver - It is a unified driver containing all the copy, database, and generate actions.

What do you do if patch fails?

- Review the log files to determine the cause of the error.

- Fix the cause of the error.

- Restart adpatch.

- Answer 'Yes' when adpatch asks whether you want to continue the previous session.

- adpatch then resumes where it left off, skipping all completed jobs.

How can I determine the effects a patch will have on my application system?

This can be done using the Patch Wizard in Oracle Applications Manager (OAM), which reports:

- The total number of files in the patch and how many of them are already installed.

- The products that contain updated files.

- The number of new files introduced by the patch.

- The files on the target system that will be changed by the patch.

- The files that depend on the patched files.

How can you reduce the downtime when you have to apply multiple patches?

- You can reduce the downtime by merging all the patches into one single patch using the admrgpch (AD Merge Patch) utility.
- You can also apply the patches one by one using the adpatch options nocompiledb, nocompilejsp, and nolink.
- Compilation of invalid objects and JSPs, and relinking, can then be deferred until the last patch is applied.
- With the last patch you compile everything and relink once.
- You can also run adpatch with the maximum number of workers your CPUs permit (see the example below).
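
A hedged example of the merge-and-apply approach; the admrgpch option syntax can vary slightly between AD releases, and the directories and worker count are placeholders:

# Merge the individual patches staged under source_dir into dest_dir
admrgpch -s /stage/source_dir -d /stage/dest_dir

# Apply the merged patch with compilation and relinking deferred
adpatch options=nocompiledb,nocompilejsp,nolink workers=8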

List out the Log Files created on running patch?

Running the patch creates the following log files:

adpatch.log: Information about the patch run is stored here.

adpatch.lgi: Informational messages that adpatch leaves out of adpatch.log are recorded here,
for example the files that were not copied because an equal or newer version already exists on the target.

adrelink.log: Relinking performed by the patch is logged here.

adworkXX.log: Each worker writes its own log here, one file per worker (see the quick check below).
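
These files are normally written to the adpatch log directory, which on most 11i systems is $APPL_TOP/admin/<SID>/log; a quick hedged check:

cd $APPL_TOP/admin/$TWO_TASK/log    # $TWO_TASK usually holds the instance SID on the apps tier
ls -lrt adpatch.log adpatch.lgi adrelink.log adwork*.log
tail -f adpatch.log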

What are the important AD tables and what is each one used for?

AD_APPL_TOPS: It contains information about all the APPL_TOPs that use the instance.

AD_BUGS: It stores information about all the different bug fixes applied.

AD_PATCH_DRIVER_MINIPACKS: Information about the minipacks included in patch drivers is stored here.

AD_PATCH_RUN_BUGS: It stores information about the bug fixes applied, or already present, during each patch run.

AD_FILES: It stores information about the Oracle Applications files known to AD (see the sample queries below).
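
For example, hedged queries to check whether a given bug fix or patch has been recorded; the bug number 3830807 is taken from the post-install steps above, and the column names are from a typical 11i AD schema:

-- Has this bug fix been recorded?
select bug_number, creation_date
from ad_bugs
where bug_number = '3830807';

-- Which patches have been applied through adpatch?
select patch_name, patch_type
from ad_applied_patches;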

Is it possible to revert the patch application? Explain in detail how?

- Reverting a patch is possible in some cases, but there is no guaranteed, automated rollback.

- In a test environment it is always recommended to take a full backup of the database and file system, apply the patch, and restore from the backup if it must be reverted.

- adpatch is responsible for copying the files delivered by the patch, and it keeps backups of the files it replaces.

- Using the patch log, drop the database objects created by the patch.

- Similarly, forms and other files delivered by the G driver should be restored from the backups taken by adpatch and then regenerated manually.

What is a view and how do you recreate a view that becomes invalid after applying a patch?

- A view is used for accessing a particular portion of one or more tables.

- A view is also called a virtual table.

- To recreate the view, search for the view name in the product's SQL directory.

- Views can be recreated by running the scripts located in the <PRODUCT>_TOP/patch/115/sql directory.

- Most views are recreated by rerunning the SQL script that defines them (see the sketch below).
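
A hedged way to locate and rerun the script that defines an invalid view; the product top, view name, and script name below are placeholders:

# Find which patch script creates or replaces the view
grep -li "<VIEW_NAME>" $GL_TOP/patch/115/sql/*.sql

# Rerun that script as the APPS user
sqlplus apps/<apps_password> @$GL_TOP/patch/115/sql/<script_name>.sql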

Difference between ad_bugs table and ad_applied_patches?

- When you apply a patch with the adpatch utility, the ad_applied_patches table is updated.

- ad_applied_patches does not record bug fixes delivered by means other than adpatch, for example fixes included inside another patch.

- That information is stored in the ad_bugs table.

- Bug fixes applied through adpatch are recorded in ad_bugs as well, so ad_bugs gives the more complete list of bug fixes present in the system.

What things you do to reduce patch timing?

- Use the admrgpch utility to merge all the patches into one.

- Use adpatch options such as nocompiledb or nocompilejsp to reduce patch timing.

- Use a defaults file to run adpatch non-interactively.

- Use a staged APPL_TOP for upgrades.

- Increase the batch size, although this can sometimes have a negative effect.

What are the main configuration files in web server (Apache)?

Main configuration files in Oracle Apps Web server are:

- httpd.conf
- apps.conf
- oracle_apache.conf
- httpd_pls.conf
- jserv.conf
- ssp_init.txt
- jserv.properties and zone.properties
- plsql.conf
- wdbsvr.app

What are GSM and FNDSM?

- GSM stands for Generic Service Management Framework.

- Oracle E-Business Suite consists of many services (web server, forms server, reports server, concurrent managers, and so on) that previously had to be started and managed individually, distributed across the network.

- GSM is an extension of concurrent processing that manages all of these services.

- With GSM, all services are centrally managed.

- FNDSM, the Service Manager executable, is the core component of GSM and runs on each node (see the query below).
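
A hedged query to see the Service Manager defined for each node; the columns are as found in a typical 11i FND schema:

select concurrent_queue_name, target_node, enabled_flag
from fnd_concurrent_queues
where concurrent_queue_name like 'FNDSM%';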

What are cursor attributes?

- Cursor attributes are used to get information about the current status of your cursor.

Both explicit and implicit cursors have the four attributes listed below (a short PL/SQL example follows):

%FOUND - Returns TRUE if a record was fetched successfully, FALSE otherwise.

%NOTFOUND - Returns TRUE if a record was not fetched successfully, FALSE otherwise.

%ROWCOUNT - Returns the number of records fetched from the cursor so far.

%ISOPEN - Returns TRUE if the cursor is open, FALSE otherwise.
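
A minimal PL/SQL sketch using the explicit-cursor form of these attributes; the emp table and its ename column are assumed purely for illustration:

set serveroutput on
declare
  cursor c_emp is
    select ename from emp;
  v_ename emp.ename%type;
begin
  open c_emp;
  loop
    fetch c_emp into v_ename;
    exit when c_emp%notfound;                                  -- %NOTFOUND: no more rows
    dbms_output.put_line(c_emp%rowcount || ': ' || v_ename);   -- %ROWCOUNT: rows fetched so far
  end loop;
  if c_emp%isopen then                                         -- %ISOPEN: cursor still open
    close c_emp;
  end if;
end;
/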

What is the purpose of cluster?

- A cluster is a schema object used for storing table data.
- It contains a group of tables that share the same data blocks.
- For example, if you cluster the EMP and DEPT tables, both share the DEPTNO column.
- Oracle then stores all the rows for each department, from both EMP and DEPT, in the same data blocks.
- Clusters are not suitable for tables that are frequently inserted into or updated (a SQL sketch follows below).
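
A hedged SQL sketch of the EMP/DEPT example above, with the column lists trimmed for brevity:

create cluster emp_dept (deptno number(2));

create index emp_dept_idx on cluster emp_dept;

create table dept (
  deptno number(2) primary key,
  dname  varchar2(14)
) cluster emp_dept (deptno);

create table emp (
  empno  number(4) primary key,
  ename  varchar2(10),
  deptno number(2) references dept
) cluster emp_dept (deptno);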

How to confirm if Apps Listener is up and running?

Execute the following commands:
lsnrctl status APPS_$SID (where SID is your instance name)
For example, if our SID is test, then use

lsnrctl status APPS_test

If the listener is up, you will observe the following output:
- Services summary
- FNDFS has 1 service handler(s)
- FNDSM has 1 service handler(s)

What is Adadmin and why it is used?

- adadmin (AD Administration) is used to perform a number of administrative tasks that maintain Oracle Applications and keep it running smoothly.
- It performs work at both the database level and the file system level.
- The utility prompts the user for input and presents its tasks through a menu.
- Tasks can also be run non-interactively, which is useful for scheduling routine maintenance that requires no user intervention (a sketch follows below).
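
A hedged sketch of a non-interactive run; the defaults file must first be created from an interactive session, and the flag names and values here are assumptions that can differ between AD versions:

adadmin defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt \
        logfile=adadmin_noninteractive.log \
        workers=8 interactive=no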

What are the adadmin activities that can be run in a parallel way?

- Recreate grants and synonyms.
- Compile APPS schema
- Maintain multiple reporting currencies schema.
- Convert to Multi-org.
- Generate message files.
- Generate form files.
- Generate report files.

What is copy driver (c driver) and what does it do?

- All the files in the patch are copied into APPL_TOP by the C driver.
- It extracts the appropriate files from the product C libraries.
- Oracle Applications products are relinked by the C driver.
- It regenerates JAR files and compiles Java Server Pages (JSP) files.
- It compares the files in the patch with the files under $APPL_TOP.
- Files in the patch that are of a higher version than those in $APPL_TOP are copied in by adpatch.

What is database driver (D driver) and what does it do?

- It contains all the commands needed to change database objects.
- All the scripts copied by the C driver are applied to the database.
- It makes a list of all the invalid objects in the database.
- It runs the SQL scripts that make changes to database objects.
- It compiles the invalid objects in the database.

What are the generate and unified drivers and what do they do?

- The generate driver is named g<patch_number>.drv.
- It regenerates all forms, reports, and PL/SQL libraries that have been affected by the patch.
- The unified driver is a combination of the C, D, and G drivers.
- The U driver is named u<patch_number>.drv.
- It requires only a single execution of AutoPatch.

Explain about Forms server?

- The Forms server is the component of Oracle Applications that deploys Forms applications across the web.
- It was previously called Web Forms.
- It consists of the Forms Listener, the Runtime engine, and Forms CGI.
- The Forms Listener manages client connections and the Runtime engine processes.
- The Runtime engine contains the application logic, data management, and PL/SQL engine.
- Forms CGI is an optional entry point used for load balancing.

How to check whether forms server is running or not and how to change the port of the forms server?

- The forms server communicates with the forms client and the database server.
- Check for the forms server processes at the OS level; if they are running, forms is up (see the check below).
- You can find the Forms Server listener's log file at $COMMON_TOP/admin/install/f60srvm.txt.
- You can change the port number in the f60ctl start command by specifying the port parameter.
- You can also modify the appsweb.cfg file, which is available at $OA_HTML/bin.
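
A quick hedged check from the OS; process names such as f60srvm/f60webmx and port 9000 are typical 11i examples, not fixed values:

# Forms listener and per-user runtime processes
ps -ef | grep f60 | grep -v grep

# Confirm something is listening on the forms port
netstat -an | grep 9000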

Name some of the files related to forms?

Forms files include Script files, Configuration files and text files:

adfmsctl.sh - It is used for starting and stopping the forms metrics server (adfmcctl.sh controls the forms metrics client).
appsweb.cfg - It defines the parameter values used by the forms CGI and is located in $OA_HTML/bin.
appsbase.html - This is the default HTML file for starting the applet.
d2lc60.txt - This is the forms metrics client log file.
d2ls60.txt - This is the forms metrics server log file.

What is the difference between socket and servlet mode forms and list some advantages & disadvantages of each?

- In socket mode there is a dedicated connection between the client desktop and the forms server.
- In servlet mode, the forms requests are processed by the Forms servlet running under JServ.
- Socket mode uses up to 40% less bandwidth than forms servlet mode.
- Socket mode is not supported on Windows-based platforms.
- Servlet connections can be re-established if the network connection drops unexpectedly, which is not possible in socket mode.
- In servlet mode, HTTP and HTTPS traffic is easily recognized by routers, whereas socket mode communication is treated as suspect.

How do I change from socket to servlet mode and vice versa?

- First, log in to Oracle Applications Manager.
- Navigate to the Site Map and then to AutoConfig.
- There you can change the relevant context setting to switch between socket mode and servlet mode.
- Then run AutoConfig from the back end and restart the forms server (a sketch follows below).
- Because of its advantages, forms servlet mode is generally preferred and recommended for web deployment.
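
A hedged outline of the back-end steps after changing the connect mode; the script locations are standard for 11i, the context name is a placeholder, and the control scripts may prompt for the APPS password:

# Regenerate configuration files from the updated context
$COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh

# Bounce the forms server so the new mode takes effect
$COMMON_TOP/admin/scripts/<context_name>/adfrmctl.sh stop
$COMMON_TOP/admin/scripts/<context_name>/adfrmctl.sh start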

What is JInitiator?

- JInitiator is a Java Virtual Machine (JVM) provided by Oracle for running web-based Oracle Forms applications inside a client web browser.
- It is implemented as a plug-in or ActiveX object.
- It lets you use an Oracle-certified JVM instead of relying on the default JVM provided by the browser.
- It is automatically downloaded to the client machine from the application server.
- Its installation and updates are performed through the browser's standard plug-in mechanism.

What is the function of Oracle HTTP (Apache) server?

- The web listener accepts HTTP requests coming from client browsers, and the web server services them.
- The Apache server forwards requests that need advanced processing to the servlet engine.
- The servlet engine forwards these requests to the database.
- If the HTTP request requires a JSP to be processed, the web listener examines the nature of the request.
- It then hands the request to JServ to run the JSP.
- The JSP retrieves the required information from the database and returns an HTML page that is displayed in the web browser.

What should I do for debugging when my login to Oracle Applications from the Personal Home Page (PHP) hangs?

- First check the Apache log files.
- In the Apache log directory, check access_log and error_log.
- If you cannot find any information from the above steps, then check the JServ logs and the JVM logs.
- Analyze all the errors found in the log files and fix them.
- If you are still unable to find the cause, run the AOL/J diagnostic tests.

What kind of maintenance activities can be performed from admin server?

The following maintenance activities are performed from the admin server:

- Applying the patches
- Maintaining Oracle Applications
- Running the AD utilities
- Upgrading Oracle Applications

What is the significance of the appsweb.cfg file and where is it located?

- It defines the parameter values used by forms WEB CGI.
- It is the main configuration file used by forms server.
- It contains the forms serverName, serverPort, and domain settings.
- It also stores database connection parameters and the JInitiator version.
- It is located at $OA_HTML/bin (an illustrative fragment is shown below).
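
An illustrative fragment of the kind of entries found in appsweb.cfg; the parameter names shown are common ones, but treat this as a sketch rather than a verbatim copy, and the values are placeholders:

serverName=apps01.example.com
serverPort=9000
domain=example.com
connectMode=socket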

What is cloning and why is it required?

Cloning is the process of creating an identical copy of the Oracle application system.

It is required due to following reasons:

- Creating a test copy of your production system before upgrading.
- Moving an existing system to a different machine.
- Testing patches before they are applied to production.
- Creating a development copy of your environment to be used by the developers.

What are the steps to clone from a single node to a multi-node?

Steps:

- Once database cloning is complete, log in as the owner of the application file system.
- Run adcfgclone.pl from $COMMON_TOP/clone/bin.
- When prompted, answer Yes to the question about the target system having more than one application tier server node.
- Provide the details for the concurrent processing node, admin node, forms node, and web node.
- You are then prompted for the various mount point details, and the context file is created for you.
- Repeat the same steps on each node.

What are the files you need to copy from APPL_TOP for creating a clone application system?

- APPL_TOP
- OA_HTML
- OA_JAVA
- OA_JRE_TOP
- COMMON_TOP/util
- COMMON_TOP/clone
- 806 ORACLE_HOME
- iAS ORACLE_HOME

Does clone preserve the patch history?

Yes, Rapid Clone preserves the patch history in the following locations:

RDBMS ORACLE_HOME: Preserves the OUI oraInventory.

iAS ORACLE_HOME: Preserves the OUI oraInventory.

806 ORACLE_HOME: Preserves the patch level and Oracle inventory.

APPL_TOP and Database: Preserves the patch level and history tables.

What are the pre-upgrade steps that need to be taken for upgrading a non-11i instance to 11.5.10?

- First, you need to take a complete backup of the application system.
- Run the TUMS utility
- Review the TUMS report
- Maintain the multilingual tables
- Rename the custom database objects
- Check attachment file upload directory
- You need to save the custom.pll

What is ad splice and what does it do?

- Oracle often releases new products, known as off-cycle products, between base releases of Oracle Applications.
- These products are added on top of the base release of Oracle Applications.
- adsplice is the utility that brings an off-cycle product into Oracle Applications.
- adsplice registers off-cycle products as active in the system.
- The AD utilities then recognize the off-cycle products as valid products for that release.

What are the important configuration files used by Apache run along with Jserv?

- $APACHE_TOP/Apache/conf/httpd.conf
- $APACHE_TOP/Jserv/etc/jserv.conf
- $APACHE_TOP/Jserv/etc/jserv.properties
- $APACHE_TOP/Jserv/etc/zone.properties
- $APACHE_TOP/Apache/conf/oracle_apache.conf
- $APACHE_TOP/Ojsp/conf/ojsp.conf

How do I check the version of Apache?

The version of Apache can be checked using the following command:
$IAS_ORACLE_HOME/Apache/Apache/bin/httpd -v
Sample output:
- Server version: Oracle HTTP Server Powered by Apache/1.3.19 (Unix)
- Server built: Dec 6 2005 13:41:10 (iAS 1.0.2.2.2 rollup 5)
From this output we can read off the version of Apache.

Thanks
Srini

Wednesday 26 April 2017

Troubleshooting XML Gateway Inbound & Outbound Transactions

Troubleshooting is often the main challenge in any issue. This article covers both the inbound and outbound flows of XML Gateway transactions.
The outbound flow is as follows:
1) The ECX message is generated.
2) The execution engine enqueues it into ecx_outqueue.
3) The message is dequeued by OTA.
4) It is sent to the Trading Partner using the corresponding protocol (SMTP, HTTP, HTTPS, and so on).
The inbound flow is as follows:
1) The message is enqueued to ecx_inqueue by OTA.
2) The ECX Inbound agent listener moves it from ecx_inqueue to the ecx_transaction queue.
3) The ECX Transaction agent listener triggers the subscription, which moves the data into the open interface tables.
4) Validation APIs are run on these tables and finally the data is pushed into the base tables.
Logs for ECX transactions:
For releases before 11.5.10, the log output has to be written to a directory listed in utl_file_dir. From 11.5.10 onwards, FND logging is used.
Follow the steps below to enable logging for ECX and collect the log files.
1) Set the following profiles at the Site or Application level inside the E-Business Suite. When setting the log parameters at the Application level, you also need to set the application name, which is "XML Gateway".
FND: Debug Log Module = ecx%
FND: Debug Log Enabled = Yes
FND: Debug Log Level = Statement
2) After launching your transaction/reproducing the issue, you need to login to OAM and use Site Map > Monitoring > Logs and query for module: ecx%.
3) Collect these log files and upload them for review by Support.
OR
1) Run the $ECX_TOP/patch/115/sql/ECXLGINI.sql.
2) Then review the Logging messages in the OAM UI. Navigation: Log in to the self service applications with sysadmin, then select the System Administration responsibility, navigate to Oracle Application Manager -> Logs. Then search for log messages for Module ecx% to see the log file in the transaction monitor.
3) To get the physical ECX log file, query the Module column in fnd_log_messages as follows:
select module from apps.fnd_log_messages where module like 'ecx%' order by log_sequence;
4) Once you have the module name, you can generate the log file by running $ECX_TOP/patch/115/sql/ECXLGUTL.
This script will prompt for a module name. Provide the name obtained in step 3 and it will generate a log file, finishing with the name and location of that file. Upload that log file for review.
Logs for OTA:
If the message is not dequeued from ecx_outqueue or the XML delivery fails, then the problem lies with OTA and OTA troubleshooting has to be done. To enable debug mode for OTA and collect the relevant log files, do the following:
1. Shut down Apache using the adapcctl.sh (or adstpall.sh) script.
2. Backup the existing files in the following directories:
$IAS_ORACLE_HOME/Apache/Apache/logs
$IAS_ORACLE_HOME/Apache/Jserv/logs
$IAS_ORACLE_HOME/Apache/Jserv/logs/jvm
3. Enable debug logging for Apache
Modify $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf:
LogLevel debug
4. Enable debug logging for mod_jserv
Modify $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.conf:
ApJServLogLevel debug
5. Enable debug logging for XML Gateway Jserv
Modify $IAS_ORACLE_HOME/Apache/Jserv/etc/xmlsvcs.properties:
log=true
log.channel=true
log.channel.info=true
log.channel.debug=true
wrapper.bin.parameters=-DOXTALogDebugMsg=true
6. Restart Apache for these changes to take effect using the adapcctl.sh (or adstrtal.sh) script.
7. Replicate the problem.
8. Upload the log files containing errors and warning messages relevant to the reported issue, including the following files from the $IAS_ORACLE_HOME/Apache/Jserv/logs directory:
a) xmlsvcs_jserv.log
b) jserv.log
c) All files from the $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm directory generated from XML Gateway JVM
These will have filenames of the form XmlSvcsGrp.0.stdout|stderr
Note: OTA from version 11.5.10 CU2 uses FND logging too. The AFLOG parameters are used for enabling logging for OTA. They can be set in the FND profile options:
AFLOG_ENABLED = true
AFLOG_LEVEL = 1 (Statement) or 6 (Unexpected); set only one of these two levels
AFLOG_MODULE = ecx.oxta%
AFLOG_FILENAME = <filename on the middle tier>
Basic troubleshooting information needed by support for any SR/Bug logged for ECX transactions:
1) Output of the following 2 scripts:
$ECX_TOP/patch/115/sql/ecxver.sql
$ECX_TOP/patch/115/sql/ECXTEST.sql
2) Check if the ECX agent listeners are up and running (Can be checked from OAM).
3) Collect the log files for ECX as mentioned above.
4a) For outbound transactions, check whether the XML has been generated by querying the ecx_doclogs table as follows:
select document_number, protocol_address, status, logfile, time_stamp, payload from ecx_doclogs where document_number = 'XXX';
4b) The XML delivery status can be found from a table named ecx_oxta_logmsg. Use the query below:
SELECT edoc.document_number
,decode(eol.result_code, 1000, 'Success', 'Failure') AS status
,eol.result_text
FROM ecx_oxta_logmsg eol
,ecx_doclogs edoc
,ecx_outbound_logs eog
WHERE edoc.msgid = eol.sender_message_id
AND eog.out_msgid = edoc.msgid
ORDER BY edoc.document_number;
5) For Inbound transactions please reproduce the issue and then run this SQL:
select il.MSGID, il.status, il.error_id, il.logfile, il.time_stamp, er.message,dl.transaction_type, dl.transaction_subtype, dl.document_number, dl.party_site_id,dl.payload, dl.status
from ECX_INBOUND_LOGS il,
ECX_ERROR_MSGS er,
ECX_DOCLOGS dl
where il.error_id = er.error_id(+)
and il.MSGID = dl.msgid(+)
and il.time_stamp between to_date('', 'DD-MON-YYYY HH24:MI') and
to_date('', 'DD-MON-YYYY HH24:MI')
order by il.time_stamp asc;


References
NOTE:167629.1 – Oracle XML Gateway Troubleshooting Scripts

Thanks 
Srini

Sunday 12 February 2017

What are the Exadata and Exalogic servers?



Oracle Exalogic Elastic Cloud is the world’s first and only integrated middleware machine—a combined hardware and software offering designed to revolutionize data center consolidation. Enterprises can bring together tens, hundreds, or even thousands of disparate, mission-critical, performance-sensitive workloads on Oracle Exalogic with maximum reliability, availability, and security. Oracle Exalogic’s unique high-bandwidth, low-latency interconnect fabric means that complex, distributed applications can run with a responsiveness simply not achievable with typical servers used in data centers today.

Oracle Exadata Database Machine provides an optimal solution for all database workloads, ranging from scan-intensive data warehouse applications to highly concurrent OLTP applications. With its combination of smart Oracle Exadata Storage Server Software, complete and intelligent Oracle Database software, and the latest industry standard hardware components from Sun, the Database Machine delivers extreme performance in a highly-available, highly-secure environment. With Oracle’s unique clustering and workload management capabilities, the Database Machine is also well-suited for consolidating multiple databases onto a single grid. Delivered as a complete pre-optimized and pre-configured package of software, servers, and storage, the Sun Oracle Database Machine is simple and fast to implement and ready to tackle your large-scale business applications.


Thanks
Srini

Tuesday 7 February 2017

11i / r12 India Localization Patch Tool



Introduction to India Localization Patch:
Pre-Requisite Patch Check:
select * from jai_applied_patches where patch_number = '<Patch_Number>';
Object Details:
Check the status of the objects that will be updated by the patch driver.
select owner, object_name, object_type, last_ddl_time, status
from all_objects
where object_type like '<OBJ_TYP>' and object_name = '<OBJ_NAME>';

Object Version:
select object_name || ';;' ||
       decode(object_type, 'TRIGGER', 'en', 'VIEW', 'vw', 'PACKAGE', 'pls',
              'PACKAGE BODY', 'plb', 'compile') || ';;' ||
       jai_get_sql_ver_f(object_name, object_type)
from user_objects
where object_name in ('<OBJ_NAME>');

Checking FND Version:
select patch_level, application_name
from fnd_product_installations fpi, fnd_application_tl fat
where patch_level is not null
and fpi.application_id = fat.application_id
and patch_level like '%FND%';
Invalid objects:
select count(*) from dba_objects where status='INVALID';

Connection to JA schema:
Object Backup:

Patch Top:
Copy the patch to JA_TOP and unzip it. Change directory to the patch folder. Inside the
patch folder you will find a SQL script supplied by Oracle to check the versions of all the
objects the patch will update.
Patching:
DB, Concurrent Node:
perl $JA_TOP/inpatch/indpatch.pl drvr_file=8277581.drv fnd_patchset=H appspwd=<PWD> japwd=<PWD> mode=D systempwd=<PWD>
perl $JA_TOP/inpatch/indpatch.pl drvr_file=8277581.drv fnd_patchset=H appspwd=<PWD> japwd=<PWD> mode=C systempwd=<PWD>
strings -a ja_in_localization_setup_checks_trg.sql | grep Header
Form Node 1:
perl $JA_TOP/inpatch/indpatch.pl drvr_file=8277581.drv fnd_patchset=H appspwd=<PWD> japwd=<PWD> mode=F systempwd=<PWD>
strings -a ja_in_localization_setup_checks_trg.sql | grep Header
Form Node 2:
perl $JA_TOP/inpatch/indpatch.pl drvr_file=8277581.drv fnd_patchset=H appspwd=<PWD> japwd=<PWD> mode=F systempwd=<PWD>
strings -a ja_in_localization_setup_checks_trg.sql | grep Header
Backup Folder:
A backup folder is created inside the patch top. In this location you can find all the
objects that will be replaced by the patching.

Patch Log Location and Log File:

and more ….
Monitoring:
There is no tool to monitor an India Localization patch the way adctrl monitors adpatch in
EBS, but progress can be tracked by checking the processes running at the OS level (see the example below).
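
For example, a hedged way to watch for the patch driver at the OS level while it runs:

ps -ef | grep indpatch | grep -v grep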

 

 

Thanks

Srini

Monday 23 January 2017

purging a cursor when Stuck in execution plans



I was at a UKOUG RAC/HA special interest group meeting last month. My favorite presentation of the day {out of what was a very good selection} was by Harald Van Breederode, an Oracle University DBA trainer from the Netherlands.  Harald’s presentation was on Checkpoints, a feature of Oracle that most DBAs tend to know a little about but never look into extensively. If you are a member of the UKOUG you can get the slides {If you are not a member, and you are in the UK {{or even Europe}}, nag your boss, get a membership – the conferences and meetings are a great source of info}.
Anyway, that is not the topic of this Blog. I finally downloaded a copy of the slides today and I checked out Harald's blog. I immediately learnt something, which is the topic of this blog.
In Oracle 10.2.0.4 (and upwards I assume) you can now flush a specific cursor out of the library cache using dbms_shared_pool.purge. You need to create the package first, it is not installed by default:-
@?/rdbms/admin/dbmspool
Package created.
Grant succeeded.
View created.
Package body created.
You also need to set an event to allow the purge to work. See metalink note 751876.1:
“The fix has gone into 10.2.0.4. However, the fix is event protected. You need to set the event 5614566, to make use of purge. Unless the event is set, purge will have no effect.”
alter session set events '5614566 trace name context forever';
Now that you have the feature installed you can find the address and hash_value of a cursor you want to purge with eg:
select address, hash_value from v$sqlarea
where sql_text = 'select count(c2) from skew where c1 = :bind';
ADDRESS HASH_VALUE
——– ———-
27308318 2934790721
And purge it.
exec sys.dbms_shared_pool.purge('&address, &hash_value','c')
PL/SQL procedure successfully completed.
For more comprehensive details check out
Why would you want to flush a specific cursor out of the library cache? Because it is performing poorly and you want the CBO to have another go at coming up with a plan. Before this nice sys.dbms_shared_pool.purge function you had three other options to flush the cursor out.
  • You can flush the whole shared pool
    alter system flush shared_pool;
    You really want to avoid doing this on a live system, as it flushes all SQL, PL/SQL (functions, packages, procedures) and data dictionary information, which will now have to be read back in when next required. It's like unloading a whole delivery truck just to throw out one letter and then reloading the truck.
  • Alter one of the objects referenced by the SQL statement, with some DDL. Some suggest doing a grant or revoke on a table, I used to try adding a table comment {to what it already was}. I’m still not that comfortable with that option on a live system as you are doing something to a live object.
  • Another option is to gather new stats on one of the objects referenced by the cursor, which is fine so long as there is a small object you can quickly execute a dbms_stats.gather_xxxx_statistics on.
So I'll look more into this package as I think it is a cleaner way to do it. Mind you, there is that setting of an event on a live system…
How does a statement get a poor plan that is going to be fixed simply by asking the CBO to parse it again?
In Harald's posting he also covers a main reason why you would want to flush a SQL cursor out of the shared pool. His worked example shows how a SQL statement with a bind variable can get an execution plan, the first time it is parsed, that is suitable for that first bind value but NOT suitable for most other executions. E.g. the first value passed to the bind is a very common one matching a large percentage of the table, so an index is not used, but most executions of the statement use a bind value that matches very few records, and thus the index lookup is the best plan. But once the SQL statement is parsed, that plan will be used until the statement disappears out of the shared pool.
Another cause of a poor plan is when the statistics for an object referenced by the SQL statement changes. The CBO sometimes just chooses a plan that is not good. You may be thinking that, in this case, the optimizer will simply come to the same plan if asked again. Maybe not.
An issue I kept encountering at one site was very interesting. One of a small handful of simple SQL statements would go rogue overnight, usually about 2 or 3am in the morning. Instead of a plan using a highly specific index and a couple of table joins, a very, very poor plan was used instead. It only ever occurred early in the morning and only when the automated statistics gathering job had gathered stats on one of the tables involved. It took a while to spot this as the SQL would usually go rogue a while after the stats on the relevant tables had been gathered. This is because SQL statements are not invalidated when the underlying segments have their stats re-gathered by the automated job, they are invalidated "a little later". It seems on Oracle 10.2 to be within an hour of the gather but not always. {To be totally accurate, this delayed invalidation is due to the DBMS_STATS parameter "no_invalidate" defaulting to the value DBMS_STATS.AUTO_INVALIDATE, but it can be overridden if you wish}
What seemed to be happening, though I never 100% proved it {so I am guessing, do not take this as a fact} is that one table would have new stats and the range of values for a column would include recent data {let’s say values 1 to 10,000,000}. Another table had the old information and so the range of known values was different {1 to 9,200,000}. This discrepancy would cause the wrong plan to be chosen. {I have a wilder theory which is that the indexes for a table had a different range of values for a column as the table stats had, but there are problems with my theory}.
By the time I logged in to the system in the morning to fix the overnight performance problem, stats on all relevant tables had finished being gathered and prompting the code to re-parse was the solution.
That leads me to the last point {sorry, a very long and wordy post again}.
Sometimes you can’t purge the cursor. The execution plan is stuck. Why?
Let’s say you have a cursor that is executed say 100 times a minute. Normally each execution runs in less than 50 milliseconds. All is fine. It has now gone rogue and it is taking 5 seconds to run, as the execution plan is poor. Each execution completes but at any time there is always at least one session running the code, usually several.
A SQL cursor will not be flushed from the shared pool if it is in use. Thus this rogue cursor gets stuck in the SGA. You can’t alter any of the objects as you can never get an exclusive lock on them. You flush the shared pool in desperation {even on your live system} and the cursor stays there, as even alter system flush shared_pool will not flush out an in-flight cursor. You could try locking the table, but like the DML, you are likely never to get that lock.
In this situation you have to identify the application running the code, stop it and wait for all executions to finish. Maybe not an issue in my example of 5 seconds to complete, but I've had the same problem with code run every few minutes now taking over an hour, so we could not wait an hour for it to sort out. In that situation we also had to kill sessions.
Of course, stopping and starting the database will cure the problem but the business may not be too happy about the whole database being shut down. {This prompts a thought – I’ve never noticed this but I wonder if you get the same problem on RAC but only impacting one node?}
This new package may help with stuck execution plans in that you can just keep trying over and over again to flush the one cursor until you hit a moment when no current execution is running.


================


Troubleshooting performance problems is an art by itself, especially when the problem is transient and only shows up once in a while. A classic example of such a problem is an unlucky peeked bind variable causing a sub-optimal execution plan based on the first execution of a SQL statement containing a bind variable on a skewed column. The problem is twofold: first to find such a statement and second to make the problem to go away. The latter is what this posting is all about. In order to demonstrate the problem and the fix we need to have something to play around with.
SQL> create table skew(c1 number(6), c2 char(20));

Table created.

SQL> insert into skew select 1,1 from dual connect by level <= 10000;

10000 rows created.

SQL> update skew set c1 = 2 where rownum <= 10;

10 rows updated.

SQL> create index skew_idx on skew(c1);

Index created.

SQL> exec dbms_stats.gather_table_stats(null,'skew', -
> method_opt => 'for all columns size 2')

PL/SQL procedure successfully completed.

SQL> select c1, count(*) from skew group by c1;

        C1   COUNT(*)
---------- ----------
         1       9990
         2         10
We now have an indexed table with skewed data in it with current object statistics in place including a histogram on the skewed column. Lets execute a query using a bind variable on the skewed column and see what the query optimizer expects and what execution plan it considers optimal.
SQL> var x number
SQL> exec :x := 1;

PL/SQL procedure successfully completed.

SQL> select count(c2) from skew where c1 = :x;

 COUNT(C2)
----------
      9990

SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'basic rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(c2) from skew where c1 = :x

Plan hash value: 568322376

-------------------------------------------
| Id  | Operation          | Name | Rows  |
-------------------------------------------
|   0 | SELECT STATEMENT   |      |       |
|   1 |  SORT AGGREGATE    |      |     1 |
|   2 |   TABLE ACCESS FULL| SKEW |  9990 |
-------------------------------------------
The example above shows that the query optimizer predicted the cardinality correctly and chose the optimal execution plan based upon this information. It could do so because there is a histogram available to describe the data skew in the table. Now see what happens if we bind the value 2 and execute the query again:
SQL> exec :x := 2;

PL/SQL procedure successfully completed.

SQL> select count(c2) from skew where c1 = :x;

 COUNT(C2)
----------
        10

SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'basic rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(c2) from skew where c1 = :x

Plan hash value: 568322376

-------------------------------------------
| Id  | Operation          | Name | Rows  |
-------------------------------------------
|   0 | SELECT STATEMENT   |      |       |
|   1 |  SORT AGGREGATE    |      |     1 |
|   2 |   TABLE ACCESS FULL| SKEW |  9990 |
-------------------------------------------
Because the statement is not hard parsed again the same execution plan is used based on the prediction of 9990 rows. Because the query only returns 10 rows this execution plan is no longer optimal for the given value for the bind variable. If this query gets executed many times with this value of the bind variable we do have a performance problem for as long as this execution plan remains in the library cache. If this is indeed the case it might be beneficial to flush this cursor out of the shared pool. Starting with 10.2.0.4.0 this can be done using the PURGE procedure in the DBMS_SHARED_POOL package as demonstrated below:
SQL> @?/rdbms/admin/dbmspool

Package created.

Grant succeeded.

View created.

Package body created.

SQL> select address, hash_value from v$sqlarea
  2  where sql_text = 'select count(c2) from skew where c1 = :x';

ADDRESS  HASH_VALUE
-------- ----------
27308318 2934790721

SQL> exec sys.dbms_shared_pool.purge('&address, &hash_value','c')

PL/SQL procedure successfully completed.
Because the DBMS_SHARED_POOL package is not installed at database creation time, it has to be installed manually as shown above. The PURGE procedure needs the ADDRESS and HASH_VALUE of the cursor being flushed and the flag 'C' to indicate that we are flushing a cursor. This knowledge comes out of the dbmspool.sql script. The ADDRESS and HASH_VALUE can be retrieved from V$SQLAREA as shown in the example. A successful execution of the PURGE procedure indicates that the parent cursor is gone along with its children. A next execution of the query will force a hard parse and the creation of a new execution plan as we can see below:
SQL> select count(c2) from skew where c1 = :x;

 COUNT(C2)
----------
        10

SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'basic rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(c2) from skew where c1 = :x

Plan hash value: 3493361220

---------------------------------------------------------
| Id  | Operation                    | Name     | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT             |          |       |
|   1 |  SORT AGGREGATE              |          |     1 |
|   2 |   TABLE ACCESS BY INDEX ROWID| SKEW     |    10 |
|   3 |    INDEX RANGE SCAN          | SKEW_IDX |    10 |
---------------------------------------------------------
This time the query optimizer predicted the correct number of rows for the given value of the bind variable and selected the optimal execution plan for the given situation. The difficulty is of course to detect these situations before we can correct them. An indication could be a difference in the predicted number of rows and the actual number of rows in an execution plan, but therefore we need to set the STATISTICS_LEVEL parameter to ALL or add the GATHER_PLAN_STATISTICS hint to all possible affected statements which might be difficult to do. Once a possible affected statement has been found we can see the used bind value in the execution plan by using the PEEKED_BINDS options in the format specifier in the call to DBMS_XPLAN.
SQL> exec :x := 1;

PL/SQL procedure successfully completed.

SQL> select count(c2) from skew where c1 = :x;

 COUNT(C2)
----------
      9990

SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'basic rows peeked_binds'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(c2) from skew where c1 = :x

Plan hash value: 3493361220

---------------------------------------------------------
| Id  | Operation                    | Name     | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT             |          |       |
|   1 |  SORT AGGREGATE              |          |     1 |
|   2 |   TABLE ACCESS BY INDEX ROWID| SKEW     |    10 |
|   3 |    INDEX RANGE SCAN          | SKEW_IDX |    10 |
---------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - :X (NUMBER): 2
In this final example we bound the value 1 again and executed the query, which retrieved 9990 rows whilst the execution plan shows a prediction of only 10 rows. By using PEEKED_BINDS we receive extra information from DBMS_XPLAN telling us that this particular execution plan is based on the value 2 of the first bind variable in the statement, which is named ':x' and is a number data type.
Conclusion: By using the PURGE procedure in the DBMS_SHARED_POOL package we can flush a cursor out of the Library Cache when the execution plan causes performance problems due to an unlucky bind variable value. However this is only a temporary solution. The definitive solution is Adaptive Cursor Sharing which is introduced in Oracle11g.

========================================

Purging Cursors From the Library Cache Using Full_hash_value

Introduction: Purging cursors from the library cache is a useful technique to keep handy for troubleshooting. Oracle introduced a procedure call to do this in version 11, with backports to 10g. This has also been covered by several blogs, by Oracle Support (note 457309.1 for example), and in the package file itself at $ORACLE_HOME/rdbms/admin/dbmspool.sql.

Most of the examples and discussions in the links above use the following syntax:

SQL> exec sys.dbms_shared_pool.purge('&address, &hash_value','c')

What's new in 11.2:

A new (overloaded) procedure in dbms_shared_pool.purge is available in 11.2 and allows purging statements identified by their full_hash_value. One of the advantages compared to the previous method is that the full_hash_value is a property of a given SQL statement and does not depend on the memory address of the (parent) cursor. Note: this has been tested in 11.2.0.3 64-bit for Linux.


Example:

myapp_user_SQL> select /*MYTEST*/ sysdate from dual; -- put test SQL statement that we want to flush in the following
admin_user_SQL> select a.FULL_HASH_VALUE from V$DB_OBJECT_CACHE a where name='select /*MYTEST*/ sysdate from dual';  
-- find full_hash_value to be used in the next step
-- in this example the full_hash_value is 98d0f8fcbddf4095175e36592011cc2c
admin_user_SQL> exec sys.dbms_shared_pool.purge(HASH=>'98d0f8fcbddf4095175e36592011cc2c',namespace=>0,heaps=>1) 

Additional info:

full_hash_value is a 128-bit MD5 hash of the sql statement
A few methods to find full_hash_value given different input are listed here below:
  • find full_hash_value from cache, query v$db_object_cache 
    • select a.FULL_HASH_VALUE from V$DB_OBJECT_CACHE a where name='select /*MYTEST*/ sysdate from dual';
  • find full_hash_value from hash_value
    • select full_hash_value from  v$db_object_cache where hash_value=538037292
  • find full_hash_value from sql_id
    • find hash_value from sql_id using DBMS_UTILITY.SQLID_TO_SQLHASH
    • select  full_hash_value from  v$db_object_cache where hash_value= DBMS_UTILITY.SQLID_TO_SQLHASH('1frjqb4h13m1c');
  • compute full_hash_value from the SQL text
namespace=>0 means 'SQL AREA', which is the relevant namespace for cursors.

Conclusions:

We have discussed a method for purging cursors from the library cache that uses the full_hash_value of the cursor instead of the address and hash_value, which is the more common approach (and the only one documented in previous versions). The method discussed here is available in 11.2.

Thanks
Srini