Thursday 29 September 2022

How to run AutoConfig on the RUN and PATCH file systems in EBS R12.2

 

If you change any parameter in the context file in an R12.2 environment, the parameter must be changed in the context files of both the RUN and PATCH file systems, and then AutoConfig must be run on each.

Steps:


Stop all the Services


On RUN File System

1. Source the RUN file system environment.

2. Modify the Parameter in Context File

3. Run Autoconfig on RUN Filesystem

  cd $ADMIN_SCRIPTS_HOME

  ./adautocfg.sh

  

Important Steps

Disable the EBS LOGON TRIGGER

Connect as the SYSTEM user:

  conn system/manager

  alter trigger ebs_logon disable;

  

On Patch File System  

1. Source the PATCH file system environment.

2. Modify the Parameter in Context File

3. Run Autoconfig on PATCH Filesystem

  cd $ADMIN_SCRIPTS_HOME

  ./adautocfg.sh

 Important Steps

Enable the EBS LOGON TRIGGER

Connect as the SYSTEM user:

  conn system/manager

  alter trigger ebs_logon enable; 

 

Start all services  
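To reduce the risk of the two editions drifting apart, you can compare the changed variable across both context files before running AutoConfig. A minimal sketch: the two files and the s_webport variable are stand-ins created with mktemp here; substitute your real RUN and PATCH context file paths.

```shell
# Sketch: confirm a context variable has the same value in the RUN and PATCH
# context files before running AutoConfig. The two files and the s_webport
# variable are stand-ins; use your real context file paths.
run_ctx=$(mktemp)
patch_ctx=$(mktemp)
printf '<oa_var name="s_webport">8000</oa_var>\n' > "$run_ctx"
printf '<oa_var name="s_webport">8000</oa_var>\n' > "$patch_ctx"

# Pull the value of one oa_var out of a context file
get_var() { sed -n 's/.*name="'"$2"'">\([^<]*\)<.*/\1/p' "$1"; }

run_val=$(get_var "$run_ctx" s_webport)
patch_val=$(get_var "$patch_ctx" s_webport)

if [ "$run_val" = "$patch_val" ]; then
  echo "s_webport consistent across RUN and PATCH: $run_val"
else
  echo "MISMATCH: run=$run_val patch=$patch_val"
fi
rm -f "$run_ctx" "$patch_ctx"
```

Any mismatch found this way should be fixed in both files before adautocfg.sh is run.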


Thanks,

Srini


Wednesday 28 September 2022

Oracle R12.2 Patching bugs and fixups

 General

In this post I discuss fixes and workarounds for ADOP issues faced by our customers.


The following issues are covered:


FS_CLONE/Prepare issues due to a missing patch context file in the database

FS_CLONE/Prepare errors due to insufficient /usr/tmp space

FS_CLONE fails while creating the WebLogic domain (wlst)

Node Abandoned

Invalid Cross References

FS_CLONE/Prepare issue due to a missing patch context file in the database

As part of the cloning process, we sometimes skip running AutoConfig on the patch file system to save time when delivering a cloned instance, and this leaves the patch context file missing from the database. When ADOP runs, the database context files are validated; because the patch file system context file is not in the database, we get the following error.

Logfile Error details:


Validating configuration on node: [Appl_Host].


Log: /u01-DEV/app/appti/DEV/fs_ne/EBSapps/log/adop/98/20220131_021858/fs_clone/validate/Appl_Host

[UNEXPECTED]Error occurred running "perl /u01-DEV/app/appti/DEV/fs1/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPValidations.pl  -contextfile=/u01-DEV/app/appti/DEV/fs1/inst/apps/DEV_Appl_Host/appl/admin/DEV_Appl_Host.xml -patchctxfile=/u01-DEV/app/appti/DEV/fs2/inst/apps/DEV_Appl_Host/appl/admin/DEV_Appl_Host.xml -phase=fs_clone -logloc=/u01-DEV/app/appti/DEV/fs_ne/EBSapps/log/adop/98/20220131_021858/fs_clone/validate/Appl_Host -promptmsg=hide"

[UNEXPECTED]Error 1 occurred while Executing txkADOPValidation script on Appl_Host

Check Validation log:


NONPROD [appti@Appl_Host Appl_Host]$ cat txkADOPValidations.error
Use of uninitialized value $result in split at /u01-DEV/app/appti/DEV/fs1/EBSapps/appl/au/12.0.0/perl/TXK/ADOPValidationUtils.pm line 1294.
No such file or directory at /u01-DEV/app/appti/DEV/fs1/EBSapps/appl/au/12.0.0/perl/TXK/ADOPValidationUtils.pm line 230.


Issue: The patch file system context file is not present in FND_OAM_CONTEXT_FILES. Use the following query to check for the patch context file:


SQL> select * from FND_OAM_CONTEXT_FILES

where NAME not in ('TEMPLATE','METADATA','config.txt')

and CTX_TYPE='A'

and (status is null or upper(status) in ('S','F'))

and EXTRACTVALUE(XMLType(TEXT),'//file_edition_type') = 'patch';

no rows selected


Fix:


Upload Patch Context to DB using the following command:


Source the RUN file system environment, then upload the patch context file:

$ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=<full path of patch context file> logfile=<full path of logfile>

Example:

$ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=/u01-DEV/app/appti/DEV/fs2/inst/apps/DEV_Appl_Host/appl/admin/DEV_Appl_Host.xml logfile=/tmp/patch_context_upload.log


SQL> select path from FND_OAM_CONTEXT_FILES

where NAME not in ('TEMPLATE','METADATA','config.txt')

and CTX_TYPE='A'

and (status is null or upper(status) in ('S','F'))

and EXTRACTVALUE(XMLType(TEXT),'//file_edition_type') = 'patch';


PATH

/u01-DEV/app/appti/DEV/fs2/inst/apps/DEV_Appl_Host/appl/admin/DEV_Appl_Host.xml


FS_CLONE should now complete successfully.


FS_CLONE/Prepare errors due to insufficient /usr/tmp space

Due to insufficient /usr/tmp space on the host, you may encounter the following error while running adop fs_clone or during the prepare phase.

Error details: START: Creating FMW archive.


Running /u01-DEV/app/appti/DEV/fs1/FMW_Home/oracle_common/bin/copyBinary.sh -javaHome /u01-DEV/app/appti/DEV/fs1/EBSapps/comn/adopclone_Appl_Host/FMW/t2pjdk -al /u01-DEV/app/appti/DEV/fs1/EBSapps/comn/adopclone_Appl_Host/FMW/FMW_Home.jar -smw /u01-DEV/app/appti/DEV/fs1/FMW_Home -ldl /u01-DEV/app/appti/DEV/fs1/inst/apps/DEV_Appl_Host/admin/log/clone/fmwT2PStage -invPtrLoc /etc/oraInst.loc -silent true -debug true
Script Executed in 27558 milliseconds, returning status 255
ERROR: Script failed, exit code 255

Issue: Insufficient space in /usr/tmp. A minimum of 15 GB of free space is required.


Fix:

Create a directory on a file system with at least 16 GB of free space and point the temporary-directory variables at it:

mkdir -p /u02-DEV/app/appti/DEV/temp

T2P_JAVA_OPTIONS="-Djava.io.tmpdir=/u02-DEV/app/appti/DEV/temp"

TMP=/u02-DEV/app/appti/DEV/temp

TEMP=/u02-DEV/app/appti/DEV/temp

TMPDIR=/u02-DEV/app/appti/DEV/temp

export T2P_JAVA_OPTIONS

export TMP

export TEMP

export TMPDIR

Now rerun FS_CLONE
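Before rerunning FS_CLONE you can confirm that the chosen temporary location really has the required headroom. A small sketch, assuming a POSIX df; the 15 GB threshold comes from the note above:

```shell
# Sketch: check free space in the temp location before rerunning fs_clone.
# 15 GB minimum per the note above; tmp_dir is wherever TMPDIR points.
need_kb=$((15 * 1024 * 1024))   # 15 GB expressed in KB
tmp_dir=${TMPDIR:-/tmp}

# POSIX df: column 4 of the second output line is available space in KB
free_kb=$(df -Pk "$tmp_dir" | awk 'NR==2 {print $4}')

if [ "$free_kb" -ge "$need_kb" ]; then
  echo "OK: $tmp_dir has ${free_kb} KB free"
else
  echo "WARN: $tmp_dir has only ${free_kb} KB free; set TMPDIR to a bigger file system"
fi
```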


FS_CLONE fails while creating the WebLogic domain (wlst)

ADOP FS_CLONE/Prepare errors out while creating the WebLogic domain (via wlst) due to insufficient JVM memory.


Error details:

.jar -server_start_mode=prod -log=/u01-DEV/app/appti/DEV/fs1/inst/apps/DEV_Appl_Host/logs/appl/rgf/TXK/CLONINGCLIENT7100708042842072347/unpack.log -log_priority=debug
<< read template from "/u01-DEV/app/appti/DEV/fs1/inst/apps/DEV_Appl_Host/logs/appl/rgf/TXK/CLONINGCLIENT7100708042842072347/packed_template.jar"
succeed: read template from "/u01-DEV/app/appti/DEV/fs1/inst/apps/DEV_Appl_Host/logs/appl/rgf/TXK/CLONINGCLIENT7100708042842072347/packed_template.jar"
<< set config option ServerStartMode to "prod"
succeed: set config option ServerStartMode to "prod"
<< write Domain to "/u01-DEV/app/appti/DEV/fs2/FMW_Home/user_projects/domains/EBS_domain"
Exception in thread "Thread-1" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:2694)
        at java.lang.String.<init>(String.java:203)
        at java.io.BufferedReader.readLine(BufferedReader.java:349)
        at java.io.BufferedReader.readLine(BufferedReader.java:382)
        at com.oracle.cie.common.util.CRLF.readData(CRLF.java:129)
        at com.oracle.cie.common.util.CRLF.processFile(CRLF.java:67)
        at com.oracle.cie.common.util.CRLF.process(CRLF.java:57)
        at com.oracle.cie.domain.util.stringsub.SubsScriptHelper.processCRLF(SubsScriptHelper.java:167)
        at com.oracle.cie.domain.DomainGenerator.generate(DomainGenerator.java:478)
        at com.oracle.cie.domain.script.ScriptExecutor$2.run(ScriptExecutor.java:2991)


Issue:

Insufficient JVM heap size to complete fs_clone

Fix:

export CONFIG_JVM_ARGS="-Xmx2048m  -Xms1024m -XX:MaxPermSize=512m -XX:-UseGCOverheadLimit"

Run FS_CLONE
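CONFIG_JVM_ARGS is read by the WebLogic configuration/unpack tooling, so it must be exported in the same shell session that runs adop. A quick sanity-check sketch; the flag values mirror the fix above and should be tuned for your host:

```shell
# Sketch: export the JVM options read by the WLS config/unpack tools, then
# sanity-check them before rerunning fs_clone. Values mirror the fix above.
export CONFIG_JVM_ARGS="-Xmx2048m -Xms1024m -XX:MaxPermSize=512m -XX:-UseGCOverheadLimit"

case "$CONFIG_JVM_ARGS" in
  *-Xmx*) echo "max heap flag present: $CONFIG_JVM_ARGS" ;;
  *)      echo "max heap flag missing" ;;
esac
```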

Node Abandoned

When a node is abandoned during adop FS_CLONE/PREPARE, you can recover it as follows.


Note: Try this workaround only for issues in the FS_CLONE or PREPARE phase. If the issue occurs in the Cutover phase in a multi-node environment, follow the standard process of deleting and re-adding the abandoned node.

Error:

Checking for existing adop sessions.

No pending session exists.

Starting new adop session.

[ERROR] Nodes Appl_Host_Admin,Appl_Host_EXternal were abandoned in previous patching cycle

[ERROR] To recover these nodes, follow the instructions in My Oracle Support Knowledge Document 1677498.1

[UNEXPECTED]Unrecoverable error occurred. Exiting current adop session.


Fix:


a) Get the current adop_session_id from the ad_adop_sessions table:

   select * from ad_adop_sessions order by adop_session_id desc;

b) Take a backup of ad_adop_sessions:

   create table applsys.ad_adop_sessions_bkp21jan22 as select * from ad_adop_sessions;

c) Clear the abandon flag:

   update ad_adop_sessions set abandon_flag=null where adop_session_id=156;

Then run adop -validate to check that all nodes validate successfully.


Invalid Cross References

ValidateOHSConfigFilesForCrossRef …

    WARNING: File - /u01-DEV/app/appti/DEV/fs2/FMW_Home/webtier/instances/EBS_web_OHS1/bin/opmnctl contains cross reference in it.

    WARNING: File - /u01-DEV/app/appti/DEV/fs2/FMW_Home/webtier/instances/EBS_web_OHS1/bin/opmnctl contains cross reference in it.

    WARNING: File - /u01-DEV/app/appti/DEV/fs2/FMW_Home/webtier/instances/EBS_web_OHS1/bin/opmnctl contains cross reference in it.

   Corrective Action: Contact Oracle Support to identify the best course of action.

Check the ADOP validation log in $ADOP_LOG_HOME/98/20220205_063351/validate/<Appl_Host>

Fix:

Correct the opmnctl file in $FMW_HOME/webtier/instances/EBS_web_OHS1/bin so that it references the correct file system.

Take a backup before making changes.


Thanks,

Srini

Patch scripts in EBS 12.2

 How To Check if a Patch is Applied in 12.2.x using SQL*PLUS


EBS 12.2 introduced online patching. During a patching cycle, if a patch was applied with adop phase=apply and we later aborted it before the cycle completed, the AD_BUGS and AD_APPLIED_PATCHES tables are not updated, so we cannot rely on querying those two tables. As per MOS note Doc ID 1963046.1, use the query below to find out whether a patch has been applied.


**************************************************

select AD_PATCH.IS_PATCH_APPLIED('$release','$appltop_id','$patch_no','$language')

from dual;



Example sql using the APPL_TOP ID for a specific node in a multinode environment (1045 is the APPL_TOP ID):


SELECT adb.bug_number,ad_patch.is_patch_applied('R12', 1045, adb.bug_number)

FROM ad_bugs adb

WHERE adb.bug_number in (20034256);


or for single app tier installations:


select ad_patch.is_patch_applied('R12',-1,20034256) from dual;


expected results:

EXPLICIT = applied

NOT APPLIED = not applied / aborted
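For scripted checks it can help to assemble the query for a given patch number. A sketch that only builds the statement; actually executing it requires sqlplus, APPS credentials, and a sourced EBS environment (all assumed, not shown):

```shell
# Sketch: assemble (not execute) the is_patch_applied check for one patch.
# Running it needs sqlplus, APPS credentials, and a sourced EBS environment.
patch_no=20034256
sql="select ad_patch.is_patch_applied('R12',-1,${patch_no}) from dual;"
echo "$sql"
# To run it, something like (credentials are placeholders):
#   echo "$sql" | sqlplus -s apps/<apps_password>
```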



Also don't forget to complete the APPL_TOP snapshot update using the adadmin option.

***********************************************************************


ADOP Online patching concepts


adop phases:


prepare       : Prepare the instance for online patching.

apply         : Apply patch(es) to the Patch Edition.

finalize      : Ready the instance for cutover.

abort         : Abort the patching cycle.

cutover       : Promote the Patch Edition to Run Edition.

cleanup       : Drop obsolete objects and seed data from Old Editions.



actualize_all : Actualize all objects in the Patch Edition.

cleanup_full  : Cleanup and drop Old Editions.

abandon       : yes|no - Abandon failed patches.




adop patch log directory:


<INSTALL BASE>/fs_ne/EBSapps/log/adop
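Each adop session logs under a numeric session-id directory, so the highest id is the most recent session. A sketch that simulates the layout with a temporary tree; in practice, point log_root at <INSTALL BASE>/fs_ne/EBSapps/log/adop:

```shell
# Sketch: find the most recent adop session log directory. Session ids are
# numeric and increasing, so the highest id is the latest session. The real
# root is <INSTALL BASE>/fs_ne/EBSapps/log/adop; a temp tree stands in here.
log_root=$(mktemp -d)
mkdir -p "$log_root/97/20220130_010101" "$log_root/98/20220131_021858"

latest_session=$(ls "$log_root" | sort -n | tail -1)
echo "latest adop session: $latest_session"
echo "logs under: $log_root/$latest_session"
rm -rf "$log_root"
```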



adop patch process cycle steps:


Download any required technology patches and unzip the contents. The patch contents may be unzipped into  $NE_BASE/EBSapps/patch.


1. Prepare the system for patching.


   source <EBS_ROOT>/EBSapps.env run


   $ adop phase=prepare


2. Apply technology patches to the Oracle Home under the Patch f/s using the information below.


   source <EBS_ROOT>/EBSapps.env patch


3. Apply any Oracle E-Business Suite patches planned for this patching cycle


   $ adop phase=apply patches=<patch_list>


4. After all patches have been successfully applied, complete the patching cycle.


   $ adop phase=finalize

   $ adop phase=cutover


Note: The steps below can be done while the applications are up and running.


5. source <EBS_ROOT>/EBSapps.env run

 

   $ adop phase=cleanup


6. To complete the process and synchronize the technology level between patch and run f/s.


   $ adop phase=fs_clone
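The cycle above boils down to a fixed ordering of phases. A sketch that just prints the commands in order (running them needs a sourced EBS environment; <patch_list> stays a placeholder):

```shell
# Sketch: the standard online patching cycle, printed as the ordered adop
# commands (not executed -- that requires a sourced EBS environment).
phases="prepare apply finalize cutover cleanup"
for p in $phases; do
  if [ "$p" = "apply" ]; then
    echo "adop phase=apply patches=<patch_list>"
  else
    echo "adop phase=$p"
  fi
done
echo "adop phase=fs_clone   # optional: resync run and patch file systems"
```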




adop hotpatch steps (a hotpatch cannot be aborted, and abandon cannot be used):


A hotpatch can be applied directly on the run file system:


$ adop phase=apply patches=<patch_list> hotpatch=yes


After the hotpatch, run phase=cleanup and phase=fs_clone to sync the run and patch file systems in preparation for the next patching cycle.




adop re-apply patch forcefully:


If we try to re-apply a patch that has already been applied, adop terminates with the message below, and by default it takes N at the prompt. To overcome this, re-apply the patch with options=forceapply.

This Patch seems to have been applied already.

Would you like to continue anyway  [N] ? N



$ adop phase=apply patches=<patch list> hotpatch=yes options=forceapply




adop: dealing with the "Continue As If It Were Successful" error:




$ adop phase=apply patches=<patch list> abandon=no restart=yes flags=autoskip




To define workers in adop:




$ adop phase=apply patches=<patch list> workers=5




To define patchtop in adop:




$ adop phase=apply patches=<patch list> patchtop=<patch location base>




adop merge patch:




$ adop phase=apply patches=<patch list> merge=yes




Restarting adop From A Failed Session:





 $ adop phase=abort

 $ adop phase=cleanup cleanup_mode=full

 $ adop phase=fs_clone


Then reapply the patch




adop apply for language patch:




$ adop phase=apply patches=18023722_ESA:u18023722.drv




adop non-interactive with patch top and define driver:






$ adop phase=apply options=nocopyportion patchtop=$XLA_TOP/patch/115 patches=driver:xla5584908.drv




adop Steps to follow to skip the failed workers:




1. Use adctrl and select hidden option 8 (it is not shown in the menu) to skip the failed jobs.

2. Restart adop using "restart=yes" parameter


** If many jobs have failed, it is better to restart the patch with the autoskip option.

i.e. adop restart=no abandon=yes flags=autoskip

This command restarts the patch from the beginning and skips any failures that occur. Make sure to review the log file after the patch completes, to confirm that only the failures you intended to skip were skipped.





Weblogic Server Smart Update Patching :


Steps to apply weblogic server smart update patch:



Refer Note:

How to Apply WebLogic Server (WLS) Patches Using Smart Update [Video] (Doc ID 876004.1)


1.Download the patch

2.Copy the files (for example, E5W8.jar and WGQJ.jar) and the patch-catalog_xxx.xml from the zip file to the target machine. You do not need the readme file.

Path: $FMW_HOME/utils/bsu/cache_dir


3.cd $FMW_HOME/utils/bsu


To install patch:

./bsu.sh -prod_dir=$FMW_HOME/wlserver_10.3 -patchlist=7FC9 -verbose -install


To Verify:

./bsu.sh -prod_dir=$FMW_HOME/wlserver_10.3 -status=applied -verbose -view | grep 7FC9


To Rollback:

./bsu.sh -remove -patchlist=7FC9 -prod_dir=$FMW_HOME/wlserver_10.3 -verbose


We can also apply the Smart Update patch in graphical (GUI) mode.





Steps to apply opatch on FMW Web Tier HOME :



Set the Environment as below (replace <INSTALL_BASE> path as required):


$ export ORACLE_HOME=$FMW_HOME/webtier

$ export PATH=$ORACLE_HOME/OPatch:$PATH


$ opatch lsinventory



Apply opatch:


$ opatch apply





Steps to apply opatch on FMW oracle_common HOME :



Set the Environment as below:


$ export ORACLE_HOME=$FMW_HOME/oracle_common

$ export PATH=$ORACLE_HOME/OPatch:$PATH


$ opatch lsinventory


Apply opatch:


$ opatch apply





Queries :


 Query to check if a patch is applied in Oracle EBS R12.2.x:


In Oracle E-Business Suite (EBS) R12.2.x you cannot simply query the AD_BUGS table to check whether patches have been applied.



The AD_BUGS table may contain entries for patches that were applied but whose patching cycle was later aborted (so they are not really applied).





The way to check whether a patch is really applied is to use the AD_PATCH.IS_PATCH_APPLIED PL/SQL function.


Usage:


select AD_PATCH.IS_PATCH_APPLIED('$release','$appltop_id','$patch_no','$language')

from dual;


Example sql:


SELECT adb.bug_number,ad_patch.is_patch_applied('11i', 1045, adb.bug_number)

FROM ad_bugs adb

WHERE adb.bug_number in (20034256);


or for single app tier installations:


select ad_patch.is_patch_applied('R12',-1,20034256) from dual;


Expected results:


EXPLICIT = applied

NOT APPLIED = not applied / aborted


Note: If you are sure the patch is applied but it shows as not applied, use the following workaround.



1. Start adadmin after sourcing the RUN file system environment.

2. Select "2. Maintain Applications Files menu" in "AD Administration Main Menu".

3. In "Maintain Applications Files", select "4. Maintain snapshot information".

4. Select "2. Update current view snapshot" in the "Maintain Snapshot Information".

5. Select "1. Update Complete APPL_TOP" in the "Maintain Current View Snapshot Information".



Query to check current AD patchset:


SELECT a.application_short_name, b.patch_level

FROM fnd_application_vl a,fnd_product_installations b

WHERE a.application_id = b.application_id

  and application_short_name = 'AD';




Query to check patches applied correctly and in the expected sequence:




1.1. Run this SQL statement:


   select * from ad_adop_session_patches order by end_date desc;


1.2. Run this piece of sql code:


   set pagesize 200;

   set linesize 160;

   column adop_session_id format 999999999999;

   column bug_number format a15;

   column status format a15;

   column applied_file_system_base format a23;

   column patch_file_system_base format a23;

   column adpatch_options format a15;

   column node_name format a15;

   column end_date format a15;

   column clone_status format a15;


   select ADOP_SESSION_ID, BUG_NUMBER, STATUS, APPLIED_FILE_SYSTEM_BASE, PATCH_FILE_SYSTEM_BASE, ADPATCH_OPTIONS, NODE_NAME, END_DATE, CLONE_STATUS

   from ad_adop_session_patches

   order by end_date desc;



Below are the possible values of the STATUS column:

N - Not applied on the current node, but applied on other nodes

R - Patch application in progress

H - Patch failed in the middle (hard failure)

F - Patch failed in the middle, but the user tried to skip some failures

S - Patch application succeeded after skipping the failed jobs

Y - Patch application succeeded

C - Reserved for clone and config_clone; indicates clone completed
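When reading this query's output in scripts, a small lookup for the STATUS codes can help. A sketch; the wording follows the list above:

```shell
# Sketch: decode the STATUS column values listed above.
adop_patch_status() {
  case "$1" in
    N) echo "Not applied on this node (applied on other nodes)" ;;
    R) echo "Patch application in progress" ;;
    H) echo "Patch failed (hard failure)" ;;
    F) echo "Patch failed; user tried to skip some failures" ;;
    S) echo "Succeeded after skipping failed jobs" ;;
    Y) echo "Succeeded" ;;
    C) echo "Clone/config_clone completed" ;;
    *) echo "Unknown status: $1" ;;
  esac
}

adop_patch_status Y
adop_patch_status H
```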




Query to Check AD and TXK C Patch levels:




SELECT codelevel FROM AD_TRACKABLE_ENTITIES WHERE abbreviation in ('txk','ad');




Restarting adop


1. If you have shut down the workers, or if adop quits while performing processing actions, it saves all the actions completed up to that point in restart files. Investigate and resolve the problem that caused the failure, then restart adop. After you restart adop, it will ask whether you want to continue with the previous session (from the point where processing stopped) or start a new session.

To restart from the beginning, use restart=no abandon=yes.

To resume a failed patch, use restart=yes abandon=no.
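The two restart rules are easy to mix up, so a tiny helper that maps intent to flags can be handy. A sketch; the "fresh" and "resume" names are my own labels, not adop terms:

```shell
# Sketch: map restart intent to the adop flags described above. The names
# "fresh" and "resume" are illustrative labels, not adop options.
adop_restart_flags() {
  case "$1" in
    fresh)  echo "restart=no abandon=yes" ;;   # start the patch from the beginning
    resume) echo "restart=yes abandon=no" ;;   # continue the failed patch
    *)      echo "unknown intent: $1" ;;
  esac
}

echo "adop $(adop_restart_flags fresh) phase=apply"
```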


e.g. adop restart=no abandon=yes phase=apply

If adop still cannot be restarted:

2. Check the contents of the AD_ADOP_SESSIONS and AD_ADOP_SESSION_PATCHES tables:

---------------------------------------------------------------------------------------

2.1

column id format 99

column nn format a10

column nt format a6

select adop_session_id id, prepare_status pr, apply_status ap,

finalize_status fi, cutover_status cu, cleanup_status cl, abort_status ab, status st, node_name nn, node_type nt

from ad_adop_sessions

order by adop_session_id desc;


2.2

column id format 99

column bn format a12

column pr format 99999999

column afs format a40

column pfs format a40

column cs format a9

column nn format a10

column ed format a28

column drv format a28

column pt format a35


select adop_session_id id, bug_number bn, patchrun_id pr, status st,

node_name nn, cast(end_date as timestamp) ed, driver_file_name drv, patch_top  pt

from ad_adop_session_patches

order by end_date desc;


2.3 Explanation of the example output above

=========================

Examining the AD_ADOP_SESSIONS table:

 ID P A F C C A S NN         NT

 --- - - - - - - - ---------- ------

3 X P N X N X F test2    master

2 R N N N N Y C test2    master

It looks like the first ADOP cycle (prepare phase) was attempted in session ID 2; for some reason it did not finish, and an abort was then initiated, which did finish. A new ADOP session (ID 3) was attempted via hotpatch, and it failed. Since this table only records ADOP activity, there is no information about adpatch usage, i.e. whether it was used before the first ADOP cycle, between sessions 2 and 3, or after session 3.




2.4 To diagnose the root cause of the patch application failure, check the adop log for the failed patch xxxx:

cd $APPL_TOP_NE/../log/adop/<session id>

e.g.

/d01/EBS/VIS/fs_ne/EBSapps/log/adop/<session id>/timestamp/context_name/xxxx/log


3. If you just want to start from a clean slate, then given the state of these tables, I would suggest manually updating the AD_ADOP_SESSIONS table so that the incomplete hotpatch session does not hold up ADOP execution.

Do the following:

update ad_adop_sessions set status='C' where adop_session_id=3;


4. Restart adop and retest.


How to Synchronize the AD tables FND_NODES, ADOP_VALID_NODES, and FND_OAM_CONTEXT_FILES in 12.2 when adop fails. (Doc ID 2064223.1)


thanks,
Srini

Monday 19 September 2022

Rman backup Management

Dear all,


An essential part of a backup and recovery strategy is managing backups after you create them. Backup management includes deleting obsolete backups and performing periodic checks to ensure that backups are available and usable.

A backup recorded in the Oracle Recovery Manager (RMAN) repository has one of the following status values:

Available, meaning that the backup is still present on disk or tape, as recorded in the repository

Expired, meaning that the backup no longer exists on disk or tape, but is still listed in the repository

Unavailable, meaning that the backup is temporarily not available for data recovery operations (because, for example, it is stored on a tape that is stored offsite or on a disk that is currently not mounted)

Backups can also be obsolete. An obsolete backup is, based on the currently configured retention policy, no longer needed to satisfy data recovery goals.

Maintenance tasks that you can perform in RMAN include the following:

Viewing details about your backups

Cross-checking your repository, which means checking whether backups listed in the repository exist and are accessible, and marking as expired any backups not accessible at the time of the cross-check

Deleting the record of expired backups from your RMAN repository

Deleting obsolete backups from the repository and from the backup media

Validating backups to ensure that a given backup is available and not corrupted


Cross-Checking Backups  :

LIST BACKUP SUMMARY;

CROSSCHECK BACKUPSET 1345;

CROSSCHECK DATAFILECOPY 1,5;

CROSSCHECK BACKUP;


Deleting Expired Backups  : 

Deleting expired backups removes from the Oracle Recovery Manager (RMAN) repository those backups that are listed as EXPIRED. Expired backups are those found to be inaccessible during a cross-check. No attempt is made to delete the files containing the backup from disk or tape; this action updates only the RMAN repository.


CROSSCHECK BACKUPSET;

DELETE EXPIRED BACKUP;

LIST BACKUP SUMMARY;

DELETE OBSOLETE;

SELECT * FROM V$RECOVERY_FILE_DEST;

SELECT * FROM V$RECOVERY_AREA_USAGE;
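These maintenance commands can be collected into an RMAN command file and run non-interactively. A sketch that only generates and inspects the script; executing it assumes an RMAN client and an OS-authenticated target database, and it adds NOPROMPT (not shown in the commands above) so DELETE does not ask for confirmation:

```shell
# Sketch: write the maintenance commands into an RMAN command file. NOPROMPT
# (an addition, not in the text above) suppresses confirmation prompts so the
# script can run unattended. The script is only generated and inspected here.
rcv=$(mktemp)
cat > "$rcv" <<'EOF'
CROSSCHECK BACKUP;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT OBSOLETE;
LIST BACKUP SUMMARY;
EOF

cmd_count=$(grep -c ';' "$rcv")
echo "wrote $cmd_count RMAN commands to $rcv"
# To execute (assumes OS-authenticated sysdba):
#   rman target / cmdfile="$rcv"
rm -f "$rcv"
```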

Background Processes In ORACLE.

Dear all,

A multiprocessor Oracle database system uses background processes. Background processes are the processes running behind the scene and are meant to perform certain maintenance activities or to deal with abnormal conditions arising in the instance. Each background process is meant for a specific purpose and its role is well defined.

Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database processes to provide increased parallelism for better performance and reliability.

A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the pname column.

Not all background processes are mandatory for an instance; some are mandatory and some are optional. The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and are invoked only if the corresponding feature is activated.

Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, these run as separate threads within the same service. Any issues related to background processes should be monitored and analysed from the trace files generated and the alert log.

Background processes are started automatically when the instance is started.

 


To find out background processes from database:

SQL> select SID, PROGRAM from v$session where TYPE='BACKGROUND';

To find out background processes from OS:

$ ps -ef | grep ora_ | grep <SID>
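The OS-level check above can be turned into a quick test for the mandatory processes. A sketch that uses canned output as a stand-in for `ps -ef | grep ora_`; the ORCL SID and the process list shown are illustrative:

```shell
# Sketch: verify the mandatory background processes appear in ps output.
# The here-doc is canned sample output standing in for `ps -ef | grep ora_`;
# the ORCL SID is a placeholder.
sample_ps=$(cat <<'EOF'
oracle  2001  1  0  ora_pmon_ORCL
oracle  2002  1  0  ora_smon_ORCL
oracle  2003  1  0  ora_dbw0_ORCL
oracle  2004  1  0  ora_lgwr_ORCL
oracle  2005  1  0  ora_ckpt_ORCL
oracle  2006  1  0  ora_reco_ORCL
EOF
)

missing=""
for p in pmon smon dbw0 lgwr ckpt reco; do
  echo "$sample_ps" | grep -q "ora_${p}_" || missing="$missing $p"
done

if [ -z "$missing" ]; then
  echo "all mandatory background processes present"
else
  echo "missing:$missing"
fi
```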

 Mandatory Background Processes in Oracle 

DBWn 

LGWR 

CKPT 

SMON 

PMON 

RECO

 


If any of these mandatory background processes is killed or stops running, the instance will abort.


1) Database Writer (maximum 36) DBW0-DBW9,DBWa-DBWj


Whenever a log switch occurs and a redo log file moves from the CURRENT to the ACTIVE state, Oracle calls DBWn and synchronizes all the dirty blocks in the database buffer cache to the respective datafiles, in scattered (random) writes.


DBWn will be invoked in following scenarios:

When the number of dirty blocks in the SGA reaches a threshold value, Oracle calls DBWn.

When the database is shutting down with some dirty blocks in the SGA, then oracle calls DBWn.

DBWn has a time out value (3 seconds by default) and it wakes up whether there are any dirty blocks or not.

When a checkpoint is issued.

When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.

When a large table must be loaded into the SGA and Oracle cannot find enough free space, it decides to flush out least recently used (LRU) blocks, and some of those may be dirty blocks. Before flushing out the dirty blocks, Oracle calls DBWn.

When Table Dropped or Truncated.

When tablespace is going to OFFLINE/READ ONLY/BEGIN BACKUP.

2) Log Writer (maximum 1) LGWR


LGWR writes redo data from redo log buffers to (online) redolog files, sequentially.

Redolog file contains changes to any datafile. The content of the redolog file is file id, block id and new content.

LGWR is invoked far more often than DBWn, as log files are small compared to datafiles (KB vs GB). For every small update we do not want to open gigabytes of datafiles; instead, the change is written to the log file.

Redolog file has three stages CURRENT, ACTIVE, INACTIVE and this is a cyclic process. Newly created redolog file will be in UNUSED state.

LGWR will be invoked in following scenarios:


LGWR is invoked whenever 1/3rd of the redo buffer is filled up.

Whenever the log writer times out (3sec).

Whenever 1MB of redolog buffer is filled (This means that there is no sense in making the redolog buffer more than 3MB).

Shutting down the database.

Whenever checkpoint event occurs.

When a transaction is completed (either committed or rollbacked) then oracle calls the LGWR and synchronizes the log buffers to the redolog files and then only passes on the acknowledgement back to the user. Which means the transaction is not guaranteed although we said commit, unless we receive the acknowledgement. When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log writer puts a commit record in the redolog buffer and writes it to disk immediately along with the transaction’s redo entries. Changes to actual data blocks are deferred until a convenient time (Fast-Commit mechanism).

When DBWn signals the writing of redo records to disk. All redo records associated with changes in the block buffers must be written to disk first (The write-ahead protocol). While writing dirty buffers, if the DBWn process finds that some redo information has not been written, it signals the LGWR to write the information and waits until the control is returned.

3) Checkpoint (maximum 1) CKPT


Checkpoint is a background process which triggers the checkpoint event, to synchronize all database files with the checkpoint information. It ensures data consistency and faster database recovery in case of a crash.


When a checkpoint occurs, DBWn is invoked, and the SCN in the headers of all datafiles and in the control file is updated with the current SCN. This SCN is called the checkpoint SCN.


Checkpoint event can be occurred in following conditions:


Whenever the database buffer cache fills up.

Whenever the checkpoint times out (3 seconds until 9i, 1 second from 10g).

Log switch occurred.

Whenever manual log switch is done.

SQL> ALTER SYSTEM SWITCH LOGFILE;

Manual checkpoint.

SQL> ALTER SYSTEM CHECKPOINT;

Graceful shutdown of the database.

Whenever BEGIN BACKUP command is issued.

When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds), exists between the incremental checkpoint and the tail of the log.

When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL, exists between the incremental checkpoint and the tail of the log.

The number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform roll-forward is reached.

From Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET (in seconds), which specifies the target time for crash recovery, is reached. FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these parameters can still be used.

4) System Monitor (maximum 1) SMON :


If the database crashed (e.g. power failure), then the next time we restart it, SMON observes that the database was not shut down gracefully and performs what is known as INSTANCE CRASH RECOVERY. During crash recovery, before the database is fully open, any transaction found committed in the redolog files but not yet written to the datafiles is applied from the redolog files to the datafiles.

If SMON observes an uncommitted transaction that has already updated a table in the datafile, it is treated as an in-doubt transaction and is rolled back with the help of the before image available in the rollback segments.

SMON also cleans up temporary segments that are no longer in use.

It also coalesces contiguous free extents in dictionary managed tablespaces that have PCTINCREASE set to a non-zero value.

In a RAC environment, the SMON process of one instance can perform instance recovery for other instances that have failed.

SMON wakes up about every 5 minutes to perform housekeeping activities.

5) Process Monitor (maximum 1) PMON


If a client has an open transaction that is no longer active (the client session is closed), PMON comes into the picture: the transaction becomes an in-doubt transaction and is rolled back.

PMON is responsible for performing recovery if a user process fails. It will rollback uncommitted transactions. If the old session locked any resources that will be unlocked by PMON.

PMON is responsible for cleaning up the database buffer cache and freeing resources that were allocated to a process.

PMON also registers information about the instance and dispatcher processes with Oracle (network) listener.

PMON also checks the dispatcher & server processes and restarts them if they have failed.

PMON wakes up every 3 seconds to perform housekeeping activities.

 

6) Recoverer (maximum 1) RECO [optional in earlier releases; mandatory from Oracle 10g]

This process is intended for recovery in distributed databases. The distributed transaction recovery process finds pending distributed transactions and resolves them. All in-doubt transactions are recovered by this process in the distributed database setup. RECO will connect to the remote database to resolve pending transactions.

Optional Background Processes In ORACLE 


Archiver (maximum 10) ARC0-ARC9 :

The ARCn process is responsible for copying the online redo log files to the configured archive log destination after a log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is enabled. The log writer process starts additional ARCn processes when the workload increases. A redo log file is not released back to the log writer for overwriting until ARCn has finished copying it.
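
To confirm whether ARCn will be active, you can check the archiving mode of the database, for example:

  -- Check whether the database runs in archivelog mode
  SELECT log_mode FROM v$database;

  -- Or, from SQL*Plus as SYSDBA:
  ARCHIVE LOG LIST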

Coordinated Job Queue Processes (maximum 1000) CJQ0/Jnnn:

Job queue processes carry out batch processing. All scheduled jobs are executed by these processes. The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can be run concurrently. These processes will be useful in refreshing materialized views.

CJQ0 –

The job queue coordinator process wakes up periodically and checks the job log. If a job is due, it spawns Jnnn processes to handle it.
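
The number of Jnnn slaves is capped by the JOB_QUEUE_PROCESSES parameter; a typical check and adjustment looks like this (the value 10 is just an example):

  SHOW PARAMETER job_queue_processes

  -- Example only: allow up to 10 concurrent job slave processes
  ALTER SYSTEM SET job_queue_processes = 10 SCOPE = BOTH;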

Dedicated Server

Dedicated server processes are used when MTS is not used. Each user process gets a dedicated connection to the database. These user processes also handle disk reads from database datafiles into the database block buffers.

LISTENER

The LISTENER process listens for connection requests on a specified port and passes them either to a dispatcher process if MTS is configured, or to a dedicated server process if MTS is not used. The listener is also responsible for load balancing and failover in case a RAC instance fails or is overloaded.

CALLOUT Listener

Used by internal processes to make calls to externally stored procedures. 

Lock Monitor (maximum 1) LMON

Lock monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances are started or shutdown. Lock monitor also recovers instance lock information prior to the instance recovery process. Lock monitor co-ordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.

Lock Manager Daemon (maximum 10) LMDn:

LMDn processes manage instance locks that are used to share resources between instances. LMDn processes also handle deadlock detection and remote lock requests.

Lock processes (maximum 10) LCK0- LCK9

The instance locks that are used to share resources between instances are held by the lock processes.

Block Server Process (maximum 10) BSP0-BSP9

Block server processes provide a consistent-read image of a buffer that is requested by a process on another instance, in certain circumstances.

Queue Monitor (maximum 10) QMN0-QMN9

This is the Advanced Queuing time manager process. QMNn monitors the message queues and is used by Oracle Streams Advanced Queuing.

Event Monitor (maximum 1) EMN0/EMON

This process is also related to advanced queuing, and is meant for allowing a publish/subscribe style of messaging between applications.

Dispatcher (maximum 1000) Dnnn

Intended for multi-threaded server (MTS) setups. Dispatcher processes listen for and receive requests from connected sessions and place them in the request queue for further processing. They also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn processes are mediators between the client processes and the shared server processes.

Shared Server Processes (maximum 1000) Snnn:

Intended for multi-threaded server (MTS) setups. These processes pickup requests from the call request queue, process them and then return the results to a result queue. These user processes also handle disk reads from database datafiles into the database block buffers.

Parallel Execution/Query Slaves (maximum 1000) Pnnn

These processes are used for parallel processing, either for parallel execution of SQL statements or for parallel recovery.

Trace Writer (maximum 1) TRWR

Trace writer writes trace files from an Oracle internal tracing facility.

Input/output Slaves (maximum 1000) Innn

These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.

Data Guard Monitor (maximum 1) DMON

The Data Guard broker process. DMON is started when the Data Guard broker is started. This broker controller process is the main broker process and is responsible for coordinating all broker actions as well as maintaining the broker configuration files. It is enabled/disabled with the DG_BROKER_START parameter.

Data Guard Broker Resource Manager RSM0

The RSM process is responsible for handling any SQL commands used by the broker that need to be executed on one of the databases in the configuration.

Data Guard Broker Instance Slave Process INSV

Performs Data Guard broker communication among instances in an Oracle RAC environment. 

Data Guard Broker Fast Start Failover Pinger Process FSFP

Maintains fast-start failover state between the primary and target standby databases. FSFP is created when fast-start failover is enabled. 

LGWR Network Server Process LNS:

In Data Guard, the LNS process performs the actual network I/O and waits for each network I/O to complete. Each LNS has a user-configurable buffer that is used to accept outbound redo data from the LGWR process. The NET_TIMEOUT attribute is used only when the LGWR process transmits redo data via an LNS process.

Managed Recovery Process MRP

In a Data Guard environment, the managed recovery process applies archived redo logs to the standby database.
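
On a physical standby, MRP is typically started and checked as follows (a sketch of the standard commands):

  -- Start managed recovery in the background on the standby
  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

  -- Verify that MRP is running and which log sequence it is applying
  SELECT process, status, sequence# FROM v$managed_standby;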

Remote File Server Process RFS

In a Data Guard environment, the remote file server process on the standby database receives archived redo logs from the primary database.

Logical Standby Process LSP

The logical standby process is the coordinator process for a set of processes that concurrently read, prepare, build, analyse, and apply completed SQL transactions from the archived redo logs. The LSP also maintains metadata in the database. The RFS process communicates with the logical standby process (LSP) to coordinate and record which files arrived.

Wakeup Monitor Process (maximum 1) WMON

This process was available in older versions of Oracle to alarm other processes that are suspended while waiting for an event to occur. This process is obsolete and has been removed.

Recovery Writer (maximum 1) RVWR

This is responsible for writing flashback logs (to FRA).
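
RVWR appears once Flashback Database is turned on; a minimal sketch, assuming the FRA has already been configured via DB_RECOVERY_FILE_DEST:

  -- Enable Flashback Database; this starts the RVWR process
  -- (in older releases this must be run while the database is in MOUNT mode)
  ALTER DATABASE FLASHBACK ON;

  -- Confirm
  SELECT flashback_on FROM v$database;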

Fetch Archive Log (FAL) Server

Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request. 

Fetch Archive Log (FAL) Client

Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence. 

Data Pump Master Process DMnn

The Data Pump master (control) process creates and deletes the master table at export and import time; the master table contains the job state and object information. It is started during job creation and coordinates all tasks performed by the Data Pump job: it handles all client interactions and communication, establishes all job contexts, and coordinates all worker process activities on behalf of the job. It also creates the worker processes.

Data Pump Worker Process DWnn

It performs the actual heavy-duty work of loading and unloading of data. It maintains the information in master table. The Data Pump worker process is responsible for performing tasks that are assigned by the Data Pump master process, such as the loading and unloading of metadata and data.

Shadow Process

When a client logs in to an Oracle server, the database creates an Oracle (shadow) process to service the Data Pump API.

Client Process

The client process calls the Data pump API.

Memory Manager (maximum 1) MMAN

MMAN dynamically adjusts the sizes of SGA components such as the buffer cache, large pool, shared pool and java pool, and serves as the SGA memory broker. It was added in Oracle 10g as part of automatic shared memory management.
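
With automatic shared memory management, a single target drives MMAN's resizing of the individual pools; for example (the 256M value is illustrative only):

  -- Let MMAN manage the individual SGA components automatically
  ALTER SYSTEM SET sga_target = 256M SCOPE = BOTH;

  SHOW PARAMETER sga_target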

Memory Monitor (maximum 1) MMON

MMON monitors the SGA and performs various manageability-related background tasks. This Oracle 10g background process collects the statistics for the Automatic Workload Repository (AWR).
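
Besides the snapshots MMON takes automatically, an AWR snapshot can also be taken on demand:

  -- Take a manual AWR snapshot
  EXEC dbms_workload_repository.create_snapshot;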

Memory Monitor Light (maximum 1) MMNL

New background process in Oracle 10g. This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation. This process will flush the ASH buffer to AWR tables when the buffer is full or a snapshot is taken.

Change Tracking Writer (maximum 1) CTWR

CTWR is useful for RMAN: it enables optimized (faster) incremental backups by recording changed blocks in the block change tracking file. CTWR (Change Tracking Writer) is the background process responsible for tracking those blocks.
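
Block change tracking, and with it the CTWR process, is enabled with a single statement; the file path below is only an example:

  -- Enable block change tracking; this starts the CTWR process
  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
    USING FILE '/u01/app/oracle/bct/change_tracking.f';

  -- Check the status
  SELECT status, filename FROM v$block_change_tracking;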

File Monitor (FMON)

The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping datafiles to physical devices on a storage subsystem, then the FMON process is spawned.

CPU patching information

Dear all,


Decide whether to go with a PSU or a BP.

If you want to move to the PSU line, roll back the BP on the node and apply the PSU, and vice versa.

Oracle Database - Overview of Database Patch Delivery Methods - 12.1.0.2 and older ( Doc ID 1962125.1 )

Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases ( Doc ID 2118136.2 )



Critical Patch Update (CPU) Program Oct 2021 Patch Availability Document (PAD) (Doc ID 2796575.1)



Thanks,

Srini