
Tuesday, 25 June 2024

Oracle Database Automatic Bounce Shell Scripts

 Dear All,


In this post I am going to share the step-by-step Oracle database bounce automation scripts.


Note : these scripts were tested in my demo instance and are shared purely for educational purposes. Please test them before implementing them on any important servers.


Start the database automatically script : 

The script below will start the database automatically. Schedule it with a cron job as per your requirement.


#!/bin/sh
# Oracle environment
export ORACLE_HOME=/u01/db/product/12C/db_home
export ORACLE_SID=racsinfo
export NLS_LANG=******.AL32UTF8
export LD_LIBRARY_PATH=/u01/db/product/12C/db_home/lib:/usr/X11R6/lib:/u01/db/product/12C/db_home/ctx/lib
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=/u01/db/product/12C/db_home/network/admin

# Start the instance and log the output
date > database_instance_start.log
sqlplus -s / as sysdba << EOF
spool database_instance_start.log append;
startup;
spool off;
EOF

# Start the listener
lsnrctl start racsinfo > database_listener_start.log


Stop the database automatically script : 

The script below will stop the database automatically. Schedule it with a cron job as per your requirement.

#!/bin/sh
# Oracle environment
export ORACLE_HOME=/u01/db/product/12C/db_home
export ORACLE_SID=racsinfo
export NLS_LANG=*******.AL32UTF8
export LD_LIBRARY_PATH=/u01/db/product/12C/db_home/lib:/usr/X11R6/lib:/u01/db/product/12C/db_home/ctx/lib
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=/u01/db/product/12C/db_home/network/admin

# Stop the instance and log the output
date > database_instance_stop.log
sqlplus -s / as sysdba << EOF
spool database_instance_stop.log append;
--select to_char(sysdate,'YYYY/MM/DD HH24:MI:SS') from dual;
shutdown immediate;
spool off;
EOF

# Stop the listener
lsnrctl stop racsinfo > database_listener_stop.log
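As a sketch of the cron scheduling mentioned above, assuming the two scripts are saved under hypothetical paths /home/oracle/scripts/db_stop.sh and /home/oracle/scripts/db_start.sh, the crontab entries (edited with crontab -e as the oracle user) might look like:

```shell
# Hypothetical crontab entries; adjust the schedule and paths as needed
# Stop the database at 22:00 every Saturday
0 22 * * 6 /home/oracle/scripts/db_stop.sh > /tmp/db_stop_cron.log 2>&1
# Start it again at 23:00 the same day
0 23 * * 6 /home/oracle/scripts/db_start.sh > /tmp/db_start_cron.log 2>&1
```

Cron runs with a minimal environment, which is why the scripts export ORACLE_HOME, ORACLE_SID and PATH themselves instead of relying on the login profile.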


Happy Learning from Racsinfotech ... The lab video will be available on our YouTube channel @RACSINFOTECH

Thanks,

Srini 

Sunday, 28 July 2019

Awk (Aho, Weinberger, Kernighan) and Sed (stream editor) commands


Unix Sed And Awk Text Processing Utilities

Unix provides sed and awk as two text processing utilities that work on a line-by-line basis. The sed program (stream editor) works well with character-based processing, and the awk program (Aho, Weinberger, Kernighan) works well with delimited field processing.
Both use regular expressions to find patterns and support commands to process the matches.
Command: awk – a powerful command used for pattern matching as well as for text processing.
Syntax: awk [options] 'program text' file
Example: $ ls -l | awk '{print $3}'
This command will display only the third column from the long listing of files and directories.
Command: sed – a powerful command for editing a 'stream' of text. It can read input from a text file or from piped input, and it processes the input in one pass.
Syntax: sed [OPTION]... 'script' [input-file]
Example 1: sed -n '/hello/p' file1
This command will display all the lines which contain hello.
Example 2: sed 's/hello/HELLO/' file1
This command will substitute the first hello on each line with HELLO (add the g flag to replace every occurrence).
Example 3: sed '/hello/,+2d' file1
This command will delete each matching line together with the two lines that follow it.
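A quick, hedged demo of the print and substitute examples above, using piped input instead of file1 so it can be run anywhere:

```shell
# -n suppresses the default output; p prints only the matching lines
printf 'hello world\ngoodbye world\n' | sed -n '/hello/p'
# prints: hello world

# s/hello/HELLO/ substitutes the first match on each line
printf 'hello world\n' | sed 's/hello/HELLO/'
# prints: HELLO world
```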





Awk is a scripting language used for manipulating data and generating reports. The awk programming language requires no compiling and allows the user to use variables, numeric functions, string functions, and logical operators.
Awk is a utility that enables a programmer to write tiny but effective programs in the form of statements that define text patterns to be searched for in each line of a document and the action to be taken when a match is found within a line. Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions.
Awk is abbreviated from the names of the developers – Aho, Weinberger, and Kernighan.
WHAT CAN WE DO WITH AWK ?
1. AWK Operations:
(a) Scans a file line by line
(b) Splits each input line into fields
(c) Compares input line/fields to pattern
(d) Performs action(s) on matched lines
2. Useful For:
(a) Transform data files
(b) Produce formatted reports
3. Programming Constructs:
(a) Format output lines
(b) Arithmetic and string operations
(c) Conditionals and loops
Syntax:
awk options 'selection_criteria {action}' input-file > output-file
Options:
-f program-file : Reads the AWK program source from the file 
                  program-file, instead of from the 
                  first command line argument.
-F fs            : Use fs for the input field separator
Sample Commands
Example:
Consider the following text file as the input file for all cases below.
$ cat > employee.txt 
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
deepak clerk sales 23000
sunil peon sales 13000
satvik director purchase 80000 
1. Default behavior of Awk : By default Awk prints every line of data from the specified file.
$ awk '{print}' employee.txt
Output:
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
deepak clerk sales 23000
sunil peon sales 13000
satvik director purchase 80000 
In the above example, no pattern is given, so the action applies to every line. print without any argument prints the whole line by default, so every line of the file is printed.
2. Print the lines which match the given pattern.
$ awk '/manager/ {print}' employee.txt 
Output:
ajay manager account 45000
varun manager sales 50000
amit manager account 47000 
In the above example, the awk command prints all the lines that match the pattern 'manager'.
3. Splitting a Line Into Fields : For each record, i.e. line, the awk command splits the record on whitespace by default and stores the fields in the $n variables. If the line has 4 words, they will be stored in $1, $2, $3 and $4 respectively. Also, $0 represents the whole line.
$ awk '{print $1,$4}' employee.txt 
Output:
ajay 45000
sunil 25000
varun 50000
amit 47000
tarun 15000
deepak 23000
sunil 13000
satvik 80000 
In the above example, $1 and $4 represents Name and Salary fields respectively.
Built In Variables In Awk
Awk’s built-in variables include the field variables—$1, $2, $3, and so on ($0 is the entire line) — that break a line of text into individual words or pieces called fields.
NR: keeps a current count of the number of input records. Remember that records are usually lines. Awk performs the pattern/action statements once for each record in a file.
NF: keeps a count of the number of fields within the current input record.
FS: contains the field separator character used to divide fields on the input line. The default is "white space", meaning space and tab characters. FS can be reassigned to another character (typically in BEGIN) to change the field separator.
RS: stores the current record separator character. Since, by default, an input line is the input record, the default record separator is a newline.
OFS: stores the output field separator, which separates the fields when awk prints them. The default is a blank space. Whenever print has several parameters separated with commas, it prints the value of OFS between each parameter.
ORS: stores the output record separator, which separates the output lines when awk prints them. The default is a newline character. print automatically outputs the contents of ORS at the end of whatever it is given to print.
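The examples below exercise only NR and NF, so here is a small hedged demo of FS and OFS using made-up colon-separated data:

```shell
# Read colon-separated input (FS=":") and emit comma-separated output (OFS=",")
printf 'ajay:manager:45000\nsunil:clerk:25000\n' |
awk 'BEGIN { FS=":"; OFS="," } { print $1, $3 }'
# prints:
# ajay,45000
# sunil,25000
```

Because print's arguments are separated by commas, awk inserts the value of OFS between them in the output.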
Examples:
Use of NR built-in variables (Display Line Number)
$ awk '{print NR,$0}' employee.txt 
Output:
1 ajay manager account 45000
2 sunil clerk account 25000
3 varun manager sales 50000
4 amit manager account 47000
5 tarun peon sales 15000
6 deepak clerk sales 23000
7 sunil peon sales 13000
8 satvik director purchase 80000 
In the above example, the awk command with NR prints all the lines along with the line number.
Use of NF built-in variables (Display Last Field)
$ awk '{print $1,$NF}' employee.txt 
Output:
ajay 45000
sunil 25000
varun 50000
amit 47000
tarun 15000
deepak 23000
sunil 13000
satvik 80000 
In the above example $1 represents Name and $NF represents Salary. We can get the Salary using $NF, which always refers to the last field.
Another use of NR built-in variables (Display Line From 3 to 6)
$ awk 'NR==3, NR==6 {print NR,$0}' employee.txt 
Output:
3 varun manager sales 50000
4 amit manager account 47000
5 tarun peon sales 15000
6 deepak clerk sales 23000 
More Examples
For the given text file (three tab-separated lines; the columns are A, B and C):
$ cat > geeksforgeeks.txt
Tarun	A12	1
Man	B6	2
Praveen	M42	3
1) To print the first field along with the row number (NR), separated with "- ", from each line in geeksforgeeks.txt:
$ awk '{print NR "- " $1 }' geeksforgeeks.txt
1- Tarun
2- Man
3- Praveen
2) To print the second field from each line in geeksforgeeks.txt:
$ awk '{print $2}' geeksforgeeks.txt
A12
B6
M42
3) To print the non-empty lines, if present:
$ awk 'NF > 0' geeksforgeeks.txt
Tarun	A12	1
Man	B6	2
Praveen	M42	3
4) To find the length of the longest line present in the file:
$ awk '{ if (length($0) > max) max = length($0) } END { print max }' geeksforgeeks.txt
13
5) To count the lines in a file:
$ awk 'END { print NR }' geeksforgeeks.txt
3
6) Printing lines with more than 10 characters:
$ awk 'length($0) > 10' geeksforgeeks.txt
Tarun    A12    1
Praveen    M42    3
7) To check for a string in a specific column (here, "B6" in the second column):
$ awk '{ if($2 == "B6") print $0;}' geeksforgeeks.txt
Man	B6	2
8) To print the squares of the numbers from 1 to n, say 6:
$ awk 'BEGIN { for(i=1;i<=6;i++) print "square of", i, "is",i*i; }'
square of 1 is 1
square of 2 is 4
square of 3 is 9
square of 4 is 16
square of 5 is 25
square of 6 is 36
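As a final sketch of the "arithmetic and string operations" mentioned earlier, awk also provides built-in string functions such as toupper() and length():

```shell
# toupper() converts the whole record to upper case; length() returns its length
echo "hello" | awk '{ print toupper($0), length($0) }'
# prints: HELLO 5
```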

Thanks
Srini

Monday, 9 November 2015

How To Register Shell Script as Concurrent Program in Oracle Apps




Shell Script

Introduction :

 This shell script automatically picks up the .ctl (control) file and .csv (comma-separated) file for SQL*Loader.

   After a successful upload, the .csv file is moved to another location.

   This allows the user to drop in a .csv file of any name.

   I have registered a concurrent request in Oracle Applications to call this script for data uploading. 

 

Step 1:

 Shell Script:

echo "Following are System Parameters"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
p0=$0
p1=$1
p2=$2
p3=$3
p4=$4
echo "1st System Parameter    :"$p0
echo "2nd System Parameter    :"$p1
echo "3rd System Parameter    :"$p2
echo "4th System Parameter    :"$p3
echo "5th System Parameter    :"$p4
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "Following are User Parameters  "
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
u1=$5
echo "1st User Parameter    :"$u1
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
cd /home/oracle/alrazi
data_file=$(ls *.csv)
ctl_file=$(ls *.ctl)
echo "$data_file"
echo "$ctl_file"
# sqlldr returns a non-zero exit status on errors
sqlldr apps/**** control="$ctl_file" data="$data_file"
if [ $? -ne 0 ]
then
echo "Error Occurred"
exit 1
else
mv "$data_file" /home/oracle/alrazi/alraziback
echo "Upload Successful"
echo "Moved File: $data_file"
echo "Successfully to Backup Folder "
exit 0
fi


 

Step 2:

Save the shell script to $AP_TOP/bin as XXTEST.prog








Step 3:

Create the symbolic link as follows:

 ln -s $FND_TOP/bin/fndcpesr XXTEST
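Putting steps 2 and 3 together, the host-side setup might look like the sketch below (paths assume a standard EBS environment with $AP_TOP and $FND_TOP set; adjust the product top for your installation):

```shell
# Copy the script to the product top's bin directory and make it executable
cp XXTEST.prog $AP_TOP/bin/
chmod 755 $AP_TOP/bin/XXTEST.prog

# The symlink name (without extension) must match the .prog file name;
# the concurrent manager invokes fndcpesr through this link, which then
# calls XXTEST.prog with the standard and user parameters
cd $AP_TOP/bin
ln -s $FND_TOP/bin/fndcpesr XXTEST
```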

 

Step 4:

Create Executable
 


Step 5:

Register as Concurrent Program


 


Step 6:

 Define User Parameters

 

 

Step 7:

 Run Request

 

Log file