Oracle Business Intelligence Applications (OBIA) are complete, prebuilt BI solutions that deliver intuitive, role-based intelligence to everyone in an organization, from front-line employees to senior management, enabling better decisions, actions, and business processes. Designed for heterogeneous environments, these solutions let organizations gain insight from a range of data sources and applications, including Siebel, Oracle E-Business Suite, PeopleSoft, and third-party systems such as SAP.
Oracle BI Applications are built on Oracle Business Intelligence Suite Enterprise Edition, a comprehensive, innovative, and leading BI platform. This lets organizations realize the value of a packaged BI application, such as rapid deployment, lower TCO, and built-in best practices, while retaining the ability to extend those solutions to meet their specific needs, or to build completely custom BI applications, all on one common BI architecture.
Oracle BI Applications includes the following:
■ Oracle Financial Analytics
■ Oracle Human Resources Analytics
■ Oracle Supply Chain and Order Management Analytics
■ Oracle Procurement and Spend Analytics
■ Oracle Project Analytics
■ Oracle Sales Analytics
■ Oracle Service Analytics
■ Oracle Contact Center Telephony Analytics
■ Oracle Marketing Analytics
■ Oracle Loyalty Analytics
■ Oracle Price Analytics
and more
Oracle BI Applications supports Oracle sources, such as Oracle E-Business Suite Applications, Oracle's Siebel Applications, Oracle's PeopleSoft Applications, Oracle's JD Edwards Applications, and non-Oracle sources, such as SAP Applications. If you already own one of the above applications, you can purchase Oracle Business Intelligence Enterprise Edition and Oracle BI Applications to work with the application.
Oracle BI Applications also provides complete support for enterprise data, including financial, supply chain, workforce, and procurement and spend sources. These enterprise applications typically source data from both Oracle data sources, such as Oracle EBS and PeopleSoft, and non-Oracle data sources, such as SAP.
Topology for an Oracle BI Applications Deployment
Oracle BI Applications and Informatica PowerCenter can be deployed flexibly across a wide range of topologies on different platforms and combinations of platforms.
Machine A (Windows-only)
Machine A is a machine with Oracle Business Intelligence Enterprise Edition installed, on which you run the Oracle BI Applications installer to install the Oracle BI Applications files.
Note: The instance of Oracle Business Intelligence Enterprise Edition does not need to be the functional version of Oracle Business Intelligence Enterprise Edition that you will use to deploy dashboards in your live system. This instance is only required to enable the Oracle BI Applications installer to install the Oracle BI Applications files on a machine.
After the Oracle BI Applications files have been installed on Machine A, the DAC Client is installed on Machine B, and the DAC Server is installed on Machine C.
In addition, the following files are copied from the installation machine (Machine A) to the Business Intelligence Deployment Tier (Machine Group F) as follows:
- The OracleBI\Server\Repository\OracleBIAnalyticsApps.rpd file is copied from Machine A to the machine that runs the BI Server in Machine Group F.
- The OracleBIData\Web\Catalog\EnterpriseBusinessAnalytics\*.* files are copied from Machine A to the machine that runs the BI Presentation Services Catalog in Machine Group F.
• ETL Tier (Functional)
o Machine B (Windows-only)
Runs the DAC Client and Informatica PowerCenter Client Tools.
o Machine C (Windows, UNIX, Linux)
Runs the DAC Server and Informatica PowerCenter Services.
o Machine D (Windows, UNIX, Linux)
Hosts the transactional (OLTP) database.
o Machine E (Windows, UNIX, Linux)
Hosts the Oracle Business Analytics Warehouse database.
• BI Deployment Tier (Functional)
The BI Deployment tier is used to deploy the business intelligence dashboards.
o Machine Group F (Windows, UNIX, Linux)
Machine Group F is a group of machines that runs the Oracle Business Intelligence Enterprise Edition components. For example, one machine might run the BI Server and another machine might run the BI Presentation Services.
Oracle Business Analytics warehouse architecture when deployed with Informatica PowerCenter and DAC
High-level analytical queries, like those commonly used in Oracle Business Intelligence, scan and analyze large volumes of data using complex formulas. This process can take a long time when querying a transactional database, which impacts overall system performance.
For this reason, the Oracle Business Analytics Warehouse was constructed using dimensional modeling techniques to allow for fast access to information required for decision making. The Oracle Business Analytics Warehouse derives its data from operational applications and uses Informatica PowerCenter to extract, transform, and load data from various supported transactional database systems (OLTP) into the Oracle Business Analytics Warehouse.
• The Client tier contains the Informatica PowerCenter client tools and the Oracle BI Data Warehouse Administration Console (DAC). DAC is a command and control interface for the data warehouse to allow for set up, configuration, administration, and monitoring of data warehouse processes.
• The Server tier contains the following:
o DAC Server. Executes the instructions from the DAC Client. The DAC Server manages data warehouse processes, including scheduling, loading of the ETL, and configuring the subject areas to be loaded. It dynamically adjusts its actions based on information in the DAC Repository. Depending on your business needs, you might incrementally refresh the Oracle Business Analytics Warehouse once a day, once a week, once a month, or on another similar schedule.
o DAC Repository. Stores the metadata (semantics of the Oracle Business Analytics Warehouse) that represents the data warehouse processes.
o Informatica PowerCenter Services:
Integration Services - The Integration Service reads workflow information from the repository. The Integration Service connects to the repository through the Repository Service to fetch metadata from the repository.
Repository Services - The Repository Service manages connections to the PowerCenter Repository from client applications. The Repository Service is a separate, multi-threaded process that retrieves, inserts, and updates metadata in the repository database tables.
o Informatica Repository. Stores the metadata related to Informatica workflows.
• The Database tier contains the OLTP and OLAP databases.
The Informatica Repository stores all of the Informatica object definitions for the ETL mappings that populate the Oracle Business Analytics Warehouse. It is a set of repository tables stored in a database, which can be the transactional database, the analytical database, or a separate database.
Oracle BI Applications Components (with Informatica/DAC)
Oracle Business Analytics Warehouse Overview
The Oracle Business Analytics Warehouse is a unified data repository for all customer-centric data, which supports the analytical requirements of the supported source systems.
The Oracle Business Analytics Warehouse includes the following:
• A complete relational enterprise data warehouse data model with numerous prebuilt star schemas encompassing many conformed dimensions and several hundred fact tables.
• An open architecture to allow organizations to use third-party analytical tools in conjunction with the Oracle Business Analytics Warehouse using the Oracle Business Intelligence Server.
• A set of ETL (extract-transform-load) processes that takes data from a wide range of source systems and creates the Oracle Business Analytics Warehouse tables.
• The Oracle Business Intelligence Data Warehouse Administration Console (DAC), a centralized console for the set up, configuration, administration, loading, and monitoring of the Oracle Business Analytics Warehouse.
Important points
■ The metadata for a source system is held in a container.
■ The PowerCenter Services can be installed on UNIX or on Windows.
■ The PowerCenter Client Tools must be installed on Windows.
■ You must co-locate the DAC Client with the Informatica PowerCenter Client Tools.
■ You must co-locate the DAC Server with Informatica PowerCenter Services.
DAC produces parameter files that are used by Informatica. If an execution plan fails in DAC and you want to debug the workflow by running it directly from Informatica, the parameter file produced by DAC must be visible to Informatica. This is one reason for the co-location requirements stated above.
The DAC installer installs the DAC Client and DAC Server on Windows. The DAC Client runs only on Windows. The DAC Server runs on Windows, UNIX, and Linux. The DAC Server can run on Linux, but it must first be installed on a Windows machine and then copied over to the Linux machine; Oracle does not provide an installer for DAC on UNIX.
■ The DAC Client can only be installed and run on Windows.
■ The DAC Client must be installed on the machine where the Informatica PowerCenter Client Tools are installed.
■ The DAC Server must be installed on the machine where Informatica PowerCenter Services is installed.
■ You must install Informatica PowerCenter Services before you install DAC.
■ The correct version of the JDK is installed by the DAC installer.
■ The DAC installer installs DAC in the DAC_HOME\bifoundation\dac directory.
The DAC Client uses the Informatica pmrep and pmcmd command line programs when communicating with Informatica PowerCenter. The DAC Client uses pmrep to synchronize DAC tasks with Informatica workflows and to keep the DAC task source and target table information up to date.
For the DAC Client to be able to use the pmrep and pmcmd programs, the path of the Informatica domain file 'domains.infa' must be defined in the environment variables on the DAC Client machine. When you install DAC using the DAC installer, the Informatica domain file is defined in the environment variables on the DAC Client machine, for example:
INFA_DOMAINS_FILE=C:\Informatica\9.0.1\clients\PowerCenterClient\domains.infa
The DAC Server uses the following command line programs to communicate with Informatica PowerCenter:
■ pmrep is used to communicate with PowerCenter Repository Services.
■ pmcmd is used to communicate with PowerCenter Integration Services to run the Informatica workflows.
The pmrep and pmcmd programs are installed during the PowerCenter Services installation in the INFA_HOME\server\bin directory on the Informatica PowerCenter Services machine.
■ Oracle Business Intelligence Applications V7.9.6.3 requires Oracle Business Intelligence Enterprise Edition V11.1.1.5.0.
■ Oracle Business Intelligence Applications V7.9.6.3 requires Informatica PowerCenter V9.0.1 Hotfix 2.
■ Oracle Business Intelligence Applications V7.9.6.3 requires Oracle Data Warehouse Administration Console (DAC) V10.1.3.4.1.
The OBIEE server uses metadata to generate the SQL queries. This metadata is stored in the repository, often referred to as the .rpd. The repository has three layers: the physical layer, the logical layer, and the presentation layer.
The metadata maps the OBAW physical tables to a generic business model and includes more than 100 presentation catalogs (aka subject areas) to allow queries using Oracle BI clients such as Answers and Dashboards.
High Level Data Flow
• Source – eBS (Raw Data)
• ETL – Extraction Transform and Load (Informatica – PowerCenter or
ODI)
• OBAW - Business Analytics Warehouse
• OBIEE Metadata
• OBIEE Content – Reports and Dashboards
• A prebuilt Informatica repository, which includes mappings (wrapped in workflows) to extract data from the supported source systems (various versions of standard applications such as Siebel CRM (yes, it was first), Oracle eBusiness Suite, PeopleSoft, JD Edwards, and SAP (BI Apps version 7.8.4)) and load the data into the Oracle Business Analytics Warehouse. (Note: Oracle BI Applications version 7.9.5.2 includes a repository for Oracle Data Integrator instead of Informatica, but supports only Oracle Financials.)
• The Oracle Business Analytics Warehouse (OBAW), a prebuilt schema (stars, that is) which serves as a turnkey data warehouse, including dozens of stars and accompanying indexes, aggregates, time dimensions, and slowly changing dimension handling (and yes, it can be optimized).
• A prebuilt repository for the Data Warehouse Administration Console (DAC) server, which is the orchestration engine behind the ETL process. The DAC client and server were included until version 7.9.5; since then, DAC ships as a separate installer.
• A prebuilt Oracle BI Server repository (rpd file) which maps the OBAW physical tables to a generic business model and includes more than 100 presentation catalogs (aka subject areas) to allow queries and segmentation using Oracle BI clients such as Answers, Dashboards and Segment Designer. Did I mention that it also takes care of authentication and data security?
• A prebuilt presentation catalog (repository) containing hundreds of requests and ready-to-use dashboards which enable tight integration between the source applications and the BI infrastructure (example: Click the Service Analytics screen in Siebel CRM, work with the dashboard data, drill down to details and click an action button to navigate back to the Siebel CRM record).
Oracle BI EE 10g is the successor of Siebel Analytics and Oracle BI Applications is the successor of Siebel Analytics Applications.
Detailed Data flow
First, the DAC scheduler kicks off jobs to load/refresh the OBAW at regular intervals; alternatively, these jobs can be kicked off manually from the DAC client.
The DAC server uses the object and configuration data stored in the DAC repository to issue commands to the Informatica server.
The Informatica server executes the commands issued from DAC, using the objects and configuration data stored in the Informatica repository.
Then the data are extracted, transformed, and loaded from the transactional databases into the OBAW target tables.
After the ETL is complete and the OBAW is online, an OBIEE end user runs a report in Answers or an Interactive Dashboard.
The request comes through the web server and interacts with the presentation server.
The presentation server interacts with the Oracle BI Server; the BI Server interprets the request, hits the OBAW if the result is not cached, extracts the data, and provides it to the presentation server.
The presentation server formats the data as required and, through the web server, the result is presented to the end user.
ETL process
ETL mappings are split into two main types: SDE mappings and SIL mappings.
SDE mappings load the staging tables; SIL mappings (SILOS, SIL_Vert, PLP) then load the final physical warehouse tables.
SILOS SIL mappings are for all sources except Siebel Verticals; SIL_Vert SIL mappings are for Siebel Verticals only.
Staging tables are suffixed with S, so W_AP_XACTS_F is the final table and its staging table is W_AP_XACTS_FS.
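The naming rule above can be sketched as a tiny shell helper (hypothetical, for illustration only): the staging-table name is just the final table name with an S appended.

```shell
# Hypothetical helper illustrating the naming convention above:
# the staging table is the final table name with an "S" suffix.
staging_name() {
  printf '%sS\n' "$1"
}

name=$(staging_name "W_AP_XACTS_F")
echo "$name"   # W_AP_XACTS_FS
```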
Monday, October 24, 2011
Monday, December 6, 2010
DB Recovery
Incomplete recovery of the database (UNTIL CANCEL) to a point in time, from a hot snapshot backup and archived logs.
SQL>startup mount
SQL> recover database using backup controlfile until CANCEL
ORA-00279: change 5965295809608 generated at 12/01/2010 17:54:21 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc
ORA-00280: change 5965295809608 for thread 1 is in sequence #6134
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc
ORA-00279: change 5965295965197 generated at 12/02/2010 00:00:16 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc
ORA-00280: change 5965295965197 for thread 1 is in sequence #6135
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc
ORA-00279: change 5965295965276 generated at 12/02/2010 00:00:36 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc
ORA-00280: change 5965295965276 for thread 1 is in sequence #6136
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc
ORA-00279: change 5965296032263 generated at 12/02/2010 04:00:15 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc
ORA-00280: change 5965296032263 for thread 1 is in sequence #6137
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc
ORA-00279: change 5965296032275 generated at 12/02/2010 04:00:19 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc
ORA-00280: change 5965296032275 for thread 1 is in sequence #6138
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc
ORA-00279: change 5965296081988 generated at 12/02/2010 08:00:15 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc
ORA-00280: change 5965296081988 for thread 1 is in sequence #6139
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc
ORA-00279: change 5965296081997 generated at 12/02/2010 08:00:19 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc
ORA-00280: change 5965296081997 for thread 1 is in sequence #6140
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc
ORA-00279: change 5965296141979 generated at 12/02/2010 12:00:15 needed for
thread 1
ORA-00289: suggestion :
/oraarchive/vis/abcd001k_VIS_0001_0000006141_697765707.arc
ORA-00280: change 5965296141979 for thread 1 is in sequence #6141
ORA-00278: log file
'/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc' no longer
needed for this recovery
Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/d03/oraVIS/vis/db/apps_st/data/system01.dbf'
To force the database open, do the following:
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount.
3) Alter database open resetlogs.
4) Set undo_management to "manual" in the init.ora file.
5) Start up the database.
6) Create a new undo tablespace, then change undo_management to "AUTO" and undo_tablespace to the new tablespace name.
7) Bounce the database.
Friday, September 17, 2010
ASM
Adding datafile
ALTER TABLESPACE TDW_PROD ADD DATAFILE '+DATA_DISKGRP/bivis/datafile/bivis_28.dbf' SIZE 1024M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED;
Queries
select NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 from v$ASM_DISK;
select NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 from v$ASM_DISKGROUP;
select NAME,TOTAL_MB,FREE_MB,TOTAL_MB-FREE_MB,FREE_MB/TOTAL_MB*100 from v$ASM_DISK;
select NAME, REDUNDANCY, FAILGROUP from v$ASM_DISK;
select OPERATION from v$ASM_OPERATION;
select NAME,PATH,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,TOTAL_MB,FREE_MB from v$ASM_DISK;
Dropping/adding/checking disks
ALTER DISKGROUP <diskgroup_name> CHECK ALL;
ALTER DISKGROUP <diskgroup_name> DROP DISK <disk_name>; -- drops the disk and also initiates a REBAL operation
We can monitor the status of the REBAL operation from v$asm_operation.
Run select * from v$asm_disk and get different statuses of hdisk8.
eg:
ALTER DISKGROUP DATA_DISKGRP DROP DISK DATA_DISKGRP_0004;
DATA_DISKGRP_004 is an alias for /dev/rhdisk8.
SQL> select OPERATION from v$asm_operation;
OPERA
-----
REBAL
ALTER DISKGROUP DATA_DISKGRP ADD DISK '<disk_path>' NAME DATA_DISKGRP_004 REBALANCE POWER 5;
This query will give more details about disk usage.
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off
COLUMN disk_group_name FORMAT a15 HEAD 'Disk Group Name'
COLUMN disk_file_path FORMAT a17 HEAD 'Path'
COLUMN disk_file_name FORMAT a20 HEAD 'File Name'
COLUMN disk_file_fail_group FORMAT a20 HEAD 'Fail Group'
COLUMN total_mb FORMAT 999,999,999 HEAD 'File Size (MB)'
COLUMN used_mb FORMAT 999,999,999 HEAD 'Used Size (MB)'
COLUMN pct_used FORMAT 999.99 HEAD 'Pct. Used'
break on report on disk_group_name skip 1
compute sum label '' of total_mb used_mb on disk_group_name
compute sum label 'Grand Total: ' of total_mb used_mb on report
SELECT
NVL(a.name, '[CANDIDATE]') disk_group_name
, b.path disk_file_path
, b.name disk_file_name
, b.failgroup disk_file_fail_group
, b.total_mb total_mb
, (b.total_mb - b.free_mb) used_mb
, ROUND((1- (b.free_mb / b.total_mb))*100, 2) pct_used
FROM
v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
a.name
/
Issues -- Forms
Fixing issues of Custom Forms in Forms Servlet using Doc id 269634.1
You need to put the path information for your custom top variable (where the form resides) into the Apache/Jserv/etc/formservlet.ini file.
So open formservlet.ini and add this entry at the end
# Begin customizations
XXTAC_TOP=/d01/app/applmgr/prodappl/xxtac/11.5.0
# End customizations
Wednesday, September 15, 2010
UNIX
ls command
Copy specific columns to a file:
ls -l > s1.sh
cat s1.sh |awk '{print $9}' > s2.sh
ls *.dbf|wc -l --- number of files in the PWD
ls -l /data/ora007/od08oadb01/*dbf|awk '{print $9}' > 007.txt
$9 is column number
column 9 is redirected to new file 007.txt
# echo "$userid"
system/xxxxx
# echo "$userid"|awk 'BEGIN { FS = "/" } {print $2}'
xxxxx
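The one-liner above splits the connect string on '/' and prints the second field. A self-contained rerun on a sample value (the credentials here are made up):

```shell
# Split a "user/password" connect string on "/" and keep only the
# password field, as in the echo|awk pipeline above.
userid="system/manager"   # sample value only
pass=$(echo "$userid" | awk 'BEGIN { FS = "/" } { print $2 }')
echo "$pass"   # manager
```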
date=`date +%F`
# date +%F
2010-09-19
# date
Sun Sep 19 22:49:21 EDT 2010
ls -l *dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}'
$5 is the column number; the sum of column five (the file size) is printed in GB.
ls -l [a-g]*dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}'
gives the total size of the .dbf files whose names start with a through g
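The summing idiom can be exercised on synthetic ls -l style lines (the sizes below are made up): awk accumulates column five and converts bytes to GB in the END block.

```shell
# Sum the 5th column (size in bytes) of ls -l style lines and print
# the total in GB, mirroring the one-liners above. Input is synthetic.
total=$(printf '%s\n' \
  '-rw-r----- 1 ora dba 1073741824 Sep 19 22:49 a01.dbf' \
  '-rw-r----- 1 ora dba 2147483648 Sep 19 22:49 b01.dbf' \
  | awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}')
echo "$total"   # 3
```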
To find out the size of the datafiles
from [a-h]*dbf
du -sk [a-h]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
du -sk [i]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
du -sk [j-z]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
ls -l | grep 'Jan 9' | awk '{print $9}' > one.sh ------- 'Jan 9' is the day whose files we are compressing, and one.sh is the output file. Next open one.sh in vi and do
:%s,^,compress ,g
OR
:%s,^,rm -rf ,g
Save the file, make it executable with chmod 777 one.sh, and run it.
To remove one month files at a time
ls -l|grep 'Aug '|awk '{print$9}' > one.sh
chmod 777 one.sh
vi one.sh
: %s,^,rm -f ,g
./one.sh
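The vi substitution step can also be done non-interactively with sed, which prefixes every line the same way :%s,^,rm -f ,g does. A sketch on made-up file names, written into a scratch directory so nothing real is touched:

```shell
# Build a cleanup script by prefixing each name with "rm -f ",
# the non-interactive equivalent of the vi edit :%s,^,rm -f ,g
tmp=$(mktemp -d)
printf '%s\n' old1.log old2.log | sed 's/^/rm -f /' > "$tmp/one.sh"
cat "$tmp/one.sh"
```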
ls -l |grep 'Feb' |awk '{print$9}' > one.sh
ls -l |grep 'Feb' |grep '\.req' |awk '{print$9}' > one.sh
sdiff
sdiff -s -w 50 tb.txt sb.txt
50 is the output line width
diff $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf $HOME/httpd.conf.orig > /tmp/httpddiff.txt
SIZEDIFF=`ls -lA /tmp/httpddiff.txt | awk -F' ' '{print $5}'`
if [ $SIZEDIFF -gt 0 ]
then
mailx -s "VIS irecruitment setup is missing in PROD" emailaddress < /tmp/httpddiff.txt
mailx -s "VIS irecruitment setup is missing in PROD" onemoreemailaddress < $HOME/irecmessage.txt
fi
eg of mailx command:
cat /tmp/mailbody | mail -s "Ignore Alerts: ABC PREPROD Apachebounce on hostname" abcd@abc.ice.com -- -r alertfromserver@abc.ice.com < a
To check whether any ORA- errors are present in a file:
cat /oracle/qa/qaappl/admin/QA/log/u2819091.log |grep -i ORA-
VNC SERVER
vncserver -geometry 1024x1024 -depth 8 -cc 3
vncserver :49 -nolisten local -httpd ${VNC}/classes -cc 3
vncserver :99 -geometry 1024x1024
You can kill sessions you started.
$ vncserver -kill :[display number]
$ vncserver -kill :0
Wait for 5 mins and delete the file as follows:
$ rm /tmp/X11-unix/X0
Log in as the appl user for the refreshed instance and start the VNC services as follows:
$ vncserver :0 -localhost
$ xhost +
To set your password the first time (or reset it later) run this command:
$ /usr/bin/vncpasswd
strings
strings -a hrrunprc.sql | grep Header
To create directories, change ownership, and create symbolic links
mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/11510
mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/common
mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/CUSTOM
mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/product/806
mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/product/iAS
mkdir -p /u20/app/KEYCS1/orkeycs1/oracle/product
mkdir -p /u22/oradata/KEYCS1/oradata/data01
mkdir -p /u22/oradata/KEYCS1/oradata/data02
mkdir -p /u22/oradata/KEYCS1/oradata/data03
ln -s /u20/app/KEYCS1/apkeycs1/applmgr applmgr
ln -s /u20/app/KEYCS1/orkeycs1/oracle oracle
ln -s /u22/oradata/KEYCS1/oradata oradata
chown apkeycs1:oaa applmgr
chown orkeycs1:dba oracle
chown orkeycs1:dba oradata
Changing group for user
# usermod -g dba ora10g
Check the group created in "/etc/group"
# cat /etc/group |grep dba
dba::110:
useradd -c "App user KEYCS1" -d /u20/app/KEYCS1/apkeycs1 -g oaa apkeycs1
useradd -c "Ora user KEYCS1" -d /u20/app/KEYCS1/orkeycs1 -g dba orkeycs1
chown -R apkeycs1:oaa /u20/app/KEYCS1/apkeycs1
chown -R orkeycs1:dba /u20/app/KEYCS1/orkeycs1
chown -R orkeycs1:dba /u22/oradata/KEYCS1
Commands useful for clearing space in mount points
bdf|awk '{print $5,$6}' > one.txt
find . -type f -mtime +5
find . -type f -mtime +5 -exec rm -f {} \;
find . -type f -mtime +5|xargs rm -f
find . -name '*.req' -mtime +7 -exec compress -f {} \;
find . -name '*.req' -mtime +30 -exec rm -f {} \;
find . -name '*.req' -mtime +7 -exec gzip {} \;
find . -name "*" -mtime +20 -exec rm {} \;
find . -name "*" -mtime +30 -exec ls -la {} \;|awk '{print $5}'>a.sh
find . -size +500000000 -exec ls -al {} \;
find . -size +52428800 -exec ls -al {} \;
find . -size +100000000c -exec ls -l {} \;
$du -sh
$du -sk
find /eadvi4/applmgr/common/admin/log/EADVI4_hostname \( -name '*.*' \) -mtime +120 -exec rm -rf {} \;
Delete files under /u03/applmgr/prod/TEMPS and /u03/applmgr/qlty/TEMPS older than 15 days:
find /u03/applmgr/prod/TEMPS/ -mtime +15 -exec rm -f {} \;
find /u03/applmgr/qlty/TEMPS/ -mtime +15 -exec rm -f {} \;
find . -name "*.dbg" -mtime +1 -exec rm -f {} \;
Cron entry to delete dump files older than 21 days:
30 20 * * * find /ORACLE/eul10g/backup \( -name '*.dmp*' \) -mtime +21 -exec rm -f {} \; ---- As oracle user cron.
To find symbolic links and replace them with correct ones
more rename_softlinks.sh
#!/bin/ksh
cd $ORACLE_HOME
find . -type l -exec ls -l {} \; | grep prod | awk '{gsub("prod","qa",$11); print "rm "$9; print "ln -s " $11 " " $9}'
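The awk rewrite can be checked on a canned `ls -l` line before running it against a real ORACLE_HOME; the link name and target below are made-up examples:

```shell
# Feed one fake "ls -l" symlink line through the same awk program.
# Field 9 is the link name, field 11 is the target; gsub swaps prod -> qa.
echo "lrwxrwxrwx 1 ora dba 24 Jan 1 00:00 libx.so -> /u01/prod/lib/libx.so" | \
  awk '{gsub("prod","qa",$11); print "rm "$9; print "ln -s " $11 " " $9}'
# -> rm libx.so
# -> ln -s /u01/qa/lib/libx.so libx.so
```

Piping the real output to `sh` (or saving it to a script) then applies the relink commands.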
To find links under any top
find . -type l -print | xargs ls -ld
Recreate soft links in $CUSTOM_TOP/bin
unlink visSQL
ln -s ../../../fnd/11.5.0/bin/fndcpesr visSQL
How to find symbolic links that point to the old path in your ORACLE_HOME and APPL_TOP.
This is useful after a clone (restore from source to target), to verify that symbolic links are not still pointing to the source.
ls -al `find . -type l` | grep $OLD_PATH
To find all text files that contain the old path at the UNIX level:
find . -type f -print|grep -v ":"|xargs file|grep "text"|awk '{FS=":"}{print $1}'|xargs grep -il $OLD_PATH
How to Sort files based on Size of file in a Directory
Useful in finding out spaces issues
ls -l | sort -nrk 5 | more
How to find a "word" or pattern in all files in a directory & subdirectories
find . -name "*" -exec grep -l <pattern> {} \; -print
for example I want to search for word oracle
find . -name "*" -exec grep -l oracle {} \; -print
How to delete files older than N days. Useful for deleting old log files.
find . -name '*.*' -mtime +N -exec rm {} \;
for example if I want to delete all files older than 7 days
find . -name '*.*' -mtime +7 -exec rm {} \;
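To see that `-mtime +N` selects only sufficiently old files, you can back-date a scratch file (GNU `touch -d`; directory and file names here are throwaway examples):

```shell
# Make one fresh file and one file back-dated 10 days, then find files >7 days old.
rm -rf /tmp/agedemo && mkdir -p /tmp/agedemo && cd /tmp/agedemo
touch new.log                     # modified now
touch -d '10 days ago' old.log    # back-dated (GNU touch)
find . -name '*.log' -mtime +7    # matches only old.log
# -> ./old.log
```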
To edit the Windows registry
regedit (on run prompt)
To view a Linux desktop remotely
1. SSH to Linux server
2. Start the VNC server by typing 'vncserver' at the command line. Remember the display number.
3. Connect to the VNC server using a VNC viewer. Enter host name, display number, and VNC password for the user who started the session.
4. When finished, don't log out of Linux, just close the VNC viewer window.
5. If you wish, kill the VNC server session when finished. Or keep it running and use it next time you need to connect.
To use VNC to control or view a computer behind a firewall
The official VNC FAQ has a good explanation of using VNC through a firewall
at http://www.realvnc.com/faq.html#firewall. You must either open port 5900+N where N equals the VNC port assigned when you created a connection, or you must use SSH tunneling to channel the VNC traffic through ports that normally allow traffic through a firewall.
More detailed information is available at the link above.
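The port arithmetic and tunnel shape look like this; the user and host names are placeholders, and the `ssh` line is shown commented because it needs a reachable server:

```shell
# VNC display :N listens on TCP port 5900+N.
DISPLAY_NUM=1
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "$VNC_PORT"                  # -> 5901
# Forward the local port through SSH, then point the VNC viewer at localhost:1
# ssh -L ${VNC_PORT}:localhost:${VNC_PORT} user@vnc-host
```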
ps command
ps -fu apdev
sort, delete rows, columns
ls -lrt |sort -nr -k5 |head -15
ls -lart |head -200
ls -lrt > a.sh
vi a.sh
esc :3000,$d   -- deletes all lines after line 3000; use :1,3000d to delete the lines before 3000 instead
$cut -c 55-95 a.sh > b.sh
vi b.sh
esc :%s,^,compress ,g   -- prepends "compress " to every line
Save the file.
sh b.sh
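The interactive vi step (prepending `compress ` to each file name) can also be done non-interactively with sed; the file names below are scratch examples:

```shell
# Build a compress script from a directory listing without opening vi.
rm -rf /tmp/compressdemo && mkdir -p /tmp/compressdemo && cd /tmp/compressdemo
touch jan_a.log jan_b.log
ls -1 *.log | sed 's/^/compress /' > b.sh   # same effect as  :%s,^,compress ,g
cat b.sh
# -> compress jan_a.log
# -> compress jan_b.log
```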
To check whether a port is free
netstat -na
lsof | grep <port>
To check the network interface IPs
netstat -iv
tar commands
Simple tar commands
tar -cvf directory.tar directory
tar -xvf directory.tar
tar cvfzp TEST_apps_inst_myserver.tar.gz apps inst
tar cvfzp TEST_dbhome_myserver.tar.gz db/tech_st
Extract each of the tar files with the command tar xvfzp
tar xvfzp TEST_apps_inst_myserver.tar.gz
nohup tar -zxvf <archive>.gz &   (for .gz or .gzip archives)
tar cf - . | ssh 192.168.100.107 "cd /path/target_directory; tar xf - "
ssh servername "cd /ORACLE/prod/9.2.0;tar cf - ." | tar xf -
Remote tar and untar examples:
ssh usaxsxx "cd /ORACLE/prod/9.2.0;tar cf - ." | tar xf -
ssh usaxsxx "cd /ORACLE/prod/8.0.6; tar cf - ." | tar xf -
ssh usaxsxx "cd /ORACLE/prod/9iAS; tar cf - ." | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [a-b]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [d-s]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [u-z]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - c[a-n]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - c[p-z]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - [a-j]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - [p-z]*" | tar xf -
ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - outbound" | tar xf -
tar cf - [a-b]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "
tar cf - [d-s]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "
tar cf - [u-z]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "
tar cf - c[a-n]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "
tar cf - c[p-z]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "
tar cf - [a-j]* | ssh remotehost "cd /ORACLE/apps/narvis/common; tar xf - "
tar cf - [p-z]* | ssh remotehost "cd /ORACLE/apps/narvis/common; tar xf - "
tar cf - . | ssh remotehost "cd /ORACLE/narvis/8.0.6.3; tar xf - "
tar cf - . | ssh remotehost "cd /ORACLE/narvis/9iAS; tar xf - "
tar cf - . | ssh remotehost "cd /ORACLE/narvis_conf/8.0.6.3; tar xf - "
tar cf - . | ssh remotehost "cd /ORACLE/narvis_conf/9iAS; tar xf - "
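The `tar cf - | ... | tar xf -` stream can be verified locally without ssh; the paths below are scratch locations for illustration:

```shell
# Same pipe shape as the ssh examples, minus the remote shell.
rm -rf /tmp/tardemo && mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo hello > /tmp/tardemo/src/file.txt
tar cf - -C /tmp/tardemo/src . | tar xf - -C /tmp/tardemo/dst
cat /tmp/tardemo/dst/file.txt     # -> hello
```

In the real remote case the second `tar` simply runs on the other side of the ssh connection instead of in the same pipeline.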
Tar split
nohup tar cvf - OraHome_1 2> /11iGold/backup/OraHome_1.log | gzip | split -b 2000m - /11iGold/backup/OraHome_1.tar.gz &
This can be restored using the following command if required:
nohup cat /11iGold/backup/OraHome_1.tar.gz?? | gunzip | tar xf - &
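The split/cat round trip can be exercised at small scale; scratch paths and 20 KB chunks stand in for the real 2000 MB pieces:

```shell
# Archive, gzip, split into chunks, then reassemble and verify byte-for-byte.
rm -rf /tmp/splitdemo && mkdir -p /tmp/splitdemo/restore && cd /tmp/splitdemo
mkdir payload && head -c 100000 /dev/urandom > payload/big.bin
tar cf - payload | gzip | split -b 20k - payload.tar.gz
cat payload.tar.gz?? | gunzip | tar xf - -C restore
cmp payload/big.bin restore/payload/big.bin && echo intact
# -> intact
```

`split` appends two-letter suffixes (aa, ab, ...), which is why the restore glob is `payload.tar.gz??`.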
Untar gzipped archives in a single pipeline:
nohup cat /backups/code_tree_backups/100906/920.gz | gunzip | tar xvf - > 920.log &
nohup cat /backups/code_tree_backups/100906/1159.gz | gunzip | tar xvf - > ApplTop.log &
nohup cat /backups/code_tree_backups/100906/806.gz | gunzip | tar xvf - > 806.log &
nohup cat /backups/code_tree_backups/100906/common.gz | gunzip | tar xvf - > common.log &
nohup cat /backups/code_tree_backups/100906/custom.gz | gunzip | tar xvf - > custom.log &
nohup cat /backups/code_tree_backups/100906/ias.gz | gunzip | tar xvf - > ias.log &
nohup tar -cvpf - /u01/oradata/ 2> /scratch/u01.log | gzip -c > /scratch/u01.gz &
nohup tar -cvpf - /u13/oradata/ 2>/scratch/u13.log|gzip -c>/scratch/u13.gz &
nohup tar -cvpf - PROD_LVL0_20070103_9ii6jsga_s16690_p11 PROD_LVL0_20070103_9ki6jsgb_s16692_p11 PROD_LVL0_20070103_9ni6jtlj_s16695_p11 PROD_LVL0_20070103_9ri6k05o_s16699_p11 PROD_LVL0_20070103_9si6k0ig_s16700_p11 PROD_LVL0_20070103_9ti6k0ku_s16701_p11 PROD_LVL0_20070103_9mi6jtl9_s16694_p11 c-620808606-20070104-00 snapcf_PROD.f 2>/backup/backup1.log | gzip -c >/backup/backup1.gzip &
nohup tar -zcvpf $BACKUP_DIR/u01.tar /u01/PROD/oradata/ >$BACKUP_DIR/u01.log 2>&1 &
nohup tar -zxvpf $BACKUP_DIR/u10.tar &
rcp command
rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstcomn/* . &
rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstora/8.0.6 . &
rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstora/iAS . &
To zip several files at a time
zip -r /tmp/`uname -n`_`date +%m%d%y.%H%M`_iAS.zip \
$IAS_ORACLE_HOME/Apache/Apache/conf/* \
$IAS_ORACLE_HOME/Apache/Apache/logs/* \
$IAS_ORACLE_HOME/Apache/Jserv/etc/* \
$IAS_ORACLE_HOME/Apache/Jserv/logs/*
R12 config files backup commands
as oratst user :
REFRESH=/clone/TST/REFRESH_FILES_DB.`date +%m%d%y`
rm -rf $REFRESH
mkdir -p $REFRESH
cp -p $ORACLE_HOME/*.env $REFRESH
cp -p $ORACLE_HOME/dbs/*.ora $REFRESH
cp -p $ORACLE_HOME/appsutil/*.xml $REFRESH
cp -p $TNS_ADMIN/*.ora $REFRESH
chmod 700 $REFRESH
=> as appltst user:
REFRESH=/clone/TST/REFRESH_FILES_APPS.`date +%m%d%y`
#rm -rf $REFRESH
mkdir -p $REFRESH/inst/apps
rsync -avH /d21/appltst/inst/apps/* $REFRESH/inst/apps/
chmod 700 $REFRESH
R12 config files backup commands
# more /vis2/clone/vis2.backup
echo su - applvis2
sleep 1
echo
echo backing up applications related config files .....................
sleep 2
echo
su - applvis2 -c /vis2/clone/vis2bkgenfiles/applvis2.bkp
sleep 1
echo su - oravis2
echo
sleep 2
echo backing up db related config files .....................
su - oravis2 -c /vis2/clone/vis2bkgenfiles/oravis2.bkp
sleep 1
echo
echo vis2 instance config files backup is complete and is located in /vis2/clone directory .
echo
# more /vis2/clone/vis2bkgenfiles/applvis2.bkp
REFRESH=/vis2/clone/REFRESH_FILES_APPS.`date +%m%d%y`
mkdir -p $REFRESH/inst/apps
rsync -avH /d11/applvis2/inst/apps/* $REFRESH/inst/apps/ --exclude "vis2_hostname1/logs/" --exclude "vis2_hostname2/logs/" --exclude "vis2_hostname3/appltmp/" --exclude "vis2_hostname4/appltmp/"
chmod 700 $REFRESH
# more /vis2/clone/vis2bkgenfiles/oravis2.bkp
REFRESH=/vis2/clone/REFRESH_FILES_DB.`date +%m%d%y`
rm -rf $REFRESH
mkdir -p $REFRESH
cp -p $ORACLE_HOME/*.env $REFRESH
cp -p $ORACLE_HOME/dbs/*.ora $REFRESH
cp -p $ORACLE_HOME/appsutil/*.xml $REFRESH
cp -p $TNS_ADMIN/*.ora $REFRESH
chmod 700 $REFRESH
exp " '/ as sysdba' " file=$REFRESH/msdem_vis2_hostname.dmp log=$REFRESH/msdem_vis2_hostname.log direct=y feedback=1000 owner=MSDEM
rsync
If restarted, rsync will pick up from where it failed:
rsync --exclude=ABc/ --log-file= -avH
Copy specific columns to your file
ls -l > s1.sh
cat s1.sh |awk '{print $9}' > s2.sh
ls *.dbf|wc -l --- number of files in the PWD
ls -l /data/ora007/od08oadb01/*dbf|awk '{print $9}' > 007.txt
Column 9 ($9) is redirected to the new file 007.txt.
# echo "$userid"
system/xxxxx
# echo "$userid"|awk 'BEGIN { FS = "/" } {print $2}'
xxxxx
date=`date +%F`
# date +%F
2010-09-19
# date
Sun Sep 19 22:49:21 EDT 2010
ls -l *dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}'
Column 5 ($5) is the size column; its total is printed in GB.
ls -l [a-g]*dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}'
Gives the total size of the .dbf files whose names start with a through g.
To find the total size of the datafiles matching [a-h]*dbf:
du -sk [a-h]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
du -sk [i]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
du -sk [j-z]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'
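The summing idiom in all three commands boils down to this awk pattern; the numbers below are arbitrary demo sizes in KB:

```shell
# Sum column 1 and convert the total from KB to MB, as the du pipelines do.
printf '1024\n2048\n1024\n' | awk 'BEGIN {total=0} {total+=$1} END {print total/1024}'
# -> 4
```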
ls -l | grep 'Jan 9' | awk '{print $9}' > one.sh   -- here 'Jan 9' is the day whose files we are compressing, and one.sh is the output file. Next open one.sh and do
:%s,^,compress ,g
OR
: %s,^,rm -rf ,g
Save the file, make it executable (chmod 777 one.sh), and run it.
To remove one month files at a time
ls -l|grep 'Aug '|awk '{print$9}' > one.sh
chmod 777 one.sh
vi one.sh
: %s,^,rm -f ,g
./one.sh
ls -l |grep 'Feb' |awk '{print$9}' > one.sh
ls -l |grep 'Feb' |grep '\.req' |awk '{print$9}' > one.sh
sdiff
sdiff -s -w 50 tb.txt sb.txt
50 is the output line width
diff $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf $HOME/httpd.conf.orig > /tmp/httpddiff.txt
SIZEDIFF=`ls -lA /tmp/httpddiff.txt | awk -F' ' '{print $5}'`
if [ $SIZEDIFF -gt 0 ]
then
mailx -s "VIS irecruitment setup is missing in PROD" emailaddress < /tmp/httpddiff.txt
mailx -s "vis irecruitment setup is missing in PROD" onemoreemailaddress < $HOME/irecmessage.txt
fi
Tuesday, September 14, 2010
OCM
Manual way of collection and upload
$ pwd
/d02/oravis/visdb/10.2.0/ccr/bin
$ ./emCCR collect
Oracle Configuration Manager - Release: 10.3.3.1.0 - Production
Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
------------------------------------------------------------------
Collection and upload done.
Verify in the Systems tab of MetaLink that the upload is done.
RDA
Review, download, and run the ./rda.sh script as described in Note 250262.1.
other notes for rda
Note: 314422.1 - Remote Diagnostic Agent (RDA) 4 - Getting Started
Note: 330363.1 - Remote Diagnostic Agent (RDA) 4 - FAQ
Note: 330344.1 - Remote Diagnostic Agent (RDA) 4 - Training
Note: 330362.1 - Remote Diagnostic Agent (RDA) 4 - Troubleshooting Guide
To simplify RDA configuration, use an RDA profile, and the best choice is the DB9i profile, even for 10g or 11g, because the DB10g profile is a superset of DB9i that adds stuff we don't care about (for DB Install). First set the ORA* environment variables for the DB of interest, then run the RDA config pass:
Unix:
cd {dir you unzipped RDA into}
export ORACLE_HOME={dir}
export ORACLE_SID={name}
./rda.sh -Syvp DB9i
Next, edit the resulting setup.cfg file, search for 'sysdba' (first hit will be a comment), and change the 2 lines directly above and below as follows:
for SQL_LOGIN= replace 'system' with '/' and for SQL_SYSDBA= replace 0 with 1
then save it, and run the (rda) command again, with no arguments this time:
Unix:
./rda.sh
The output subdirectory will contain a new RDA*.zip file for you to upload