tag:blogger.com,1999:blog-8118464380342497932024-03-19T09:34:12.330-07:00Oracle Apps DBA OceanOfKnowledgeRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.comBlogger43125tag:blogger.com,1999:blog-811846438034249793.post-3262367860561604842011-10-24T05:29:00.000-07:002011-10-24T06:22:07.757-07:00Oracle Business Intelligence Applications (OBIA 7.9.6.3) overview<strong>Oracle Business Intelligence Applications (OBIA)</strong> are complete, prebuilt BI solutions that deliver intuitive, role-based intelligence for everyone in an organization, from front-line employees to senior management, enabling better decisions, actions, and business processes. Designed for heterogeneous environments, these solutions enable organizations to gain insight from a range of data sources and applications, including Siebel, Oracle E-Business Suite, PeopleSoft, and third-party systems such as SAP. <br /><br />Oracle BI Applications are built on Oracle BI Suite Enterprise Edition, a comprehensive, innovative, and leading BI platform. 
This enables organizations to realize the value of a packaged BI Application, such as rapid deployment, lower TCO, and built-in best practices, while also being able to very easily extend those solutions to meet their specific needs, or build completely custom BI applications, all on one common BI architecture.<br /><br />Oracle BI Applications includes the following:<br />■ Oracle Financial Analytics<br />■ Oracle Human Resources Analytics<br />■ Oracle Supply Chain and Order Management Analytics<br />■ Oracle Procurement and Spend Analytics<br />■ Oracle Project Analytics<br />■ Oracle Sales Analytics<br />■ Oracle Service Analytics<br />■ Oracle Contact Center Telephony Analytics<br />■ Oracle Marketing Analytics<br />■ Oracle Loyalty Analytics<br />■ Oracle Price Analytics<br />and more<br /><br />Oracle BI Applications is a prebuilt business intelligence solution.<br />Oracle BI Applications supports Oracle sources, such as Oracle E-Business Suite Applications, Oracle's Siebel Applications, Oracle's PeopleSoft Applications, Oracle's JD Edwards Applications, and non-Oracle sources, such as SAP Applications. If you already own one of the above applications, you can purchase Oracle Business Intelligence Enterprise Edition and Oracle BI Applications to work with the application.<br />Oracle BI Applications also provides complete support for enterprise data, including financial, supply chain, workforce, and procurement and spend sources. 
These enterprise applications typically source from both Oracle data sources, such as Oracle EBS and PeopleSoft and non-Oracle data sources, such as SAP<br /><br /><br /><strong>Topology for an Oracle BI Applications Deployment</strong> <br /><br />Oracle BI Applications and Informatica PowerCenter can be deployed flexibly across a wide range of topologies on different platforms and combinations of platforms.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpUPHmjlfgoKS1Cc9DhnypuCalRygNzeYHvP86gRE2ssNEil_Q-G9kqaY80x5bLPdTWk-onQQrkLa3R6nc6oMhT2O99Yu5hyphenhyphenBWUht7M8LdSaHWXRMw1oRv-nXrwZrYkt68LSowL5MPZ7k/s1600/Topology+of+OBIA+deployment.png"><img style="cursor:pointer; cursor:hand;width: 290px; height: 320px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpUPHmjlfgoKS1Cc9DhnypuCalRygNzeYHvP86gRE2ssNEil_Q-G9kqaY80x5bLPdTWk-onQQrkLa3R6nc6oMhT2O99Yu5hyphenhyphenBWUht7M8LdSaHWXRMw1oRv-nXrwZrYkt68LSowL5MPZ7k/s320/Topology+of+OBIA+deployment.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5666982004073255250" /></a><br /><br />Machine A (Windows-only)<br />Machine A is a machine that has installed Oracle Business Intelligence Enterprise Edition, on which you run the Oracle BI Applications installer to install the Oracle BI Applications files.<br />Note: The instance of Oracle Business Intelligence Enterprise Edition does not need to be the functional version of Oracle Business Intelligence Enterprise Edition that you will use to deploy dashboards in your live system. 
This instance is only required to enable the Oracle BI Applications installer to install the Oracle BI Applications files on a machine<br />After the Oracle BI Applications files have been installed on Machine A, the DAC Client is installed on Machine B, and the DAC Server is installed on Machine C.<br />In addition, the following files are copied from the installation machine (Machine A) to the Business Intelligence Deployment Tier (Machine Group F) as follows:<br />- The OracleBI\Server\Repository\OracleBIAnalyticsApps.rpd file is copied from Machine A to the machine that runs the BI Server in Machine Group F.<br />- The OracleBIData\Web\Catalog\EnterpriseBusinessAnalytics\*.* files are copied from Machine A to the machine that runs the BI Presentation Services Catalog in Machine Group F.<br /><br /><br />• ETL Tier (Functional)<br />o Machine B (Windows-only)<br />Runs the DAC Client and Informatica PowerCenter Client Tools.<br />o Machine C (Windows, UNIX, Linux)<br />Runs the DAC Server and Informatica PowerCenter Services.<br />o Machine D (Windows, UNIX, Linux)<br />Hosts the transactional (OLTP) database.<br />o Machine E (Windows, UNIX, Linux)<br />Hosts the Oracle Business Analytics Warehouse database<br /><br /><br />• BI Deployment Tier (Functional)<br />The BI Deployment tier is used to deploy the business intelligence dashboards.<br />o Machine Group F (Windows, UNIX, Linux)<br />Machine Group F is a group of machines that runs the Oracle Business Intelligence Enterprise Edition components. For example, one machine might run the BI Server and another machine might run the BI Presentation Services<br /><br /><br /><strong>Oracle Business Analytics warehouse architecture when deployed with Informatica PowerCenter and DAC</strong> <br /><br />High-level analytical queries, like those commonly used in Oracle Business Intelligence, scan and analyze large volumes of data using complex formulas. 
This process can take a long time when querying a transactional database, which impacts overall system performance.<br />For this reason, the Oracle Business Analytics Warehouse was constructed using dimensional modeling techniques to allow for fast access to information required for decision making. The Oracle Business Analytics Warehouse derives its data from operational applications and uses Informatica PowerCenter to extract, transform, and load data from various supported transactional database systems (OLTP) into the Oracle Business Analytics Warehouse.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEtLRVXby7pwUxycRzJFMhlON3yMAV8_vo_lSYXk3QnnHA4FP-n5e06cVSVmc2polCM1EnIp0FpSakSNBlEWFAA6UNs8U0PEP5otZvzTCFkBKKeXynQctxOGLe07fHN3B3RcT5Z2Vr2aY/s1600/obia+acrchitecture.png"><img style="cursor:pointer; cursor:hand;width: 320px; height: 280px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEtLRVXby7pwUxycRzJFMhlON3yMAV8_vo_lSYXk3QnnHA4FP-n5e06cVSVmc2polCM1EnIp0FpSakSNBlEWFAA6UNs8U0PEP5otZvzTCFkBKKeXynQctxOGLe07fHN3B3RcT5Z2Vr2aY/s320/obia+acrchitecture.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5666984341787512738" /></a><br /><br />• The Client tier contains the Informatica PowerCenter client tools and the Oracle BI Data Warehouse Administration Console (DAC). DAC is a command and control interface for the data warehouse to allow for set up, configuration, administration, and monitoring of data warehouse processes.<br /><br />• The Server tier contains the following:<br />o DAC Server. Executes the instructions from the DAC Client. The DAC Server manages data warehouse processes, including scheduling, loading of the ETL, and configuring the subject areas to be loaded. It dynamically adjusts its actions based on information in the DAC Repository. 
Depending on your business needs, you might incrementally refresh the Oracle Business Analytics Warehouse once a day, once a week, once a month, or on another similar schedule.<br />o DAC Repository. Stores the metadata (semantics of the Oracle Business Analytics Warehouse) that represents the data warehouse processes.<br />o Informatica PowerCenter Services:<br /> Integration Services - The Integration Service reads workflow information from the repository. The Integration Service connects to the repository through the Repository Service to fetch metadata from the repository.<br /> Repository Services - The Repository Service manages connections to the PowerCenter Repository from client applications. The Repository Service is a separate, multi-threaded process that retrieves, inserts, and updates metadata in the repository database tables.<br />o Informatica Repository. Stores the metadata related to Informatica workflows.<br />• The Database tier contains the OLTP and OLAP databases.<br /><br /><br />The Informatica Repository stores all of the Informatica object definitions for the ETL mappings that populate the Oracle Business Analytics Warehouse. 
It is a series of repository tables that are stored in a database, which can be a transactional, analytical, or separate database<br /><br /><strong>Oracle BI Applications Components (with Informatica/DAC)</strong><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6YK2kCo0tClgEbT1a_-ernDgVvcPxlULkpFtHcTV2pYiEoJ2GNHgqyFzuXK5pMtF5O2Sq5oJrEWADiIELsEWj5bHwKfeEZhU7Hcu71pRNlDYC6tqhIl9Xo8nRBaEse2jYg4lEuOks3jE/s1600/tabimage.JPG"><img style="cursor:pointer; cursor:hand;width: 320px; height: 128px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6YK2kCo0tClgEbT1a_-ernDgVvcPxlULkpFtHcTV2pYiEoJ2GNHgqyFzuXK5pMtF5O2Sq5oJrEWADiIELsEWj5bHwKfeEZhU7Hcu71pRNlDYC6tqhIl9Xo8nRBaEse2jYg4lEuOks3jE/s320/tabimage.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5667037704159569970" /></a><br /><br /><br /><span style="font-weight:bold;">Oracle Business Analytics Warehouse Overview</span><br />The Oracle Business Analytics Warehouse is a unified data repository for all customer-centric data, which supports the analytical requirements of the supported source systems.<br />The Oracle Business Analytics Warehouse includes the following:<br />• A complete relational enterprise data warehouse data model with numerous prebuilt star schemas encompassing many conformed dimensions and several hundred fact tables.<br /><br />• An open architecture to allow organizations to use third-party analytical tools in conjunction with the Oracle Business Analytics Warehouse using the Oracle Business Intelligence Server<br /><br />• A set of ETL (extract-transform-load) processes that takes data from a wide range of source systems and creates the Oracle Business Analytics Warehouse tables.<br /><br />• The Oracle Business Intelligence Data Warehouse Administration Console (DAC), a centralized console for the set up, configuration, administration, loading, and monitoring of the Oracle Business Analytics 
Warehouse<br /><br /><span style="font-weight:bold;">Important points</span><br /><br />■ The metadata for a source system is held in a container.<br />■ The PowerCenter Services can be installed on UNIX or on Windows.<br />■ The PowerCenter Client Tools must be installed on Windows.<br />■ You must co-locate the DAC Client with the Informatica PowerCenter Client Tools.<br />■ You must co-locate the DAC Server with Informatica PowerCenter Services.<br />DAC produces parameter files that are used by Informatica. If an execution plan fails in DAC and you want to debug the workflow by running it directly from Informatica, then the parameter file produced by DAC must be visible to Informatica. This is one reason for the requirement to co-locate the DAC and Informatica components as stated above.<br />The DAC installer installs the DAC Client and DAC Server on Windows.<br /><br />The DAC Client runs only on Windows.<br />The DAC Server runs on Windows, UNIX, and Linux.<br />The DAC Server can run on Linux, but it must first be installed on a Windows machine and then copied over to the Linux machine. Oracle does not provide an installer for DAC on UNIX.<br /><br />The DAC Client can only be installed and run on Windows.<br />■ The DAC Client must be installed on the machine where the Informatica PowerCenter Client Tools were installed.<br />■ The DAC Server must be installed on the machine where Informatica PowerCenter Services was installed.<br />■ You must install Informatica PowerCenter Services before you install DAC.<br />■ The correct version of the JDK is installed by the DAC installer.<br />■ The DAC installer installs DAC in the DAC_HOME\bifoundation\dac directory.<br /><br /><br />The DAC Client uses the Informatica pmrep and pmcmd command line programs when communicating with Informatica PowerCenter. 
The DAC Client uses pmrep to synchronize DAC tasks with Informatica workflows and to keep the DAC task source and target table information up to date.<br />In order for the DAC Client to be able to use the pmrep and pmcmd programs, the path of the Informatica domains file 'domains.infa' must be defined in the environment variables on the DAC Client machine.<br />When you install DAC using the DAC installer, the Informatica domains file is defined in the environment variables on the DAC Client machine, for example:<br /><br />INFA_DOMAINS_FILE<br />C:\Informatica\9.0.1\clients\PowerCenterClient\domains.infa<br />The DAC Server uses the following command line programs to communicate with Informatica PowerCenter:<br />■ pmrep is used to communicate with PowerCenter Repository Services.<br />■ pmcmd is used to communicate with PowerCenter Integration Services to run the Informatica workflows.<br /><br />The pmrep and pmcmd programs are installed during the PowerCenter Services installation in the INFA_HOME\server\bin directory on the Informatica PowerCenter Services machine.<br /><br /><br />■ Oracle Business Intelligence Applications V7.9.6.3 requires Oracle Business Intelligence Enterprise Edition V11.1.1.5.0.<br />■ Oracle Business Intelligence Applications V7.9.6.3 requires Informatica PowerCenter V9.0.1 Hotfix 2.<br />■ Oracle Business Intelligence Applications V7.9.6.3 requires Oracle Data Warehouse Administration Console (DAC) V10.1.3.4.1.<br /><br />The OBIEE server uses the metadata to generate the SQL queries. The metadata is stored in the repository, often referred to as the .rpd file. 
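The INFA_DOMAINS_FILE requirement described above lends itself to a quick preflight check. The sketch below assumes a UNIX-style DAC host and a purely hypothetical install path; on Windows the variable is instead set through the System environment variables dialog.

```shell
#!/bin/sh
# Preflight sketch: before the DAC Client calls pmrep/pmcmd,
# the INFA_DOMAINS_FILE variable must be set so Informatica can
# locate domains.infa. The path below is a hypothetical example,
# not a required location.
check_infa_env() {
  if [ -z "$INFA_DOMAINS_FILE" ]; then
    echo "ERROR: INFA_DOMAINS_FILE is not set" >&2
    return 1
  fi
  echo "INFA_DOMAINS_FILE=$INFA_DOMAINS_FILE"
}

# Example: point the variable at a (hypothetical) domains.infa
INFA_DOMAINS_FILE=/opt/informatica/9.0.1/clients/PowerCenterClient/domains.infa
export INFA_DOMAINS_FILE
check_infa_env
```

A fuller check might also verify that pmrep and pmcmd are on the PATH before kicking off an execution plan.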
The repository has three layers, as shown below: the physical layer, the business model (logical) layer, and the presentation layer.<br /><br />The metadata maps the OBAW physical tables to a generic business model and includes more than 100 presentation catalogs (aka subject areas) to allow queries using Oracle BI clients such as Answers and Dashboards. <br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1YTQevkkY83LG7FN6vuyDZINqnzYPQxYSrSdf_39ClkbVMxDghQyxeN47L360-XI7sSX_C6JOw41fxXh4Vu7z2UszSLNomgflipHqcL9_XAKBXkNnlhnwpfRhDrUoYPSQdqQ8hw6ZSSI/s1600/layers.gif"><img style="cursor:pointer; cursor:hand;width: 320px; height: 191px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1YTQevkkY83LG7FN6vuyDZINqnzYPQxYSrSdf_39ClkbVMxDghQyxeN47L360-XI7sSX_C6JOw41fxXh4Vu7z2UszSLNomgflipHqcL9_XAKBXkNnlhnwpfRhDrUoYPSQdqQ8hw6ZSSI/s320/layers.gif" border="0" alt="" id="BLOGGER_PHOTO_ID_5667041830416430802" /></a><br /><br /><br /><span style="font-weight:bold;">High Level Data Flow</span><br />• Source – eBS (Raw Data)<br />• ETL – Extract, Transform, and Load (Informatica PowerCenter or ODI)<br />• OBAW – Business Analytics Warehouse<br />• OBIEE Metadata<br />• OBIEE Content – Reports and Dashboards<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1Xez-nOzLvQUrKW6oDwRig6lZe3WkPUuwAfSr1j5uk7QuRYe1hw3sPx-EoVrS6zesFClR5ZItVtyButtrtgZLICJXgyCgPTlmZjOHSkUKjnzY59sA43-rNgPJ_fAcEaxR_WLE2Q8dzeg/s1600/flow.gif"><img style="cursor:pointer; cursor:hand;width: 320px; height: 122px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1Xez-nOzLvQUrKW6oDwRig6lZe3WkPUuwAfSr1j5uk7QuRYe1hw3sPx-EoVrS6zesFClR5ZItVtyButtrtgZLICJXgyCgPTlmZjOHSkUKjnzY59sA43-rNgPJ_fAcEaxR_WLE2Q8dzeg/s320/flow.gif" border="0" alt="" id="BLOGGER_PHOTO_ID_5667045525087044386" /></a><br /><br /><br />• A prebuilt Informatica 
repository which includes mappings (wrapped in workflows) to extract data from the supported source systems (various versions of standard applications such as Siebel CRM - yes it was first - , Oracle eBusiness Suite, Peoplesoft, JDEdwards and SAP (BI Apps version 7.8.4) ) and load the data into the Oracle Business Analysis Warehouse. (Note: Oracle BI Applications version 7.9.5.2 includes a repository for the Oracle Data Integrator instead of Informatica but supports only Oracle Financials).<br /><br /><br />• The Oracle Business Analysis Warehouse (OBAW), a prebuilt schema (stars that is) which serves as a turnkey data warehouse including dozens of stars and accompanying indexes, aggregates, time dimensions and slowly changing dimension handling (and yes, it can be optimized).<br />• A prebuilt repository for the Data Warehouse Administration Console (DAC) server which is the orchestration engine behind the ETL process. The DAC client and server were included until version 7.9.5. Since then it is a separate installer.<br />• A prebuilt Oracle BI Server repository (rpd file) which maps the OBAW physical tables to a generic business model and includes more than 100 presentation catalogs (aka subject areas) to allow queries and segmentation using Oracle BI clients such as Answers, Dashboards and Segment Designer. 
Did I mention that it also takes care of authentication and data security?<br />• A prebuilt presentation catalog (repository) containing hundreds of requests and ready-to-use dashboards, which enable tight integration between the source applications and the BI infrastructure (example: click the Service Analytics screen in Siebel CRM, work with the dashboard data, drill down to details, and click an action button to navigate back to the Siebel CRM record).<br /><br />Oracle BI EE 10g is the successor of Siebel Analytics, and Oracle BI Applications is the successor of Siebel Analytics Applications.<br /><br /><br /><span style="font-weight:bold;">Detailed Data flow</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJYzwLhfGdHWGQe69agV2bONu21w4lheEHdyT8E_l9x_A1BEbgsrc_nhzOmrsTt2DkrrE1IksUzALkdHVuo9iQe6tpLtKDK-H3rf38MPsGB3hKc8X-6TUG4wWx5L0zYfIAkt0FjHbiWr4/s1600/dataflow.gif"><img style="cursor:pointer; cursor:hand;width: 320px; height: 320px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJYzwLhfGdHWGQe69agV2bONu21w4lheEHdyT8E_l9x_A1BEbgsrc_nhzOmrsTt2DkrrE1IksUzALkdHVuo9iQe6tpLtKDK-H3rf38MPsGB3hKc8X-6TUG4wWx5L0zYfIAkt0FjHbiWr4/s320/dataflow.gif" border="0" alt="" id="BLOGGER_PHOTO_ID_5667046866758657154" /></a><br /><br />First, the DAC scheduler kicks off jobs to load/refresh the OBAW at regular intervals; alternatively, these jobs can be kicked off manually from the DAC client.<br />The DAC server uses the object and configuration data stored in the DAC repository to issue commands to the Informatica server.<br />The Informatica server executes the commands issued from DAC, using the objects and configuration data stored in the Informatica repository.<br />The data are then extracted, transformed, and loaded from the transactional databases into the OBAW target tables.<br />After the ETL is complete and the OBAW is online, an OBIEE end user runs a 
dashboard or report in Answers or Interactive Dashboards.<br />The request comes through the web server and interacts with the presentation server.<br />The presentation server interacts with the OBI Server; the OBI Server interprets the request and, if the results are not cached, queries the OBAW, extracts the data, and provides it to the presentation server.<br />The presentation server formats the data as required and, through the web server, presents the result to the end user.<br /><br /><br /><span style="font-weight:bold;"><br />ETL process</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCidWB6f3Q3vmhq1K-s3vDijX9FYkOE5uS5uWzo7rFztvpzgPcvvqPglaROpGXm_VKC6uxHlDN2IPkxPPqT_Evf9cIfo-j1ecBvjPjceSO6X9Uy48HBvtv1bMRjL0G7uZdh5-VFsXiXWU/s1600/etl.gif"><img style="cursor:pointer; cursor:hand;width: 230px; height: 320px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCidWB6f3Q3vmhq1K-s3vDijX9FYkOE5uS5uWzo7rFztvpzgPcvvqPglaROpGXm_VKC6uxHlDN2IPkxPPqT_Evf9cIfo-j1ecBvjPjceSO6X9Uy48HBvtv1bMRjL0G7uZdh5-VFsXiXWU/s320/etl.gif" border="0" alt="" id="BLOGGER_PHOTO_ID_5667047945632657826" /></a><br /><br /><br />1. ETL mappings are split into two main types: SDE mappings and SIL mappings.<br />2. SDE mappings load the staging tables; then SIL mappings (SILOS, SIL_Vert, PLP) load the final physical warehouse tables.<br />3. SILOS SIL mappings are for all sources except Siebel Verticals. 
SIL_Vert SIL mappings are for Siebel Verticals only.<br />4. Staging tables are suffixed with S, so W_AP_XACTS_F is the final table and the staging table is W_AP_XACTS_FS.<br /><br /><br />DB Recovery (2010-12-06)<br /><strong><strong>Incomplete recovery of the database (UNTIL CANCEL) to a point in time, from a hot snap backup and archived logs</strong></strong><br /><br /><br /><br />SQL>startup mount<br /><br />SQL> recover database using backup controlfile until CANCEL<br />ORA-00279: change 5965295809608 generated at 12/01/2010 17:54:21 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc<br />ORA-00280: change 5965295809608 for thread 1 is in sequence #6134<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br />/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc<br />ORA-00279: change 5965295965197 generated at 12/02/2010 00:00:16 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc<br />ORA-00280: change 5965295965197 for thread 1 is in sequence #6135<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006134_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br />/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc<br />ORA-00279: change 5965295965276 generated at 12/02/2010 00:00:36 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc<br />ORA-00280: change 5965295965276 for thread 1 is in sequence #6136<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006135_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: 
{<RET>=suggested | filename | AUTO | CANCEL}<br />/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc<br /><br /><br />ORA-00279: change 5965296032263 generated at 12/02/2010 04:00:15 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc<br />ORA-00280: change 5965296032263 for thread 1 is in sequence #6137<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006136_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br /><br /><br />/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc<br />ORA-00279: change 5965296032275 generated at 12/02/2010 04:00:19 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc<br />ORA-00280: change 5965296032275 for thread 1 is in sequence #6138<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006137_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br /><br />/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc<br />ORA-00279: change 5965296081988 generated at 12/02/2010 08:00:15 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc<br />ORA-00280: change 5965296081988 for thread 1 is in sequence #6139<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006138_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br />/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc<br />ORA-00279: change 5965296081997 generated at 12/02/2010 08:00:19 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc<br />ORA-00280: change 5965296081997 for thread 1 is in sequence #6140<br 
/>ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006139_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br />/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc<br />ORA-00279: change 5965296141979 generated at 12/02/2010 12:00:15 needed for<br />thread 1<br />ORA-00289: suggestion :<br />/oraarchive/vis/abcd001k_VIS_0001_0000006141_697765707.arc<br />ORA-00280: change 5965296141979 for thread 1 is in sequence #6141<br />ORA-00278: log file<br />'/oraarchive/vis/abcd001k_VIS_0001_0000006140_697765707.arc' no longer<br />needed for this recovery<br /><br /><br />Specify log: {<RET>=suggested | filename | AUTO | CANCEL}<br />CANCEL<br />ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below<br />ORA-01194: file 1 needs more recovery to be consistent<br />ORA-01110: data file 1: '/d03/oraVIS/vis/db/apps_st/data/system01.dbf'<br /><br /><br />To force the database open anyway, do the following:<br /><br />1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.<br />2) Startup mount.<br />3) Alter database open resetlogs.<br />4) Set undo_management to "MANUAL" in the init.ora file.<br />5) Start up the database.<br />6) Create a new undo tablespace, then change undo_management back to "AUTO" and undo_tablespace to the new tablespace name.<br />7) Bounce the database.<br /><br /><br />ASM (2010-09-17)<br /><strong><strong>Adding a datafile</strong></strong><br /><br />ALTER TABLESPACE TDW_PROD ADD DATAFILE '+DATA_DISKGRP/bivis/datafile/bivis_28.dbf' SIZE 1024M AUTOEXTEND ON NEXT<br /> 1024M MAXSIZE UNLIMITED<br /><br /><strong><strong>Queries</strong></strong><br />select NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 from v$ASM_DISK;<br /><br />select NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 from 
v$ASM_DISKGROUP;<br /><br />select NAME,TOTAL_MB,FREE_MB,TOTAL_MB-FREE_MB,FREE_MB/TOTAL_MB*100 from v$ASM_DISK;<br /><br />select NAME, REDUNDANCY, FAILGROUP from V$ASM_DISK;<br /><br />select operation from v$ASM_OPERATION;<br /><br />select name,path,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,total_mb,free_mb from v$asm_disk;<br /><br /><strong>Dropping/adding/checking disks</strong><br />ALTER DISKGROUP <disk group name> CHECK ALL;<br />ALTER DISKGROUP <disk group name> DROP DISK <disk name>; This command drops the disk and also initiates a REBAL operation.<br />We can monitor the status of the REBAL operation from v$asm_operation.<br />Run select * from v$asm_disk to see the different statuses of hdisk8.<br />e.g.:<br />ALTER DISKGROUP DATA_DISKGRP DROP DISK DATA_DISKGRP_0004;<br /><br />DATA_DISKGRP_0004 is an alias for /dev/rhdisk8.<br /><br />SQL> select OPERATION from v$asm_operation;<br /> <br />OPERA<br />-----<br />REBAL<br /><br /><br />ALTER DISKGROUP DATA_DISKGRP ADD DISK '<disk path>' NAME DATA_DISKGRP_004 REBALANCE POWER 5;<br /><br /><br /><strong><strong>This query gives more details about disk usage.</strong></strong> <br />SET LINESIZE 145 <br />SET PAGESIZE 9999 <br />SET VERIFY off <br />COLUMN disk_group_name FORMAT a15 HEAD 'Disk Group Name' <br />COLUMN disk_file_path FORMAT a17 HEAD 'Path' <br />COLUMN disk_file_name FORMAT a20 HEAD 'File Name' <br />COLUMN disk_file_fail_group FORMAT a20 HEAD 'Fail Group' <br />COLUMN total_mb FORMAT 999,999,999 HEAD 'File Size (MB)' <br />COLUMN used_mb FORMAT 999,999,999 HEAD 'Used Size (MB)' <br />COLUMN pct_used FORMAT 999.99 HEAD 'Pct. 
Used' <br />break on report on disk_group_name skip 1 <br />compute sum label "" of total_mb used_mb on disk_group_name <br />compute sum label "Grand Total: " of total_mb used_mb on report <br />SELECT <br /> NVL(a.name, '[CANDIDATE]') disk_group_name <br /> , b.path disk_file_path <br /> , b.name disk_file_name <br /> , b.failgroup disk_file_fail_group <br /> , b.total_mb total_mb <br /> , (b.total_mb - b.free_mb) used_mb <br /> , ROUND((1- (b.free_mb / b.total_mb))*100, 2) pct_used <br />FROM <br /> v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number) <br />ORDER BY <br /> a.name <br />/<br /><br /><br />Issues -- Forms (2010-09-17)<br /><strong><strong>Fixing issues with custom forms in the Forms Servlet, using Doc ID 269634.1</strong></strong><br />You need to put your CUSTOM_TOP variable (where the form resides) and its path into the Apache/Jserv/etc/formservlet.ini file. <br />Open formservlet.ini and add this entry at the end: <br /># Begin customizations <br />XXTAC_TOP=/d01/app/applmgr/prodappl/xxtac/11.5.0 <br /># End customizations<br /><br /><br />UNIX (2010-09-15)<br /><strong><strong>ls command</strong></strong><br /><br /><strong><strong>Copy specific columns to a file</strong></strong> <br />ls -l > s1.sh<br /> cat s1.sh |awk '{print $9}' > s2.sh<br /><br /><br />ls *.dbf|wc -l --- number of files in the PWD<br />ls -l /data/ora007/od08oadb01/*dbf|awk '{print $9}' > 007.txt <br />$9 is the column number; column 9 is redirected to the new file 007.txt<br /><br /># echo "$userid"<br /> system/xxxxx<br /># echo "$userid"|awk 'BEGIN { FS = "/" } {print $2}'<br 
/>xxxxx<br /><br />date=`date +%F`<br /># date +%F<br />2010-09-19<br /># date<br />Sun Sep 19 22:49:21 EDT 2010<br /><br />ls -l *dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}' <br />$5 is the column number<br />the total of column five (file size) is printed in GB<br /><br />ls -l [a-g]*dbf|awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024/1024}' <br />gives the total size of the .dbf files whose names start with a through g<br /><br /><strong><strong>To find out the size of the datafiles</strong></strong> <br />from [a-h]*dbf<br /><br />du -sk [a-h]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'<br />du -sk [i]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'<br />du -sk [j-z]*dbf |awk 'BEGIN {total=0} {total+=$1} END {print total/1024/1024}'<br /><br /><br /> ls -l | grep 'Jan 9' | awk '{print$9}' >one.sh ------- here 'Jan 9' is the day whose files we are compressing, and one.sh is the output file name. Next open one.sh and do <br /> :%s,^,compress ,g <br /> OR <br /> : %s,^,rm -rf ,g <br /> then save the file, make it executable (chmod 777 one.sh), and run it <br /><br /><br /> <strong><strong>To remove one month's files at a time</strong></strong> <br /> ls -l|grep 'Aug '|awk '{print$9}' > one.sh <br /> chmod 777 one.sh <br /> vi one.sh <br /> : %s,^,rm -f ,g <br /> ./one.sh<br /><br />ls -l |grep 'Feb' |awk '{print$9}' > one.sh<br /><br />ls -l |grep 'Feb' |grep '\.req' |awk '{print$9}' > one.sh<br /><br /><br /><strong><strong>sdiff</strong></strong><br />sdiff -s -w 50 tb.txt sb.txt<br />50 is the output width<br /><br />diff $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf $HOME/httpd.conf.orig > /tmp/httpddiff.txt<br />SIZEDIFF=`ls -lA /tmp/httpddiff.txt | awk -F' ' '{print $5}'`<br />if [ $SIZEDIFF -gt 0 ]<br />then<br />mailx -s "VIS irecruitment setup is missing in PROD" emailaddress < /tmp/httpddiff.txt<br />mailx -s "VIS irecruitment setup is missing in PROD" onemoreemailaddress < $HOME/irecmessage.txt<br />fi<br /><br />eg of mailx 
command:<br />cat /tmp/mailbody | mail -s "Ignore Alerts: ABC PREPROD Apachebounce on hostname" abcd@abc.ice.com -- -r alertfromserver@abc.ice.com < a<br /><br /><br /><strong><strong> to check any errors present in the file</strong></strong><br />cat /oracle/qa/qaappl/admin/QA/log/u2819091.log |grep -i ORA- <br /><br /><strong><strong>VNC SERVER</strong></strong><br />vncserver -geometry 1024x1024 -depth 8 -cc 3<br />vncserver :49 -nolisten local -httpd ${VNC}/classes -cc 3<br />vncserver :99 -geometry 1024x1024<br /><br />You can kill sessions you started.<br />$ vncserver -kill :[display number]<br /><br />$ vncserver -kill :0<br />Wait for 5 mins and delete the file as follows:<br />$ rm /tmp/X11-unix/X0<br />login as appl user for the instance refreshed and start the vnc services as follows:<br />$ vncserver :0 -localhost<br />$ xhost +<br /><br />To set your password the first time (or reset it later) run this command:<br />$ /usr/bin/vncpasswd<br /><br /><br /><br /><strong><strong>strings</strong></strong><br />strings -a hrrunprc.sql | grep Header<br /><br /><br /><strong><strong>To create a directory, change ownership, create symbolic link</strong></strong><br />mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/11510<br />mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/common<br />mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/CUSTOM<br />mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/product/806<br />mkdir -p /u20/app/KEYCS1/apkeycs1/applmgr/product/iAS<br />mkdir -p /u20/app/KEYCS1/orkeycs1/oracle/product<br />mkdir -p /u22/oradata/KEYCS1/oradata/data01<br />mkdir -p /u22/oradata/KEYCS1/oradata/data02<br />mkdir -p /u22/oradata/KEYCS1/oradata/data03<br /><br />ln -s /u20/app/KEYCS1/apkeycss1/applmgr applmgr<br />ln -s /u20/app/KEYCSS1/orkeycss1/oracle oracle<br />ln -s /u22/oradata/KEYCSS1/oradata oradata<br /><br />chown apkeycs1:oaa applmgr<br />chown orkeycs1:dba oracle<br />chown orkeycs1:dba oradata<br /><br /><br /><strong><strong>Changing group for 
user</strong></strong><br /># usermod -g dba ora10g<br />Check the group created in "/etc/group"<br /># cat /etc/group |grep dba<br />dba::110:<br /><br /><br />useradd -c "App user KEYCS1" -d /u20/app/KEYCS1/apkeycs1 -g oaa apkeycs1<br />useradd -c "Ora user KEYCS1" -d /u20/app/KEYCS1/orkeycs1 -g dba orkeycs1<br /><br />chown -R apkeycs1:oaa /u20/app/KEYCS1/apkeycs1<br />chown -R orkeycs1:dba /u20/app/KEYCS1/orkeycs1<br />chown -R orkeycs1:dba /u22/oradata/KEYCS1<br /><br /><br /><strong><strong>Commands useful to clear space issues in mount points</strong></strong><br /><br />bdf|awk '{print $5,$6}' > one.txt<br /><br />find . -type f -mtime +5<br />find . -type f -mtime +5 -exec rm -f {} \;<br />find . -type f -mtime +5|xargs rm -f<br />find . -name '*.req' -mtime +7 -exec compress -f {} \;<br />find . -name '*.req' -mtime +30 -exec rm -f {} \;<br />find . -name '*.req' -mtime +7 -exec gzip {} \;<br />find . -name "*" -mtime +20 -exec rm {} \;<br />find . -name "*" -mtime +30 -exec ls -la {} \;|awk '{print $5}'>a.sh<br />find . -size +500000000 -exec ls -al {} \;<br />find . -name "*" -mtime +20 -exec rm {} \;<br />find . -size +52428800 -exec ls -al {} \;<br />find . -size +100000000c -exec ls -l {} \; <br />$du -sh <br />$du -sk<br />find /eadvi4/applmgr/common/admin/log/EADVI4_hostname \( -name '*.*' \) -mtime +120 -exec rm -rf {} \;<br /><br />delete files in /u03/applmgr/prod/TEMPS and /u03/applmgr/qlty/TEMPS older than 15 days: <br /> find /u03/applmgr/prod/TEMPS/ -mtime +15 -exec rm -f {} \;<br /> find /u03/applmgr/qlty/TEMPS/ -mtime +15 -exec rm -f {} \;<br /><br />find . 
-name "*.dbg" -mtime +1 -exec rm -f {} \;<br /><br />Script to delete dump files more than 21 days old; needs to be in cron <br />30 20 * * * find /ORACLE/eul10g/backup \( -name '*.dmp*' \) -mtime +21 -exec rm -f {} \; ---- As oracle user cron.<br /><br /><strong><strong>To find symbolic links and replace them with correct ones</strong></strong><br />more rename_softlinks.sh<br />#!/bin/ksh<br />cd $ORACLE_HOME<br />find . -type l -exec ls -l {} \; | grep prod | awk '{gsub("prod","qa",$11); print "rm "$9; print "ln -s " $11 " " $9}'<br /><br /><strong><strong>To find links under any top</strong></strong><br />find . -type l -print | xargs ls -ld<br /><br /><strong><strong>Recreate soft links in $CUSTOM_TOP/bin</strong></strong><br />unlink visSQL <br />ln -s ../../../fnd/11.5.0/bin/fndcpesr visSQL <br /><br /><br /><strong><strong>How to find the symbolic links that point to the old path in your oracle_home and appl_top.</strong></strong><br />This command is useful after a clone (restore from source to target) to verify that symbolic links are not still pointing to the source. <br /><br />ls -al `find . -type l` | grep $OLD_PATH <br /><br /><strong><strong>To find all the text files that contain the old path at the UNIX level.</strong></strong> <br /><br />find . -type f -print|grep -v ":"|xargs file|grep "text"|awk '{FS=":"}{print $1}'|xargs grep -il $OLD_PATH <br /><br /><strong><strong>How to Sort files based on Size of file in a Directory</strong></strong><br />Useful in finding out space issues <br />ls -l | sort -nrk 5 | more <br /><br /><strong><strong>How to find a "word" or pattern in all files in a directory & subdirectories</strong></strong><br />find . -name "*" -exec grep -l <pattern> {} \; -print <br />for example I want to search for the word oracle <br />find . -name "*" -exec grep -l oracle {} \; -print <br /><br /><br /><strong><strong>How to delete files older than N days, useful to delete old log files</strong></strong><br />find . 
-name '*.*' -mtime +<T in days> -exec rm {} \; <br />for example if I want to delete all files older than 7 days <br />find . -name '*.*' -mtime +7 -exec rm {} \;<br /><br /><br /><br /><strong><strong>To edit window registry</strong></strong><br />regedit (on run prompt)<br /><br /><br /><strong><strong>To view a Linux desktop remotely</strong></strong><br />1. SSH to Linux server<br />2. Start the VNC server by typing 'vncserver' at the command line. Remember the display number.<br />3. Connect to the VNC server using a VNC viewer. Enter host name, display number, and VNC password for the user who started the session.<br />4. When finished, don't log out of Linux, just close the VNC viewer window. <br />5. If you wish, kill the VNC server session when finished. Or keep it running and use it next time you need to connect. <br /><br /><br /><br /><strong><strong>To use VNC to control or view a computer behind a firewall</strong></strong><br />The official VNC FAQ has a good explanation of using VNC through a firewall <br />at http://www.realvnc.com/faq.html#firewall. You must either open port 5900+N where N equals the VNC port assigned when you created a connection, or you must use SSH tunneling to channel the VNC traffic through ports that normally allow traffic through a firewall.<br />More detailed information is available at the link above. 
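The delete-files-older-than-N-days pattern shown a couple of sections above is worth rehearsing before pointing it at a real log directory. A minimal sketch (throwaway paths; GNU touch -d is assumed for back-dating the mtime):

```shell
# Rehearsal of: find . -name '*.*' -mtime +7 -exec rm {} \;
set -e
tmpdir=$(mktemp -d)
touch -d '10 days ago' "$tmpdir/old.log"   # GNU touch: back-date the mtime
touch "$tmpdir/new.log"

# Dry run first: -print lists what WOULD be deleted
find "$tmpdir" -type f -mtime +7 -print

# Same predicate, now with the delete action
find "$tmpdir" -type f -mtime +7 -exec rm -f {} \;
```

Running the -print form first and eyeballing the list is cheap insurance when the next step is rm.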
<br /><br /><br /><strong><strong>ps command</strong></strong><br />ps -fu apdev<br /><br /><br /><strong><strong>sort, delete rows, columns</strong></strong><br />ls -lrt |sort -nr -k5 |head -15<br />ls -lart |head -200<br />ls -lrt > a.sh<br />vi a.sh<br />Esc :3000,$d -- deletes all lines after 3000; use :1,3000d to delete the lines before 3000<br />$cut -c 55-95 a.sh > b.sh<br />vi b.sh<br />Esc :%s,compress,^,g<br />save <br />sh b.sh<br /><br /><strong><strong>To check whether a port is free or not</strong></strong><br />netstat -na | grep <port no><br />lsof |grep <port no><br /><br /><br /><strong><strong>To know the network IP</strong></strong><br />netstat -iv<br /><br /><strong><strong>tar commands</strong></strong><br />Simple tar commands <br />tar -cvf directory.tar directory<br />tar -xvf directory.tar<br /><br /><br />tar cvfzp TEST_apps_inst_myserver.tar.gz apps inst<br />tar cvfzp TEST_dbhome_myserver.tar.gz db/tech_st<br />Extract each of the tar files with the command tar xvfzp <br />tar xvfzp TEST_apps_inst_myserver.tar.gz<br /><br />nohup tar -zxvf ------.gzip or .gz &<br /><br />tar cf - . | ssh 192.168.100.107 "cd /path/target_directory; tar xf - "<br /><br />ssh servername "cd /ORACLE/prod/9.2.0;tar cf - ." | tar xf -<br /><br />Remote tar and untar e.g.:<br />ssh usaxsxx "cd /ORACLE/prod/9.2.0;tar cf - ." | tar xf -<br />ssh usaxsxx "cd /ORACLE/prod/8.0.6; tar cf - ." | tar xf -<br />ssh usaxsxx "cd /ORACLE/prod/9iAS; tar cf - ." 
| tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [a-b]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [d-s]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - [u-z]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - c[a-n]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod; tar cf - c[p-z]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - [a-j]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - [p-z]*" | tar xf -<br />ssh usaxsxx "cd /ORACLE/apps/prod/common; tar cf - outbound" | tar xf -<br /><br />tar cf - [a-b]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "<br />tar cf - [d-s]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "<br />tar cf - [u-z]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "<br />tar cf - c[a-n]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "<br />tar cf - c[p-z]* | ssh remotehost "cd /ORACLE/apps/narvis; tar xf - "<br />tar cf - [a-j]* | ssh remotehost "cd /ORACLE/apps/narvis/common; tar xf - "<br />tar cf - [p-z]* | ssh remotehost "cd /ORACLE/apps/narvis/common; tar xf - "<br /><br />tar cf - . | ssh remotehost "cd /ORACLE/narvis/8.0.6.3; tar xf - "<br />tar cf - . | ssh remotehost "cd /ORACLE/narvis/9iAS; tar xf - "<br /><br />tar cf - . | ssh remotehost "cd /ORACLE/narvis_conf/8.0.6.3; tar xf - "<br />tar cf - . | ssh remotehost "cd /ORACLE/narvis_conf/9iAS; tar xf - "<br /><br /><br /><br />Tar split<br />nohup tar cvf - OraHome_1 2> /11iGold/backup/OraHome_1.log | gzip | split -b 2000m - /11iGold/backup/OraHome_1.tar.gz &<br />This can be restored using the follwing command if required:<br />nohup cat /11iGold/backup/OraHome_1.tar.gz?? 
| gunzip | tar xf - &<br /><br /><br /><strong><strong>untar/tar syntax in a single set of options</strong></strong><br /> <br />nohup cat /backups/code_tree_backups/100906/920.gz | gunzip | tar xvf - > 920.log &<br />nohup cat /backups/code_tree_backups/100906/1159.gz | gunzip | tar xvf - > ApplTop.log &<br />nohup cat /backups/code_tree_backups/100906/806.gz | gunzip | tar xvf - > 806.log &<br />nohup cat /backups/code_tree_backups/100906/common.gz | gunzip | tar xvf - > common.log &<br />nohup cat /backups/code_tree_backups/100906/custom.gz | gunzip | tar xvf - > custom.log &<br />nohup cat /backups/code_tree_backups/100906/ias.gz | gunzip | tar xvf - > ias.log &<br /><br />nohup tar -cvpf - /u01/oradata/ 2> /scratch/u01.log | gzip -c > /scratch/u01.gz &<br />nohup tar -cvpf - /u13/oradata/ 2>/scratch/u13.log|gzip -c>/scratch/u13.gz &<br /><br />nohup tar -cvpf - PROD_LVL0_20070103_9ii6jsga_s16690_p11 PROD_LVL0_20070103_9ki6jsgb_s16692_p11 PROD_LVL0_20070103_9ni6jtlj_s16695_p11 PROD_LVL0_20070103_9ri6k05o_s16699_p11 PROD_LVL0_20070103_9si6k0ig_s16700_p11 PROD_LVL0_20070103_9ti6k0ku_s16701_p11 PROD_LVL0_20070103_9mi6jtl9_s16694_p11 c-620808606-20070104-00 snapcf_PROD.f 2>/backup/backup1.log|gzip - c >/backup/backup1.gzip &<br /><br /><br />nohup tar -zcvpf $BACKUP_DIR/u01.tar /u01/PROD/oradata/ >$BACKUP_DIR/u01.log 2>&1 &<br />nohup tar -zxvpf $BACKUP_DIR/u10.tar & <br /><br /><strong><strong>rcp command</strong></strong><br />rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstcomn/* . & <br />rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstora/8.0.6 . &<br />rcp -pr applmgr@remoteserver1:/u03/oracle/tst/tstora/iAS . 
&<br /><br /><strong><strong>To zip several files at a time</strong></strong><br />zip -r /tmp/`uname -n`_`date +%m%d%y.%H%M`_iAS.zip \<br /> $IAS_ORACLE_HOME/Apache/Apache/conf/* \<br /> $IAS_ORACLE_HOME/Apache/Apache/logs/* \<br /> $IAS_ORACLE_HOME/Apache/Jserv/etc/* \<br /> $IAS_ORACLE_HOME/Apache/Jserv/logs/*<br /><br /><br /><strong><strong>R12 config files backup commands</strong></strong><br />as oratst user :<br />REFRESH=/clone/TST/REFRESH_FILES_DB.`date +%m%d%y`<br />rm -rf $REFRESH<br />mkdir -p $REFRESH<br />cp -p $ORACLE_HOME/*.env $REFRESH<br />cp -p $ORACLE_HOME/dbs/*.ora $REFRESH<br />cp -p $ORACLE_HOME/appsutil/*.xml $REFRESH<br />cp -p $TNS_ADMIN/*.ora $REFRESH<br />chmod 700 $REFRESH<br /><br />=> as appltst user:<br />REFRESH=/clone/TST/REFRESH_FILES_APPS.`date +%m%d%y`<br />#rm -rf $REFRESH<br />mkdir -p $REFRESH/inst/apps<br />rsync -avH /d21/appltst/inst/apps/* $REFRESH/inst/apps/<br />chmod 700 $REFRESH<br /><br /><strong><strong>R12 config files backup wrapper script</strong></strong><br /># more /vis2/clone/vis2.backup<br />echo su - applvis2<br />sleep 1<br />echo<br />echo backing up applications related config files .....................<br />sleep 2<br />echo<br />su - applvis2 -c /vis2/clone/vis2bkgenfiles/applvis2.bkp<br />sleep 1<br />echo su - oravis2<br />echo<br />sleep 2<br />echo backing up db related config files .....................<br />su - oravis2 -c /vis2/clone/vis2bkgenfiles/oravis2.bkp<br />sleep 1<br />echo<br />echo vis2 instance config files backup is complete and is located in /vis2/clone directory .<br />echo<br /># more /vis2/clone/vis2bkgenfiles/applvis2.bkp<br />REFRESH=/vis2/clone/REFRESH_FILES_APPS.`date +%m%d%y`<br />mkdir -p $REFRESH/inst/apps<br />rsync -avH /d11/applvis2/inst/apps/* $REFRESH/inst/apps/ --exclude "vis2_hostname1/logs/" --exclude "vis2_hostname2/logs/" --exclude "vis2_hostname3/appltmp/" --exclude "vis2_hostname4/appltmp/"<br />chmod 700 $REFRESH<br /><br /># more 
/vis2/clone/vis2bkgenfiles/oravis2.bkp<br />REFRESH=/vis2/clone/REFRESH_FILES_DB.`date +%m%d%y`<br />rm -rf $REFRESH<br />mkdir -p $REFRESH<br />cp -p $ORACLE_HOME/*.env $REFRESH<br />cp -p $ORACLE_HOME/dbs/*.ora $REFRESH<br />cp -p $ORACLE_HOME/appsutil/*.xml $REFRESH<br />cp -p $TNS_ADMIN/*.ora $REFRESH<br />chmod 700 $REFRESH<br />exp " '/ as sysdba' " file=$REFRESH/msdem_vis2_hostname.dmp log=$REFRESH/msdem_vis2_hostname.log direct=y feedback=1000 owner=MSDEM<br /><br /><br /><br /><strong><strong>rsync</strong></strong><br />if restarted, it will pick up from where it failed<br />rsync --exclude=ABc/ --log-file=<log file name> -avH <source from snap> <destination dir><br /> <br /><br /><br />The rsync failed and was restarted. Please monitor the rsync and restart in case of failure.<br />Location: /d33/oravis/visdata<br />To Monitor: tail -f nohup.out (and) ps -ef | grep rsync<br />Command used to rsync: <br />nohup rsync -avH /snap/oravis/visdata/ /d33/oravis/visdata/ &<br /><br /><br />Some more rsync commands<br />nohup rsync -a -v -H /snaps/d01/applmgr/ /d01/appltst/ > /d01/appltst/rsync.log &<br />nohup rsync -a -v -H /cosxaor0o_d02oracle/ /d02/oratst/ > /d02/oratst/rsync.log & <br />nohup rsync -a -v -H /snaps/d03/oracle/ /d03/oratst/ > /d03/oratst/rsync.log &<br /><br /><br />One more rsync example<br />rsync -avH /d01/applcrp2/inst/apps/* $REFRESH/inst/apps/ --exclude "CRP2_vcosxaor33/logs/" --exclude "CRP2_vcosxawo14/logs/" --exclude "CRP2_vcosxaor33/appltmp/" --exclude "CRP2_vcosxawo14/appltmp/"<br /><br />Some more rsync examples: these rsync commands can be submitted as background jobs using the nohup command.<br /><br />Applications tier files:<br /><br />cd /d01/oradev<br />nohup rsync --update --verbose --progress --stats --compress --recursive --times --owner --group --hard-links --perms --links /d01_snap_922385/oraprod/prodappl /d01/oradev/ > d01_devaappl_rsync.log &<br /><br />nohup rsync --update --verbose --progress --stats --compress 
--recursive --times --owner --group --hard-links --perms --links /d01_snap_922385/oraprod/prodcomn /d01/oradev/ > d01_devacomn_rsync.log &<br /><br />nohup rsync --update --verbose --progress --stats --compress --recursive --times --owner --group --hard-links --perms --links /d01_snap_922385/oraprod/prodora /d01/oradev/ > d01_devaora_rsync.log &<br /><br />Database tier files:<br /><br />cd /d03/oradev<br />nohup rsync --update --verbose --progress --stats --compress --recursive --times --owner --group --hard-links --perms --links /d03_snap/oraprod/proddata /d03/oradev/ > d03_devadata_rsync.log &<br /><br />cd /d02/oradev<br />nohup rsync --update --verbose --progress --stats --compress --recursive --times --owner --group --hard-links --perms --links /d02_snap/oraprod/prodracdb /d02/oradev/ > d02_devadb_rsync.log &<br /><br /><br />one more rsync eg:<br /> <br />nohup rsync --log-file=d03_oradata_a.log -avH /NEW/d03/oracle/proddata/[a-d]* /d03/oracle/proddata &<br /><br />nohup rsync --log-file=d03_oradata_e.log -avH /NEW/d03/oracle/proddata/[e-g]* /d03/oracle/proddata & <br /><br />nohup rsync --log-file=d03_oradata_h.log -avH /NEW/d03/oracle/proddata/[h-k]* /d03/oracle/proddata & <br /> <br />nohup rsync --log-file=d03_oradata_l.log -avH /NEW/d03/oracle/proddata/[l-o]* /d03/oracle/proddata & <br /> <br />nohup rsync --log-file=d03_oradata_p.log -avH /NEW/d03/oracle/proddata/[p-s]* /d03/oracle/proddata & <br /> <br />nohup rsync --log-file=d03_oradata_t.log -avH /NEW/d03/oracle/proddata/[t-w]* /d03/oracle/proddata & <br /> <br />nohup rsync --log-file=d03_oradata_x.log -avH /NEW/d03/oracle/proddata/[x-z]* /d03/oracle/proddata & <br /> <br /> <br />nohup rsync --log-file=d02_oracle.log -avH /NEW/d02/oracle/* /d02/oracle & <br /> <br />nohup rsync --log-file=d01_prodappl.log -avH /NEW/d01/oracle/prodappl/* /d01/oracle/prodappl & <br /> <br />nohup rsync --log-file=d01_prodcomn.log /NEW/d01/oracle/prodcomn/* /d01/oracle/prodcomn/ & <br /> <br />nohup rsync 
--log-file=d01_prodora.log -avH /NEW/d01/oracle/prodora/* /d01/oracle/prodora & <br /> <br /> <br />nohup rsync --log-file=d01_final_log -avH /NEW/d01/* /d01/ & <br />nohup rsync --log-file=d02_final_log -avH /NEW/d02/* /d02/ & <br />nohup rsync --log-file=d03_final_log -avH /NEW/d03/* /d03/ & <br /><br /><br /><strong><strong>Config files backup commands for apps</strong></strong><br />[applvis@hostname]$ more /home/appltest/clone/config_backup_apps.sh<br />REFRESH=/home/appltest/REFRESH_FILES.`date +%m%d%y`<br />mkdir -p $REFRESH/appltop/admin $REFRESH/oraclehome $REFRESH/iasoraclehome $REFRESH/fndtop/resource $REFRESH/fndtop/secure $REFRESH/oahtml/bin/cfg $REFRESH/iasoraclehome/Apache/modplsql $REFRESH/iasoraclehome/Apache/Apache $REFRESH/iasoraclehome/Apache/Jserv $REFRESH/comntop/admin/scripts $REFRESH/adtop/admin/template $REFRESH/config_top $REFRESH/iasconfighome/Apache<br />cp -rp $APPL_TOP/*.env $REFRESH/appltop<br />cp -rp $APPL_TOP/admin/*.env $REFRESH/appltop/admin<br />cp -rp $APPL_TOP/admin/*.txt $REFRESH/appltop/admin<br />cp -rp $APPL_TOP/admin/*.xml $REFRESH/appltop/admin<br />cp -rp $APPL_TOP/admin/*.sql $REFRESH/appltop/admin<br />cp -rp $APPL_TOP/admin/*.pls $REFRESH/appltop/admin<br />cp -rp $APPL_TOP/admin/*.drv $REFRESH/appltop/admin<br />cp -rp $ORACLE_HOME/*env $REFRESH/oraclehome<br />cp -rp $IAS_ORACLE_HOME/*env $REFRESH/iasoraclehome<br />cp -rp $IAS_CONFIG_HOME/*env $REFRESH/config_top<br />cp -rp $FND_TOP/resource/* $REFRESH/fndtop/resource<br />cp -rp $FND_TOP/secure/* $REFRESH/fndtop/secure<br />cp -rp $OA_HTML/bin/*cfg $REFRESH/oahtml/bin/cfg<br />cp -rp $IAS_ORACLE_HOME/Apache/modplsql/cfg $REFRESH/iasoraclehome/Apache/modplsql<br />cp -rp $IAS_ORACLE_HOME/Apache/Apache/conf $REFRESH/iasoraclehome/Apache/Apache<br />cp -rp $IAS_ORACLE_HOME/Apache/Jserv/etc $REFRESH/iasoraclehome/Apache/Jserv<br />cp -rp $APPLCSF/scripts/* $REFRESH/comntop/admin/scripts<br />cp -rp $AD_TOP/admin/template $REFRESH/adtop/admin/template<br />cp 
-rp $IAS_CONFIG_HOME/Apache/modplsql/cfg $REFRESH/iasconfighome/Apache/modplsql<br />cp -rp $IAS_CONFIG_HOME/Apache/Apache/conf $REFRESH/iasconfighome/Apache/Apache<br />cp -rp $IAS_CONFIG_HOME/Apache/Jserv/etc $REFRESH/iasconfighome/Apache/Jserv<br /><br /><br />-----------------------<br /><strong><strong>Config file backup commands from DB side</strong></strong><br />[oravis@hostname]$ more db_bkp.sh<br />REFRESH=$HOME/REFRESH_FILES_DB.`date +%m%d%y`<br />mkdir -p $REFRESH/ORACLE_HOME/appsutil $REFRESH/TNS_ADMIN<br />cp -rp $ORACLE_HOME/dbs $REFRESH/ORACLE_HOME<br />cp -rp $ORACLE_HOME/*env $REFRESH/ORACLE_HOME<br />cp -rp $ORACLE_HOME/appsutil/*.xml $REFRESH/ORACLE_HOME/appsutil<br />cp -rp $TNS_ADMIN $REFRESH/TNS_ADMIN<br /><br /><strong><strong>To run alert commands repeatedly</strong></strong><br />while true<br />do<br />date<br />sqlplus "/as sysdba" @temp_alert1.sql<br />sleep 10<br />clear<br />done<br /><br /><strong><strong>To know number of CPU's on AIX m/c</strong></strong><br />lsdev -Cc processor<br /><br /><strong><strong>titles of each xterm window</strong></strong><br />If we add this to $HOME/.kshrc (assuming KSH is the shell being used), the information that is constant throughout the lifetime of the shell. 
<br />echo "\033]0;${USER}@${HOST}\007" <br />The answer is available freely on the internet: http://www.faqs.org/docs/Linux-mini/Xterm-Title.html <br /><br /><strong><strong>How to run slibclean</strong></strong><br />As a user with sudo access (e.g. rlakshmi):<br />$ /usr/local/bin/sudo /usr/sbin/slibclean <br />This will clean up any unused modules in kernel and library memory which may cause installation or runtime problems.<br /><br /><br /><strong><strong>Setup the SSH Key between machine1 and machine2</strong></strong><br /><br />scp machine1:/home/applmgr/.ssh/id_dsa.pub to machine2:/home/applmgr/.ssh/authorized_keys2<br /><br />Verify that the ssh key is working from usellx08 to the config/middle tiers of the cloned instance<br /><br />machine1> ssh machine2<br /><br />machine2>exit<br /><br />Connection to machine2 closed<br /><br />machine1><br /><br />If it is not working, we can also generate the dsa key as:<br />ssh-keygen -t dsa<br /><br /><br /><strong><strong>Control file creation script</strong></strong><br />cat create_controlfile_script_for_newdb.sh<br />#!/bin/ksh<br /><br />newdbname=$1<br />[ -z "$newdbname" ] && echo "Usage: {script_name}.sh <new_db_name>" && exit 1<br /><br />cd $DATA_DIR/admin/udump<br />latest_tracefile=`ls -lt *.trc | head -5 | awk '{print $9}' | xargs grep -l "CREATE CONTROLFILE" | head -1`<br /><br />awk '/# Set #2. 
RESETLOGS case/ {<br /> while ($0 !~ /^;$/) {<br /> getline;<br /> if ($0 ~ /^#/) continue;<br /> sub("CONTROLFILE REUSE","CONTROLFILE SET");<br /> sub("PROD",s_dbname);<br /> sub("/prod/","/"s_dbname"/");<br /> sub("/prod","/redo");<br /> print<br /> }<br />}' s_dbname=$newdbname $latest_tracefile > create_controlfile_${newdbname}.sql<br /><br /><br /><strong><strong>To check if package/rpm is installed</strong></strong><br />rpm -qa | grep {rpm_name} (From root)<br />eg: rpm -q binutils <br />To install package/rpm<br />rpm -ivh {rpm_name} (From root)<br /><br />If you execute rpm with the -qa options (q for query mode, a for all installed packages) you will see the following if it is installed:<br /><br />[oravis@myserver~]$ rpm -qa | grep libstdc++-devel<br />libstdc++-devel-3.4.6-10<br />If you're on a 64-bit OS, how do you know whether this is the 32-bit or 64-bit version?<br />You need to specify a query format in the rpm command. The following example extracts the name, version, release, and architecture information for the libstdc++-devel package.<br /><br />[oravis@myserver ~]$ rpm -qa --qf "%{NAME}-%{VERSION}-%{RELEASE}_%{ARCH}.rpm \ %{INSTALLTIME:date}\n" | grep libstdc++<br />libstdc++-devel-3.4.6-10_x86_64.rpm Mon 06 Oct 2008 07:46:50 PM EDT<br />libstdc++-devel-3.4.6-10_i386.rpm Mon 06 Oct 2008 07:46:50 PM EDT<br /><br /><br /><strong><strong>To check the amount of RAM and swap space available, run this:</strong></strong> <br /><br />grep MemTotal /proc/meminfo<br />grep SwapTotal /proc/meminfo<br /><br />Ex:<br /># grep MemTotal /proc/meminfo<br />MemTotal: 1034680 kB<br /># grep SwapTotal /proc/meminfo<br />SwapTotal: 1534196 kB<br /><br />The minimum RAM required is 1024MB, and the minimum required swap space is 1GB. Swap space should be twice the amount of RAM for systems with 2GB of RAM or less and between one and two times the amount of RAM for systems with more than 2GB. 
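The swap-sizing rule just quoted (twice RAM at 2GB or less, one to two times RAM above that) can be checked mechanically. A sketch that reads MemTotal the same way as the grep commands above and applies the lower end of the range:

```shell
# Apply the documented swap-sizing rule to this machine's RAM (Linux only)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -le 2097152 ]; then      # 2 GB = 2097152 kB
    swap_kb=$((mem_kb * 2))             # RAM <= 2 GB: swap = 2x RAM
else
    swap_kb=$mem_kb                     # RAM > 2 GB: 1x RAM (lower bound of 1x-2x)
fi
echo "RAM: ${mem_kb} kB, suggested swap: ${swap_kb} kB"
```

Compare the result against the SwapTotal line from /proc/meminfo to see whether the box meets the guideline.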
<br /><br />You also need 2.5GB of available disk space for the Oracle Database 10g Release 2 software and another 1.2GB for the database. The /tmp directory needs at least 400MB of free space. To check the available disk space on your system, run the following command: <br /><br />df -h<br />Ex:<br /># df -h<br />Filesystem Size Used Avail Use% Mounted on<br />/dev/sda3 6.8G 1.3G 5.2G 20% /<br />/dev/sda1 99M 17M 77M 18% /boot<br /><br /><strong><strong>Kernel Parameter</strong></strong><br />Make sure you add the following entries in /etc/sysctl.conf<br /><br />kernel.core_uses_pid = 1<br />kernel.sem = 250 32000 100 128<br />kernel.shmmax = 65536<br />fs.file-max = 65536<br />net.ipv4.ip_local_port_range = 1024 65000<br />net.core.rmem_default = 262144<br />net.core.rmem_max = 262144<br />net.core.wmem_default = 262144<br />net.core.wmem_max = 262144<br /><br />After adding these entries, execute the command below for the parameters to take effect<br /><br />sysctl -p /etc/sysctl.conf<br /><br />Ex:<br /># sysctl -p<br />net.ipv4.ip_forward = 0<br />net.ipv4.conf.default.rp_filter = 1<br />net.ipv4.conf.default.accept_source_route = 0<br />kernel.sysrq = 0<br />kernel.core_uses_pid = 1<br />kernel.panic = 10<br />vm.dirty_background_ratio = 1<br />vm.dirty_ratio = 5<br />vm.lower_zone_protection = 100<br />vm.oom-kill = 0<br />kernel.shmmax = 34359738368<br />kernel.shmall = 8388608<br />kernel.sem = 2000 500000 100 250<br />kernel.shmmni = 4096<br />kernel.msgmnb = 65535<br />kernel.msgmni = 2878<br />kernel.msgmax = 8192<br />fs.file-max = 131072<br />net.ipv4.ip_local_port_range = 20000 65000<br />net.core.rmem_default = 262144<br />net.core.rmem_max = 262144<br />net.core.wmem_default = 262144<br />net.core.wmem_max = 262144<br />vm.nr_hugepages = 16384<br /><br /><br /><br />$/sbin/sysctl -a | grep sem <br />kernel.sem = 250 32000 100 128<br /><br /><br /><br /><strong><strong>Create the Oracle Groups and User Account </strong></strong><br />Next, create the Linux groups and user 
account that will be used to install and maintain the Oracle Database 10g Release 2 software. The user account will be called oracle, and the groups will be oinstall and dba. Execute the following commands as root: <br /><br />Ex:<br /># /usr/sbin/groupadd oinstall<br /># /usr/sbin/groupadd dba<br /># /usr/sbin/useradd -m -g oinstall -G dba oracle<br /># id oracle<br />uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba)<br /><br />Set the password on the oracle account: <br /><br />passwd oracle<br />Ex:<br /># passwd oracle<br />Changing password for user oracle.<br />New password:<br />Retype new password:<br />all authentication tokens updated successfully.<br /><br /># groupadd dba<br /># useradd -d "/home/oracle" -m -g dba oracle<br /># passwd oracle<br /><br /><strong><strong>useradd </strong></strong><br />The general syntax for the useradd command is:<br /><br />useradd -d home_directory -e expire_date -g initial_group -p password login_name<br /><br />home_directory : The main directory within which the user is expected to store all his files and subdirectories.<br />For a user named ‘neha’ the home directory generally is /home/neha<br />expire_date : This is the date on which the user’s account expires and he/she cannot access the computer anymore unless the account is renewed.<br />initial_group : Every user in Linux belongs to a group which affects his file permissions. 
The initial group must be a group which already exists.<br />Password : This will be the user’s password to access his account<br />login_name : This will be the user name with which the user will access his account.<br /><br />Eg :<br /><br />useradd -d /home/neha -e 2009-12-03 -g root -p talent123 neha<br /><br />creates a user named neha on my computer.<br /><br />Home directory for user neha is /home/neha<br />Expiry date is 3rd December, 2009<br />Belongs to the ‘root’ group<br />Password is talent123<br /><br />In case you do not enter one of the parameters group, home, expire or shell, they are replaced by their default values. These default values can be viewed using the "useradd -D" command and can also be changed.<br /><br /><br /><strong><strong>Configuring Kernel Parameters</strong></strong><br /><br />Check and edit the /etc/sysctl.conf file with the following lines.<br /><br />kernel.shmall = 2097152<br />kernel.shmmax = 2147483648<br />kernel.shmmni = 4096<br />kernel.sem = 250 32000 100 128<br />fs.file-max = 65536<br />net.ipv4.ip_local_port_range = 1024 65000<br />net.core.rmem_default = 1048576<br />net.core.rmem_max = 1048576<br />net.core.wmem_default = 262144<br />net.core.wmem_max = 262144<br /><br />Note: Make sure to reboot the system or run the command /sbin/sysctl -p to change the kernel parameters.<br /><br />Add the following lines to the /etc/security/limits.conf file:<br /><br />oracle soft nproc 2047<br />oracle hard nproc 16384<br />oracle soft nofile 1024<br />oracle hard nofile 65536<br /><br /><br />Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:<br /><br />session required /lib/security/pam_limits.so<br /><br />Make sure that SELINUX=disabled in the /etc/selinux/config file<br /><br />Edit the /etc/redhat-release file replacing the current release information (Fedora Core release 6 (Zod)) with redhat-4<br /><br /><strong><strong>Create Directories</strong></strong><br />mkdir -p 
/u01/app/oracle <br />chown -R oracle:oinstall /u01/app/oracle <br />chmod -R 775 /u01/app/oracle <br /><br />Ex<br />#mkdir -p /u01/app/oracle <br /># chown -R oracle:oinstall /u01/app/oracle <br /># chmod -R 775 /u01/app/oracle<br /><br />The Linux kernel is a wonderful thing. Unlike most other *NIX systems, Linux allows modification of most kernel parameters while the system is up and running. There's no need to reboot the system after making kernel parameter changes. Oracle Database 10g Release 2 requires the kernel parameter settings shown below. The values given are minimums, so if your system uses a larger value, don't change it. <br />kernel.shmall = 2097152<br />kernel.shmmax = 536870912<br />kernel.shmmni = 4096<br />kernel.sem = 250 32000 100 128<br />fs.file-max = 65536<br />net.ipv4.ip_local_port_range = 1024 65000<br />net.core.rmem_default=262144<br />net.core.wmem_default=262144<br />net.core.rmem_max=262144<br />net.core.wmem_max=262144<br /><br />If you're following along and have just installed Linux, the kernel parameters will all be at their default values and you can just cut and paste the following commands while logged in as root. 
<br />cat >> /etc/sysctl.conf <<EOF<br />kernel.shmall = 2097152<br />kernel.shmmax = 536870912<br />kernel.shmmni = 4096<br />kernel.sem = 250 32000 100 128<br />fs.file-max = 65536<br />net.ipv4.ip_local_port_range = 1024 65000<br />net.core.rmem_default=262144<br />net.core.wmem_default=262144<br />net.core.rmem_max=262144<br />net.core.wmem_max=262144<br />EOF<br />/sbin/sysctl -p<br /><br />Ex:<br /># cat >> /etc/sysctl.conf <<EOF<br />kernel.shmall = 2097152<br /> kernel.shmmax = 536870912<br /> kernel.shmmni = 4096<br /> kernel.sem = 250 32000 100 128<br /> fs.file-max = 65536<br /> net.ipv4.ip_local_port_range = 1024 65000<br />EOF<br /># /sbin/sysctl -p<br />net.ipv4.ip_forward = 0<br />net.ipv4.conf.default.rp_filter = 1<br />net.ipv4.conf.default.accept_source_route = 0<br />kernel.sysrq = 0<br />kernel.core_uses_pid = 1<br />kernel.shmall = 2097152<br />kernel.shmmax = 536870912<br />kernel.shmmni = 4096<br />kernel.sem = 250 32000 100 128<br />fs.file-max = 65536<br />net.ipv4.ip_local_port_range = 1024 65000<br />net.core.rmem_default = 262144<br />net.core.wmem_default = 262144<br />net.core.rmem_max = 262144<br />net.core.wmem_max = 262144<br /><br />Run the following commands as root to verify your settings: <br />/sbin/sysctl -a | grep shm<br />/sbin/sysctl -a | grep sem<br />/sbin/sysctl -a | grep file-max<br />/sbin/sysctl -a | grep ip_local_port_range<br />/sbin/sysctl -a | grep rmem_default<br />/sbin/sysctl -a | grep rmem_max<br />/sbin/sysctl -a | grep wmem_default<br />/sbin/sysctl -a | grep wmem_max<br /><br />Ex:<br /># /sbin/sysctl -a | grep shm<br />kernel.shmmni = 4096<br />kernel.shmall = 2097152<br />kernel.shmmax = 536870912<br />kernel.shm-use-bigpages = 0<br /># /sbin/sysctl -a | grep sem<br />kernel.sem = 250 32000 100 128<br /># /sbin/sysctl -a | grep file-max<br />fs.file-max = 65536<br /># /sbin/sysctl -a | grep ip_local_port_range<br />net.ipv4.ip_local_port_range = 1024 65000<br /># /sbin/sysctl -a | grep rmem_default<br
/>net.core.rmem_default = 262144<br /># /sbin/sysctl -a | grep rmem_max<br />net.core.rmem_max = 262144<br /># /sbin/sysctl -a | grep wmem_default<br />net.core.wmem_default = 262144<br /># /sbin/sysctl -a | grep wmem_max<br />net.core.wmem_max = 262144<br /><br />For Novell SUSE Linux releases, use the following to ensure that the system reads the /etc/sysctl.conf file at boot time: <br />/sbin/chkconfig boot.sysctl on<br /><br /><strong><strong>Setting Shell Limits for the oracle User </strong></strong><br />Oracle recommends setting limits on the number of processes and open files each Linux account may use. To make these changes, cut and paste the following commands as root: <br />cat >> /etc/security/limits.conf <<EOF<br />oracle soft nproc 2047<br />oracle hard nproc 16384<br />oracle soft nofile 1024<br />oracle hard nofile 65536<br />EOF<br /><br />cat >> /etc/pam.d/login <<EOF<br />session required /lib/security/pam_limits.so<br />EOF<br />For RHEL4, use the following: <br />cat >> /etc/profile <<EOF<br />if [ \$USER = "oracle" ]; then <br /> if [ \$SHELL = "/bin/ksh" ]; then<br /> ulimit -p 16384<br /> ulimit -n 65536<br /> else<br /> ulimit -u 16384 -n 65536<br /> fi<br /> umask 022<br />fi<br />EOF<br /><br />cat >> /etc/csh.login <<EOF<br />if ( \$USER == "oracle" ) then<br /> limit maxproc 16384<br /> limit descriptors 65536<br /> umask 022<br />endif<br />EOF<br />For SLES 9, use the following: <br />cat >> /etc/profile.local <<EOF<br />if [ \$USER = "oracle" ]; then <br /> if [ \$SHELL = "/bin/ksh" ]; then<br /> ulimit -p 16384<br /> ulimit -n 65536<br /> else<br /> ulimit -u 16384 -n 65536<br /> fi<br /> umask 022<br />fi<br />EOF<br /><br />cat >> /etc/csh.login.local <<EOF<br />if ( \$USER == "oracle" ) then<br /> limit maxproc 16384<br /> limit descriptors 65536<br /> umask 022<br />endif<br />EOF<br /><br /><br /><strong><strong>Enable Trace on any Executable to find out what's happening at O.S.
Level </strong></strong><br />truss -eafl -o truss.txt <executable> <br />For example, for Apache:<br />truss -eafl -o truss.txt apachectl <br /><br />This command will trace all system calls and will help you find errors. <br /><br /><br /><strong><strong>How to schedule a Job in Unix </strong></strong><br />Use cron.<br />crontab -l ( list current jobs in cron) <br />crontab -e ( edit current jobs in cron )<br />_1_ _2_ _3_ _4_ _5_ $Job_Name <br />1 - Minutes (0-59) <br />2 - Hours (0-23) <br />3 - Day of month (1-31) <br />4 - Month (1-12) <br />5 - Day of week (0-6), 0 -> Sunday, 1 -> Monday <br />e.g. 0 0 1 * 5 means run the job at midnight on the 1st of the month and every Friday <br />Crontabs are in /var/spool/cron. The date/time on a user's file is the last time they modified it; cron activity is logged in /var/cron/log <br /><br /><br /><strong><strong>How to extract a cpio file</strong></strong><br />cpio -idmv < file_name (Don't forget the sign < before the file name)<br /><br /><strong><strong>How to find CPU & Memory details on Linux</strong></strong><br />cat /proc/cpuinfo (CPU)<br />cat /proc/meminfo (Memory)<br /><br /><br /><strong><strong>How to find if the Operating System is 32-bit or 64-bit?</strong></strong><br />For Solaris use the command<br />isainfo -v<br />If you see output like<br />32-bit sparc applications<br />that means your O.S. is only 32-bit,<br />but if you see output like<br />64-bit sparcv9 applications<br />32-bit sparc applications<br />that means your O.S. is 64-bit and can support both 32- and 64-bit applications<br /><br /><br /><strong><strong>How to find the Process ID (PID) associated with any port?</strong></strong><br />This command is useful if any service is running on a particular port (389, 1521..)
and it is a runaway process which you wish to terminate using the kill command<br />lsof | grep {port no.} (lsof should be installed and in the path)<br /><br /><br /><strong><strong>How to change a particular pattern in a file?</strong></strong><br />Open the file using vi or any other editor, go into escape mode (by pressing escape) and use<br /><br />:1,$s/old_pattern/new_pattern/gc ( g will change globally, c will ask for confirmation before changing )<br /><br /><strong><strong>How to create a symbolic link to a file?</strong></strong><br />ln -s pointing_to symbolic_name<br />e.g. If you want to create a symbolic link from a -> b<br />ln -s b a <br />(Condition: you should have file b in that directory & there should not be any file with name a)<br /><br /><br /><strong><strong>Grep for a word in a zip file</strong></strong><br />unzip -t xmlparserv2.zip |grep HandlerBase<br />unzip -l appsborg2.zip | grep 9.0.4<br /> 0 04-18-03 17:10 .xdkjava_version_9.0.4.0.0_production<br /><br /><br /><strong><strong>Redirecting standard error to standard output</strong></strong><br />sh patch.sh 2>&1 | tee patch.log<br /><br /><br /><strong><strong>Printer check</strong></strong><br />lpstat -p printername <br />printer printername is idle. enabled since Mon Sep 15 14:50:53 2008. available. <br /><br />lpstat -t | grep cac-check-bak<br /><br /># ls -l test <br />-rw-r--r-- 1 root root 727 Apr 9 17:50 test <br /># lp -d cac-check-bak test <br />request id is cac-check-bak-54 (1 file(s)) <br /><br /># lpstat -o cac-check-bak <br /><br />STEPS:<br />1. Verify whether the Printer IP is pingable from the server. <br /> root# ping <Printer IP> <br />2. Verify whether we are able to telnet to port 515 (or 9100 if HP) of the Printer IP. <br /><br />1.
To Add Network Printers: <br />Step: 1 <br />Command: <br />lpadmin -p <printername> -o nobanner,dest=<printer IP> ,protocol=bsd -v /dev/null -I simple,postscript -T PS -i /usr/lib/lp/model/netstandard <br />Step: 2 <br />Enable and Accept The Printer: <br />enable <printername> <br />accept <printername> <br />Step: 3 <br />Check the printer status. <br />lpstat -t | grep <printername> <br />Step: 4 <br />Fire a test print and check the queue. <br />lp -d <printername> <filename> ( firing a print job) <br /><br />-----------------------------APPS--------------------------------- <br />SEE NOTE ONE ABOVE <br /><br />Below are the steps for adding a printer in Apps: <br />1. Login to the Application as SYSADMIN. <br />2. Select the System Administrator responsibility. <br />3. Open the Printer -> Register form <br />4. Enter the Printer name, Type and Description <br />5. Save <br /><br /><br /><strong><strong>To know the hostname from an IP address</strong></strong><br />nslookup IP<br /><br /><strong>Verify hostnames are fully qualified with the domain</strong><br />-server: login as root<br />hostname<br />server.domainname<br /><br /><br />If the hostname is not fully qualified (with domain name) for a server, perform the following check:<br />cat /etc/sysconfig/network<br />NETWORKING=yes<br />HOSTNAME=server.domainname<br />GATEWAY=xx.xxx.xx.xx<br />If the HOSTNAME shows the fully qualified name in this file, then issue a reboot command for the settings to take effect.<br /><br /><br /><strong><strong>To kill LOCAL=NO processes on the DB </strong></strong><br />ps -ef | grep 'LOCAL=NO' | grep orad1 | awk '{ print $2 }' | xargs -i -t kill -9 {}<br /><br /><br /><strong><strong>Embedding SQL in Shell</strong></strong><br />#!/bin/sh<br />VAR1=`sqlplus -s username/password <<end<br />set pagesize 0 feedback off ver off heading off echo off<br />select sysdate from dual;<br />exit;<br />end`<br />echo "system date is " $VAR1<br />#end of shell script<br /><br /><br /><br
/><strong><strong>Checking the Sun Operating System</strong></strong><br />System and hardware Configuration:<br />Hardware Status: sun4 (UNIX command: 'arch')<br />Operating System: SunOS 5.11 Generic sun4u sparc SUNW, Sun-Fire-V240 (UNIX command: 'uname -a').<br />CPU: 2 CPUs. Each sparcv9 processor operates at 1503 MHz and has a sparcv9 floating point processor (UNIX command: 'psrinfo -v').<br />Memory size: 5120 Megabytes (UNIX command: 'prtconf | grep size').<br />Swap Total: (45440k bytes allocated + 4032k reserved = 49472k used) 14468936k available, i.e., 14GB (UNIX command: 'swap -s').<br />Check the mounted file systems and their respective sizes (UNIX command: 'df -h')<br /><br /><br /><strong><strong>Screen commands</strong></strong> <br />which screen<br />/usr/local/bin/screen<br />screen -list<br />screen -RR -DD screenname<br /><br /><br /><strong><strong>To extract jar files</strong></strong><br />jar -xvf timesheet.jar<br />jar xvf $ORACLE_HOME/8402746/ewt3.jar<br /><br /><br /><strong><strong>To split large files</strong></strong><br />split -l 2000 FILE<br /># split -l 50 book sect<br /><br />This example splits book into 50-line segments named sectaa, sectab, sectac, and so forth.<br /><br /># split book<br />This example splits book into 1000-line segments named xaa, xab, xac, and so forth.<br /><br /><strong><strong>Following are the most required OS commands while installing and configuring Oracle Software on Unix/Linux Operating Systems.</strong></strong><br />Operating System: AIX 5L Based Systems (64-Bit)<br />Physical RAM: # /usr/sbin/lsattr -E -l sys0 -a realmem<br />Swap Space: # /usr/sbin/lsps -a<br /><br />Operating System: HP Tru64 UNIX<br />Physical RAM: # /bin/vmstat -P | grep "Total Physical Memory"<br />Swap Space: # /sbin/swapon -s<br /><br />Operating System: HP-UX Itanium<br />Physical RAM: # /usr/contrib/bin/machinfo | grep -i Memory<br />Swap Space: # /usr/sbin/swapinfo -a<br /><br />Operating System: HP-UX PA-RISC
(64-Bit)<br />Physical RAM: # grep "Physical:" /var/adm/syslog/syslog.log<br />Swap Space: # /usr/sbin/swapinfo -a<br /><br />Operating System: IBM zSeries Based Linux, Linux Itanium/POWER/x86/x86-64<br />Physical RAM: # grep MemTotal /proc/meminfo<br />Swap Space: # grep SwapTotal /proc/meminfo<br /><br />Operating System: Solaris SPARC 64-Bit/x86/x86-64<br />Physical RAM: # /usr/sbin/prtconf | grep "Memory size"<br />Swap Space: # /usr/sbin/swap -s<br /><br /><br /><br /><strong><strong>Group and User Creation in Solaris.</strong></strong> <br />This post is for creating a Unix Group and a User.<br /><br />Solaris 5.10<br />--------------------------------------<br />Creating a Group:<br />--------------------------------------<br /><br />For instance: Group - dba<br /><br />Login as ROOT User<br /><br />login: root<br />Password:<br />Sun Microsystems Inc. SunOS 5.10 Generic January 2005<br />#<br /><br />To check whether the group is already created:<br /><br /># grep dba /etc/group<br />or<br /># cat /etc/group | grep dba<br /><br />If dba group info is not displayed, then<br /><br />To create a Group:<br /><br /># groupadd dba<br /><br />Syntax: groupadd -g <gid> <groupname> <br /><br /># cat /etc/group | grep dba<br /><br />dba::103:<br /><br />Note: The new gid should not be the gid of another existing group.<br />Example:<br /># groupmod -g 103 dba<br />UX: groupmod: ERROR: 103 is already in use. Choose another.<br /><br />To remove a Group:<br /><br /># groupdel dba<br /><br /># cat /etc/group | grep dba<br /><br />Commands:<br />---------<br /><br />Create Group - groupadd<br />Modify Group - groupmod<br />Remove Group - groupdel<br /><br />For more details, check the man pages i.e.
man groupadd, man groupmod, and man groupdel<br /><br />--------------------------------------<br />Creating a User Account:<br />--------------------------------------<br /><br />For instance: User - oracle<br /><br />Login as ROOT User<br /><br />login: root<br />Password:<br />Sun Microsystems Inc. SunOS 5.10 Generic January 2005<br />#<br /><br />To check whether the oracle user is already created:<br /><br /># grep oracle /etc/passwd<br />Or<br /># id -a oracle<br /><br />If oracle user info is not displayed, then<br /><br />To create a User:<br /><br /># useradd -d /export/home/oracle -m -g dba -s /bin/sh oracle<br /><br /><br />Note: In the above command<br />Option - Description - Value<br />------------- -------------------- --------------------<br />-d - Home directory for the oracle user - /export/home/oracle<br />-m - Create the home directory<br />-g - Primary group - dba<br />-s - Default shell for the user - /bin/sh<br />Login name - oracle<br /><br /># grep oracle /etc/passwd<br />oracle:x:100:100::/export/home0/oracle:/bin/sh<br /><br />To modify a User:<br /><br />For example: to add a comment for the user and assign a secondary group to the user<br /><br /># usermod -c "Owner of the Oracle Software 10g" -G oinstall oracle<br /><br />Note: In the above command<br />Option - Description - Value<br />------------- -------------------- --------------------<br />-c - Comment - "Owner of the Oracle Software 10g"<br />-G - Secondary group - oinstall<br /><br /><br /># grep oracle /etc/passwd<br />oracle:x:100:100:Owner of the Oracle Software 10g:/export/home0/oracle:/bin/sh<br /><br />To set a password:<br /><br /># passwd -r files oracle<br />New Password: XXXXXXXX<br />Re-enter new Password: XXXXXXXX<br />passwd: password successfully changed for oracle<br /><br />Note: Where XXXXXXXX is the new password for the oracle User.<br /><br />To remove a user:<br /><br /># userdel -r oracle<br /><br />Note: The "-r" option is used to remove the user's home directory from the system.<br />And the Oracle user
account should not be removed unless required, and<br />this should not be done on Production Servers.<br /><br /># grep oracle /etc/passwd<br /><br /><br />Commands:<br />---------<br />Create a user - useradd<br />Modify a user - usermod<br />Remove a user - userdel<br /><br />Set a password for the user - passwd<br /><br />For more details, please check the man pages in UNIX i.e. man useradd, man usermod, and man userdel<br /><br /><br /><strong><strong>UnZip can be downloaded from the following URLs:</strong></strong><br />Metalink : http://updates.oracle.com/unzips/unzips.html<br />Info-Zip : http://www.info-zip.org.<br /><br /><strong><strong>Perl 5.005 </strong></strong><br />Use the Perl shipped with iAS1022 and RDBMS 9i if available or download it from <br />Perl.com<br /><br />Obtain and Install JDK 1.3 on All Applications Middle Tier Server Nodes<br /><br /><strong><strong>Download JDK 1.3</strong></strong><br />Operating System JDK 1.3 Download Location <br />Sun Solaris (SPARC) http://java.sun.com/j2se/1.3/download.html <br />Windows NT/2000 http://java.sun.com/j2se/1.3/download.html <br />HP HP/UX http://www.hp.com/products1/unix/java/ <br />IBM AIX http://www.ibm.com/developerworks/java/jdk/aix/index.html <br />HP Tru64 UNIX http://www.compaq.com/java/download/index.html <br />Linux http://java.sun.com/j2se/1.3/download.html <br /><br />Follow the installation instructions provided in the package to install the JDK on each Applications middle tier server node.
Take note of the new JDK 1.3 installation top-level directory; its full path is referred to as [JDK13_TOP]<br />[JDK13_TOP]/bin/java -version ---> confirm downloaded version<br /><br /><br /><strong><strong>Script to kill services</strong></strong><br /># Kill leftover Apache processes<br />kill_processes=`ps -ef | grep "/iAS/Apache/Apache/bin/" | grep -v grep | awk '{print $2}'`<br />echo $kill_processes<br />kill -9 $kill_processes<br /><br /># Kill leftover Discoverer processes<br />kill_processes=`ps -ef | grep "/discwb4/bin/" | grep -v grep | awk '{print $2}'`<br />echo $kill_processes<br />kill -9 $kill_processes<br /><br /><br /># Kill leftover JVM processes - THIS IS NOW APPLICABLE ONLY FOR OC4J<br /># kill_processes=`ps -ef | grep "/native_threads/" | grep -v grep | awk '{print $2}'`<br /># echo $kill_processes<br /># kill -9 $kill_processes<br /><br /># kill_processes=`ps -ef | grep "/usr/j2se/" | grep -v grep | awk '{print $2}'`<br /># echo $kill_processes<br /># kill -9 $kill_processes<br /><br /><br />Convert a .sh file to Unix format (in vi):<br />:set fileformat=unix<br /><br /> <br /><br /><strong><strong>Unix Operating System release</strong></strong><br />Solaris -> cat /etc/release<br />Red Hat Linux -> cat /etc/redhat-release<br /><br /><strong><strong>system log file</strong></strong><br />/var/adm/messages<br />The service is /usr/sbin/syslogd<br />ps -ef|grep syslo<br />root 27893 7373 0 00:02:27 pts/4 0:00 grep syslo<br />root 9222 1 0 23:48:53 ?
0:00 /usr/sbin/syslogd<br /><br /><strong><strong>How to find the hostname on Linux/OEL?</strong></strong><br />Use the command "hostname" to find the host name of Linux<br /><br /><strong><strong>How to change the hostname on Linux/OEL?</strong></strong><br /><br />Open the file /etc/sysconfig/network using an editor like vi, nano or ed <br />Change the entry HOSTNAME=XXXXXX to<br />HOSTNAME=New_Host_Name <br />Restart the xinetd service or reboot Linux/OEL<br />Restart xinetd: service xinetd restart<br />Reboot Linux: reboot <br />If you have defined hostname resolution in the hosts file then change the entry in /etc/hosts<br /><br /><strong><strong>Starting Xvfb</strong></strong><br />run this as applvis <br />nohup /usr/bin/X11/Xvfb :0 & <br />and then run this as applvis <br />nohup /usr/bin/X11/mwm & <br /><br /><br /><strong><strong>O/S monitoring</strong></strong><br />use top/iostat/sar/vmstat<br /><br /><strong><strong>Often while viewing big log files in the vi editor, you will get a "Line too Long" error</strong></strong><br />A simple solution to this problem is to use the fold command:<br /><br />fold -80 yourfile.log > folded.log<br /><br /><br /><strong><strong>Program to convert plain text files in DOS format to UNIX format.</strong></strong><br />dos2unix *.*<br /><br /><br /><strong><strong>Below is the script which we used, where certain files in non-prod were pointing to PROD. </strong></strong><br />#!
/bin/bash<br /><br />find /d11/applprod/PROD/apps/tech_st/10.1.2 -exec grep -l '/d11/applprod/GOLD' {} \; > ./files<br /><br />for i in `cat ./files`<br />do<br />sed -e 's:d11/applprod/GOLD:d11/applprod/PROD:g' $i > temp_file<br />cp -rp temp_file $i<br />done<br /><br /><br /><strong><strong>To get sql o/p in xls format</strong></strong><br />set markup html on spool on<br />spool emp.xlsRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-42124389223022618342010-09-14T22:39:00.000-07:002010-09-14T22:42:05.506-07:00OCM<strong><strong>Manual way of collection and upload</strong></strong><br />$ pwd <br />/d02/oravis/visdb/10.2.0/ccr/bin<br />$ ./emCCR collect<br />Oracle Configuration Manager - Release: 10.3.3.1.0 - Production<br />Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.<br />------------------------------------------------------------------<br />Collection and upload done.<br /><br />Verify in the systems tab of metalink to see the upload is doneRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-57843645607030764042010-09-14T21:54:00.000-07:002010-09-14T21:58:41.042-07:00RDA<strong><strong>review, download and run the ./rda.sh script as described in Note: 250262.1</strong></strong><br /><br />other notes for rda<br />Note: 314422.1 - Remote Diagnostic Agent (RDA) 4 - Getting Started <br />Note: 330363.1 - Remote Diagnostic Agent (RDA) 4 - FAQ <br />Note: 330344.1 - Remote Diagnostic Agent (RDA) 4 - Training <br />Note: 330362.1 - Remote Diagnostic Agent (RDA) 4 - Troubleshooting Guide <br /><br />To simplify RDA configuration, use an RDA profile, and the best choice is the DB9i profile, even for 10g or 11g, because the DB10g profile is a superset of DB9i that adds stuff we don't care about (for DB Install).
First set the ORA* environment variables for the DB of interest, then run the RDA config pass: <br /><br />Unix: <br />cd {dir you unzipped RDA into} <br />export ORACLE_HOME={dir} <br />export ORACLE_SID={name} <br />./rda.sh -Syvp DB9i <br /><br />Next, edit the resulting setup.cfg file, search for 'sysdba' (first hit will be a comment), and change the 2 lines directly above and below as follows: <br />for SQL_LOGIN= replace 'system' with '/' and for SQL_SYSDBA= replace 0 with 1 <br />then save it, and run the (rda) command again, with no arguments this time: <br />Unix: <br />./rda.sh <br />The output subdirectory will contain a new RDA*.zip file for you to uploadRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com1tag:blogger.com,1999:blog-811846438034249793.post-58581502449998106172010-09-14T21:34:00.001-07:002010-09-14T21:34:44.033-07:00Quantum<strong><strong>Installing Quantum for Oracle Payroll [ID 224273.1]</strong></strong><br /><br />How to perform this section is as below:<br /><br />Create a directory: $PAY_TOP/vendor/quantum_2.9.5<br /><br />cp the file $PAY_TOP/vendor/lib/pyvendor.zip to $PAY_TOP/vendor/quantum_2.9.5<br /><br />cd $PAY_TOP/vendor/quantum_2.9.5<br /><br />unzip pyvendor.zip<br /><br />unzip REDHAT40.zip; it will create a 2.9.5 directory<br /><br />cp -pr $PAY_TOP/vendor/quantum_2.9.5/2.9.5 $PAY_TOP/vendor/<br /><br />mkdir /d11/appldev/devappl/pay/11.5.0/vendor/data<br /><br />cp the qfpt.dat file, which should be the latest 2.9.5 data release, from any of the other instances to $PAY_TOP/vendor/2.9.5/data<br /><br /><br /><br />As the US Super HRMS Manager responsibility --> Submit Processes and Reports<br />Quantum Program Update Installer<br />Platform:REDHAT40<br />Install in Directory:$PAY_TOP/vendor/quantum<br /><br />Quantum Data Update Installer<br />Data Source: $PAY_TOP/vendor/quantum/data<br />Data File Location: $PAY_TOP/quantum/dataRaghavendra
Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com1tag:blogger.com,1999:blog-811846438034249793.post-65242034461392951712010-09-14T21:02:00.000-07:002010-09-14T21:21:08.798-07:00PCP<strong><strong>To disable PCP</strong></strong><br /><br />Capture the required information before updating rows. <br />Sample Output<br />sqlplus apps<br />select NODE_NAME ,count(*) from FND_CONCURRENT_QUEUES<br />group by NODE_NAME<br />order by NODE_NAME; <br /><br />NODE_NAME COUNT(*)<br />------------------------------ ----------<br />a 2<br />b 9<br />c 4<br />d 41<br /> 2<br /><br />select NODE_NAME2,count(*) from FND_CONCURRENT_QUEUES<br />group by NODE_NAME2 order by NODE_NAME2; <br /><br />NODE_NAME2 COUNT(*)<br />------------------------------ ----------<br />c 15<br />d 4<br /> 39<br /><br />select TARGET_NODE,count(*) from FND_CONCURRENT_QUEUES<br />group by TARGET_NODE<br />order by TARGET_NODE; <br /><br />TARGET_NODE COUNT(*)<br />------------------------------ ----------<br /> 58<br /><br />Execute the DML statements below as the apps user<br />create table FND_CONCURRENT_QUEUES_BKP as select * from FND_CONCURRENT_QUEUES;<br /> <br />update fnd_concurrent_queues<br />set node_name='z'<br />where node_name in ('c','d');<br /><br /> 45 rows updated.<br /><br />Commit;<br /><br />update fnd_concurrent_queues<br />set target_node='z'<br />where CONCURRENT_QUEUE_NAME not like '%QUEUE2%'<br />and MAX_PROCESSES >0;<br /><br /> 14 rows updated.<br />update fnd_concurrent_queues<br />set NODE_NAME2=''<br />where NODE_NAME2 in ('c','d');<br /><br /> 19 rows updated.<br /><br />Commit;<br /><br />Disable all Concurrent queues ending with "- Queue 2"<br /><br />update FND_CONCURRENT_QUEUES<br />set ENABLED_FLAG='N'<br />where CONCURRENT_QUEUE_NAME like '%QUEUE2%';<br /><br />4 rows updated.<br />Commit;Raghavendra
Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-22535383906973501422010-09-13T22:43:00.000-07:002010-09-13T23:41:11.031-07:00SSLSSL<br /><br /><br />Start/stop apache as root( if default ports 80 and 443 are used, ROOT user <br />must execute commands )<br />more /d01/oracle/prodcomn/admin/scripts/VIS_hostname/Apache_SSL_Stop_root<br /># Please Stop Apache in SSL mode as root #<br />. /d01/oracle/prodappl/APPSORA.env<br />/d01/oracle/prodora/iAS/Apache/Apache/bin/httpdsctl stop<br />as applmgr<br />$ more $SCRIPT_TOP/STOP_ALL<br />adalnctl.sh stop<br />jtffmctl.sh stop<br />#addisctl.sh stop<br /><br />as root<br />more /d01/oracle/prodcomn/admin/scripts/VIS_hostname/Apache_SSL_Start_root<br /># Please Start Apache in SSL mode as root #<br />. /d01/oracle/prodappl/APPSORA.env<br />/d01/oracle/prodora/iAS/Apache/Apache/bin/httpdsctl startssl<br /><br />as applmgr:<br />$ more $SCRIPT_TOP/START_ALL<br />adalnctl.sh start<br />jtffmctl.sh start<br />#addisctl.sh start<br /><br />https://hostname.domainname/OA_HTML/jtfavald.jsp<br /><br /><strong><strong>implement new SSL certificate</strong></strong><br />1) shutdown services on web<br />2) backup the current SSL certificate and implement the new one<br />cd /d01/oracle/prodcomn/admin/certs/apache/ssl.crt<br />mv server.crt server.crt.20090222<br />mv ca.crt ca.crt.20090222<br />mv ca.crt.new ca.crt<br />cp ca.crt server.crt<br />3) start the services on web<br />4) verify the following URL<br />https://hostname.domainname/OA_HTML/jtflogin.jsp<br /><br /><strong><strong>SSL renew</strong></strong><br />Login to server webnode1<br /> <br />cd /d01/oratrn/viscomn/conf/VIS_webnode1/iAS/Apache/Apache/conf/ssl.crt<br />mv server.crt.deploy server.crt<br />mv ca.crt.deploy ca.crt<br />mv ca-bundle.crt.deploy ca-bundle.crt<br /> <br />cd /d01/oratrn/viscomn/conf/VIS_webnode1/iAS/Apache/Apache/conf/ssl.key<br />mv server.key.deploy 
server.key<br /> <br />Bounce the Apache services for VIS<br />Test the loginRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-76415398115519666742010-09-13T20:43:00.000-07:002010-09-17T06:45:58.931-07:00R12<strong><strong>To start MWA on R12</strong></strong><br />cd $ADMIN_SCRIPTS_HOME <br />./mwactlwrpr.sh start sysadmin/pwd <br /><br />STOP <br />cd $ADMIN_SCRIPTS_HOME <br />./mwactlwrpr.sh stop sysadmin/pwd<br /><br />Log Directories: <br />mwa.logdir= $INST_TOP/logs <br /><br />CFG file: <br />$INST_TOP/admin/install/mwa.cfg<br /><br />telnet hostname 10803<br /><br /><br /><strong><strong>Parameter for auto compilation of jsp in R12</strong></strong><br />change s_jsp_main_mode from justrun to recompile<br /><br /><strong><strong>Default env file in R12</strong></strong><br />$ORA_CONFIG_HOME/10.1.2/forms/server/default.env<br />on 11i <br />$ORACLE_HOME/forms60/server/default.env<br /><br /><br /><strong><strong>R12 version checks</strong></strong><br /><br />## Collect the Apache version <br />echo "*****Apache Version*****" <br />$IAS_ORACLE_HOME/Apache/Apache/bin/httpd -v <br />echo " " <br />## Collect perl version <br />echo "*****perl version******" <br />$IAS_ORACLE_HOME/perl/bin/perl -v|grep built <br />echo " " <br />## Collect Java version <br />echo "******Java Version******" <br />sh -c "`awk -F= '$1 ~ /^JSERVJAVA.*$/ {print $2}' $ADMIN_SCRIPTS_HOME/java.sh` -version;" <br />echo " " <br />## Collect client JRE version <br />echo "*****Client JRE version*****" <br />cat $FORMS_WEB_CONFIG_FILE|grep sun_plugin_version| cut -c 1-35 <br />echo " " <br />## Collect Forms version <br />echo "*****Forms Version*****" <br />$ORACLE_HOME/bin/frmcmp_batch|grep Forms| grep Version <br />echo " " <br />## Collect PL/SQL Version <br />echo "*****PL/SQL Version****" <br />$ORACLE_HOME/bin/frmcmp_batch|grep PL/SQL|grep Version <br />echo " " <br
/>## Collect Forms communication mode <br />echo "****Forms Communication Mode****" <br />cat $FORMS_WEB_CONFIG_FILE|grep serverURL= <br />echo "If the serverURL parameter has no value then Forms is implemented in socket mode, else it is servlet mode"<br /><br /><br /><strong><strong>R12 form and pll compilation</strong></strong><br />$ORACLE_HOME/bin/frmcmp_batch module=$AU_TOP/forms/US/WMSLABEL.fmb userid=APPS/***** output_file=$WMS_TOP/forms/US/WMSLABEL.fmx module_type=form batch=yes compile_all=special<br /><br />$ORACLE_HOME/bin/frmcmp_batch module=$AU_TOP/resource/CSCHTAB.pll userid=APPS/***** output_file=$AU_TOP/resource/CSCHTAB.plx module_type=library batch=yes compile_all=special<br /><br /><br /><strong><strong>Version checks in 11i and R12</strong></strong><br /><br /><strong><strong>How to find Apps Version (11i/R12/12i)</strong></strong><br />Connect to the database as user apps<br />SQL> select release_name from apps.fnd_product_groups;<br />Output like 12.0.4 or 11.5.10.2<br /><br /><strong><strong>Web Server/Apache or Application Server in Apps 11i/R12 </strong></strong> <br />Log in as the Application user, set the environment and run<br />$IAS_ORACLE_HOME/Apache/Apache/bin/httpd -version<br />Output for 11i should be like<br />Server version: Oracle HTTP Server Powered by Apache/1.3.19 (Unix)<br />Server built: Jan 26 2005 11:06:44 (iAS 1.0.2.2.2 rollup 5)<br /><br />Output for R12 should be like<br />Server version: Oracle-Application-Server-10g/10.1.3.0.0Oracle-HTTP-Server<br />Server built: Dec 4 2006 14:44:38<br /><br /><strong><strong>Forms & Report version</strong></strong> (aka developer 6i) in 11i >> Log in as the Application user, set the environment and run<br />$ORACLE_HOME/bin/f60run | grep Version | grep Forms<br /><br />output like<br />Forms 6.0 (Forms Runtime) Version 6.0.8.25.2 (Production)<br />Check the fourth number in the version; 25 means Forms 6i patchset 16 (25-9)<br /><br /><strong><strong>Forms & Report
version in R12/12i</strong></strong> Log in as the Application user, set the environment and run<br />$ORACLE_HOME/bin/rwrun | grep Release<br />or rwrun help=yes | grep Release<br />Output should be like<br />Report Builder: Release 10.1.2.2.0 <br />You can safely ignore warnings<br /><br /><strong><strong>Oracle Jinitiator</strong></strong> in 11i/R12/12i >><br /><br />Log in as the Application user, set the environment and run <br />grep jinit_ver_comma $CONTEXT_FILE <br /> <br />(Default is Java Plug-in for R12/12i)<br /><br />Oracle Java Plug-in in 11i/R12/12i >><br /><br />Log in as the Application user, set the environment and run <br />grep plugin $CONTEXT_FILE<br /> <br /><br /><strong><strong>File Version on file system </strong></strong><br />adident Header filename<br />or<br />strings file_name | grep Header<br /><br />Here adident is an AD Utility (Oracle Apps) and strings is a Unix utility<br /><br /><strong><strong>Version of pld file</strong></strong><br />*.pld files are the source code of *.pll files, which are in turn the source of *.plx.
*.pll is in $AU_TOP/resource and to find its version check<br /><br />adident Header $AU_TOP/resource/<filename>.pll<br />IGSAU012.pll:<br />$Header IGSAU012.pld 115.1.115100.1 2004/04/01 05:40:18 appldev ship $<br /><br />or<br />strings $AU_TOP/resource/<filename>.pll | grep -i header<br /><br />FDRCSID('$Header: IGSAU012.pld 115.1.115100.1 2004/04/01 05:40:18 appldev ship $');<br /><br /><strong><strong>OA Framework Version</strong></strong> <br /> http://hostname.domainName:port/OA_HTML/OAInfo.jsp (Only for 11i); Log in as the Application user, set the environment and run<br /><br />adident Header $FND_TOP/html/OA.jsp<br />adident Header $OA_HTML/OA.jsp<br /><br />output for both should look like<br />$Header OA.jsp 115.60 2006/03/31 00:47:28 atgops1 noship $<br /><br />120.21 means OA Framework Version (coming soon..)<br />115.60 means OA Framework Version (coming soon..)<br />115.56 means OA Framework Version (coming soon..)<br />115.36 means OA Framework Version 5.7<br />115.27 means OA Framework Version 5.6E<br />115.26 means OA Framework Version 5.5.2E <br /><br />OA.jsp Version OA Framework Version ARU Patch Number <br />115.56 5.10 Patch:3875569. <br />115.36 5.7 Patch:2771817. <br />115.27 5.6 Patch:2278688.
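The adident lookup above can be scripted; a minimal sketch, where the sample header line mimics the output shown above (in a live environment you would capture it from `adident Header $OA_HTML/OA.jsp`), and the version-to-framework mapping covers only the rows listed in the table:

```shell
# Sketch: extract the version token from an adident-style header line and
# map known values to an OA Framework release. The sample line is
# illustrative, not captured from a real instance.
line='$Header OA.jsp 115.36 2006/03/31 00:47:28 atgops1 noship $'
# Field 3 of the header line is the file version (e.g. 115.36)
ver=`echo "$line" | awk '{print $3}'`
case "$ver" in
  115.56) fw="5.10" ;;
  115.36) fw="5.7"  ;;
  115.27) fw="5.6"  ;;
  *)      fw="unknown - check the table above" ;;
esac
echo "OA.jsp version: $ver -> OA Framework $fw"
```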
<br /><br /><br /><strong><strong>Discoverer Version for 11i</strong></strong> (3i or 4i) <br /> Log in as the Application user, set the environment and run<br />$ORACLE_HOME/bin/disc4ws | grep -i Version<br /><br />Discoverer Version for 11i or R12 (10g AS) <br /> Check under the Application Server section, as 10g AS Discoverer is standalone<br /><br /><strong><strong>Workflow Version with Apps</strong></strong> <br /> Connect to the database as the apps user<br />SQL> select TEXT Version from WF_RESOURCES where NAME = 'WF_VERSION';<br />Output like 2.6.0 means workflow version 2.6.0<br /><br /><strong>Oracle Single Sign On</strong><br />Connect to the database which holds the SSO repository<br />SQL>select version from orasso.wwc_version$;<br /><br /><strong><strong>Oracle Internet Directory </strong></strong><br />There are two components in OID (Software/binaries & Schema/database)<br /><br /> To find the software/binary version<br /><br />$ORACLE_HOME/bin/oidldapd -version<br />output should look like<br /><br />oidldapd: Release 10.1.4.0.1 - Production on thu sep 11 11:08:12 2008<br />Copyright (c) 1982, 2006 Oracle.
All rights reserved.<br /><br />To find the schema/database version use<br /><br />ldapsearch -h <hostname> -p <port> -D "cn=orcladmin" -w "<password>" -b "" \<br />-s base "objectclass=*" orcldirectoryversion<br /><br />and output should be like<br /><br />version: 1<br />dn:<br />orcldirectoryversion: OID 10.1.4.0.1<br /><br />or run the following query in the database<br />SQL> select attrval from ods.ds_attrstore where entryid = 1 and attrname = 'orcldirectoryversion';<br /><br />Output should be like OID 10.1.4.0.1<br /><br /><strong><strong>Application Server</strong></strong> <br /> Oracle Application Server (prior to Oracle WebLogic Server)<br />If the application server is registered in a database (Portal, Discoverer) check from the database<br />SQL> select * from ias_versions;<br />or<br />SQL>select * from INTERNET_APPSERVER_REGISTRY.SCHEMA_VERSIONS;<br /><br /><strong><strong>OC4J</strong></strong> (Oracle Container for J2EE)<br />Set ORACLE_HOME<br />cd $ORACLE_HOME/j2ee/home<br />java -jar oc4j.jar -version<br /><br /><strong><strong>Oracle Portal </strong></strong><br />SQL> select version from portal.wwc_version$;<br /><br /><strong><strong>Database Component</strong></strong><br /><br />I) Oracle Database<br /><br />To find the database version<br />SQL> select * from v$version;<br />or<br />for all component versions in the database<br /><br />$ORACLE_HOME/OPatch/opatch lsinventory -detail<br /><br /><strong><strong>Oracle Enterprise Manager</strong></strong><br />Metalink Note 605398.1 How to find the version of the main EM components<br /><br /><strong><strong>Unix Operating System</strong></strong><br /><br />Solaris -> cat /etc/release<br />Red Hat Linux -> cat /etc/redhat-release<br /><br /><strong><strong>Checking whether Forms Socket or Forms Servlets are used</strong></strong><br />Look for the following line in the jserv.conf file<br />#ApJServGroup FormsGroup 1 1 /u02/oracle/visora/iAS/Apache/Jserv/etc/forms.properties <br />If using servlets this line will be uncommented.
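The jserv.conf check just described can be sketched as a small shell test; the sample file below is hypothetical, and on a real 11i system you would point it at the jserv.conf under the iAS Apache/Jserv/etc directory.

```shell
# Sketch of the socket-vs-servlet check on jserv.conf (the sample file is
# hypothetical; on a real system use your actual iAS jserv.conf path).
conf=$(mktemp)
cat > "$conf" <<'EOF'
ApJServGroup FormsGroup 1 1 /u02/oracle/visora/iAS/Apache/Jserv/etc/forms.properties
EOF

# An uncommented ApJServGroup line (no leading '#') indicates Forms Servlet mode
if grep -q '^ApJServGroup' "$conf"; then
  mode="servlet"
else
  mode="socket"
fi
echo "Forms connect mode: $mode"
rm -f "$conf"
```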
<br />Check the appsweb.cfg under OA_HTML/bin for the following line;<br />if using servlets this line will be uncommented.<br />; serverURL=/forms/formservlet<br />If the serverURL is uncommented and set, then you are using Forms Servlets. <br />Otherwise, you are using the Socket connection mode.<br /><br /><br /><strong><strong>Middle/Application Tier Scripts in R12</strong></strong><br />-------------------------------------------------<br /><br />Scripts for Application Tier services in R12 are located in "Install_base/inst/apps/$CONTEXT_NAME/admin/scripts"<br />where CONTEXT_NAME is of the format SID_HOSTNAME<br /><br />i) adstrtal.sh <br />Master script to start all components/services of the middle tier or application tier. This script uses the Service Control API to start all services which are enabled, after checking them in the context file (SID_HOSTNAME.xml or CONTEXT_NAME.xml)<br /><br />ii) adstpall.sh<br />Master script to stop all components/services of the middle tier or application tier.<br /><br />iii) adalnctl.sh <br />Script to start / stop the apps listener (FNDFS and FNDSM). This listener executable will be in the 10.1.2 ORACLE_HOME (i.e. Forms & Reports Home)<br />The listener.ora file will be in the $INST_TOP/apps/$CONTEXT_NAME/ora/10.1.2/network/admin directory<br />(Mostly similar to the one in 11i, with the only change being the ORACLE_HOME, i.e. from 8.0.6 to 10.1.2)<br /><br />iv) adapcctl.sh<br />Script to start/stop the Web Server or Oracle HTTP Server. This script uses opmn (Oracle Process Manager and Notification Server) with syntax similar to opmnctl [start|stop]proc ohs,<br />like opmnctl stopproc ohs .<br /><br /><br /><br />(In 11i this script used to call the apachectl executable directly, but it now calls opmnctl which in turn calls apachectl. In 11i the web server Oracle Home was 1.0.2.2.2 but in R12 it's 10.1.3)<br /><br />v) adcmctl.sh <br />Script to start / stop the concurrent manager, similar to the one in 11i.
(This script in turn calls startmgr.sh )<br /><br />vi) adformsctl.sh<br />Script to start / stop Forms OC4J from the 10.1.3 Oracle_Home. This script will also use opmnctl to start/stop Forms OC4J, like<br />opmnctl stopproc type=oc4j instancename=forms<br /><br />vii) adformsrvctl.sh<br />This script is used only if you wish to start forms in socket mode. The default forms connect method in R12 is servlet.<br />If started, this will start the frmsrv executable from the 10.1.2 Oracle_Home in Apps R12<br /><br /><br />viii) adoacorectl.sh<br />This script will start/stop the oacore OC4J in the 10.1.3 Oracle_Home. This script will also use opmnctl (similar to adapcctl & adformsctl) to start the oacore instance of OC4J, like<br />opmnctl startproc type=oc4j instancename=oacore<br /><br />ix) adoafmctl.sh<br />This script will start/stop the oafm OC4J in the 10.1.3 Oracle_Home. This script will also use opmnctl (similar to the above) to start the oafm instance of OC4J, like<br />opmnctl startproc type=oc4j instancename=oafm<br /><br />x) adopmnctl.sh<br />This script will start/stop the opmn service in the 10.1.3 Oracle_Home. opmn controls all services in the 10.1.3 Oracle_Home, like the web server or the various oc4j instances. If any services stop abnormally, opmn will/should start them automatically.<br /><br />xi) jtffmctl.sh<br />This script is used to start/stop the One-to-One Fulfillment server.<br /><br />xii) mwactl.sh<br />To start / stop the mwa telnet server, where MWA is the Mobile Web Applications server.<br /><br /><br /><strong><strong>R12 - log files information</strong></strong><br />A.
Startup/Shutdown Log files for Application Tier in R12<br /><br />Instance Top is a new TOP added in R12<br /><br />–Startup/Shutdown error message text files like adapcctl.txt, adcmctl.txt… <br />$INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log <br /><br />–Startup/Shutdown error messages related to the tech stack (10.1.2, 10.1.3 forms/reports/web)<br />$INST_TOP/apps/$CONTEXT_NAME/logs/ora/ (10.1.2 & 10.1.3)<br />$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache/error_log[timestamp]<br />$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/opmn/ (OC4J~…, oa*, opmn.log)<br />$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.2/network/ (listener log)<br />$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)<br /><br />B. Log files related to cloning in R12<br /><br />Preclone log files in the source instance<br />i) Database Tier - /$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/(StageDBTier_MMDDHHMM.log)<br /><br />ii) Application Tier - $INST_TOP/apps/$CONTEXT_NAME/admin/log/ (StageAppsTier_MMDDHHMM.log)<br /><br />Clone log files in the target instance<br /><br />Database Tier - $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_.log<br />Apps Tier - $INST_TOP/apps/$CONTEXT_NAME/admin/log/ApplyAppsTier_.log<br /><br />—–<br />If your clone on the DB Tier fails while running txkConfigDBOcm.pl (check Metalink note 415020.1):<br />during the clone step on the DB Tier it prompts for "Target System base directory for source homes", and here you have to give the base install directory (like ../../r12) and not the Oracle Home (like ../../r12/db/tech_st_10.2.0)<br />—–<br /><br />C. Patching related log files in R12<br /><br />i) Application Tier adpatch log - $APPL_TOP/admin/$SID/log/ <br />ii) Developer (Developer/Forms & Reports 10.1.2) Patch - $ORACLE_HOME/.patch_storage <br />iii) Web Server (Apache) patch - $IAS_ORACLE_HOME/.patch_storage<br />iv) Database Tier opatch log - $ORACLE_HOME/.patch_storage <br /><br />D.
Autoconfig related log files in R12<br />i) Database Tier Autoconfig log :<br />$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/adconfig.log<br />$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/NetServiceHandler.log<br /><br />ii) Application Tier Autoconfig log - $INST_TOP/apps/$CONTEXT_NAME/admin/log/$MMDDHHMM/adconfig.log<br /><br />Autoconfig context file location in R12 - $INST_TOP/apps/$CONTEXT_NAME/appl/admin/$CONTEXT_NAME.xml<br /><br />E. Other log files in R12<br />1) Database Tier<br />1.1) Relink Log files :<br />$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/make_$MMDDHHMM.log<br /><br />1.2) Alert Log Files :<br />$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log<br /><br />1.3) Network Logs :<br />$ORACLE_HOME/network/admin/$SID.log<br /><br />1.4) OUI Logs :<br />OUI Inventory Logs :<br />$ORACLE_HOME/admin/oui/$CONTEXT_NAME/oraInventory/logs<br /><br />2) Application Tier<br />$ORACLE_HOME/j2ee/DevSuite/log<br />$ORACLE_HOME/opmn/logs<br />$ORACLE_HOME/network/logs<br /><br />Tech Stack Patch 10.1.3 (Web/HTTP Server)<br />$IAS_ORACLE_HOME/j2ee/forms/logs<br />$IAS_ORACLE_HOME/j2ee/oafm/logs<br />$IAS_ORACLE_HOME/j2ee/oacore/logs<br />$IAS_ORACLE_HOME/opmn/logs<br />$IAS_ORACLE_HOME/network/log<br />$INST_TOP/logs/ora/10.1.2<br />$INST_TOP/logs/ora/10.1.3<br />$INST_TOP/logs/appl/conc/log<br />$INST_TOP/logs/appl/admin/log<br /><br /><br /><strong><strong>Instance Home Directory Structure</strong></strong><br /><br />$INST_TOP/admin/scripts ($ADMIN_SCRIPTS_HOME): All AD scripts are located here <br />$INST_TOP/appl ($APPL_CONFIG_HOME) : For standalone envs, this is set to $APPL_TOP<br />$INST_TOP/appl/fnd/12.0.0/secure FND_SECURE: The dbc file is located here<br />$INST_TOP/appl/admin All the env config files are located here <br />$INST_TOP/certs SSL Certificates go here <br />$INST_TOP/logs LOG_HOME: Central log file location.
All log files are placed here (except adconfig)<br />$INST_TOP/ora ($ORA_CONFIG_HOME)<br />$INST_TOP/ora/10.1.2 'C' Oracle home config; contains tnsnames and forms listener servlet config files<br />$INST_TOP/ora/10.1.3 Apache & OC4J config home. This is the 'Java' oracle home configuration for OPMN, Apache and OC4J <br />$INST_TOP/pids Apache/Forms server PID files are located here <br />$INST_TOP/portal Apache's DocumentRoot folder <br /><br /><br /><strong><strong>Admin node parameters</strong></strong><br />Provide the values required for creation of the new APPL_TOP Context file.<br /> <br />Target hostname (virtual or normal) [servername1]:<br /> <br />Target system database SID : TEST<br /> <br />Target system database server node [servername1] :<br /> <br />Target system base directory for source homes : /d01/oratst<br /> <br />Target system base directory for instance homes [/d01/oratst/inst] :<br /> <br />Target system service groups - Enable Root Service [disabled] :<br /> <br />Target system service groups - Enable Web Entry Point Services [disabled] :<br /> <br />Target system service groups - Enable Web Application Services [disabled] :<br /> <br />Target system service groups - Enable Batch Processing Services [enabled] :<br /> <br />Target system service groups - Enable Other Services [disabled] :<br /> <br />Do you want to preserve the Display set to cosxaor0k:0.0 (y/n) [y] ?
:<br /> <br />Clone Context uses the same port pool mechanism as the Rapid Install.<br />Once you choose a port pool, Clone Context will validate the port availability.<br /> <br />Enter the port pool to be used for the target system node [0-99] :<br />1<br /> <br />Checking the port pool 1<br />done: Port Pool 1 is free<br />Report file located at /d01/oratst/inst/apps/TEST_servername1/admin/out/portpool.lst<br />Complete port information available at /d01/oratst/inst/apps/TEST_servername1/admin/out/portpool.lst<br /> <br />UTL_FILE_DIR on database tier consists of the following directories.<br /> <br />1. /d01/oratst/inst/apps/TEST_servername1/appltmp<br />2. /usr/tmp<br />3. /d02/oratst/db/tech_st/10.2.0/appsutil/outbound/TEST_servername1<br />4. /d02/oratst/db/tech_st/10.2.0/appsutil/outbound/TEST_servername1<br />5. /usr/tmp<br />Choose a value which will be set as APPLPTMP value on the target node [1] : 2<br /><br /><br /><strong><strong>Web node parameters</strong></strong><br />Provide the values required for creation of the new APPL_TOP Context file.<br /> <br />Target hostname (virtual or normal) [servername1] :<br /> <br />Target system domain name : domainname<br /> <br />It is recommended that your inputs are validated by the program.<br />However you might choose not to validate your inputs under following circumstances:<br /> -If cloning a context on source system for a remote system.<br /> -If cloning a context on a machine where the ports are taken and you do not want to shutdown the services at this point.<br /> -If cloning a context but the database it needs to connect is not available.<br /> <br />Do you want the inputs to be validated (y/n) [n] ? 
: y<br /> <br />Target system service groups - Enable Root Service [disabled] : enabled<br /> <br />Target system service groups - Enable Web Entry Point Services [disabled] : enabled<br /> <br />Target system service groups - Enable Web Application Services [disabled] : enabled<br /> <br />Target system service groups - Enable Batch Processing Services [enabled] : disabled<br /> <br />Target system service groups - Enable Other Services [disabled] : enabled<br /> <br />Do you want to preserve the Display set to cosxaor0k:0.0 (y/n) [y] ? : n<br /> <br />Target system Display [servername1:0.0] :<br /> <br />RC-00217: Warning: Configuration home directory (s_config_home) evaluates to /d01/oratst/inst/apps/TEST_servername1. A directory with this name already exists and is not empty.<br /> <br />Do you want to continue (y/n) ? Type y to continue with current value of Configuration home. Type n to abort. : y<br />Database port is 1522<br /> <br />Do you want the target system to have the same port values as the source system (y/n) [y] ? :<br />Complete port information available at /d01/oratst/apps/apps_st/comn/clone/bin/out/TEST_servername1/portpool.lst<br /> <br />UTL_FILE_DIR on database tier consists of the following directories.<br /> <br />1. /d01/oratst/inst/apps/TEST_servername1cosxaor0k/appltmp<br />2. /usr/tmp<br />3. /d02/oratst/db/tech_st/10.2.0/appsutil/outbound/TEST_servername1<br />4. /d02/oratst/db/tech_st/10.2.0/appsutil/outbound/TEST_servername1<br />5. /usr/tmp<br />Choose a value which will be set as APPLPTMP value on the target node [1] : 2<br /><br /><br /><strong><strong>Services</strong></strong> <br />(a new concept introduced in R12) define the set of processes that will be started on each application node. These services can be activated or deactivated on a specific application tier/node.
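The service-group idea can be sketched as a simple lookup from group name to the control scripts it covers; the grouping below is illustrative, follows the script descriptions in this post, and the exact membership may vary by release.

```shell
# Map an R12 service group to its main control scripts (illustrative
# grouping based on the script descriptions in this post).
scripts_for() {
  case "$1" in
    root)            echo "adopmnctl.sh" ;;
    web_entry)       echo "adapcctl.sh" ;;
    web_application) echo "adoacorectl.sh adformsctl.sh adoafmctl.sh" ;;
    batch)           echo "adcmctl.sh adalnctl.sh jtffmctl.sh" ;;
    other)           echo "adformsrvctl.sh mwactl.sh" ;;
    *)               echo "unknown" ;;
  esac
}

scripts_for batch   # -> adcmctl.sh adalnctl.sh jtffmctl.sh
```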
<br /><br /><br /><strong><strong>Type of Application Services</strong></strong><br />Application tier processes are broadly classified into five types of services<br /><br />i) Root Service - Oracle Process Manager and Notification (OPMN) is part of Oracle Application Server 10G R2/R3. OPMN is used for inter-process communication (between OC4J and HTTP Server), death detection, and start/stop of services for 10g R3.<br /><br />ii) Web Entry Point Services - HTTP Server is part of Oracle Application Server 10G R3 and built on Apache (aka Oracle Apache). All web requests to R12 first hit the HTTP Server.<br /><br />iii) Web Application Services - OACORE OC4J, Forms OC4J, OAFM OC4J - These are a set of three OC4J (Oracle Containers for J2EE) instances that run the application.<br /><br />iv) Batch Processing Services - This set of services consists of the Apps Listener (FNDFS & FNDSM), Concurrent Manager, Fulfillment Server and Interaction Server<br /><br />v) Other Services - Oracle Forms Service, Forms Metric Client/Server and Mobile Web Application (MWA) Service<br /><br /><br /><strong><strong>Process —> Service —> Startup Script —> Context parameter </strong></strong><br />a) Oracle Process Manager - Root Service - adopmnctl.sh - s_root_status, s_opmnstatus<br />b) Oracle HTTP Server - Web Entry Point Service - adapcctl.sh - s_web_entry_status, s_oacorestatus<br />c) OACORE OC4J - Web Application Services - adoacorectl.sh - s_web_applications_status, s_oacorectrl<br />d) FORMS OC4J - Web Application Services - adformsctl.sh - s_web_applications_status, s_formsstatus<br />e) OAFM OC4J - Web Application Services - adoafmctl.sh - s_web_applications_status, s_oafmstatus<br />f) Concurrent Manager - Batch Processing Service - adcmctl.sh - s_batch_status, s_concstatus<br />g) Apps Listener - Batch Processing Service - adalnctl.sh - s_batch_status, s_tnsstatus<br />h) Interaction Center (ICSM) - Batch Processing Service - ieoicsm.sh - s_batch_status, s_icsmctrl<br />i) Fulfillment Server - Batch Processing Service - 
jtffmctl.sh - s_batch_status, s_jtffsstatus<br />j) Forms Server - Other Service - adformsrvctl.sh - s_other_service_group_status, s_formsserver_status<br />k) Metric Client - Other Service - adfmcctl.sh - s_other_service_group_status, s_metcstatus<br />l) Metric Server - Other Service - adfmsctl.sh - s_other_service_group_status, s_metsstatus<br />m) Mobile Web Application (MWA) - Other Service - mwactlwrpr.sh - s_other_service_group_status, s_mwastatus<br /><br /><br />Important:<br />1) When you select Root Services, Web Services are automatically selected, i.e. the first three service groups (Root, Web Entry Point and Web Application Services) can only be enabled or disabled together.<br />If you want Web Entry Point Services, Web Application Services, and Root Services to be installed on different nodes, then use autoconfig after installation.<br /><br />2) In R12, with the introduction of Services, there is NO concept of a Forms node or Web node.<br /><br />3) With a default installation, all service groups (Root, Web Entry Point, Web Application Services, Batch Processing and Other Services) are enabled.<br /><br />406558.1 Configuring Applications Node Services in Oracle Applications Release 12 (Metalink Note)<br />Chapter 2 of "Oracle® Applications Installation Guide: Using Rapid Install Release 12" at http://download.oracle.com/docs/cd/B40089_10/current/acrobat/120oaig.pdf<br /><br /><br /><strong><strong>JSP in R12</strong></strong><br />Here is the explanation of why we need to compile JSPs whenever we clear the JCache. Here, clearing the JCache means removing the class files generated from JSP pages, which live under the directory $COMMON_TOP/_pages.<br /><br />So in Release 11i, whenever we cleared the JCache and bounced Apache, new class files were created automatically when a JSP page was called from a browser session, and the JSP was rendered fine.
<br />But in the case of Release 12 it is observed that when calling the JSP no class file is created in the directory $COMMON_TOP/_pages, so we need to create it manually. We have an alternative method, but it is not recommended.<br /><br />The alternative is to change the value of the main_mode parameter to recompile (instead of justrun).<br />This can be achieved with the following steps<br /><br />Use the Context editor to change the value of "s_jsp_main_mode" in the <SID>.xml file used by autoconfig from justrun to recompile <br />Run Autoconfig to propagate the changes to the configuration files <br />Verify that now the $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml has<br /> <br /> <init-param><br /> <param-name>main_mode</param-name><br /> <param-value>recompile</param-value><br /> </init-param><br /> <br />Test the scenario failing before. <br />See that now a new _<jspname>.class is created when the JSP page is called. <br />For production environments the manual 'ojspcompile.pl' method is recommended for the following reasons <br /><br />With 'justrun' a fixed set of JSPs is used which cannot change due to maintenance actions. With 'recompile' JSP pages replacing existing ones will recompile automatically while the environment is up and running. This may lead to errors for the users when compilation fails, or different versions of a JSP being used within a single session. <br />Using 'justrun' skips the check of whether compilation is needed and therefore improves performance.<br /><br /><br /><strong><strong>Datapump and export/import</strong></strong><br />Created user xxi and granted the necessary privileges.<br />Also created DIRECTORY exp_dp_dir, which is set to /d23/TEST/TEST_DVD_RESTORE/ICE.
(required for Data Pump)<br /><br />Dmp files under /d23/TEST/TEST_DVD_RESTORE/ICE have been imported using the Data Pump utility as below.<br /><br /> <br />impdp system/xxx DIRECTORY=exp_dp_dir DUMPFILE=IGL_POS852_D012DMP.dmp logfile=imp_IGL_POS852_D012DMP.log REMAP_SCHEMA=xxi:xxz REMAP_TABLESPACE=APPS_TS_TX_DATA:USERS,APPS_TS_TX_IDX:USERS<br /> <br />impdp system/xxx DIRECTORY=exp_dp_dir DUMPFILE=IGL_POS852_D010DMP.dmp logfile=imp_IGL_POS852_D010DMP.log REMAP_SCHEMA=xxi:xxz REMAP_TABLESPACE=APPS_TS_TX_DATA:USERS,APPS_TS_TX_IDX:USERS<br /><br />impdp system/xxx DIRECTORY=exp_dp_dir DUMPFILE=IGL_POS852_D001DMP.dmp logfile=imp_IGL_POS852_D001DMP.log REMAP_SCHEMA=xxi:xxz REMAP_TABLESPACE=APPS_TS_TX_DATA:USERS,APPS_TS_TX_IDX:USERS<br /><br />impdp system/xxx DIRECTORY=exp_dp_dir DUMPFILE=IGL_POS852_H001.dmp logfile=imp_IGL_POS852_H001.log REMAP_SCHEMA=xxi:xxz REMAP_TABLESPACE=APPS_TS_TX_DATA:USERS,APPS_TS_TX_IDX:USERS <br /><br /><br />Dmp files under /d23/TEST/TEST_DVD_RESTORE/LEG have to be imported using the imp utility.<br /><br />Using the below command, objects from the WMS_LABEL_REQUESTS_HIST_WIP.dmp file have been imported.<br /><br />$ imp userid=system/WEMXzX2X file=WMS_LABEL_REQUESTS_HIST_WIP.dmp log=imp_WMS_LABEL_REQUESTS_HIST.log fromuser=wms touser=xxz<br /><br />One more example:<br />SQL> create or replace directory expdp_vrtx_dir as '/d01/VERTEXBKP/DUMPBKP';<br /><br />Directory created.<br /><br />SQL> grant read,write on directory expdp_vrtx_dir to system;<br /><br />Grant succeeded.<br /><br />$ expdp system/xxx DIRECTORY=expdp_vrtx_dir DUMPFILE=vertexo_expdp.dmp logfile=vertexo_expdp.log SCHEMAS=VERTEXO<br /><br />Restore/Import VERTEXO schema<br /> Note: The VERTEXO schema will be imported only if it's not already there in the TARGET instance.<br /> No need to import if the below query returns rows in the TARGET instance.<br />sqlplus "/ as sysdba"<br />select object_type,count(*) from dba_objects where owner='VERTEXO'<br />GROUP BY OBJECT_TYPE;<br /><br />OBJECT_TYPE COUNT(*)<br
/>------------------- ----------<br />SEQUENCE 12<br />TRIGGER 12<br />TABLE 325<br />INDEX 535<br />VIEW 7<br /><br />Perform the below steps to import the VERTEXO schema only if no rows are returned in the above output.<br />$ sqlplus "/as sysdba"<br /><br />SQL> create or replace directory expdp_vrtx_dir as '/d01/VERTEXBKP/DUMPBKP';<br />Directory created.<br /><br />SQL> grant read,write on directory expdp_vrtx_dir to system;<br /><br />Grant succeeded.<br /><br />CREATE TABLESPACE vertex_data datafile '/d03/ptch/db/apps_st/data/xxvertex_data01.dbf' SIZE 1000m <br />AUTOEXTEND ON <br />NEXT 500m <br />MAXSIZE 4096m;<br /><br />CREATE TABLESPACE vertex_idx datafile '/d03/patch/db/apps_st/data/xxvertex_ind01.dbf' SIZE 1000m <br />AUTOEXTEND ON <br />NEXT 500m <br />MAXSIZE 4096m;<br />exit<br /> <br /># cd /d01/VERTEXBKP/DUMPBKP<br />impdp system/systempasswd directory=expdp_vrtx_dir dumpfile=vertexo_expdp.dmp SCHEMAS=VERTEXO<br /><br /><br /><br />EXP/IMP examples:<br /><br />exp example:<br />$ exp transport_tablespace=y tablespaces=XXJDE_TS_DATA, XXJDE_TS_INDEX triggers=y constraints=y grants=y file=XXJDE.dmp log=XXJDE.log<br /><br />exp and imp example:<br /># more expLPSReviewTables.par<br /># Export of LPS Employee Review tables for support transition.<br />#<br /># Perform "export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15" prior to running.<br />#<br /># 20100603 BKWilson Recreated export after changing VARCHAR2(4000) columns<br /># into CLOB columns.<br />#<br />userid=fismim<br />FILE=exportLPSReviewTables.dmp<br />FULL=n<br />COMPRESS=N<br />GRANTS=N<br />INDEXES=N<br />CONSTRAINTS=N<br />CONSISTENT=Y<br />ROWS=Y<br />STATISTICS=NONE<br />TABLES=(FIS_MIM_RV_CONTROL,<br />FIS_MIM_RV_EMPL,<br />FIS_MIM_RV_OBJ,<br />FIS_MIM_RV_OBJ_TYPE,<br />FIS_MIM_RV_OBJ_NBR,<br />FIS_MIM_RV_RATING_TBL)<br />LOG=exportLPSReviewTables.log<br /><br />$ more impReviewTable.par_bkp<br /># Import a FISMIM Review table into SYSADM schema as a test.<br />#<br /># Perform "export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15" prior to running.<br />#<br /># 20100601 BKWilson Re-executed test with CLOB columns.<br />#<br />userid=mim<br />full=n<br />tables=(FIS_MIM_RV_CONTROL,FIS_MIM_RV_EMPL,FIS_MIM_RV_OBJ,FIS_MIM_RV_OBJ_TYPE,FIS_MIM_RV_OBJ_NBR,FIS_MIM_RV_RATING_TBL)<br />ignore=y<br />commit=y<br />statistics=none<br />buffer=50000000<br />file=exportLPSReviewTables.dmp<br />log=importLPSReviewTables.log<br /><br /><br />To grant all privileges on all tables of a schema to another schema:<br />SQL> begin<br /> 2 for i in (select distinct table_name from all_tables where owner='MIM')<br /> 3 loop<br /> 4 execute immediate 'grant all on '||i.table_name||' to apps';<br /> 5 end loop;<br /> 6 end;<br /> 7 /<br /><br />PL/SQL procedure successfully completed.<br /><br />Exporting and importing a table which belongs to the apps schema<br /><br />exp USERID=apps/apps file=CLX_value_set_update_temp_exp.dmp GRANTS=Y TABLE=CLX_VALUE_SET_UPDATE_TEMP<br /><br />imp apps/st4ng file=CLX_value_set_update_temp_exp.dmp fromuser=apps touser=apps TABLES=CLX_VALUE_SET_UPDATE_TEMP<br /><br />$ imp system/PcT3X45K FILE=expdat.dmp FULL=Y log=poimp.log<br /><br />Take an export dump of the concurrent tables:<br />exp userid=applsys/qatapps09 file=conc.dmp log=exp.log <br />tables=FND_CONCURRENT_QUEUES,FND_CONCURRENT_QUEUES_TL,<br />FND_CONCURRENT_QUEUE_SIZE, FND_CONCURRENT_QUEUE_CONTENT<br /><br /><br /><br />imp system/manager file=/d12/oradev/mine/XX_CASTLE.dmp fromuser=XX_CASTLE touser=XX_CASTLE log=XX_CASTLE_imp.log<br /><br />imp sys/manager file=xxtm1.dmp fromuser=XXTM1 touser=XXTM1 log=xxtm1.imp.log<br /><br />exp userid=system/password file=schema_name.dmp log=logfile owner=<required schema><br /><br />eg: How to export a schema like MSDEM<br /><br />exp userid=system/(passwd) file=msdem_CRP2.dmp log=msdem_CRP2.log direct=y feedback=1000 owner=MSDEM <br />exp " '/ as sysdba' " file=$REFRESH/msdem_CRP2.dmp log=$REFRESH/msdem_CRP2.log direct=y feedback=1000
owner=MSDEM<br /><br /><strong><strong>Application server</strong></strong><br /><br /><strong><strong>Start/Stop 10g AS</strong></strong><br /><br />10g AS architecture consists of<br />Infrastructure Services (Database & Identity Management)<br />& <br />Middle Tier Services (Portal & Wireless, or BI & Forms)<br />[J2EE & Webcache type (which is the third middle tier installation type)]<br /><br />Startup Sequence<br /> <br />-- Start Database Listener<br />-- Start Database<br />-- Start Infrastructure Services<br />-- Start Middle Tier Services<br /><br />Shutdown Sequence<br />-- Shutdown Middle Tier Services<br />-- Shutdown Infrastructure Services<br />-- Stop Database<br />-- Stop database listener<br /><br />Which utility should I use to start up / shut down (bounce) 10g AS?<br />You can use iasconsole (a browser-based console listening on default port 1156) to start up / shut down services<br />You can also use opmnctl, a command line tool (Oracle Process Management & Notification control), to start up / shut down services<br /><br />The dcmctl utility to start/stop services is now deprecated & should not be used<br /><br />Startup <br />Start listener as - lsnrctl start [listener_name]<br />Start Database as - sqlplus "/as sysdba"; SQL> startup<br />Start Infrastructure Services as - opmnctl startall (Make sure you are in the Infra Home)<br />Start Middle Tier Services as - opmnctl startall (Make sure you are in the Middle Tier Home)<br /><br />Shutdown<br />Stop Middle Tier Services as - opmnctl stopall (Make sure you are in the Middle Tier Home)<br />Stop Infrastructure Services as - opmnctl stopall (Make sure you are in the Infra Home)<br />Stop Database as - sqlplus "/as sysdba"; SQL> shutdown immediate<br />Stop listener as - lsnrctl stop [listener_name]<br /><br />How to start/stop iasconsole ?<br />you can
start/stop services using iasconsole, but before that you have to start / stop iasconsole itself, which you can do with emctl (em = enterprise manager)<br /><br />emctl start iasconsole<br />emctl stop iasconsole<br /><br />You can access the ias console screen from a browser via the URL<br />http://host.domain:em_port/emd (default em port is 1156)<br />http://host.domain:1156/emd (this will prompt for a username and password; use ias_admin)<br /><br />How to start an individual component ?<br />The steps mentioned above stop/start all components in the Middle Tier or Infrastructure Tier; what if I want to start/stop/restart only a few components of a tier ? Use<br /><br />opmnctl [startproc|stopproc|restartproc] ias-component=[component_name] <br />or<br />opmnctl [startproc|stopproc|restartproc] process-type=[process_name] <br /><br />where Component Name is like - OC4J, OID, HTTP_Server, WebCache and<br />Process Type is like - WebCacheAdmin, OC4J_Portal, OC4J_Security<br /><br /><br /><strong><strong>How to start / stop webcache ?</strong></strong><br />To start/stop webcache with 10g AS, use<br />opmnctl startproc/stopproc ias-component=WebCache<br />For a standalone J2EE & Webcache installation use<br />webcachectl start/stop<br /><br /><strong><strong>How to troubleshoot Webcache Issues ?</strong></strong><br />Webcache logs are at $ORACLE_HOME/logs (event_logs & access_log); all issues/errors are recorded in event_logs, so check this file for any issues related to Webcache <br /><br /><strong><strong>OID</strong></strong><br />If you don't know which ports your OID is using, refer to<br />portlist.ini in $ORACLE_HOME/install (note that this file will not list updated ports if you change OID ports after installation)<br />You should see an entry like<br />Oracle Internet Directory port = 389<br />Oracle Internet Directory (SSL) port = 636<br /><br /><br /><strong><strong>How to start OID ?</strong></strong><br />The OID process is controlled by opmn (Oracle Process Monitor & Notification server), so you by
default use the opmnctl command<br />To Start OID opmnctl startproc ias-component=OID<br />To Stop OID opmnctl stopproc ias-component=OID <br /><br />OID can also be started without OPMN:<br />First start oidmon (OID Monitor Process)<br />Then use oidctl (OID control)<br />To stop OID without OPMN<br />First stop the oid process using oidctl, then stop oidmon (OID monitoring process)<br /><br />When you start services using opmnctl, it in turn starts oidmon & oidctl<br /><br /><br /><strong><strong>SSO login URL</strong></strong><br />Users can log in to E-Business Suite locally (no SSO, directly from FND_USER), via SSO (authentication via SSO), or BOTH. Set the profile option "Applications SSO Login Types" to LOCAL or BOTH at user level and use<br />http(s)://(hostname).(domainname):(port)/OA_HTML/AppsLocalLogin.jsp<br />For SSO authentication use the URL<br />http(s)://(hostname).(domainname):(port)/oa_servlets/AppsLogin<br /><br /><strong><strong>10g AS version:</strong></strong><br />/d01/oradev/OraHomeAS/config<br />$ grep -i version ias.properties<br />Version=10.1.2.2.0<br /><br /><br /><strong><strong>Discoverer values setup in context file</strong></strong><br />1.
Check the following are set <br /><br /><oa_disco_server><br /> <disco_portrange oa_var="s_disco_servlet_portrange" oa_type="PORT">17011-17020</disco_portrange><br /> <disco_ver_comma oa_var="s_disco_ver_comma">10,1,2</disco_ver_comma><br /> <oem_web_port oa_var="s_oemweb_port" oa_type="PORT">10307</oem_web_port><br /> <osagent_port oa_var="s_osagent_port" oa_type="PORT">10308</osagent_port><br /> <disco_nprocs oa_var="s_disco_nprocs" osd="Linux">0</disco_nprocs><br /> <disco_eul_prefix oa_var="s_disco_eul_prefix">EUL5</disco_eul_prefix><br /> <disco_node_weight oa_var="s_disco_node_weight" osd="Linux">0</disco_node_weight><br /> <discoinstancename oa_var="s_discoinstanc">hostname.domainname_10315</discoinstancename><br /> <disco_standalone oa_var="s_disco_standalone">true</disco_standalone><br /> <disco_protocol oa_var="s_disco_protocol">http</disco_protocol><br /> <disco_machine oa_var="s_disco_machine">discovererhostname.domainname</disco_machine><br /> <disco_port oa_var="s_disco_port" oa_type="PORT">7778</disco_port><br /> <disco_ipaddress oa_var="s_disco_ip_addr">"xxx.xx.xx.xxx"</disco_ipaddress><br /> <disco_ORBalwaysProxy oa_var="s_disco_ORBalwaysProxy">no</disco_ORBalwaysProxy><br /> <disco_jretop oa_var="s_disco_jretop" osd="Linux">/d01/oravis/visora/806/jre1183o</disco_jretop><br /> <disco_jvm_options oa_var="s_disco_jvm_options" osd="unix"/><br /> <disco_oad_executable oa_var="s_disco_oad_executable">oad</disco_oad_executable><br /></oa_disco_server><br /><br /><appserverid_authentication oa_var="s_appserverid_authentication">OFF</appserverid_authentication><br /><br />Run autoconfig.<br /><br />Check the following profile options<br /><br />ICX: Discoverer Launcher http://discovererhostname.domainname:7778/discoverer/plus?Connect=[APPS_SECURE]<br />ICX: Discoverer Parameters ICX: Discoverer Release 10<br />ICX: Discoverer Viewer Launcher http://discovererhostname.domainname:7778/discoverer/viewer?Connect=[APPS_SECURE]<br />ICX: Discoverer use Viewer Yes<br /><br
/><strong><strong>To check the current Oracle Application Server Patchset version, i.e. whether 10.1.3.0, 10.1.3.1 or 10.1.3.2</strong></strong><br />a) Using runInstaller <br />- Invoke runInstaller from 1013_ORACLE_HOME/oui/bin<br />- Click on Installed Products.<br />- Select the 10.1.3 Oracle Home and expand the tree. You should see the Oracle Application Server Patchset 10.1.3.X patch in the list.<br /><br />b) Using OPatch <br />Run the command 'opatch lsinventory -detail'<br />Look for 'Oracle Application Server PatchSet' in the 'Product' column.<br /><br /><br /><strong><strong>OC4JADMIN password</strong></strong><br />If your source Oracle Home version is 10.1.3.0 the default password for oc4jadmin is secret; for 10.1.3.1/2/3 the password is oafm<br /><br /><strong><strong>To know the 10g Discoverer version</strong></strong><br />To get the existing version of the Discoverer Server on Unix, go to $ORACLE_HOME/bin and run the command:<br />strings -a dis51ws | grep "Discoverer Version:"<br />Otherwise you can start Discoverer Plus/Viewer and check the version.<br /><br /><br /><strong><strong>To unlock the ias_admin and orcladmin</strong></strong> a/c ---> 280116.1<br />Go to the directory $ORACLE_HOME/sysman/j2ee/conf. In this directory you'll find the configuration file jazn-data.xml. Inside this configuration file you'll find the definition of the ias_admin user, including its encrypted password:<br /><br />* snippet from jazn-data.xml<br /><users><br />...<br /><user><br /><name>ias_admin</name><br /><credentials>{903}YdvTMA...</credentials><br /></user><br /><br />Inside the tag <credentials> you see the encrypted password of ias_admin. 
To reset the password, just change the value inside the tag <credentials> as follows:<br /><br />snippet from jazn-data.xml<br /><users><br />...<br /><user><br /><name>ias_admin</name><br /><credentials>!mynewpassword1</credentials><br /></user><br /><br />The new password, which you provide in clear text, has to start with a "!". Be aware that the new password must satisfy the password rules: minimum 6 characters, at least one of which must be a number. After your changes, stop the ASC and restart it with the following commands:<br /><br />* emctl stop iasconsole<br />* emctl start iasconsole<br /><br />Now you can log in with your new password, and inside jazn-data.xml the password you provided in clear text is automatically changed into an encrypted password. <br /><br /><br />For orcladmin <br />How to Unlock/Reset Super User cn=orcladmin When the ODS's Password Has Been Forgotten or is Unknown? [ID 472752.1]<br />Unlock the orcladmin account using the following command:<br />oidpasswd connect=dtst unlock_su_acct=true<br /><br />To check if the orcladmin account is locked, use the following command:<br />$ORACLE_HOME/bin/ldapbind -p 389 -D cn=orcladmin -w welcome1<br /><br />OC4JADMIN password reset<br />466246.1 How To Reset The Password Of The oc4jadmin Account For Enterprise Manager<br />576587.1 How to change the OC4Jadmin Password<br />556031.1 How To Have The Password In Clear Text In Jazn-data.xml?<br />403678.1 How To Change The Passwords For BPELADMIN / OC4JADMIN / SOA Suite<br /><br />http://download.oracle.com/docs/cd/B25221_04/core.1013/b25209/trouble_asc.htm#BCEDHFEI<br />http://onlineappsdba.com/index.php/2008/06/03/changereset-oc4jadmin-password/<br /><br /><br /><strong><strong>Rotating oc4j and opmn logs on SOA nodes</strong></strong><br /><br />The changes below have to be made in the opmn.xml file. <br /><br />a. 
The tag below has to be added as part of the start-parameters category id for ias component SOA_GROUP and for process type OC4J_SOA. <br />( In our case we can find it at line <br />data id="oc4j-options" value="-out /d11/oraias/prodias/10.1.3/opmn/logs/mylog.out -err /oracle/101202J2EE/opmn/logs/mylog.err" /> <br /><br />b. Change the line <br />log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm"<br />rotation-size="1500000" to<br />log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm"<br />rotation-hour="0" <br /><br />2. opmnctl stopall <br />3. Take a backup of the log files. <br />4. opmnctl startallRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-64577412556554690422010-09-08T06:37:00.000-07:002010-09-08T06:38:26.290-07:00Metalink DISCOVERERError Unable To Connect To Oracle Applications Database 'afscpgcs' After Cloning An APPS Instance & Changing APPS Passwords [ID 788706.1]<br /><br />Connecting to Oracle EBS with Discoverer Fails with ORA-01017 Invalid Username/Password Error [ID 784358.1]<br /><br />A Quick Reference on Oracle Applications 11i/12i .DBC File and Discoverer [ID 421424.1]<br /><br />Connecting from Oracle Applications E-Business to Discoverer Fails with GWYUID and ORA-12154 Errors [ID 470471.1]<br /><br />How To Create An E-Business Suite Connection Trace For Discoverer 10g (10.1.2) Plus / Viewer [ID 370800.1]Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-65915710097092223112010-08-29T06:52:00.000-07:002010-08-29T07:08:55.393-07:00Version related commands/scripts on appln side<strong><strong>To check the Oracle Application Framework version</strong></strong><br />http://hostname.domain.com:8000/OA_HTML/OAInfo.jsp<br /><br /><strong><strong>Execute the following command to determine the version of 
OA.jsp:</strong></strong> <br />ident $FND_TOP/html/OA.jsp and <br />ident $OA_HTML/OA.jsp <br /><br />Attention: The version of OA.jsp in $FND_TOP/html should be the same as the one in $OA_HTML. If the versions of OA.jsp do not match, do the following:<br /><br />Run adadmin. <br />Choose the option called Maintain Application Files. <br />Choose the option called Copy files to destination. <br /><br />If the ident command does not return any version information for OA.jsp then you have not installed the Oracle Applications Self Service Framework. <br />Note: ident is not available on SUN & AIX. In this instance please use:<br /><br />adident Header $FND_TOP/html/OA.jsp and<br />adident Header $OA_HTML/OA.jsp<br /><br /><strong><strong>How to check whether GSM is enabled or not through the back end</strong></strong><br /> select PROFILE_OPTION_VALUE from fnd_profile_option_values where PROFILE_OPTION_ID=(select PROFILE_OPTION_ID from fnd_profile_options where PROFILE_OPTION_NAME like '%GSM%');<br />Check for the FNDSM process: ps -ef|grep FNDRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-81038151415341955532010-08-27T09:42:00.000-07:002010-08-27T09:46:06.809-07:00TEMP tablespace dropping and recreating<strong><strong>TEMP tablespace dropping and recreating</strong></strong><br />SQL> DROP TABLESPACE temp;<br />DROP TABLESPACE temp<br />*<br />ERROR at line 1:<br />ORA-12906: cannot drop default temporary tablespace<br /><br />SQL> select * from database_properties<br /> 2 where property_name = 'DEFAULT_TEMP_TABLESPACE';<br /><br />PROPERTY_NAME<br />------------------------------<br />PROPERTY_VALUE<br />--------------------------------------------------------------------------------<br />DESCRIPTION<br />--------------------------------------------------------------------------------<br />DEFAULT_TEMP_TABLESPACE<br />TEMP<br /><br />Because this is the default temporary tablespace 
Oracle won't let you drop it. Here are the steps needed to drop your default temporary tablespace.<br /><br />1) Create a New Temporary Tablespace<br />First, you will need to create a new temporary tablespace...in this example I've called the tablespace "TEMPX". Note that I purposefully made the tablespace small in size (10mb). This is because we will drop this tablespace at the end of this process.<br /><br />SQL> CREATE TEMPORARY TABLESPACE TEMPX<br /> 2 TEMPFILE '/data/tempx.dbf' SIZE 10M AUTOEXTEND OFF<br /> 3 EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K<br /> 4 SEGMENT SPACE MANAGEMENT MANUAL<br />/<br /><br />Tablespace created.<br /><br /><br />2) Make Your New Temporary Tablespace the Default Temporary Tablespace <br />SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempx;<br /><br />Database altered.<br /><br /><br />3) Drop Your Original Temporary Tablespace <br />SQL> DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;<br /><br />Tablespace dropped.<br /><br /><br />4) Recreate Your 'TEMP' Tablespace <br />SQL> CREATE TEMPORARY TABLESPACE TEMP<br /> 2 TEMPFILE '/data/temp.dbf' SIZE 1000M AUTOEXTEND OFF<br /> 3 EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K<br /> 4 SEGMENT SPACE MANAGEMENT MANUAL<br />/<br />or<br />"SQL> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/d23/oraptch/ptchdata/temp01.dbf' SIZE 50M<br /> 2 AUTOEXTEND ON NEXT 50M MAXSIZE 2048M,<br /> 3 '/d23/oravis/visdata/temp02.dbf' SIZE 50M AUTOEXTEND ON NEXT 50M MAXSIZE 2048M,<br /> 4 '/d23/oravis/visdata/temp03.dbf' SIZE 50M AUTOEXTEND ON NEXT 50M MAXSIZE 2048M,<br /> 5 '/d23/oravis/visdata/temp04.dbf' SIZE 50M AUTOEXTEND ON NEXT 50M MAXSIZE 2048M<br /> 6 EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M BLOCKSIZE 8K;<br /><br /><br />5) Make Your 'TEMP' Tablespace the Default Temporary Tablespace Again <br />SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;<br /><br />Database altered.<br /><br /><br />6) Drop Your 'TEMPX' Tablespace Now That It Is Not Needed <br />SQL> DROP TABLESPACE tempx 
INCLUDING CONTENTS AND DATAFILES;<br /><br />Tablespace dropped.Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-82435082122402306802010-08-19T11:42:00.000-07:002010-09-13T22:22:42.055-07:00Tkprof and Explain plan and traceanalyzer<strong><strong>Tkprof and Explain plan</strong></strong><br /><br />SHOW PARAMETER user_dump_dest (copy and save this parameter)<br /><br /><br />ALTER SYSTEM SET user_dump_dest='/ngs/app/rsrt/personal/';<br /><br /><br />ALTER SESSION SET timed_statistics=TRUE;<br /><br /><br />ALTER SESSION SET sql_trace=TRUE;<br /><br /><br />EXEC <procedure><br /><br /><br />RUN SQL Statements<br /><br /><br />ALTER SYSTEM SET user_dump_dest=<original value of user_dump_dest>;<br /><br /><br />ALTER SESSION SET timed_statistics=FALSE;<br /><br /><br />ALTER SESSION SET sql_trace=FALSE;<br /><br /><br />N.B.: ORADBA ACCESS/ALTER SYSTEM privilege required for ALTER SYSTEM<br /><br />The trace file will have the SPID in its name; from the SID of the session you can get the SPID from v$process<br /><br />tkprof filename.trc filename.txt sys=no explain=username/password<br /><br />or<br /><br />tkprof filename.trc output_filename.out sys=no explain=apps/password <br /> sort='(prsela,exeela,fchela)'<br /><br />The above command sorts the SQL by placing the worst performing <br />SQL statement at the top of the output file<br /><br /><br /><strong><strong>USING AUTOTRACE</strong></strong><br />SET AUTOT ON -- Runs the SQL and gives the execution plan and statistics<br /><br />SET AUTOT TRACE -- Runs the SQL but does not show the result and gives the execution plan and statistics<br /><br />SET AUTOT TRACE EXP -- Only gives the execution plan without running the query<br /><br />SET AUTOT OFF -- Switches off the Autotrace facility<br /><br /><br />To get the trace file from a concurrent request id:<br />select oracle_process_id from fnd_concurrent_requests where request_id=38156196; <br
/><br />ORACLE_PROCESS_ID <br />------------------------------ <br /><br /><br />The trace file name will have the Oracle process id in its name<br /><br /><br /><strong><strong>Create a TraceAnalyzer File </strong></strong><br />A trace file is a raw set of data produced by the Oracle Database. <br />TraceAnalyzer reformats the raw data so that it is easier to review. It is a more powerful tool than TKPROF. It translates things like bind variables and makes them easy to find. <br />a. Download TraceAnalyzer from Note 224270.1 <br />b. Install Trace Analyzer in SQL*Plus: @TRCACREA.sql <br />c. Retrieve the trace file. <br />d. In SQL*Plus, issue a command like the following to create a TraceAnalyzer version of the trace file: @TRCANLZR.sql UDUMP vis015_ora_22854.trcRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-29104306062496897592010-08-19T11:40:00.000-07:002010-09-11T08:52:41.535-07:00Form<strong><strong>Form Trace:</strong></strong><br />To turn trace on at the form level, go to the toolbar and navigate to HELP -> TOOLS -> TRACE. By clicking on the word Trace, a checkmark will appear next to it, indicating that trace has been turned on. Reproduce the problem. <br />Navigate to HELP -> TOOLS -> TRACE, and click on the word Trace to uncheck trace and turn it off.<br /><br /><br /><strong><strong>How will you take the forms trace?</strong></strong><br />http://hostname:port/dev60cgi/f60cgi?play=&record=collect&log=filename<br /><br /><strong><strong>Logging into forms directly</strong></strong><br />You should be able to start application services without any error. If you are able to start application services without any error but are still not able to log in, then you should check direct forms login. 
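<br /><br />As a quick sketch, the direct forms login and AOL/J test URLs can be assembled and probed from the shell. EBS_HOST and EBS_PORT below are placeholder values, not values from this instance; the commented-out curl probe assumes network access to the web node.

```shell
# Build the direct forms login and AOL/J test URLs for an EBS instance.
# EBS_HOST / EBS_PORT are placeholders -- substitute your web node values.
EBS_HOST="${EBS_HOST:-myhost.example.com}"
EBS_PORT="${EBS_PORT:-8000}"

FORMS_11I_URL="http://${EBS_HOST}:${EBS_PORT}/dev60cgi/f60cgi"      # Release 11i
FORMS_R12_URL="http://${EBS_HOST}:${EBS_PORT}/forms/frmservlet"     # Release 12
AOLJ_URL="http://${EBS_HOST}:${EBS_PORT}/OA_HTML/jsp/fnd/aoljtest.jsp"

echo "$FORMS_11I_URL"
echo "$FORMS_R12_URL"
echo "$AOLJ_URL"

# Uncomment to probe each page (expects an HTTP status from the web node):
# for u in "$FORMS_11I_URL" "$FORMS_R12_URL" "$AOLJ_URL"; do
#     printf '%s -> ' "$u"
#     curl -s -o /dev/null -w '%{http_code}\n' "$u"
# done
```

Run it with EBS_HOST and EBS_PORT exported for your instance; a non-200 response on the forms URL usually points back at the s_appserverid_authentication / services checks described here.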
<br /><br />For Release 11i : http://<hostname>:<port>/dev60cgi/f60cgi<br />For Release 12: http://<hostname>:<port>/forms/frmservlet<br />For direct forms login, the below parameter in the CONTEXT file should be set to OFF. If it is not set to OFF then make the change below and run autoconfig. <br /><br /><appserverid_authentication oa_var="s_appserverid_authentication">OFF</appserverid_authentication> <br /><br />Once you are able to log in to forms as above but the personal home page login is still not working, you can confirm that the issue is with the personal home page login only and not with the APPS password. <br /><br />Run the AOL/J Test. Use the below URL to run the AOL/J Test: <br />http://<hostname>:<port>/OA_HTML/jsp/fnd/aoljtest.jsp<br /><br /><br /><strong><strong>To set up forms load balancing:</strong></strong><br />What settings are required in each node's xml files?<br /><br />s_methost<br />s_leastloadedhost<br /><br />Are there any other variables that need to be set?<br /><br />RESEARCH<br />=========<br />(Note: This is INTERNAL ONLY research. No action should be taken by the customer on this information.<br />This is research only, and may NOT be applicable to your specific situation.)<br /><br />Note 217368.1 Advanced Configurations and Topologies for Enterprise Deployments of E-Business Suite 11i<br />----> Option 2.4. Forms Server / Metrics Layer Load Balancing<br />------> Option 2.4.2. When using Forms Metric Server<br /><br /><br />ANSWER<br />=======<br /><br />Please note that you need to run autoconfig after each step as given in the Note.<br /><br />I. On each Web server node, run the AutoConfig Context Editor and set the following. <br /><br />1. Update the Forms Server FND_TOP context value (s_formsfndtop in $APPL_TOP/admin/<SID>.xml) to the location of FND_TOP on the Forms Server nodes. 
<br /><br />If you have multiple Forms Server nodes, they must all have the same directory structure.<br /><br />2. Set the Metrics Server Load Balancing Host context value (s_leastloadedhost in $APPL_TOP/admin/<SID>.xml) to %LeastLoadedHost%<br /><br />3. Set the Metrics Server Error URL context value (s_meterrorurl in $APPL_TOP/admin/<SID>.xml) to your default Metrics Server error page, if any<br /><br /><br />II. On all Forms Metrics Client Nodes, run the AutoConfig Context Editor and set the following. <br /><br />Set the Metrics Server Host context value (s_methost in $APPL_TOP/admin/<SID>.xml) to the hostname that runs the Primary Forms Metrics Server Process. <br /><br />For example, if the Primary Forms Metrics Server Process is running on host "formsmetrics.oracle.com", then Metrics Server Host = formsmetrics.oracle.com<br /><br /><br />TO DISABLE:<br />===========<br />Option 2.4.3. Disabling Forms Metric Server<br />If you have multiple Forms Server nodes in your configuration, use the Primary Forms Metrics Server Process and Forms Metrics Clients to do Forms Load Balancing, and wish to disable this functionality, perform the following steps.<br /><br />On all Forms Metrics Client Nodes, run the AutoConfig Context Editor. <br />In the Context Detail screen, set the Metrics Server Load Balancing Host context value (s_leastloadedhost in $APPL_TOP/admin/<SID>.xml) to the value of the "%s_hostname%" context variable:<br /><br />Metrics Server Load Balancing Host = myserver <br />[ where "myserver" is the name of this host and the value of %s_hostname%. ]<br /><br />Save your changes. 
<br />Generate new configuration files by running AutoConfig as described in MetaLink Note 165195.Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-10537426675134449982010-08-19T10:55:00.000-07:002010-09-16T02:18:37.236-07:00Tablespace, datafiles, tables, index related scripts<strong><strong>The following is the query to find the total space, used space and free space for the TEMP tablespace.</strong></strong><br />SELECT TABLESPACE_NAME "Tablespace", (TOTAL_BLOCKS * 8192) "Total Size", (USED_BLOCKS * 8192 ) "Used" ,(FREE_BLOCKS * 8192) "Free" , (MAX_USED_BLOCKS * 8192 ) "Max Used" FROM V$SORT_SEGMENT;<br /><br /><br /><br /><strong><strong>To find temporary tablespace usage</strong></strong><br />SELECT s.sid "SID", s.username "User", s.program "Program", u.tablespace "Tablespace",<br />u.contents "Contents", u.extents "Extents", u.blocks*8/1024 "Used Space in MB", q.sql_text "SQL TEXT",<br />a.object "Object", k.bytes/1024/1024 "Temp File Size"<br />FROM v$session s, v$sort_usage u, v$access a, dba_temp_files k, v$sql q<br />WHERE s.saddr=u.session_addr<br />and s.sql_address=q.address<br />and s.sid=a.sid<br />and u.tablespace=k.tablespace_name;<br /><br /><strong><strong>To find free space and allocated space in the TEMP tablespace</strong></strong><br />select TABLESPACE_NAME, BYTES_USED/1024/1024,BYTES_FREE/1024/1024 from v$temp_space_header;<br />TABLESPACE_NAME BYTES_USED/1024/1024<br />BYTES_FREE/1024/1024<br />------------------------------ --------------------<br />--------------------<br />TEMP 1100 <br />900<br /><br /><br />select TABLESPACE_NAME,BYTES/1024/1024 from dba_temp_files where TABLESPACE_NAME like '%TEMP%';<br />select TABLESPACE_NAME,BYTES/1024/1024 from dba_temp_files;<br />TABLESPACE_NAME BYTES/1024/1024<br />------------------------------ ---------------<br />TEMP 1100<br /><br />Find if it is autoextensible:<br />select 
substr(file_name,1,35),AUTOEXTENSIBLE,bytes/1024/1024 from dba_temp_files;<br />SUBSTR(FILE_NAME,1,35) AUT <br />BYTES/1024/1024<br />--- ---------------<br />/ge2vis/oradata/data01/temp01.dbf NO <br />1100<br /><br />Add a tempfile or resize:<br />ALTER TABLESPACE TEMP ADD TEMPFILE '/dbtemp/PROD/temp01.dbf'<br /> SIZE 2000M REUSE AUTOEXTEND ON NEXT 8192 MAXSIZE 2000M;<br /><br />or <br />alter database tempfile '/ge2vis/oradata/data01/temp01.dbf' resize 2000m ;<br /><br /><strong><strong>How can one see who is using a temporary segment?</strong></strong><br />For every user using temporary space, there is an entry in SYS.V$_LOCK with type 'TS'. <br />All temporary segments are named 'ffff.bbbb' where 'ffff' is the file it is in and 'bbbb' is the first block of the segment. <br /><br />If your temporary tablespace is set to TEMPORARY, all sorts are done in one large temporary segment. For usage stats, see SYS.V_$SORT_SEGMENT <br /><br />From Oracle 8.0, one can just query SYS.v$sort_usage. Look at these examples: <br /> select s.username, u."USER", u.tablespace, u.contents, u.extents, u.blocks<br /> from sys.v_$session s, sys.v_$sort_usage u<br /> where s.saddr = u.session_addr<br /> /<br /><br /> select s.osuser, s.process, s.username, s.serial#,<br /> sum(u.blocks)*vp.value/1024 sort_size<br /> from sys.v_$session s, sys.v_$sort_usage u, sys.v_$parameter vp<br /> where s.saddr = u.session_addr<br /> and vp.name = 'db_block_size'<br /> and s.osuser like '&1'<br /> group by s.osuser, s.process, s.username, s.serial#, vp.value<br /> /<br /><br /><strong><strong>How would you check if the tablespace is in backup mode? Check in v$backup</strong></strong><br />SQL> SELECT d.tablespace_name, b.status FROM dba_data_files d, v$backup b<br />WHERE d.file_id = b.FILE# and b.STATUS = 'ACTIVE' ;<br /><br /><br /><strong><strong>Creating a dictionary managed temporary tablespace</strong></strong><br />create tablespace temp datafile '/d14/oracle/proddata/temp01.dbf' size 2048M 
temporary default storage (initial 5m next 5m pctincrease 0);<br /><br /><br /><strong><strong>To find tablespace free size</strong></strong><br />set pagesize 0 linesize 200 heading off feedback off<br />column total_bytes format 999999999999<br />column free_bytes format 999999999999<br />select d.tablespace_name,<br /> d.num_files num_files,<br /> f.bytes free_bytes,<br />-- d.file_bytes file_bytes,<br /> d.maxbytes bytes ,<br /> round((d.file_bytes/d.maxbytes)*100,0) percent_used<br />from (select tablespace_name, sum(bytes) bytes<br /> from dba_free_space group by tablespace_name) f,<br /> (select tablespace_name, count(distinct(file_id)) num_files, <br />sum(bytes) file_bytes,<br /> sum(decode(autoextensible,'YES',maxbytes,bytes)) maxbytes<br /> from dba_data_files group by tablespace_name) d<br /> where d.tablespace_name = f.tablespace_name<br /> and d.tablespace_name not like 'DW%'<br /> and d.tablespace_name not like 'RBS%'<br /> and d.tablespace_name not like '%UNDO%'<br /> and d.tablespace_name not like '%TEMP%'<br /> order by d.tablespace_name;<br />quit;<br /><br /><strong><strong>To find free space, available space and % of used space in a tablespace use</strong></strong><br />SELECT tba, free_spc, avail_spc, free_spc/avail_spc "free %"<br />FROM<br />( SELECT tablespace_name TBA, <br />SUM ( bytes ) free_spc <br />FROM dba_free_space<br />GROUP BY tablespace_name ),<br />( SELECT tablespace_name TBB,<br />SUM ( bytes ) avail_spc<br />FROM dba_data_files<br />GROUP BY tablespace_name )<br />where TBA=TBB and TBA='APPS_TS_INTERFACE'<br /><br />SELECT tablespace_name ,SUM ( bytes ) free_spc <br />FROM dba_free_space<br />where tablespace_name ='APPS_TS_INTERFACE'<br />group by tablespace_name ;<br /><br />select a.tablespace_name, b.tot_space, a.free_space from ( select tablespace_name, sum (bytes) free_space from dba_free_space group by tablespace_name) a, ( select tablespace_name, sum( bytes) tot_space from dba_data_files group by tablespace_name )b where 
a.tablespace_name = b.tablespace_name and ( a.free_space / b.tot_space ) * 100 < 10;<br /><br />select tba,free_spc,avail_spc, free_spc/avail_spc "free%" FROM <br />(select tablespace_name TBA, SUM(bytes) free_spc FROM dba_free_space GROUP BY tablespace_name),<br />(select tablespace_name TBB, SUM(bytes) avail_spc FROM dba_data_files GROUP BY tablespace_name)<br />WHERE TBB=TBA(+) AND free_spc/avail_spc <.2 ;<br /><br /><strong><strong>To check which tablespaces are less than 15% free</strong></strong><br />select d1.tablespace_name , sum(d2.bytes)/sum(d1.bytes) "free"<br /> from dba_data_files d1, dba_free_space d2 <br /> where d1.tablespace_name=d2.tablespace_name <br /> group by d1.tablespace_name having ( sum(d2.bytes)/sum(d1.bytes) ) < 0.15;<br /><br /><strong><strong>Table move to new tablespace command</strong></strong> <br />REBUILD is a new tablespace created beforehand, with 10 GB of space added to it<br />alter TABLE owner.table_name move tablespace REBUILD storage (initial 500m next 500m minextents 5 maxextents unlimited pctincrease 0); <br />alter index owner.name rebuild online tablespace REBUILD parallel 2;<br /><br />alter table AP.AP_SELECTED_INVOICES_ALL move tablespace reorg;<br />SQL> alter table AP.AP_SELECTED_INVOICES_ALL move tablespace reorg;<br />alter table AP.AP_SELECTED_INVOICES_ALL move tablespace reorg<br /> *<br />ERROR at line 1:<br />ORA-01950: no privileges on tablespace 'REORG'<br /><br />SQL> ALTER USER AP QUOTA unlimited on reorg;<br />User altered.<br /><br /><strong><strong>Set auto extend on for all data files in the database with a maxsize of 2GB</strong></strong><br />Login as the Oracle user<br />You can run the following SELECT to generate an auto extend script for all data files and spool it to a file.<br />$ sqlplus system/xxxx<br />SQL> set pagesize 0<br />SQL> set linesize 140<br />SQL> set feedback off<br />SQL> spool autoextfile.sql<br />SQL> select 'alter database datafile '||chr(39)||file_name||chr(39)||' autoextend 
on next 10M maxsize 2000m;' from dba_data_files;<br />SQL> exit;<br />Edit the file autoextfile.sql appropriately and then run it.<br />$ sqlplus system/ @autoextfile.sql <br /><br /><br /><strong><strong>Database Corrupted Block Information</strong></strong><br />select * from v$database_block_corruption;<br />select tablespace_name,segment_type,owner,segment_name from dba_extents where file_id=&file_no and &block_number between block_id and block_id + blocks -1;<br /><br /><strong><strong>To know tablespaces which have less than 20% free space</strong></strong><br />SQL> select tba, free_spc, avail_spc, free_spc/avail_spc "free%" from<br /> 2 (select tablespace_name TBA, sum(bytes) free_spc from dba_free_space group by tablespace_name),<br /> 3 (select tablespace_name TBB, sum(bytes) avail_spc from dba_data_files group by tablespace_name)<br /> 4 where TBB=TBA(+) and free_spc/avail_spc<.2;<br /><br /><br /><strong><strong>Command to rebuild the QP_QUALIFIERS_N1 and QP_QUALIFIERS_N2 indexes and gather the stats on the QP_QUALIFIERS table by logging in as APPS</strong></strong><br />alter session set db_file_multiblock_read_count = 32;<br />alter index QP.QP_QUALIFIERS_N2 rebuild;<br />alter index QP.QP_QUALIFIERS_N1 rebuild;<br />exec fnd_stats.gather_table_stats(OWNNAME=>'QP',TABNAME=>'QP_QUALIFIERS',percent => '100', granularity => 'ALL', partname => NULL,cascade =>true, degree => 4 );<br />alter session set db_file_multiblock_read_count = 8;<br /><br /><br /><strong><strong>To find the HWM (blocks is the HWM)</strong></strong><br />SELECT blocks, empty_blocks, num_rows FROM dba_tables WHERE table_name ='AP_SELECTED_INVOICE_CHECKS_ALL';<br /><br /><strong><strong>Command to rebuild indexes and gather stats</strong></strong><br />spool ind.sql<br />select 'alter index '||index_name||' rebuild nologging parallel 8;' from user_indexes<br />where index_name not like 'SYS%' and index_name not like '%_FK_%' and index_name not like '%_PK';<br />spool off<br />A few more examples:<br
/>alter index GL.GL_INTERFACE_CONTROL_N1 rebuild parallel 4 nologging;<br />alter index APPLSYS.FND_HELP_TARGETS_N1 rebuild online nologging parallel 8;<br />select 'alter index '||owner||'.'||index_name||' rebuild online parallel tablespace <tablespace_name> nologging;' from dba_indexes where owner=<username>;<br /><br />exec fnd_stats.gather_table_stats(owner=>'GL',tabname=>'GL_INTERFACE',percent=>10);<br /><br /><br /><strong><strong>command to move table, index, table partition, index partition of owner = MSC<br />o/p like </strong></strong><br />alter TABLE MSC.MSC_ALLOC_DEMANDS move partition ALLOC_DEMANDS_0 tablespace APPS_TS_TX_DATA;<br />alter TABLE MSC.MSC_ALLOC_DEMANDS move partition ALLOC_DEMANDS_1023 tablespace APPS_TS_TX_DATA;<br />alter TABLE MSC.MSC_ALLOC_DEMANDS move partition ALLOC_DEMANDS_2023 tablespace APPS_TS_TX_DATA;<br />alter TABLE MSC.MSC_ALLOC_SUPPLIES move partition ALLOC_SUPPLIES_0 tablespace APPS_TS_TX_DATA;<br /><br />.<br />.<br />.<br />.<br />.<br />.<br />alter INDEX MSC.MSC_X_EXCEPTION_DETAILS_U1 rebuild online tablespace APPS_TS_TX_IDX;<br />alter INDEX MSC.MSC_ZONE_REGIONS_N1 rebuild online tablespace APPS_TS_TX_IDX;<br />alter INDEX MSC.MSC_ZONE_REGIONS_N2 rebuild online tablespace APPS_TS_TX_IDX;<br /><br /><br />Set pages 0 <br />Set trimspool on<br />Set lines 300<br />Spool moveobj.sql<br />select 'alter ' || decode( segment_type, 'TABLE', 'TABLE', 'INDEX','INDEX','TABLE PARTITION','TABLE','INDEX PARTITION','INDEX')<br /> || ' ' || owner||'.'|| segment_name || decode(segment_type, 'TABLE',' move ','INDEX',' rebuild online ','TABLE PARTITION',' move partition ','INDEX PARTITION',' rebuild partition ') || decode (segment_type, 'TABLE PARTITION',partition_name,'INDEX PARTITION',partition_name||' online ') || ' tablespace '||tablespace_name ||';' from dba_segments<br />where segment_type in ( 'TABLE', 'INDEX', 'TABLE PARTITION','INDEX PARTITION') and owner = 'MSC' and segment_name not like '%MSC_ERRORS%' order by segment_type 
desc, segment_name ASC<br />/<br /><br /><strong><strong>How to create an index on a table: As the Apps Database user in PROD, run the following sql statements:</strong></strong><br />SQL> drop index po.PO_HEADERS_N5 ;<br />Make sure the "index dropped" message comes through.<br />SQL> CREATE INDEX PO.PO_HEADERS_N5 ON PO.PO_HEADERS_ALL<br />(WF_ITEM_TYPE, WF_ITEM_KEY)<br />LOGGING<br />TABLESPACE PO_IDX<br />PCTFREE 0<br />INITRANS 11<br />MAXTRANS 255<br />STORAGE (<br /> INITIAL 48K<br /> MINEXTENTS 1<br /> MAXEXTENTS 2147483645<br /> PCTINCREASE 0<br /> FREELISTS 4<br /> FREELIST GROUPS 4<br /> BUFFER_POOL DEFAULT<br /> )<br />NOPARALLEL;<br />Make sure the "index created" message comes through.<br /><br /><strong><strong>To take a backup of a table:</strong></strong><br />create table XYZ as select * from orig_name ;<br /><br /><strong><strong>Find if all datafiles are autoextensible</strong></strong><br />select distinct AUTOEXTENSIBLE from dba_data_files ;<br /> o/p is 'NO' or 'YES'<br /><br /><strong><strong>Script to create all tablespaces with different mount locations</strong></strong><br /><br /> set pages 0<br /> set feedback off<br /> set lines 200<br /> spool tbscreate.sql<br /> select 'create tablespace '||tablespace_name||' datafile ''/topusi/oradata/data01/'||lower(tablespace_name)||'01.dbf'' size '|| bytes *1.0 || ' autoextend on maxsize 1999m;'<br /> from dba_data_files<br /> order by tablespace_name<br /> /<br /> select 'create temporary tablespace '||tablespace_name||' tempfile ''/topusi/oradata/data01/'||lower(tablespace_name)||'01.dbf'' size '|| bytes *1.0 || ' autoextend on maxsize 1999m;'<br /> from dba_temp_files<br /> order by tablespace_name<br /> /<br /> spool off<br /><br /><br /><strong><strong>To create a tablespace: e.g. XDO</strong></strong><br />create tablespace xdod datafile '/d01/oradata/data03/xdod01.dbf' size 20m autoextend on maxsize 2000m;<br /><br /><strong><strong>To autoextend all datafiles</strong></strong><br />Ensure all datafiles are 
autoextensible: Spool & Run this script & fix it for specific datafiles of size more than 4000M: -<br />select 'alter database datafile '||''''||file_name||''''||' autoextend on next 50M maxsize 4000M;' from dba_data_files;<br /><br /><strong><strong>You can run the following SELECT to generate auto extend script for all data files and spool it to a file</strong></strong><br />Edit the file autoextfile.sql appropriately and then run it. $sqlplus system/ @autoextfile.sql<br />$ sqlplus system/<br /> SQL> set pagesize 0<br /> SQL> set linesize 140<br /> SQL> set feedback off<br /> SQL> spool autoextfile.sql<br /> SQL> select 'alter database datafile '||chr(39)||file_name||chr(39)||' autoextend on next 10M maxsize 2000m;' from dba_data_files;<br />SQL> exit;<br /><br /><br /><strong><strong>Before resizing tablespace check :to make sure that there is space or not left for the tablespace and then resize or add datafile</strong></strong><br />select tablespace_name, max(bytes/1024/1024) from dba_free_space<br />where tablespace_name ='APPLSYSD' group by tablespace_name;<br /><br /><br /><strong><strong>usn corresponds to segment_id in dba_rollback_segs;</strong></strong><br />select usn , optsize,extents,hwmsize from v$rollstat where usn=2;<br />select SEGMENT_NAME,OWNER,INITIAL_EXTENT,NEXT_EXTENT,MIN_EXTENTS,MAX_EXTENTS from dba_rollback_segs where SEGMENT_ID=2;<br /><br /><strong><strong>To list all tables</strong></strong><br />select * from tab where tname like “%…%’;<br /><br /><strong><strong>To check constraint name in a table</strong></strong> <br />select constraint_name from dba_constraints;<br /><br /><strong><strong>Adding a datafile</strong></strong><br />alter tablespace CNX add datafile '/data/oravis/visdata/cnx_03.dbf' size 1200M AUTOEXTEND ON NEXT 20M MAXSIZE 2000M;<br />alter tablespace GLD add datafile '/data/oravis/visdata/gld02.dbf' size 1000M; (if dbf > 1.5 GB)<br /> alter database datafile '/data/oravis/visdata/xkbx02.dbf' resize 1500M;(if dbf < 
1.5 GB)<br /><br /><strong><strong>TO FIND THE DATAFILE MOUNT POINT LOCATION AND THE TOTAL SIZE OF DATABASE</strong></strong><br />select substr(file_name,6,9),sum(bytes)/1024 from dba_data_files group by substr(file_name,6,9);<br />change (6,9) accordingly<br />select substr(file_name,1,35),AUTOEXTENSIBLE,sum(bytes)/1024/1024 from dba_data_files where TABLESPACE_NAME='APPS_TS_INTERFACE' group by substr(file_name,1,35),AUTOEXTENSIBLE;<br /><br />SUBSTR(FILE_NAME,1,35)<br />--------------------------------------------------------------------------------<br />AUT BYTES/1024/1024<br />--- ---------------<br />/d01/oradata/data03/a_int01.dbf<br />YES 1800<br /><br /><br /><strong><strong>To create a tablespace for a schema which uses a 16k data block</strong></strong><br />Set the parameter db_16k_cache_size=250M in the init.ora parameter file.<br />Create a tablespace with 16k data block size by using the following command:<br />Create tablespace schemaname datafile '/d03/visdata/data01.dbf' size 500M autoextend on maxsize 8000M blocksize 16k;Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-3212596299458606582010-08-18T10:50:00.000-07:002010-09-13T23:07:48.762-07:00Statspack<strong><strong>To implement statspack:</strong></strong><br /><br />alter session set tracefile_identifier=<br />Note ids 149124.1, 149121.1 , 149113.1<br />Steps for installing statspack: <br />1) Bring down application services <br />2) Check for invalid objects <br />3) Log on as sysdba (sqlplus "/ as sysdba") <br />4) Run the following script: <br />@ $ORACLE_HOME/rdbms/admin/spcreate.sql <br />Provide the perfstat user password (keep it same as username) and the default tablespace (PERFSTAT - already created) and temporary tablespace (TEMP) when prompted. <br />5) Check the spcreate.lis file to see if there are any errors <br />6) Check to see if there are any more invalid objects than there were before installing. If there are, then recompile the invalid objects. 
<br />7) Log on as the perfstat user <br />8) Execute the following package: <br />SQL> exec statspack.snap(); <br />9) Make sure the package completes without errors. <br />10) Bring the application services back up <br /><br /><strong><strong>How to install statspack</strong></strong><br />Shut down Applications <br />Create the PERFSTAT tablespace (Ensure perfstat user's default tablespace is set to this tablespace during install) <br />Install the statspack utility <br />Verify Install <br />Bring up the Application Services<br />The detailed steps for the statspack install are below:<br />SQL> CREATE TABLESPACE perfstat<br /> DATAFILE '/d05/oracle/visdata/perfstat02.dbf' SIZE 500M AUTOEXTEND ON MAXSIZE 4096M<br /> EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K<br /> SEGMENT SPACE MANAGEMENT AUTO<br /> PERMANENT<br /> ONLINE; <br />Install Statspack. At the prompt specify the PERFSTAT password. Keep it same as the username. Also give <br />default tablespace as the perfstat tablespace and temporary tablespace as TEMP.<br />SQL> @$ORACLE_HOME/rdbms/admin/spcreate.sql<br />Verify it has been installed properly:<br />SQL> connect perfstat/perfstat<br />SQL> exec statspack.snap(i_snap_level=>10);<br /><br /><br /><strong><strong>steps for dropping perfstat user and tablespace:</strong></strong><br /><br />1. Drop the PERFSTAT user by running the following sql from the DATABASE ORACLE_HOME:<br />sqlplus '/as sysdba';<br />@$ORACLE_HOME/rdbms/admin/spdrop.sql<br /><br />2. drop package body SYS.STATSPACK_EXT_NEW;<br /><br />3. drop trigger SYS.LOG_ERRORS_TRIG;<br /><br />4. DROP TABLESPACE PERF_EXT including contents and datafiles;Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-3211782421768087812010-08-18T08:43:00.000-07:002010-09-13T21:11:35.159-07:00Issues<strong><strong>To identify table and column name from where OAM (Oracle Application Manager) was populating "Application System Name" in Applications 
Dashboard & Site Map</strong></strong><br /><br />SELECT table_name, column_name<br />FROM all_tab_columns<br />WHERE column_name LIKE 'APPLICATION%SYSTEM%NAME%';<br /><br />and the output was<br /><br />TABLE_NAME COLUMN_NAME<br />------------------------------ ------------------------------<br />AD_APPL_TOPS APPLICATIONS_SYSTEM_NAME<br />FND_PRODUCT_GROUPS APPLICATIONS_SYSTEM_NAME<br />BIN$TQikJhLucVHgRAADuqIcxA==%content APPLICATIONS_SYSTEM_NAME<br /><br />APPLICATIONS_SYSTEM_NAME in FND_PRODUCT_GROUPS was pointing to the source instance whereas it should be the same as the SID.<br /><br />The fix was simply to update table FND_PRODUCT_GROUPS:<br /><br />Backup the existing table:<br />SQL>create table FND_PRODUCT_GROUPS_BAK as select * from FND_PRODUCT_GROUPS;<br /><br />Update the table with the target name:<br />SQL> update FND_PRODUCT_GROUPS<br />set APPLICATIONS_SYSTEM_NAME = '<NEW DATABASE NAME>' ;<br />commit;<br /><br /><br /><strong><strong>Legal statement on login page is not appearing correctly.</strong></strong><br /><br />1) Login to VIS Apps servers and navigate to the $OA_HTML directory <br />2) Make a copy of the AppsLocalLogin.jsp file <br />3) Open the AppsLocalLogin.jsp in vi <br />4) Search for the keyword "legal". You should see the following line <br /><br /> StyledTextBean legalMessage = new StyledTextBean(); <br /><br />5) Replace the word "StyledTextBean" with "RawTextBean" <br />6) Please note that the replace needs to be made in 2 places on the same line <br />7) Stop the Apache <br />8) Navigate to the $OA_HTML/.. directory <br />9) Delete the contents of the directory _pages <br />10) Start the Apache and log in to see if the legal message appears correctly. <br /><br /><br /><strong><strong>Rebuild/recreate help search index</strong></strong><br />To recreate the index FND_LOBS_CTX, we can do either of these two, although the 1st one is preferable. 
Please perform these steps once all patching is completed.<br /> <br /> 1> Log on with the System Administrator responsibility, and run the concurrent program "Rebuild Help Search Index".<br /> 2> conn as apps/<pwd> <br /> SQL>@$FND_TOP/sql/aflobbld.sql applsys apps<br /><br /><br /><strong><strong>To reproduce an ORA-600 error and see the trace file generated in udump</strong></strong><br />ORA-600 [17035] using this while connected as APPS:<br />select fnd_web_sec.URLEncrypt('hello', 'world') from dual;<br /><br /><strong><strong>To turn off login authentication for direct login to http://hostname.domainname/dev60cgi/f60cgi</strong></strong><br />update fnd_nodes set server_id='OFF' where node_name='AUTHENTICATION';<br /><br /><strong><strong>Error: "ORA-1653: unable to extend table APPLSYS.FND_ENV_CONTEXT"</strong></strong><br />First find out all the next_extent parameters for FND_ENV_CONTEXT:<br /> select partition_name, initial_extent, next_extent, pct_increase<br /> from dba_tab_partitions<br /> where table_name = 'FND_ENV_CONTEXT';<br /><br />For any of these that are over 5 MB, run the following:<br />alter table applsys.FND_ENV_CONTEXT<br />modify partition <partition name from query above><br />storage (next 1M pctincrease 0);<br /><br />Also check and, if needed, resize the corresponding tablespace.<br /><br /><br /><strong><strong>Yellow bar issue resolution</strong></strong><br />Take a backup of these files:<br /> $APPL_TOP/admin/out/adcert.txt<br />$APPL_TOP/admin/adsign.txt <br />$APPL_TOP/admin/appltop.cer <br />$HOME/identitydb.obj <br /><br />Copy the above four files from another apps node. <br /><br />Make sure these files have the same sizes on both nodes. <br /><br />Then from adadmin generate jar files with the force option.<br />Before doing this, make sure all the processes are down.<br />If some process does not come down, then kill it.<br />Clean up the _pages directory and move the Apache logfiles and jserv logfiles.<br /><br /><br /><strong><strong>The following 
steps need to be completed to manually generate the tnsnames.ora and listener.ora files due to a bug with ATG:</strong></strong><br />cd $TNS_ADMIN<br />ls -l<br />total 8<br />-rw-r--r-- 1 applvis dba 6821 Oct 3 09:53 tnsnames.ora<br />$<br />$AD_TOP/bin/adgentns.pl contextfile=$APPL_TOP/VIS_hostname.xml<br />ls -l<br />total 8<br />-rw-r--r-- 1 applvis dba 1939 Oct 5 11:55 listener.ora<br />-rw-r--r-- 1 applvis dba 2891 Oct 5 11:55 tnsnames.oraRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-26490717371412211042010-08-15T07:41:00.000-07:002010-09-17T08:34:07.945-07:00Apache or web service<strong><strong>Verify that both httpd.conf and httpd_pls.conf have the same location for the LockFile parameter</strong></strong><br />Run "cd $IAS_ORACLE_HOME/Apache/Apache/conf" and then "grep LockFile httpd*conf"; this should return the LockFile location as /var/Apache/VIS/ and not an NFS location.<br /><br /><br /><strong><strong>Load balance setup:</strong></strong><br />The following documents have been followed to enable the HTTP layer hardware load balancer: <br />1.) HTTP Layer Hardware Balancer 217368.1 <br />In the Context file (XML file) the following configuration values are set as part of the Hardware Load Balancer: <br />Web entry point Host to the HTTP load-balancer machine name (prod) -------- the name of the Load Balancer device <br />Web entry point Domain to the HTTP load-balancer domain name (abc.com) <br />Web entry protocol to the HTTP load-balancer protocol e.g. 
"http" or "https" (http) <br />Active Web Port to the value of the HTTP load-balancer's external port (8000) <br />Login Page to include <"Web entry protocol">://<"Web Host entry point">.<"Web domain entry point">:<"Active Web Port"> <br />It should look like below: <br />http://prod.abc.com:8000/oa_servlets/AppsLogin <br /><br /><br />Forms Listener Servlet 201340.1 (Requirement in Step 1) <br />Enable/Disable the Forms Listener Servlet <br />Edit the context file ($APPL_TOP/admin/.xml) <br />Locate the following two variables: <br /><br /><server_url oa_var="s_forms_servlet_serverurl"><br />to enable set to /forms/formservlet<br />eg: <server_url oa_var="s_forms_servlet_serverurl"> /forms/formservlet </server_url><br />to disable set to blank<br />eg: <server_url oa_var="s_forms_servlet_serverurl"/><br /><servlet_comment oa_var="s_forms_servlet_comment"><br />to enable set to blank<br />eg: <servlet_comment oa_var="s_forms_servlet_comment"/><br />to disable set to #Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-49090255783904827892010-08-15T07:31:00.000-07:002010-08-17T04:55:53.938-07:00Steps for dropping and creating undo tablespace:Comment out the undo setting in the init.ora. Shut down the database (normal) and start it up. Drop the UNDO tablespace, remove the UNDO datafiles and recreate with five datafiles with an initial size of 2048M for a total of 10GB. Uncomment the undo parameters and restart the instance. <br /><br />su - opreprod<br />sqlplus '/as sysdba';<br /><br />set lines 1000<br />column file_name format a50<br />set pages 100<br /><br />1. select FILE_NAME,TABLESPACE_NAME from dba_data_files where tablespace_name like 'APPS_UNDOTS1'; <br /><br />2. Create a new undo tablespace: <br /><br />create undo tablespace second_undo datafile '/d03/opreprod/preproddata/s_undo01.dbf' size 100M autoextend on maxsize 2000M;<br /><br />3. 
Make this tablespace the default undo tablespace:<br /><br />alter system set undo_tablespace= second_undo;<br /><br />4. Drop the old undo tablespace which came from production:<br /><br />drop tablespace APPS_UNDOTS1 including contents and datafiles;<br /><br />(check whether all the datafiles that are returned by the select statement in step 1 are deleted from their respective locations; if not, delete them manually)<br /><br />5. Recreate the tablespace APPS_UNDOTS1 with the required space.<br /><br />create undo tablespace APPS_UNDOTS1 datafile '/d03/opreprod/preproddata/undo01.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo02.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo03.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo04.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo05.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo06.dbf' size 500M autoextend on maxsize 8000M;<br /><br />ALTER TABLESPACE APPS_UNDOTS1 ADD datafile '/d03/opreprod/preproddata/undo07.dbf' size 500M autoextend on maxsize 8000M;<br /><br />6. Make APPS_UNDOTS1 the default undo tablespace:<br /><br />alter system set undo_tablespace= APPS_UNDOTS1;<br /><br />7. 
Drop the 'second_undo' undo tablespace:<br /><br />drop tablespace second_undo including contents and datafiles;Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-75970458833252627792010-08-15T07:16:00.000-07:002010-08-15T07:22:07.583-07:00Procedure to drop and recreate log filesPerform the following DDL/DCL commands after logging in to the database with the OS command sqlplus "/ as sysdba". <br /><br />ALTER DATABASE ADD LOGFILE GROUP 6 ('/orafiles/vis/redo/redo06a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 7 ('/orafiles/vis/redo/redo07a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 8 ('/orafiles/vis/redo/redo08a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 9 ('/orafiles/vis/redo/redo09a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 10 ('/orafiles/vis/redo/redo10a.dbf') SIZE 150m; <br /><br />Repeat the "alter system switch logfile;" command till the CURRENT log switch reaches group 6, 7, 8, <br />9 or 10 and the old redo log group members 1-5 reach INACTIVE status (not CURRENT or ACTIVE status) <br /><br />alter system switch logfile; <br /><br />set lines 200 <br />select * from v$log; <br /><br />The status should be something like this <br /><br /> GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM <br />---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- <br /> 1 1 ..... 15728640 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 2 1 ..... 15728640 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 3 1 ..... 10485760 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 4 1 ..... 15728640 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 5 1 ..... 15728640 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 6 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 7 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 8 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 9 1 ..... 
157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 10 1 ..... 157286400 1 YES CURRENT 1.9618E+12 04-AUG-08 <br /><br /><br />If redo log groups 1 to 5 are INACTIVE, drop them, make sure the corresponding OS file is removed from the relevant <br />directory too, and then recreate the dropped groups (A GROUP HAS TO BE DROPPED ONLY WHEN IT IS INACTIVE) <br /><br />...check group 1 is INACTIVE, then drop it <br />ALTER DATABASE DROP LOGFILE GROUP 1; <br /><br />...check group 2 is INACTIVE, then drop it <br />ALTER DATABASE DROP LOGFILE GROUP 2; <br /><br />...check group 3 is INACTIVE, then drop it <br />ALTER DATABASE DROP LOGFILE GROUP 3; <br /><br />...check group 4 is INACTIVE, then drop it <br />ALTER DATABASE DROP LOGFILE GROUP 4; <br /><br />...check group 5 is INACTIVE, then drop it <br />ALTER DATABASE DROP LOGFILE GROUP 5; <br /><br />After dropping an online redo log group, make sure that the drop completed successfully, and then use <br />the appropriate operating system command to delete the dropped online redo log files. <br /><br />rm -f /orafiles/vis/redo/redo01a.dbf <br />rm -f /orafiles/vis/redo/redo02a.dbf <br />rm -f /orafiles/vis/redo/redo03a.dbf <br />rm -f /orafiles/vis/redo/redo04a.dbf <br />rm -f /orafiles/vis/redo/redo05a.dbf <br /><br />Then add them again with redo log member size 150M <br /><br />ALTER DATABASE ADD LOGFILE GROUP 1 ('/orafiles/vis/redo/redo01a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 2 ('/orafiles/vis/redo/redo02a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 3 ('/orafiles/vis/redo/redo03a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 4 ('/orafiles/vis/redo/redo04a.dbf') SIZE 150m; <br />ALTER DATABASE ADD LOGFILE GROUP 5 ('/orafiles/vis/redo/redo05a.dbf') SIZE 150m; <br /><br />The final status should be something like this... 
<br />set lines 200 <br />select * from v$log; <br /><br />The status should be something like this <br /> GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM <br />---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- <br /> 1 1 ..... 157286400 1 NO UNUSED 1.9618E+12 04-AUG-08 <br /> 2 1 ..... 157286400 1 NO UNUSED 1.9618E+12 04-AUG-08 <br /> 3 1 ..... 157286400 1 NO UNUSED 1.9618E+12 04-AUG-08 <br /> 4 1 ..... 157286400 1 NO UNUSED 1.9618E+12 04-AUG-08 <br /> 5 1 ..... 157286400 1 NO UNUSED 1.9618E+12 04-AUG-08 <br /> 6 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 7 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 8 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 9 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 10 1 ..... 157286400 1 YES CURRENT 1.9618E+12 04-AUG-08 <br /><br />Then perform multiple log switches till all files come out of UNUSED status, something like the following: <br /><br />alter system switch logfile; <br />alter system switch logfile; <br />alter system switch logfile; <br />alter system switch logfile; <br />alter system switch logfile; <br />alter system switch logfile; <br />alter system switch logfile; <br /><br />The status should be something like this <br /> GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM <br />---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- <br /> 1 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 2 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 3 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 4 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 5 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 6 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 7 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 8 1 ..... 
157286400 1 YES CURRENT 1.9618E+12 04-AUG-08 <br /> 9 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08 <br /> 10 1 ..... 157286400 1 YES INACTIVE 1.9618E+12 04-AUG-08Raghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0tag:blogger.com,1999:blog-811846438034249793.post-64023157830503387652010-08-15T07:12:00.000-07:002010-09-17T07:32:51.536-07:00RAC<strong><strong>For Oracle RAC installations:</strong></strong><br />a. Start the database as follows, where db_name is the database name:<br /><br />$ srvctl start database -d db_name<br /><br /><br /><strong><strong>To stop and start 10g rac database (10g crs )</strong></strong><br />Database:<br />server1: login as oraprod<br />srvctl stop database -d PROD<br />lsnrctl stop LISTENER_server1<br /><br />server2: login as oraprod<br />lsnrctl stop LISTENER_server2<br /><br />server3: login as oraprod<br />lsnrctl stop LISTENER_server3<br /><br />CRS:<br /><br />server1, server2, server3: login as root<br />/d02/oracrs/prodcrs/10.2.0/bin/crsctl stop crs<br /><br />-Repeat above steps on all servers indicated above<br /><br />start procedure<br />CRS:<br /><br />server1, server2, server3: login as root<br />/d02/oracrs/prodcrs/10.2.0/bin/crsctl start crs<br /><br />-Repeat above steps on all servers indicated above<br /><br /><br />Database:<br /><br />server1: login as oraprod<br />---> DO NOT START MANUALLY, CRS SHOULD START IT UP AUTOMATICALLY.<br />---> IF CRS FAILS TO STARTUP THE DB, EXECUTE: srvctl start database -d PROD<br />lsnrctl start LISTENER_server1<br /><br />server2: login as oraprod<br />lsnrctl start LISTENER_server2<br /><br />server3: login as oraprod<br />lsnrctl start LISTENER_server3<br /><br /><strong><strong>crs commands</strong></strong><br />crs_stat -t :Show HA resource status (hard to read) <br />crsstat :Output of crs_stat -t formatted better <br />ps -ef|grep d.bin :crsd.bin evmd.bin ocssd.bin <br />crsctl check crs :css, crs, evm appear healthy <br
/>crsctl stop crs :Stop crs and all other services <br />crsctl disable crs :Prevent crs from starting on reboot <br />crsctl enable crs :Enable crs start on reboot <br />crs_stop -all :Stop everything <br />crs_start -all :Start everything <br /><br /><strong><strong>Go to following location and check the current CRS version.</strong></strong><br />cd /CRS/app/10.2.0/bin<br />./crsctl query crs softwareversion<br />./crsctl query crs activeversion<br /><br /><br /><br /><strong><strong>To configure AutoConfig to generate the failover aliases</strong></strong><br />To generate the failover aliases use the database tier context variable s_alt_service_instances. <br />Specify a comma separated list of "servicename:instance" to use connect time failover management. <br /><br />For example 'SERVICE_NAME:INSTANCE_NAME1,SERVICE_NAME:INSTANCE_NAME2' will generate a TNS Alias in the tnsnames.ora file that fails over to INSTANCE_NAME1 when the current instance is not available. If INSTANCE_NAME1 is not available it fails over to INSTANCE_NAME2. <br /><br />So Update the context variable s_alt_service_instances in the database tier context file applying the failover rules as described above. <br />Run AutoConfig on all database tiers. <br />Run AutoConfig on all application tiers. <br />These steps will generate tns aliases <INSTANCE_NAME>_FO with description lists as configured in s_alt_service_instances. These aliases will still not be used anywhere. Have to set the two task variables like s_tool_twotask to actually use these aliases. <br /><br />Note: On database versions that are 8.1.7.4 or higher the generated alias <INSTANCE_NAME>_FO can only be used for failover. On 8.0.6 the generated alias can't be used for failover. However, it can be used for load balancing. 
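The DESCRIPTION_LIST aliases that AutoConfig derives from s_alt_service_instances can be illustrated with a small script. This is a hedged sketch only — the function name, host/port mapping, and the exact entry layout are illustrative assumptions; the real generator is AutoConfig itself:

```python
# Hypothetical sketch: expand an s_alt_service_instances-style string
# ("SERVICE:INSTANCE1,SERVICE:INSTANCE2") into a failover TNS alias.
# The host/port mapping and formatting below are illustrative, not
# the actual AutoConfig template output.

def failover_alias(alias_name, alt_instances, host_port):
    """Build an <INSTANCE>_FO-style tnsnames.ora entry with one
    DESCRIPTION per service:instance pair, tried in listed order."""
    entries = []
    for pair in alt_instances.split(","):
        service, instance = pair.strip().split(":")
        host, port = host_port[instance]
        entries.append(
            "    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=%s)(PORT=%s))\n"
            "      (CONNECT_DATA=(SERVICE_NAME=%s)(INSTANCE_NAME=%s)))"
            % (host, port, service, instance)
        )
    return "%s_FO=\n  (DESCRIPTION_LIST=\n%s\n  )" % (alias_name, "\n".join(entries))

# Example: PROD1 is tried first, then PROD2 (hypothetical hosts).
print(failover_alias("PROD1", "PROD:PROD1,PROD:PROD2",
                     {"PROD1": ("dbnode1", 1521), "PROD2": ("dbnode2", 1521)}))
```

The ordering of the pairs in s_alt_service_instances is what drives the failover order, which is why the sketch preserves list order when emitting entries.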
<br /><br /><strong><strong>undo for rac</strong></strong><br />RAC01.undotablespace=undotbs3<br />RAC02.undotablespace=undotbs2<br />alter system set undo_tablespace=undotbs1 sid='RAC01';<br /><br /><strong><strong>Threads in rac</strong></strong><br />RAC01.thread=1<br />RAC02.thread=2<br />alter database add logfile thread 2 group 4<br />alter databse add logfile thread 2 group 5<br />alter database enable thread 2<br /><br /><br /><strong><strong>srvctl command</strong></strong><br />srvctl start instance -d RACDB -i RACDB1,RACDB2<br />srvctl stop instance -d RACDB -i RACDB1,RACDB2<br />srvctl start database -d RACDB -o open<br /><br />srvctl start|stop instance -d <db_name> -i <inst_name_list><br />[-o open|mount|nomount|normal|transactional|immediate|abort]<br />[-c connect_str | -q]<br /><br />srvctl start|stop database -d <db_name><br />[-o open|mount|nomount|normal|transactional|immedaite|abort]<br />[-c connect_str | -q]<br /><br />srvctl modify database -d RACDB -y manual<br /><br /><strong><strong>Here are some of the commands for managing database-related services:</strong></strong><br />srvctl start instance -d orcl -i orcl2 :Start an instance <br />srvctl start database -d orcl :Start all instances <br />srvctl stop database -d orcl :Stop all instances and close database <br />srvctl stop instance -d orcl -i orcl2 :Stop an instance <br />srvctl start service -d orcl -s rac :Start a service <br />srvctl stop service -d orcl -s rac :Stop a service <br />srvctl status service -d orcl :Check status of a service <br />srvctl status instance -d orcl -i orcl1 :Check an individual instance <br />srvctl status database -d orcl :Check status of all instances <br />srvctl start nodeapps -n server1 :Start gsd, vip, listener, and ons. 
<br />srvctl stop nodeapps -n server2 :Stop gsd, vip, listener, and ons <br /><strong><strong>For help with all commands:</strong></strong><br />srvctl -h<br /><br />For Detailed Help:<br />srvctl start database -h<br />srvctl add service -h<br />srvctl relocate service -h<br /><br /><br /><strong><strong>Check out current configuration information:</strong></strong><br />srvctl config database <br /> Displays the configuration information of the cluster database.<br /> <br />srvctl config service <br /> Displays the configuration information for the services.<br /> <br />srvctl config nodeapps <br /> Displays the configuration information for the node applications.<br /> <br />srvctl config asm <br /> Displays the configuration for the ASM instances on the node.<br /><br /><strong><strong>To look for all the registered services for a database:</strong></strong><br />srvctl config service -d DBATOOLS<br /><br /><strong><strong>To get VIP information:</strong></strong><br />srvctl config nodeapps -n xxxx708 -a<br /><br />VIP exists.: /xxxx708-vip/xx.xxx.xx.xx/xxx.xxx.xxx.0/eth4<br /><br />or <br /><br />ifconfig -a<br /><br /><strong><strong>Look for interface with two IPs</strong></strong><br />When node 1 comes back up, it will start its CRS stack. It will take back <br />its VIP. 
In order to fail the service back, you would need to run <br />srvctl relocate<br />The following example is relocating the service from node 2 back to node 1: <br /><br />srvctl relocate service -d V10SN -s SRV_AVAIL -i V10SN2 -t V10SN1<br /><br /><strong><strong>To forcefully remove a database from the OCR</strong></strong><br /><br />Run srvctl remove database with the -f (force) option. <br />Then use the below to add it again:<br />srvctl add database -d DB -o $ORACLE_HOME<br /><br /><br /><br /><strong><strong>alter parameter on rac db</strong></strong><br />alter system set <parameter_name>=<value> SCOPE=MEMORY sid='<sid>|*';<br />alter system RESET <parameter_name> SCOPE=MEMORY sid='<sid>';<br /><br /><br /><strong><strong>Stop/Start Oracle RAC</strong></strong><br /><br />1. Stop Oracle 10g on one of RAC nodes.<br />$ export ORACLE_SID=RAC1<br />$ srvctl stop instance -d RAC -i RAC1<br />$ srvctl stop asm -n orarac1 <br />$ srvctl stop nodeapps -n orarac1 <br /><br />2. Start Oracle 10g on one of RAC nodes.<br />$ export ORACLE_SID=RAC1<br />$ srvctl start nodeapps -n orarac1<br />$ srvctl start asm -n orarac1 <br />$ srvctl start instance -d RAC -i RAC1<br /><br />3. 
Stop/start Oracle 10g on all RAC nodes.<br />$ srvctl stop database -d RAC<br />$ srvctl start database -d RAC<br /><br /><br /><strong><strong>Check status of Oracle RAC </strong></strong><br />$ srvctl status database -d RAC<br />Instance RAC1 is running on node orarac1<br />Instance RAC2 is running on node orarac2<br /><br />$srvctl status instance -d RAC -i RAC1<br />Instance RAC1 is running on node orarac1<br /><br />$srvctl status asm -n orarac1<br />ASM instance +ASM1 is running on node orarac1<br /><br />$srvctl status nodeapps -n orarac1<br />VIP is running on node: orarac1<br />GSD is running on node: orarac1<br />PRKO-2016 : Error in checking condition of listener on node: orarac1<br />ONS daemon is running on node: orarac1<br /><br /><br /><strong><strong>Check Oracle Listener on nodes</strong></strong><br />$ srvctl config listener -n orarac1<br />orarac1 LISTENER_ORARAC1<br />$ lsnrctl start LISTENER_ORARAC1<br /><br /><strong><strong>Check configuration of ORACLE RAC,ASM,nodeapps</strong></strong><br />$srvctl config database -d RAC<br />orarac1 RAC1 /space/oracle/product/10.2.0/db_2<br />orarac2 RAC2 /space/oracle/product/10.2.0/db_2<br /><br />$srvctl config asm -n orarac2<br />+ASM2 /space/oracle/product/10.2.0/db_2<br /><br />$srvctl config nodeapps -n orarac1 -a -g -s -l<br />VIP exists.: /orarac1-vip.abc.lan/xx.xx.xx.xx/xxx.xxx.xxx.x/Public<br />GSD exists.<br />ONS daemon exists.<br />Listener exists.<br /><br /><br /><strong><strong>Check, backup, restore OCR</strong></strong><br />$ocrconfig -showbackup<br /><br />$ocrconfig -export /data/backup/rac/ocrdisk.bak<br /><br />To restore the OCR, you must stop Clusterware on all nodes first.<br />$ocrconfig -import /data/backup/rac/ocrdisk.bak<br /><br />$cluvfy comp ocr -n all //verification<br /><br />$ocrcheck //check OCR disk usageRaghavendra Lakshmipathyhttp://www.blogger.com/profile/09047289715110089027noreply@blogger.com0
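The srvctl status output shown above is line-oriented, so a monitoring script can parse it directly. A minimal sketch (the function name and the monitoring use case are my own; the message format is taken from the sample output above):

```python
# Hypothetical sketch: parse "srvctl status database -d RAC" output
# (lines like "Instance RAC1 is running on node orarac1") into a dict
# so a cron/monitoring script can flag instances that are not running.
import re

def parse_srvctl_status(output):
    """Map instance name -> (state, node) from srvctl status lines."""
    status = {}
    pattern = re.compile(r"Instance (\S+) is (running|not running) on node (\S+)")
    for line in output.splitlines():
        m = pattern.search(line)
        if m:
            status[m.group(1)] = (m.group(2), m.group(3))
    return status

sample = ("Instance RAC1 is running on node orarac1\n"
          "Instance RAC2 is not running on node orarac2")
for inst, (state, node) in parse_srvctl_status(sample).items():
    print(inst, state, node)
```

In practice the output string would come from running the srvctl command via subprocess; the parser deliberately ignores unrelated lines (such as the PRKO-2016 listener error shown above) rather than failing on them.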