
Tuesday, July 30, 2013
 
Oracle Tips : Solaris lgroups, CT optimization, Data Pump, Recompilation of Objects, ..

1. [Re]compiling all objects in a schema

exec DBMS_UTILITY.compile_schema(schema => 'SCHEMA');

To recompile only the invalid objects in parallel:

exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');

A NULL value for SCHEMA recompiles all invalid objects in the database.
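
eg., to recompile the invalid objects in a hypothetical HR schema using four parallel threads and then verify that nothing is left invalid (the schema name and thread count are just placeholders):

SQL> exec UTL_RECOMP.recomp_parallel(4, 'HR');

PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where owner = 'HR' and status = 'INVALID';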


2. SGA breakdown in Solaris Locality Groups (lgroup)

To find the breakdown, run pmap -L <pid_of_an_oracle_process> | grep shm against one of the database processes. Then group the resulting lines by locality group and sum up the values in the 2nd column to arrive at the total SGA memory allocated in each locality group.

(I'm pretty sure there is a much easier way that I'm not aware of.)
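
For what it's worth, nawk can do the grouping and summing in one pass. A rough sketch, assuming the size is in the 2nd column (in KB with a trailing 'K') and the lgroup id is in the 3rd column (verify the column positions against the pmap output header on your release):

# assumed: size in the 2nd column (KB with trailing K), lgroup id in the 3rd
pmap -L <pid_of_an_oracle_process> | grep shm | \
    nawk '{ sz = $2; sub(/K$/, "", sz); total[$3] += sz }
          END { for (lgrp in total) print "lgroup", lgrp, ":", total[lgrp], "KB" }'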


3. Default values for shared pool, java pool, large pool, ..

If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.

eg.,
SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2171296 No
Redo Buffers                      373620736 No
Buffer Cache Size                8.2410E+10 Yes
Shared Pool Size                 1.7180E+10 Yes
Large Pool Size                   536870912 Yes
Java Pool Size                   1879048192 Yes
Streams Pool Size                 268435456 Yes
Shared IO Pool Size                       0 Yes
Granule Size                      268435456 No
Maximum SGA Size                 1.0265E+11 No
Startup overhead in Shared Pool  2717729536 No
Free SGA Memory Available                 0

12 rows selected.
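
Querying v$parameter is another way to cross-check whether any of the pool parameters was set explicitly. Under Automatic Shared Memory Management a value of 0 usually means the pool is being auto-tuned (the query below is just an illustration):

SQL> select name, value, isdefault from v$parameter
  2  where name in ('shared_pool_size', 'large_pool_size', 'java_pool_size', 'streams_pool_size');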

4. Fix for the "PLS-00201: identifier 'GV$SESSION' must be declared" error

Grant the select privilege on gv_$SESSION to the owner of the database object that failed to compile.

eg.,
SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
390/22   PL/SQL: Item ignored
390/22   PLS-00201: identifier 'GV$SESSION' must be declared

SQL> grant select on gv_$SESSION to OWF_MGR;
Grant succeeded.

SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Package body altered.

5. Solaris Critical Thread (CT) optimization for the Oracle log writer (lgwr)

Critical Thread is a new scheduler optimization available in Oracle Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded software components, such as the Oracle database's log writer, benefit from CT optimization.

At a high level, LWPs marked critical are granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread will be given exclusive access to a processor core as much as possible. CT optimization won't delay the scheduling of any runnable thread in the system.

Critical Thread optimization is enabled by default. However, users of the system have to hint the OS by explicitly marking a thread or two "critical", as shown below.

priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>
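
eg., to mark the log writer of a hypothetical instance ORCL critical (the SID is made up, and marking a process this way may require root or the proc_priocntl privilege):

# find the pid of the lgwr background process and move it to the FX class at priority 60
priocntl -s -c FX -m 60 -p 60 -i pid `pgrep -f ora_lgwr_ORCL`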

From the database point of view, the log writer (lgwr) is one process that can benefit from CT optimization on the Solaris platform. Oracle DBAs can either mark the lgwr process 'critical' once the database is up and running, or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does this by default. Future releases of the 11g software may make lgwr critical out of the box.

Those who install database patch 12951619 need to follow the post-installation steps documented in the patch README carefully to avoid running into unwanted surprises.


6. ORA-14519 error while importing a table from a Data Pump export dump
ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block size 32768 [partition specification] conflicts with previously specified/implied tablespace YYY block size 8192 [object-level default]
Failing sql is:
CREATE TABLE XYZ
..

All partitions in table XYZ use 32K blocks, whereas the implicit object-level default points to an 8K-block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions reside.
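
eg., with the tablespace names from the error above (XXX holding the 32K-block partitions, YYY the conflicting 8K-block default), the remap would look something like the following; tables=XYZ is illustrative:

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp tables=XYZ REMAP_TABLESPACE=YYY:XXX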


7. Index building task in Data Pump import process

When a Data Pump import is running, index building is by default performed with just one thread. This becomes a bottleneck and causes the import to take a long time, especially when many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import with the EXCLUDE=INDEX impdp command-line option, and then extract the definitions of all the skipped indexes from the Data Pump dump file as shown below.

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp sqlfile=<index_def_file>.sql INCLUDE=INDEX

Edit <index_def_file>.sql to set the desired degree of parallelism for building each index, and finally execute <index_def_file>.sql to build the indexes once the data import task is complete.
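
eg., the extracted DDL typically carries a PARALLEL 1 clause per CREATE INDEX statement. One quick, hypothetical way to bump it to 8 across the whole file (eyeball the resulting SQL before running it):

# rewrite the parallel clause in every extracted CREATE INDEX statement
sed 's/PARALLEL 1/PARALLEL 8/' <index_def_file>.sql > <index_def_file>_parallel.sql

Once the indexes are built, it may be worth resetting each of them with ALTER INDEX .. NOPARALLEL so the higher degree of parallelism does not stick around for normal query execution.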
