The following suggested best practices are applicable to all Siebel deployments on CMT hardware (Tx000, T5x20, T5x40) running Solaris 10. [Note: some of this tuning also applies to Siebel deployments on conventional hardware running Solaris.] These recommendations are based on our observations from the 14,000 user benchmark
on the Sun SPARC Enterprise T5440. Your mileage may vary.
Proactively avoid running into stdio's 256 file descriptor limitation.
Set the following in a shell, or add the following line to the shell's profile (bash/ksh):
ulimit -n 2048
Technically the file descriptor limit can be set as high as 65536. However, from the application's perspective, 2048 is a reasonable limit.
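If the higher limits should be available system-wide rather than per shell, the hard and soft file descriptor limits can also be raised in /etc/system. A minimal sketch, assuming the standard Solaris 10 tunables (a reboot is required for the change to take effect):

* /etc/system: raise the per-process file descriptor limits
set rlim_fd_max=65536
set rlim_fd_cur=2048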
Improve scalability with an MT-hot memory allocation library, libumem or libmtmalloc.
To improve the scalability of multi-threaded workloads, preload an MT-hot object-caching memory allocation library such as libumem or libmtmalloc. For example, to preload the libumem library, set the LD_PRELOAD_32 environment variable in the shell (bash/ksh) as shown below.
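Assuming the default library location on Solaris 10 SPARC:

# preload the 32-bit libumem for Siebel processes started from this shell
export LD_PRELOAD_32=/usr/lib/libumem.so.1

To try libmtmalloc instead, point LD_PRELOAD_32 at /usr/lib/libmtmalloc.so.1.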
The web and application servers in the Siebel Enterprise stack are 32-bit, whereas Oracle 10g or 11g RDBMS on Solaris 10 SPARC is 64-bit. Hence the path to the libumem library in the preload statement differs slightly on the database tier, as shown below.
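Assuming the default 64-bit library location on Solaris 10 SPARC, the database tier uses the 64-bit preload variable instead:

# preload the 64-bit libumem for the 64-bit Oracle processes
export LD_PRELOAD_64=/usr/lib/sparcv9/libumem.so.1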
Be aware that the trade-off is an increase in memory footprint -- you may notice a 5 to 20% increase with one of these MT-hot memory allocation libraries preloaded. Also, not every Siebel application module benefits from MT-hot memory allocators. The recommendation is to experiment before implementing in production environments.
The application fared well with the following set of TCP/IP parameters on Solaris 10 5/08.
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 900000
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 799744
ndd -set /dev/tcp tcp_recv_hiwat 799744
ndd -set /dev/tcp tcp_max_buf 8388608
ndd -set /dev/tcp tcp_cwnd_max 4194304
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
ndd -set /dev/udp udp_xmit_hiwat 799744
ndd -set /dev/udp udp_recv_hiwat 799744
ndd -set /dev/udp udp_max_buf 8388608
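Keep in mind that ndd settings do not persist across reboots. One common way to re-apply them at boot time -- sketched here with a hypothetical script name -- is a run control script:

#!/sbin/sh
# /etc/init.d/nddconfig (hypothetical name); link it as /etc/rc2.d/S99nddconfig
# so the TCP/IP settings above are re-applied at boot time.
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
# ... repeat for the remaining ndd parameters listed above ...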
Siebel Application Tier
Experiment with fewer Siebel Object Managers.
Configure the Object Managers in such a way that each OM handles at least 200 active users. Siebel's standard recommendation of 100 or fewer users per Object Manager is suitable for conventional systems but not ideal for CMT systems like the Tx000, T5x20, T5x40 and T5440. Sun's CMT systems are ideal for running multi-threaded processes with a large number of LWPs per process. Besides, there will be a significant improvement in the overall memory footprint with fewer Siebel Object Managers.
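As an illustration only -- the component name and values below are hypothetical and depend on the deployment -- the per-component task and OM process counts can be adjusted from srvrmgr so that fewer, larger Object Manager processes carry the load:

srvrmgr> change param MaxTasks=400, MaxMTServers=2, MinMTServers=2 for comp SCCObjMgr_enu

With MaxMTServers=2 and MaxTasks=400, each OM process would carry roughly 200 active users.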
Try the Oracle 11g R1 client in the application tier. Oracle 10g R2 clients may crash under high load. For the symptoms of the crash, check Solaris/SPARC: Oracle 11gR1 client for Siebel 8.0.
The Oracle 10g R2 10.2.0.4 32-bit client is supposed to include a fix for the process crash issue; however, this was not verified in our test environment.
Siebel Database Tier
Store the data files separately from the redo log files -- if the data files and the redo log files are stored on the same disk drive and that drive fails, neither will be usable in the database recovery procedures.
In the 14,000 user benchmark setup, there were two Sun StorageTek 2540 arrays connected to the T5440 -- one array held the data files, while the other held the Oracle redo log files.
Size online redo logs to control the frequency of log switches.
In the 14,000 user benchmark setup, two online redo logs were configured, each with 10 GB of disk space. With all 14,000 concurrent users online, there was only one log switch in a 60-minute period.
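The redo log groups can be sized when they are created or added; a minimal sketch, assuming hypothetical file names on the array dedicated to the redo logs:

sqlplus / as sysdba <<EOF
ALTER DATABASE ADD LOGFILE GROUP 3 ('/redoarray/redo03.log') SIZE 10G;
ALTER DATABASE ADD LOGFILE GROUP 4 ('/redoarray/redo04.log') SIZE 10G;
EOF

Smaller pre-existing log groups can then be dropped with ALTER DATABASE DROP LOGFILE GROUP once they are inactive.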
If the storage array supports the read-ahead feature, enable it. Likewise, if the array offers write-back caching, enable it: writes are then committed to the array cache rather than directly to disk, and the application is signaled that the write has completed as soon as the data is in the cache.
Oracle Database Initialization Parameters
Set Oracle's initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT to an appropriate value. DB_FILE_MULTIBLOCK_READ_COUNT specifies the maximum number of blocks read in one I/O operation during a sequential scan. In the 14,000 user benchmark configuration, DB_BLOCK_SIZE was set to 8 kB, and the average read during the benchmark run was around 18.5 kB. Hence setting DB_FILE_MULTIBLOCK_READ_COUNT to a high value does not necessarily improve I/O performance; a value of 8 for this parameter seems to perform better.
On T5240 and T5440 servers, set the database initialization parameter CPU_COUNT to 64. Otherwise, Oracle RDBMS assumes a default CPU_COUNT of 128 on the T5240 and 256 on the T5440. Oracle's optimizer might use a completely different execution plan when it notices such a large value for CPU_COUNT, and the resulting execution plan is not necessarily an optimal one. In the 14,000 user benchmark, setting CPU_COUNT to 64 produced optimal execution plans.
On T5240 and T5440 servers, explicitly set the database initialization parameter _enable_NUMA_optimization to FALSE. On these multi-socket servers, _enable_NUMA_optimization is set to TRUE by default. During the 14,000 user benchmark run, we noticed intermittent shadow process crashes with the default behavior, and we did not see any additional gains from the default NUMA optimizations either.
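A sketch of the three settings discussed above, assuming an spfile-based instance; the hidden parameter has to be quoted, and the SPFILE-scoped changes take effect only after the instance is restarted:

sqlplus / as sysdba <<EOF
ALTER SYSTEM SET db_file_multiblock_read_count=8 SCOPE=BOTH;
ALTER SYSTEM SET cpu_count=64 SCOPE=SPFILE;
ALTER SYSTEM SET "_enable_NUMA_optimization"=FALSE SCOPE=SPFILE;
EOF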
Siebel Web Tier
- Upgrade to the latest service pack of Sun Java Web Server 6.1 (32-bit).
- Run the Sun Java Web Server in multi-process mode by setting the MaxProcs directive in magnus.conf to a value greater than 1; see the magnus.conf sketch at the end of this section. In multi-process mode, the web server can handle requests using multiple processes with multiple threads in each process.
When you specify a value greater than 1 for MaxProcs, the web server relies on the operating system to distribute connections among the web server processes. However, many modern operating systems, including Solaris, do not distribute connections evenly, particularly when there is a small number of concurrent connections.
- Tune the maximum number of simultaneous requests by setting the RqThrottle parameter in the magnus.conf file to an appropriate value. A value of 1024 was used in the 14,000 user benchmark.
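For reference, both directives live in magnus.conf. The RqThrottle value below mirrors the benchmark setting, while the MaxProcs value is only an illustrative starting point:

# magnus.conf (Sun Java Web Server 6.1)
MaxProcs 4
RqThrottle 1024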