The following suggested best practices are applicable to all Siebel deployments on CMT hardware (Tx000, T5x20, T5x40) running Solaris 10. (Some of this tuning also applies to Siebel running on conventional hardware under Solaris.) These recommendations are based on our observations from the 14,000 user benchmark on the Sun SPARC Enterprise T5440. Your mileage may vary.
Proactively avoid running into stdio's 256 file descriptor limitation.
Set the following in a shell, or add the line to the shell's profile (bash/ksh):
ulimit -n 2048
Technically, the file descriptor limit can be set as high as 65536. However, from the application's perspective, 2048 is a reasonable limit.
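To make the limit stick across logins, the same line can go into the service account's shell profile. A minimal sketch (the profile file name is an assumption; use whichever profile the Siebel service account actually sources):

```
# In the Siebel user's shell profile, e.g. ~/.profile or ~/.bash_profile
ulimit -n 2048
```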
Improve scalability with an MT-hot memory allocation library, libumem or libmtmalloc.
To improve the scalability of multi-threaded workloads, preload an MT-hot object-caching memory allocation library such as libumem or libmtmalloc. For example, to preload the libumem library, set the LD_PRELOAD_32 environment variable in the shell (bash/ksh).
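A sketch of the 32-bit preload, assuming the stock Solaris 10 library path (adjust if your installation differs):

```shell
# Preload the 32-bit libumem for the web and application server processes.
# /usr/lib/libumem.so is the stock 32-bit path on Solaris 10.
LD_PRELOAD_32=/usr/lib/libumem.so
export LD_PRELOAD_32

# To experiment with libmtmalloc instead:
#   LD_PRELOAD_32=/usr/lib/libmtmalloc.so; export LD_PRELOAD_32
```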
The web and application servers in the Siebel Enterprise stack are 32-bit. However, Oracle 10g or 11g RDBMS on Solaris 10 SPARC is 64-bit. Hence the path to the libumem library in the preload statement differs slightly in the database tier: the 64-bit (sparcv9) build of the library is required.
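A sketch for the 64-bit database tier, again assuming the stock Solaris 10 layout:

```shell
# The 64-bit Oracle RDBMS needs the sparcv9 (64-bit) build of libumem;
# the path below is the stock Solaris 10 location.
LD_PRELOAD_64=/usr/lib/sparcv9/libumem.so
export LD_PRELOAD_64
```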
Be aware that the trade-off is an increase in memory footprint: you may notice a 5 to 20% increase with one of these MT-hot memory allocation libraries preloaded. Also, not every Siebel application module benefits from MT-hot memory allocators. The recommendation is to experiment before implementing in production environments.
The application fared well with the following set of TCP/IP parameters on Solaris 10 5/08:
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 900000
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 799744
ndd -set /dev/tcp tcp_recv_hiwat 799744
ndd -set /dev/tcp tcp_max_buf 8388608
ndd -set /dev/tcp tcp_cwnd_max 4194304
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
ndd -set /dev/udp udp_xmit_hiwat 799744
ndd -set /dev/udp udp_recv_hiwat 799744
ndd -set /dev/udp udp_max_buf 8388608
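Note that ndd settings do not persist across a reboot. One common approach is to wrap them in an init script so they are reapplied at boot; a sketch (the script name and run-level link are assumptions, not from the benchmark configuration):

```
#!/bin/sh
# Save as /etc/init.d/nddtune and link from /etc/rc2.d/S99nddtune so the
# settings are reapplied at boot.
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_xmit_hiwat 799744
ndd -set /dev/tcp tcp_recv_hiwat 799744
# ... repeat for the remaining tcp/udp parameters listed above
```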
Siebel Application Tier
Experiment with a smaller number of Siebel Object Managers.
Configure the Object Managers so that each OM handles at least 200 active users. Siebel's standard recommendation of 100 or fewer users per Object Manager is suitable for conventional systems, but not ideal for CMT systems like the Tx000, T5x20, T5x40 and T5440. Sun's CMT systems are ideal for running multi-threaded processes with a large number of LWPs per process. Besides, there will be a significant improvement in the overall memory footprint with fewer Siebel Object Managers.
Try Oracle 11g R1 client in the application-tier. Oracle 10g R2 clients may crash under high load. For the symptoms of the crash, check Solaris/SPARC: Oracle 11gR1 client for Siebel 8.0.
Oracle 10g R2 10.2.0.4 32-bit client is supposed to have a fix for the process crash issue; however, it wasn't verified in our test environment.
Siebel Database Tier
Store the data files separately from the redo log files. If the data files and the redo log files are stored on the same disk drive and that drive fails, the files cannot be used in database recovery procedures.
In the 14,000 user benchmark setup, two Sun StorageTek 2540 arrays were connected to the T5440: one array held the data files, while the other held the Oracle redo log files.
Size online redo logs to control the frequency of log switches.
In the 14,000 user benchmark setup, two online redo logs were configured, each with 10 GB of disk space. With all 14,000 concurrent users online, there was only one log switch in a 60-minute period.
If the storage array supports the read-ahead feature, enable it, so that sequentially accessed blocks are prefetched into the array cache. Similarly, with the array's write cache enabled, a write is committed to the cache rather than to disk, and the controller signals the application that the write has completed.
Oracle Database Initialization Parameters
Set Oracle's initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT to an appropriate value. This parameter specifies the maximum number of blocks read in one I/O operation during a sequential scan. In the 14,000 user benchmark configuration, DB_BLOCK_SIZE was set to 8 kB, and during the benchmark run the average read size was around 18.5 kB. Hence setting DB_FILE_MULTIBLOCK_READ_COUNT to a high value does not necessarily improve I/O performance; a value of 8 performed better.
On T5240 and T5440 servers, set the database initialization parameter CPU_COUNT to 64. Otherwise, by default, Oracle RDBMS assumes 128 and 256 for CPU_COUNT on the T5240 and T5440 respectively. Oracle's optimizer might use a completely different execution plan when it sees such a large CPU_COUNT, and the resulting plan is not necessarily optimal. In the 14,000 user benchmark, setting CPU_COUNT to 64 produced optimal execution plans.
On T5240 and T5440 servers, explicitly set the database initialization parameter _enable_NUMA_optimization to FALSE. On these multi-socket servers, _enable_NUMA_optimization is set to TRUE by default. During the 14,000 user benchmark run, we noticed intermittent shadow process crashes with the default behavior, and we didn't observe any additional gains from the default NUMA optimizations either.
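Taken together, the database initialization settings discussed above amount to a pfile fragment along these lines (a sketch with the values used in the 14,000 user benchmark; validate parameter names and values against your own Oracle release and workload):

```
# init.ora fragment (on spfile-managed instances, set these via
# ALTER SYSTEM ... SCOPE=SPFILE instead)
db_file_multiblock_read_count=8
cpu_count=64
# underscore (hidden) parameters are typically quoted in the pfile
"_enable_NUMA_optimization"=FALSE
```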
Siebel Web Tier
- Upgrade to the latest service pack of Sun Java Web Server 6.1 (32-bit).
- Run the Sun Java Web Server in multi-process mode by setting the
MaxProcs directive in
magnus.conf to a value that is greater than 1. In the multi-process mode, the web server can handle requests using multiple processes with multiple threads in each process.
When you specify a value greater than 1 for MaxProcs, the web server relies on the operating system to distribute connections among the web server processes. However, many modern operating systems, including Solaris, do not distribute connections evenly, particularly when there is a small number of concurrent connections.
- Tune the maximum number of simultaneous requests by setting the RqThrottle parameter in the magnus.conf file to an appropriate value. A value of 1024 was used in the 14,000 user benchmark.
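The web-tier directives above can be sketched as a magnus.conf fragment. Note the MaxProcs value here is a placeholder for illustration; the benchmark configuration only specifies that it was greater than 1. RqThrottle is the value actually used:

```
# magnus.conf fragment (sketch)
MaxProcs 4         # placeholder: any value > 1 enables multi-process mode
RqThrottle 1024    # value used in the 14,000 user benchmark
```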
Recently Sun announced the 14,000 user Siebel 8.0 PSPP benchmark results on a single Sun SPARC Enterprise T5440. An Oracle white paper with Sun's 14,000 user benchmark results is available on Oracle's Siebel benchmark web site. The content in this blog post complements the benchmark white paper.
Some of the notes and highlights from this competitive benchmark are as follows:
- Key specifications for the Sun SPARC Enterprise T5440 system under test: 4 x UltraSPARC T2 Plus processors, 32 cores, 256 compute threads and 128 GB of memory in a 4RU space.
- The entire Siebel 8.0 solution was deployed on a single Sun SPARC Enterprise T5440 including the web, gateway, application, and database servers.
- 9 load driver clients with dual-core Opteron and Xeon processors were used to generate the 14,000 concurrent user load.
- The web, application and database servers were isolated from one another in three Solaris Containers (non-global zones, or local zones), one dedicated to each server.
Solaris 10 Binary Application Guarantee Program guarantees the binary compatibility for all applications running under Solaris native host operating system environments as well as Solaris 10 OS running as a guest operating system in a virtualized platform environment.
- The Siebel Gateway server and the Siebel Application servers were installed and configured in one of the three Solaris Containers. Two identical Siebel Application server instances were configured, each handling a 7,000 user load.
From our experiments with the Siebel 8.0 benchmark workload, it appears that a single instance of the Siebel Application server can scale up to 10,000 active users; at peak load, the Siebel Connection Broker (SCBroker) component becomes the bottleneck in a single instance.
- To keep it simple, the benchmark publication white paper limits itself to an overview of the system configuration. The full details are available in the diagram below.
The breakdown of the approximate averages of CPU and memory utilization by each tier is shown below.
System-wide averages are as follows:
|Tier(s)|Avg CPU utilization|Avg memory footprint|
|Web + App + DB|82%|93.5 GB|
- 1,276 watts was the average power consumption with all 14,000 concurrent users in the steady state of the benchmark test. That is, for similarly configured workloads, the T5440 supports 10.97 users per watt of power consumed, and 3,500 users per rack unit.
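The per-watt and per-rack-unit figures follow from simple division:

```shell
# users per watt = 14,000 users / 1,276 W average draw
awk 'BEGIN { printf "users/watt: %.2f\n", 14000 / 1276 }'   # 10.97

# users per rack unit = 14,000 users / 4 RU
awk 'BEGIN { printf "users/RU: %d\n", 14000 / 4 }'          # 3500
```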
Based on the above notes, the Sun SPARC Enterprise T5440 is inexpensive, requires less power and less data center footprint, is ideal for consolidation, and, equally importantly, scales well.
How does Sun's new 14,000 user benchmark result compare with the high watermark benchmark results published by other vendors using the same Siebel 8.0 PSPP workload?
Besides Sun, IBM and HP are the only vendors to have published results with the Siebel 8.0 PSPP benchmark workload so far. IBM's highest user count is 7,000, whereas HP's is 5,200. Here is a quick comparison of the throughputs based on the highest-user-count results published by Sun, IBM and HP.
Sun Microsystems' 14,000 user benchmark on a single T5440 outperformed:
- IBM's 7,000 user benchmark result by 1.9x
- HP's 5,200 user benchmark result by 2.5x
HP published the 5,200 user result with a combination of 2 x BL460c running Windows Server 2003 and 1 x rx6600 HP system running HP-UX.
- Sun's own 10,000 user benchmark result on a combination of 2 x T5120 and 2 x T5220s by 1.4x
From the operating system perspective, Solaris outperformed AIX, Windows Server 2003 and HP-UX. Linux is nowhere to be found in the competitive landscape.
A simple comparison of all the published Siebel 8.0 benchmark results (as of today) by all vendors justifies the title of this blog post. As IBM and HP do not post the list prices of all of their servers, I am not even attempting a price/performance comparison here. Sun, on the other hand, openly lists all prices at store.sun.com.
Although T5440 possesses a ton of great qualities, it might not be suitable for deploying workloads with heavy single-threaded dependencies. The T5440 is an excellent hardware platform for multi-threaded, and moderately single-threaded/multi-process workloads. When in doubt, it is a good idea to leverage Sun Microsystems' Try & Buy
program to try the workloads on this new and shiny T5440 before making the final call.
I would like to share the tuning information from the OS and underlying hardware perspective for a couple of reasons: 1) Oracle's benchmark white paper does not include any system-specific tuning information, and 2) it may take quite a bit of time for Oracle Corporation to update the Siebel Tuning Guide for Solaris with some of the tuning information you find here.
Check the second part of this blog post for the best practices for running Siebel on Sun CMT hardware.