Mandalika's scratchpad
It has been stated and demonstrated several times that Sun/Oracle's T-series hardware is an excellent fit for deploying and running Siebel CRM. Feel free to browse through the list of Siebel benchmarks that Sun published on T-series in the past:
2004-2010 : A Look Back at Sun Published Oracle Benchmarks

Oracle Corporation announced the availability of SPARC T3 servers at Oracle OpenWorld 2010, and sure enough there is a Siebel CRM benchmark on the SPARC T3-1 server to support the server launch. Check the following web page for the high-level details of the benchmark:

SPARC T3-1 Server Posts a High Score on New Siebel CRM 8.1.1 Benchmark

I intend to provide the missing pieces of information in this blog post.
First of all, it is not a "Platform Sizing and Performance Program" (PSPP) benchmark. Siebel 8.1.1 was used to run the benchmark, and there is no Siebel PSPP benchmark kit available as of today for v8.1.1. Hence the test results from this benchmark exercise are not directly comparable to the Siebel 8.0 PSPP benchmark results.
Workload
The benchmark workload consists of a mix of Siebel Financial Services Call Center and Siebel Web Services / EAI transactions. The FINS Call Center transactions create a number of Opportunities, Quotes and Orders, whereas the Web Services / EAI transactions submit new Service Requests (SR) and search for and update existing SRs. The transaction mix is 40% FINS Call Center transactions and 60% Web Services / EAI transactions.
Software Versions
Hardware Configuration
Virtualization Technology
iPlanet Web Server and the Oracle 11g Database Server were configured on a single Sun SPARC Enterprise T5240 Server. Those software layers were isolated from each other with the help of Oracle Solaris Containers virtualization technology. Resource allocations are shown below.
| Tier | #vCPU | Memory (GB) |
|---|---|---|
| Database | 96 | 48 |
| Web | 32 | 16 |
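
For illustration, here is a minimal sketch of how the database tier's share could be expressed with Solaris Containers. The zone name and zonepath below are made up for this sketch; the benchmark's actual zone settings are not shown here.

```
# "dbzone" and /zones/dbzone are hypothetical names, for illustration only:
# dedicate 96 hardware threads to the zone and cap physical memory at 48 GB.
zonecfg -z dbzone 'create; set zonepath=/zones/dbzone; add dedicated-cpu; set ncpus=96; end; add capped-memory; set physical=48g; end; verify; commit'
```

A second zone with ncpus=32 and physical=16g would carve out the web tier the same way.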
Test Results
| #vUsers | FINS Avg Trx Resp Time (sec) | EAI Avg Trx Resp Time (sec) | FINS Business Trx Throughput/HR | EAI Business Trx Throughput/HR | App Avg CPU (%) | DB Avg CPU (%) | Web Avg CPU (%) | App Avg Memory (GB) | DB + Web Avg Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| 13,000 | 0.43 | 0.2 | 48,409 | 116,449 | 58 | 42 | 37 | 52 | 35 |
Why stop at 13K users?
Notice that the average CPU utilization on the application server node (SPARC T3-1) is only ~58%. The application server node has room to accommodate more online vusers; however, there is not enough free memory left on the server to scale beyond 13,000 concurrent users. That is the main reason for stopping at 13,000 users in this benchmark.
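
For those who want to watch for the same memory ceiling on their own systems, two stock Solaris commands are enough for a quick check (a sketch; no benchmark-specific tooling assumed):

```
# Sample system-wide memory every 5 seconds; watch the "free" column (KB)
vmstat 5

# Per-zone CPU and memory summary, refreshed every 5 seconds
prstat -Z 5
```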
Siebel Best Practices
Check the following presentation:
Siebel on Oracle Solaris : Best Practices, Tuning Tips

Acknowledgments
Credit to all our peers at Oracle Corporation who helped us with the hardware, workload, verification, validation, etc., in a timely manner. Jenny also deserves special credit for patiently spending an enormous amount of time running the benchmark.
Original blog post URL:
http://blogs.sun.com/mandalika/entry/sparc_t3_reiterates_siebel_s
(Even though the title explicitly states "Solaris Versus .. ", this blog entry is equally applicable to all operating systems, with a few changes.)
Lately I have seen quite a few e-mails, and heard a few customer representatives talk, about the performance of their application(s) on Solaris, Windows and Linux. Typically these complaints come with a bunch of supporting data (all numbers) and no hardware configuration specified whatsoever.
Lack of awareness, and taking the hardware completely out of the discussion and context, are the biggest problems with complaints like these. Those claims make sense only when the underlying hardware is the same in all test cases. For example, comparing a single-user, single-threaded transaction running on Windows, Linux and Solaris on x86 hardware is appropriate (as long as the type and speed of the processor are identical), but not against Solaris running on SPARC hardware, mainly because the processor architectures of the x86 and SPARC platforms are completely different.
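
Before comparing any two operating environments, it helps to record the processor details on each side so the hardware can be ruled in or out; a quick sketch with stock commands:

```
# Solaris: physical processors, their clock speed and hardware thread count
psrinfo -pv

# Solaris: full platform configuration (system model, memory, I/O)
prtdiag -v

# Linux: architecture, sockets, cores, threads and clock speed
lscpu
```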
Besides, these days Oracle offers two types of SPARC hardware, T-series and M-series, which serve different purposes even though they are compatible with each other. It is equally hard to compare and analyze the performance differences between the T- and M-series offerings without a proper understanding of the characteristics of the CPUs in use. Choosing the right hardware for the right job is the key.
It is improper to compare business transactions running on x86 systems with those on SPARC systems, or even between different types of SPARC systems, and to incorrectly attribute a hardware strength or weakness to the operating system that runs on top of the bare metal. If there is that much discrepancy among different operating environments, spend some time understanding the nuances of the test hardware before spending enormous amounts of time trying to tune the application and the operating system.
The bottom line: in addition to the software (application + OS), hardware plays an important role in the performance and scalability of an application. So, unless the test hardware is the same in all test cases on the different operating systems, do not focus on the operating system alone and make hasty decisions to switch to another operating platform. Carefully choose appropriate hardware for the task at hand.