Mandalika's scratchpad
Hardly six months after announcing Siebel 8.1.1.x benchmark results on Oracle SPARC T4 servers, we have a brand new set of Siebel 8.1.1.x benchmark results on Oracle SPARC T5 servers. There have been no updates to the Siebel benchmark kit in the last couple of years, so we continued to use the Siebel 8.1.1.x benchmark workload to measure the performance of Siebel Financial Services Call Center and Order Management business transactions on the recently announced SPARC T5 servers.
The latest Siebel 8.1.1.x benchmark was executed on a mix of SPARC T5-2, SPARC T4-2 and SPARC T4-1 servers. The benchmark test simulated the actions of a large corporation with 40,000 concurrent active users. To date, this is the highest user count we have achieved in a Siebel benchmark.
User Load Breakdown & Achieved Throughput
|Siebel Application Module||%Total Load||#Users||Business Trx per Hour|
|Financial Services Call Center||70||28,000||273,786|
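The user-load split follows directly from the stated percentages. As a quick sanity check (my own arithmetic; only the figures quoted above come from the benchmark report):

```python
# Derive per-module user counts from the 70/30 split over
# 40,000 concurrent users, as stated in the post.
total_users = 40_000
call_center_users = total_users * 70 // 100         # Financial Services Call Center
order_mgmt_users = total_users - call_center_users  # Order Management

print(call_center_users, order_mgmt_users)  # 28000 12000

# Average per-user rate for the Call Center module:
# 273,786 business transactions per hour across 28,000 users.
per_user_per_hour = 273_786 / call_center_users
print(f"{per_user_per_hour:.1f} trx/hour per user")  # ~9.8
```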
Average Transaction Response Times for both Financial Services Call Center and Order Management transactions were under one second.
Software & Hardware Specification
|Test Component||Software||Version||Server Model||Server Qty||Chips||Cores||vCPUs||CPU Speed||CPU Type||Memory||OS|
|Application Server||Siebel||8.1.1.x||SPARC T5-2||2||2||32||256||3.6 GHz||SPARC-T5||512 GB||Solaris 10 1/13 (S10U11)|
|Database Server||Oracle 11g R2||11.2.0.x||SPARC T4-2||1||2||16||128||2.85 GHz||SPARC-T4||256 GB||Solaris 10 8/11 (S10U10)|
|Web Server||iPlanet Web Server||7.0.9 (7 U9)||SPARC T4-1||1||1||8||64||2.85 GHz||SPARC-T4||128 GB||Solaris 10 8/11 (S10U10)|
|Load Generator||Oracle Application Test Suite||9.21.0043||SunFire X4200||1||2||4||4||2.6 GHz||AMD Opteron 285 SE||16 GB||Windows 2003 R2 SP2|
|Load Drivers (Agents)||Oracle Application Test Suite||9.21.0043||SunFire X4170||8||2||12||12||2.93 GHz||Intel Xeon X5670||48 GB||Windows 2003 R2 SP2|
Average CPU & Memory Utilization
|Server||#Users Served||%CPU||Memory Footprint|
|Gateway/Application Server||20,000||67.03||205.54 GB|
|Application Server||20,000||66.09||206.24 GB|
|Database Server||40,000||33.43||108.72 GB|
|Web Server||40,000||29.48||14.03 GB|
Finally, how does this benchmark stack up against other published benchmarks? The short answer is "very well". Head over to the Oracle Siebel Benchmark White Papers webpage to do the comparison yourself.
[Credit to our hard working colleagues in SAE, Siebel PSR, benchmark and Oracle Platform Integration (OPI) teams. Special thanks to Sumti Jairath and Venkat Krishnaswamy for the last minute fire drill]
To be clear, this post is about a white paper that has been available for more than two months. Access it through the following URL.
The focus of the paper is on databases and zones. On SuperCluster, customers have the choice of running their databases in logical domains that are dedicated to running Oracle Database 11g R2. With exclusive access to Exadata Storage Servers, those domains are aptly called "Database" domains. If requirements mandate, it is possible to create and use all logical domains as "database domains" or "application domains", or a mix of those. Since the focus is on databases, the paper talks only about the database domains and how zones can be created, configured and used within each database domain for fine-grained control over multiple databases consolidated in a SuperCluster environment.
When multiple databases (including RAC databases) are consolidated in database logical domains, zones are one option for fulfilling requirements such as fault, operational, network, security and resource isolation; multiple RAC instances in a single logical domain; and separate identity and independent manageability for database instances.
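To make the zone-per-database idea concrete, here is a rough sketch (my own illustration, not taken from the white paper) of carving out a Solaris zone inside a database domain. The zone name, paths, NIC and resource caps are all hypothetical:

```shell
# Hypothetical example: a zone for one consolidated database inside
# a database domain (all names, devices and sizes are illustrative).
zonecfg -z dbzone1 <<'EOF'
create -b
set zonepath=/zones/dbzone1
set ip-type=exclusive
add net
set physical=net1
end
# Cap CPU and memory so one database cannot starve its neighbors
add dedicated-cpu
set ncpus=8
end
add capped-memory
set physical=64g
end
commit
EOF
zoneadm -z dbzone1 install
zoneadm -z dbzone1 boot
```

The dedicated-cpu and capped-memory resource controls are what give the fine-grained isolation the paper is after; each consolidated database instance then lives behind its own zone identity.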
The best practices cover the following topics. Some of those are applicable to standalone, non-engineered environments as well.
Oracle RAC Configuration
Securing the Databases, and
Example Database Consolidation Scenarios
A large group of experts reviewed the material and provided quality feedback. Hence they deserve credit for their work and time. Listed below are some of those reviewers (sincere apologies if I missed listing any major contributors).
Kesari Mandyam, Binoy Sukumaran, Gowri Suserla, Allan Packer, Jennifer Glore, Hazel Alabado, Tom Daly, Krishnan Shankar, Gurubalan T, Rich Long, Prasad Bagal, Lawrence To, Rene Kundersma, Raymond Dutcher, David Brean, Jeremy Ward, Suzi McDougall, Ken Kutzer, Larry McIntosh, Roger Bitar, Mikel Manitius
Just like the Siebel 8.1.x/SPARC T4 benchmark post, this one too is overdue by at least four months. In any case, I hope the Oracle BI customers already knew about the OBIEE 11g/SPARC T4 benchmark effort. Here I will try to provide a few additional and interesting details that aren't covered in the following Oracle PR, posted on oracle.com on 09/30/2012.
System Under Test
The entire BI middleware stack, including the WebLogic 11g Server, OBI Server, OBI Presentation Server and Java Host, was installed and configured on a single SPARC T4-4 server consisting of four 8-core 3.0 GHz SPARC T4 processors (32 cores in total) and 128 GB of physical memory. Oracle Solaris 10 8/11 was the operating system.
BI users were authenticated against Oracle Internet Directory (OID) in this benchmark, so the OID software, part of Oracle Identity Management 11.1.1.x, was also installed and configured on the system under test (SUT). Oracle BI Server's Query Cache was turned on; as a result, most of the query results were cached in the OBIS layer, which kept database activity minimal and made it practical to run the Oracle 11g R2 database server hosting the OBIEE database on the same box as well.
The Oracle BI database was hosted on a Sun ZFS Storage 7120 Appliance. The BI Web Catalog was kept in a ZFS pool (zpool) built on a couple of SSDs.
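For illustration only (the actual commands and device names used in the benchmark are not shown here), a mirrored ZFS pool for the Web Catalog on two SSDs might be set up along these lines:

```shell
# Hypothetical sketch: mirror two SSDs and mount a filesystem
# for the BI Web Catalog (pool and device names are illustrative).
zpool create bipool mirror c1t2d0 c1t3d0
zfs create bipool/webcat
zfs set mountpoint=/export/webcat bipool/webcat
```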
In this benchmark, 25,000 concurrent users assumed five different business user roles: Marketing Executive, Sales Representative, Sales Manager, Sales Vice President, and Service Manager. The load was distributed equally among those five roles. Each BI user accessed five different pre-built dashboards, with each dashboard having an average of five reports - a mix of charts, tables and pivot tables - returning 50-500 rows of aggregated data. The test scenario included drilling down multiple levels from a table or chart within a dashboard. There was a 60-second think time between requests, per user.
BI Setup & Test Results
OBIEE 11g (11.1.1.x) was deployed on the SUT in a vertical scale-out fashion. Two Oracle BI Presentation Server processes, one Oracle BI Server process, one Java Host process and two instances of WebLogic Managed Servers handled 25,000 concurrent user sessions smoothly. This configuration delivered a sub-second overall average transaction response time (an average of averages over a two-hour steady-state run). On average, 450 business transactions were executed per second, which triggered 750 SQL executions per second.
It took only 52% CPU on average (~5% system CPU, the rest in user land) to achieve the throughput outlined above. Since 25,000 unique test/BI users hammered different dashboards consistently, not surprisingly the bulk of the CPU was consumed by the Oracle BI Presentation Server layer, which accounted for a whopping 29%. The BI Server consumed about 10-11%, and the rest was shared by the Java Host, OID, the WebLogic Managed Server instances and the Oracle database.
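As a back-of-the-envelope cross-check (my own calculation, not part of the benchmark report), Little's Law ties the user count, think time and response time to the expected throughput; the 0.5 s response time below is an assumption standing in for "sub-second":

```python
# Little's Law for a closed workload: N = X * (R + Z), so X = N / (R + Z)
# N = concurrent users, X = throughput, R = response time, Z = think time.
users = 25_000       # concurrent BI users
think_time = 60.0    # seconds of think time between requests, per user
resp_time = 0.5      # assumed average response time ("sub-second")

expected_tps = users / (think_time + resp_time)
print(f"expected throughput: {expected_tps:.0f} transactions/s")  # ~413
```

The post reports about 450 business transactions per second, which is in the same ballpark; the remaining gap is well within the slack of the assumed response time.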
So, what is the key take away from this whole exercise?
SPARC T4 rocks the Oracle BI world. OBIEE 11g/SPARC T4 is an ideal combination that should work well for the majority of OBIEE deployments on the Solaris platform. Or, in marketing jargon: the excellent vertical and horizontal scalability of the SPARC T4 server gives customers the option to scale up as well as scale out, to support large BI EE installations with minimal hardware investment.
Evaluate and decide for yourself.
[Credit to our colleagues in Oracle FMW PSR, ISVe teams and SCA lab support engineers]
Disabling the camera shutter sound is one of the hot topics among Galaxy S II users. In web forums, some of the recurring solutions appear to be rooting the phone or muting the "system" sounds, and they do seem to work in some cases. However, there is a much simpler solution for Galaxy S II phones running the Ice Cream Sandwich (ICS) version of Android.
Launch the Camera application
Tap the Menu key to bring up the menu containing "Edit Shortcuts"
Tap the "Edit Shortcuts" menu item to list all available shortcuts
Look for "Shutter sound" shortcut
Press, hold and drag the "Shutter sound" option to one of the empty boxes shown at the top. If there are no empty boxes, simply drop it onto the box that contains the least desired shortcut/option.
Finally, tap the "Shutter sound" icon and select "Off" to keep the camera shutter silent
Screenshots were captured using a T-Mobile Samsung Galaxy S II (SGH-T989) device running Android 4.0.3.
Siebel is a multi-threaded native application that performs well on Oracle's T-series SPARC hardware. We have several versions of Siebel benchmarks published on previous generation T-series servers, ranging from the Sun Fire T2000 to the Oracle SPARC T3-4. So, it is natural to see that tradition extended to the current generation SPARC T4 as well.
A 29,000 user Siebel 8.1.1.x benchmark on a mix of SPARC T4-1 and T4-2 servers was announced during the Oracle OpenWorld 2012 event. In this benchmark, Siebel application server instances ran on three SPARC T4-2/Solaris 10 8/11 systems, whereas the Oracle 11g R2 database server was configured on a single SPARC T4-1/Solaris 11 11/11 system. Several iPlanet Web Server 7 U9 instances with the Siebel Web Server Extension (SWE) installed ran on one SPARC T4-1/Solaris 10 8/11 system. The Siebel database was hosted on a single Sun Storage F5100 flash array consisting of 80 flash modules (FMODs), each with a capacity of 24 GB.
Siebel Call Center and Order Management System are the modules that were tested in the benchmark. The benchmark workload had 70% of virtual users running Siebel Call Center transactions and the remaining 30% vusers running Siebel Order Management System transactions. This benchmark on T4 exhibited sub-second response times on average for both Siebel Call Center and Order Management System modules.
Load balancing at various layers including web and test client systems ensured near uniform load across all web and application server instances. All three Siebel application server systems consumed ~78% CPU on average. The database and web server systems consumed ~53% and ~18% CPU respectively.
All these details are supposed to be available in a standard Oracle|Siebel benchmark template - but for some reason, I couldn't find it on Oracle's Siebel Benchmark White Papers web page yet. Meanwhile, check out the following PR, posted on oracle.com on 09/28/2012.
It looks like the large number of vusers (29,000, to be precise) sets this benchmark apart from the other benchmarks published with the same Siebel 8.1.1.x benchmark workload.
[Credit to our colleagues in Siebel PSR, benchmark, SAE and ISVe teams]