PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades
Oracle|Sun published PeopleSoft Campus Solutions 9.0 benchmark results on February 18, 2010. Here is the direct URL to the benchmark results white paper:
PeopleSoft Enterprise Campus Solutions 9.0 using Oracle 11g on a Sun SPARC Enterprise M4000 & Sun Blade X6270 Modules
Sun published three PeopleSoft benchmarks on the SPARC platform over the last 12 months -- one OLTP and two batch benchmarks [1]. The latest benchmark is somewhat special for at least a couple of reasons:
- Campus Solutions 9.0 workload has both online transactions and batch processes, and
- This is the first time Sun has published a PeopleSoft benchmark on x64 hardware running Oracle Enterprise Linux
A summary of the benchmark test results is shown below. These numbers were extracted from the first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the test results and the actual numbers that are of interest to customers. Test results are sorted by hourly throughput (invoices & transcripts per hour) in descending order. Click the link beneath each vendor name to open the corresponding benchmark result.
While analyzing these test results, remember that the higher the throughput, the better. In the case of online transactions, it is desirable to keep the response times as low as possible.
Oracle PeopleSoft Campus Solutions 9.0 Benchmark Test Results
CPU% and Mem (GB) show resource utilization; Logon, LSC, Page Load, and Page Save are average online transaction response times in seconds; Invoices/hr and Transcripts/hr are batch throughput. All numbers were measured at peak load (6,000 concurrent users).

| Vendor | Tier | Hardware Configuration | OS | CPU% | Mem (GB) | Logon | LSC | Page Load | Page Save | Invoices/hr | Transcripts/hr |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Sun | DB | 1 x M4000 with 2 x 2.53GHz SPARC64 VII QC processors, 32GB RAM; 1 x Sun Storage Flash Accelerator F20 with 4 x 24GB FMODs; 1 x ST2540 array with 11 x 136.7GB SAS 15K RPM drives | Solaris 10 | 37.29 | 20.94 | 0.64 | 0.78 | 0.82 | 1.57 | 31,797 | 36,652 |
| | APP | 2 x X6270 blades with 2 x 2.93GHz Xeon 5570 QC processors, 24GB RAM | OEL4 U8 | 41.69* | 4.99* | | | | | | |
| | WEB+PS | 1 x X6270 blade with 2 x 2.8GHz Xeon 5560 QC processors, 24GB RAM | OEL4 U8 | 33.08 | 6.03 | | | | | | |
| HP | DB | 1 x Integrity rx6600 with 4 x 1.6GHz Itanium 9050 DC procs, 32GB RAM; 1 x HP StorageWorks EVA8100 array with 58 x 146GB drives | HP-UX 11iv3 | 61 | 30 | 0.71 | 0.91 | 0.83 | 1.63 | 22,753 | 30,257 |
| | APP | 2 x BL460c blades with 2 x 3.16GHz Xeon 5460 QC procs, 16GB RAM | RHEL4 U5 | 61.81 | 3.6 | | | | | | |
| | WEB | 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM | RHEL4 U5 | 44.36 | 3.77 | | | | | | |
| | PS | 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM | RHEL4 U5 | 21.90 | 1.48 | | | | | | |
| HP | DB | 1 x ProLiant DL580 G4 with 4 x 3.4GHz Xeon 7140M DC procs, 32GB RAM; 1 x HP StorageWorks XP128 array with 28 x 73GB drives | Win2003 R2 | 70.37 | 21.26 | 0.72 | 1.17 | 0.94 | 1.80 | 17,621 | 25,423 |
| | APP | 4 x BL480c G1 blades with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM | Win2003 R2 | 65.61 | 2.17 | | | | | | |
| | WEB | 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM | Win2003 R2 | 54.11 | 3.13 | | | | | | |
| | PS | 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM | Win2003 R2 | 32.44 | 1.40 | | | | | | |
This is all public information. Feel free to compare the hardware configurations and the data presented in the table and draw your own conclusions. Since both Sun and HP used the same benchmark workload and toolkit, and ran the benchmark with the same number of concurrent users and job streams for the batch processes, the comparison should be pretty straightforward.
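Since all three configurations ran the identical workload with the same number of concurrent users and job streams, the relative batch throughput is easy to compute. The short Python snippet below does that arithmetic with the numbers copied straight from the table (the vendor labels are just shorthand for the configurations above):

```python
# Batch throughput per hour at peak load (6,000 users), from the table above.
results = {
    "Sun (M4000/X6270)":    {"invoices": 31797, "transcripts": 36652},
    "HP (rx6600/BL460c)":   {"invoices": 22753, "transcripts": 30257},
    "HP (DL580 G4/blades)": {"invoices": 17621, "transcripts": 25423},
}

sun = results["Sun (M4000/X6270)"]
for vendor, r in results.items():
    inv_ratio = sun["invoices"] / r["invoices"]
    txn_ratio = sun["transcripts"] / r["transcripts"]
    print(f"{vendor}: Sun did {inv_ratio:.2f}x the invoices, "
          f"{txn_ratio:.2f}x the transcripts")
```

For example, the Sun configuration processed roughly 1.4x the invoices per hour of the Itanium-based HP configuration.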
Hopefully the following paragraphs will provide relevant insights into the benchmark and the application.
Caution in interpreting the Online Transaction Response Times
Average response times for the online transactions were measured using HP's QuickTest Pro (QTP) tool; this is a benchmark requirement. QTP test scripts depend on the web browser (Internet Explorer in particular), so they are extremely sensitive to web browser latencies, remote desktop/VNC latencies, and other latencies induced by the operating system. Be aware that all of these latencies are factored into the transaction response times, so the reported averages might be somewhat skewed. In other words, the reported average transaction response times are not necessarily accurate; in most cases they are approximations, and the actual values might be better than those reported in the benchmark report. (I really wish Oracle|PeopleSoft would discard some of the skewed samples to make the data more accurate and reliable.) Please keep this in mind when looking at the response times of the online transactions.
Quick note about Consolidation
In our benchmark environment, we set up the PeopleSoft Process Scheduler (batch server) on the same node as the web server. In general, Oracle recommends setting up the process scheduler either on the database server node or on a dedicated system. In the benchmark environment, however, we chose not to run the process scheduler on the database server node because it would hurt the performance of the online transactions. At the same time, we noticed plenty of idle CPU cycles on the web server node even at the peak load of 6,000 concurrent users, so we decided to run the process scheduler there. Customers who are not comfortable with this kind of setup can use any supported virtualization technology (e.g., Logical Domains or Containers on Solaris, Oracle VM on OEL) to separate the process scheduler from the web server, allocating system resources as they see fit. It is simply a matter of choice.
PeopleSoft Load Balancing
PeopleSoft has a load-balancing mechanism built into the web server to forward incoming requests to an appropriate application server in the enterprise, and within the application server to send each request to an appropriate application server process, PSAPPSRV. (I'm not 100% sure, but I believe the application server balances the load among its server processes in a round-robin fashion on *nix platforms, whereas on Windows it forwards all requests to a single server process until that process reaches its configured limit before moving on to the next available one.) However, this built-in load balancing is not perfect. More often than not, the number of requests processed by each of the identically configured application server processes [running on different application server nodes in the enterprise] is not even. This minor shortcoming can lead to uneven resource usage across the nodes in a PeopleSoft deployment. You can see this in the CPU and memory usage reported for the two app server nodes in the benchmark environment (check the benchmark results white paper).
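To make the two dispatch strategies described above concrete, here is a small Python sketch. It is purely illustrative -- the function names and the per-process request limit are hypothetical, not actual PeopleSoft code or configuration:

```python
from itertools import cycle

def round_robin_dispatch(requests, procs):
    """Distribute requests evenly across server processes
    (the presumed *nix behavior described above)."""
    assignments = {p: [] for p in procs}
    rotation = cycle(procs)
    for req in requests:
        assignments[next(rotation)].append(req)
    return assignments

def fill_first_dispatch(requests, procs, limit):
    """Send each request to the first process still below its configured
    limit (the presumed Windows behavior described above)."""
    assignments = {p: [] for p in procs}
    for req in requests:
        for p in procs:
            if len(assignments[p]) < limit:
                assignments[p].append(req)
                break
    return assignments

procs = ["PSAPPSRV_1", "PSAPPSRV_2", "PSAPPSRV_3"]
reqs = list(range(8))
print(round_robin_dispatch(reqs, procs))    # spreads load: 3, 3, 2
print(fill_first_dispatch(reqs, procs, 5))  # concentrates load: 5, 3, 0
```

The fill-first strategy concentrates work on the first process, which illustrates how dispatch policy alone can produce the uneven resource usage noted above.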
Sun Flash Accelerator F20 PCIe Card
To reduce I/O latency, hot tables and hot indexes were placed on a Sun Flash Accelerator F20 PCIe Card in this benchmark. The F20 card has a total capacity of 96GB with 4 x 24GB Flash Modules (FMODs). Although this workload is only moderately I/O intensive, the batch processes generate a lot of I/O for a few minutes in the steady state of the benchmark. The flash accelerator handled the burst of I/O activity quite well, and as a result the performance of the batch processing improved.
Check the white paper Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card to learn more about the flash products offered by Oracle|Sun and how they can be deployed in a PeopleSoft environment for maximum benefit.
Solaris-specific Tuning
On almost all versions of Solaris 10, the kernel uses 4M as the maximum page size even though the underlying hardware supports pages as large as 256M. Large pages may improve the performance of memory-intensive workloads such as the Oracle database by reducing the number of virtual-to-physical address translations, thereby reducing expensive dTLB/iTLB misses. In the benchmark environment, the following values were set in the /etc/system configuration file on the database server node to enable 256M pages for the process heap and ISM:
* 256M pages for process heap
set max_uheap_lpsize=0x10000000
* 256M pages for ISM
set mmu_ism_pagesize=0x10000000
While we are on the topic of OS tuning: the Linux configuration was out-of-the-box; no OS tuning was performed on the Linux nodes in this benchmark.
Tuning Tip for Solaris Customers
Even though we did not set up the middle tier on a Solaris box in this benchmark, this tuning tip is still valid and may help customers running the application server on Solaris. Consider lowering the shell limit for file descriptors to 512 or less if it is currently set to anything higher. As of today (until the release of PeopleTools 8.50), certain parts of the PeopleSoft code call the file control routine, fcntl(), and the file close routine, fclose(), in a loop "ulimit -n" times to close a bunch of files that were opened to perform a specific task. In general, PeopleSoft processes won't open hundreds of files, so this behavior results in a ton of dummy calls that simply error out. Those system calls are not free -- they consume CPU cycles -- and it gets worse when a number of PeopleSoft processes exhibit this behavior simultaneously. (A high system CPU% is one of the symptoms that helps identify this behavior.) Oracle|PeopleSoft is currently working to address this performance issue. Meanwhile, customers can lower the file descriptor shell limit to reduce its intensity and impact.
We did not observe this behavior on OEL when running the benchmark. But be sure to trace the system calls and figure out whether the shell limit for file descriptors needs to be lowered on Linux or other supported platforms as well.
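The costly pattern described above can be reproduced with a short Python sketch. This is an illustration of the pattern, not the actual PeopleSoft code: it issues one fcntl() call per descriptor slot up to the soft limit, and nearly every call fails with EBADF yet still costs a system call.

```python
import fcntl
import resource

def probe_all_fds():
    """Probe every fd slot up to the soft 'ulimit -n' limit, mirroring the
    loop described above. Returns (calls attempted, calls that were dummies)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    attempted = wasted = 0
    for fd in range(soft):
        attempted += 1
        try:
            fcntl.fcntl(fd, fcntl.F_GETFD)
        except OSError:
            wasted += 1  # EBADF: nothing open at this slot -- a dummy call
    return attempted, wasted

attempted, wasted = probe_all_fds()
print(f"ulimit -n is {attempted}; {wasted} of the calls were dummies")
```

Lowering the shell limit (e.g., `ulimit -n 512`) directly shrinks the iteration count, which is exactly why the workaround above reduces the overhead.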
______________________________________
Footnotes:
1. PeopleSoft benchmarks on the Sun platform in 2009-2010:
- PeopleSoft HRMS 8.9 SELF-SERVICE Using ORACLE on Sun SPARC Enterprise M3000 and Enterprise T5120 Servers -- online transactions (OLTP)
- PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams) -- batch workload
- PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 streams) -- batch workload
2. * HP's benchmark results white paper did not show the CPU and memory breakdown separately for each application server node; it shows only the average CPU and memory utilization across all app server nodes under "App Servers". Sun's average CPU and memory numbers [shown in the table above] were calculated the same way for consistency.

(Copied from the original post at Oracle|Sun blogs @ http://blogs.sun.com/mandalika/entry/peoplesoft_campus_solutions_9_0)