Mandalika's scratchpad
[1] Solaris 11+ : changing hostname
Starting with Solaris 11, a system's identity (nodename) is configured through the config/nodename service property of the svc:/system/identity:node SMF service. Solaris 10 and prior versions keep this information in the /etc/nodename configuration file.
The following example demonstrates the commands to change the hostname from "ihcm-db-01" to "ehcm-db-01".
eg.,

# hostname
ihcm-db-01

# svccfg -s system/identity:node listprop config
config                       application
config/enable_mapping        boolean  true
config/ignore_dhcp_hostname  boolean  false
config/nodename              astring  ihcm-db-01
config/loopback              astring  ihcm-db-01

# svccfg -s system/identity:node setprop config/nodename="ehcm-db-01"

# svccfg -s system/identity:node refresh
    -OR-
# svcadm refresh svc:/system/identity:node

# svcadm restart system/identity:node

# svccfg -s system/identity:node listprop config
config                       application
config/enable_mapping        boolean  true
config/ignore_dhcp_hostname  boolean  false
config/nodename              astring  ehcm-db-01
config/loopback              astring  ehcm-db-01

# hostname
ehcm-db-01
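For comparison, on Solaris 10 and prior the same change amounts to editing the configuration file directly. A rough sketch (hostname reused from the example above; the change takes effect on the next reboot, and any matching entries in /etc/hosts should be updated as well):

# echo "ehcm-db-01" > /etc/nodename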
[2] Parallel Compression
This topic is not Solaris specific, but it certainly helps Solaris users who are frustrated with the single-threaded implementation of all officially supported compression tools such as compress, gzip, and zip.
pigz (pig-zee) is a parallel implementation of gzip that is well suited to the latest multi-processor, multi-core machines. By default, pigz breaks the input up into chunks of 128 KB and compresses each chunk in parallel with the help of lightweight threads. The number of compression threads is set by default to the number of online processors. Both the chunk size and the number of threads are configurable.
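For instance, both settings can be overridden on the command line. A quick sketch (the -b and -p options are documented in the pigz man page; the values shown here are arbitrary):

$ pigz -p 32 -b 256 PT8.53.04.tar    <== compress with 32 threads and 256 KB blocks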
Compressed files can be restored to their original form using the -d option of either pigz or gzip. As per the man page, decompression is not parallelized out of the box, but it may still show some improvement compared to the existing old tools.
The following example demonstrates the advantage of using pigz over gzip in compressing and decompressing a large file.
eg.,
Original file, and the target hardware.
$ ls -lh PT8.53.04.tar
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar

$ psrinfo -pv
The physical processor has 8 cores and 64 virtual processors (0-63)
  The core has 8 virtual processors (0-7)
  ...
  The core has 8 virtual processors (56-63)
    SPARC-T5 (chipid 0, clock 3600 MHz)
gzip compression.

$ time gzip --fast PT8.53.04.tar

real    3m40.125s
user    3m27.105s
sys     0m13.008s

$ ls -lh PT8.53*
-rw-r--r--   1 psft     dba         3.1G Feb 28 14:03 PT8.53.04.tar.gz

/* the following prstat, vmstat outputs show that gzip is compressing the
   tar file using a single thread - hence low CPU utilization. */

$ prstat -p 42510

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 42510 psft     2616K 2200K cpu16   10    0   0:01:00 1.5% gzip/1

$ prstat -m -p 42510

   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
 42510 psft      95 4.6 0.0 0.0 0.0 0.0 0.0 0.0   0  35  7K   0 gzip/1

$ vmstat 2

 r b w   swap      free      re mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 776242104 917016008   0  7  0  0  0  0  0  0  0 52 52  3286 2606 2178  2  0 98
 1 0 0 776242104 916987888   0 14  0  0  0  0  0  0  0  0  0  3851 3359 2978  2  1 97
 0 0 0 776242104 916962440   0  0  0  0  0  0  0  0  0  0  0  3184 1687 2023  1  0 98
 0 0 0 775971768 916930720   0  0  0  0  0  0  0  0  0 39 37  3392 1819 2210  2  0 98
 0 0 0 775971768 916898016   0  0  0  0  0  0  0  0  0  0  0  3452 1861 2106  2  0 98
pigz compression.

$ time ./pigz PT8.53.04.tar

real    0m25.111s    <== wall clock time is 25s compared to gzip's 3m 27s
user    17m18.398s
sys     0m37.718s

/* the following prstat, vmstat outputs show that pigz is compressing the
   tar file using many threads - hence busy system with high CPU utilization. */

$ prstat -p 49734

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 49734 psft       59M   58M sleep   11    0   0:12:58  38% pigz/66

$ vmstat 2

 kthr      memory            page            disk          faults      cpu
 r b w   swap      free      re  mf pi po fr de sr s0 s1 s2 s3   in    sy    cs us sy id
 0 0 0 778097840 919076008   6 113  0  0  0  0  0  0  0 40 36  39330 45797 74148 61  4 35
 0 0 0 777956280 918841720   0   1  0  0  0  0  0  0  0  0  0  38752 43292 71411 64  4 32
 0 0 0 777490336 918334176   0   3  0  0  0  0  0  0  0 17 15  46553 53350 86840 60  4 35
 1 0 0 777274072 918141936   0   1  0  0  0  0  0  0  0 39 34  16122 20202 28319 88  4  9
 1 0 0 777138800 917917376   0   0  0  0  0  0  0  0  0  3  3  46597 51005 86673 56  5 39

$ ls -lh PT8.53.04.tar.gz
-rw-r--r--   1 psft     dba         3.0G Feb 28 14:03 PT8.53.04.tar.gz

$ gunzip PT8.53.04.tar.gz    <== shows that the pigz compressed file is
                                 compatible with gzip/gunzip

$ ls -lh PT8.53*
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar
Decompression.
$ time ./pigz -d PT8.53.04.tar.gz

real    0m18.068s
user    0m22.437s
sys     0m12.857s

$ time gzip -d PT8.53.04.tar.gz

real    0m52.806s    <== compare gzip's 52s decompression time with pigz's 18s
user    0m42.068s
sys     0m10.736s

$ ls -lh PT8.53.04.tar
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar
Of course, other tools such as Parallel BZIP2 (PBZIP2), a parallel implementation of the bzip2 tool, are worth a try too. The idea here is to highlight the fact that there are better tools out there to get the job done quickly compared to the existing/old tools that are bundled with the operating system distribution.
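For instance, a pbzip2 run looks much like the pigz runs above. A quick sketch (the -p option sets the number of processors; the file name is reused from the earlier example):

$ pbzip2 -p16 PT8.53.04.tar         <== compress using 16 threads
$ pbzip2 -d -p16 PT8.53.04.tar.bz2  <== decompress using 16 threads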
[3] Solaris 11+ : Upgrading SRU
Assuming the package repository is already set up for network updates on a Solaris 11+ system, the following commands are helpful in upgrading to a newer SRU (Support Repository Update).
List all available SRUs in the repository.
# pkg list -af entire
Upgrade to the latest and greatest.
# pkg update
To find out what changes will be made to the system, try a dry run of the system update.
# pkg update -nv
Upgrade to a specific SRU.
# pkg update entire@<FMRI>
Find the Fault Managed Resource Identifier (FMRI) string by running the pkg list -af entire command.
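A hypothetical session (the version strings below are made up for illustration; use the ones reported by pkg list on your system):

# pkg list -af entire
NAME (PUBLISHER)                  VERSION                    IFO
entire                            0.5.11-0.175.2.0.0.42.0    ---
entire                            0.5.11-0.175.1.13.0.5.0    i--
...
# pkg update entire@0.5.11-0.175.2.0.0.42.0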
Note that it is not easy to downgrade an SRU to a lower version, as doing so may break the system. Should there be a need to downgrade or switch between different SRUs, relying on Boot Environments (BE) might be a good idea. Check the Creating and Administering Oracle Solaris 11 Boot Environments document for details.
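A minimal sketch with beadm (the BE name pre-sru-update is arbitrary): preserve the current environment before updating, and activate it later if a rollback is ever needed.

# beadm create pre-sru-update      <== preserve the current environment as a new BE
# pkg update                       <== an update of 'entire' typically creates a fresh BE
# beadm list                       <== confirm which BE is active on reboot
# beadm activate pre-sru-update    <== to roll back, make the saved BE the active one
# init 6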
[4] Parallel NFS (pNFS)
Just a quick note: RFC 5661, Network File System (NFS) Version 4.1, introduced a new feature called "Parallel NFS" or pNFS, which allows NFS clients to access storage devices containing file data directly. When file data for a single NFS v4 server is stored on multiple and/or higher-throughput storage devices, using pNFS can result in a significant improvement in file access performance. However, Parallel NFS is an optional feature in NFS v4.1. Though a prototype was made available a few years ago when OpenSolaris was still alive, as of today Solaris has no support for pNFS. Stay tuned for updates from the Oracle Solaris teams.
Here is an interesting write-up from one of our colleagues at Oracle|Sun (dated 2007) -- NFSv4.1's pNFS for Solaris.
(Credit to Rob Schneider and Tom Gould for initiating this topic)
[5] SPARC hardware : Check for and clear faults from ILOM
There are a couple of ways to check for faults using the ILOM command line interface:

  - run the show faulty command from the ILOM command prompt, or
  - run the fmadm faulty command from within the ILOM faultmgmt shell
Once found, use the clear_fault_action property with the set command to clear the fault for a FRU.
The following example checks for faulty FRUs from the ILOM faultmgmt shell, then clears the fault.
eg.,
-> start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y

faultmgmtsp> fmadm faulty
------------------- ------------------------------------ -------------- --------
Time                UUID                                 msgid          Severity
------------------- ------------------------------------ -------------- --------
2014-02-26/16:17:11 18c62051-c81d-c569-a4e6-e418db2f84b4 PCIEX-8000-SQ  Critical
...
...
Suspect 1 of 1
   Fault class  : fault.io.pciex.rc.generic-ue
   Certainty    : 100%
   Affects      : hc:///chassis=0/motherboard=0/cpuboard=1/chip=2/hostbridge=4
   Status       : faulted

FRU
   Status       : faulty
   Location     : /SYS/PM1
   Manufacturer : Oracle Corporation
   Name         : TLA,PM,T5-4,T5-8
...
Description : A fault has been diagnosed by the Host Operating System.

Response    : The service required LED on the chassis and on the affected
              FRU may be illuminated.
...

faultmgmtsp> exit

-> set /SYS/PM1 clear_fault_action=True
Are you sure you want to clear /SYS/PM1 (y/n)? y
Set 'clear_fault_action' to 'True'
Note that this procedure clears the fault from the SP but not from the host.
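To also clear the fault from the host side, the Solaris fault manager is the place to look. A rough sketch (the applicable subcommand varies by Solaris release, and the FRU label or UUID must be taken from the host's own fmadm faulty output):

# fmadm faulty                 <== list the faults known to the host's fault manager
# fmadm repaired /SYS/PM1      <== mark the FRU as repaired; fmadm acquit <uuid> is an alternative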
Labels: fault oracle performance pnfs solaris sru tips