Friday, March 03, 2006

Solaris: Resource Controls - Physical Memory

Solaris Zones: Resource Controls - CPU explained the steps to control CPU resources on any server running Solaris 10 or later. It is also possible to restrict the physical memory used by a process, or by all processes owned by a user. On Solaris 10 and later this can be done either in a local zone or in the global zone; note that physical memory capping itself is available from Solaris 9 onward.

The goal of this blog entry is to show the simple steps for restricting the total physical memory utilization of all processes owned by a user called giri to 2G (total physical memory installed: 8G), in a local zone called v1280appserv.
 v1280appserv:/% prtconf | grep Mem
 prtconf: devinfo facility not available
 Memory size: 8192 Megabytes

To achieve the physical memory cap, start by creating a project for the user giri. A project is a grouping of processes that are subject to a set of constraints. To define the physical memory resource cap for a project, add the rcap.max-rss attribute to the newly created project. rcap.max-rss indicates the total amount of physical memory, in bytes, that is available to all processes in the project. Creating the project and establishing the physical memory cap can be combined into one simple step, as shown below:
 % projadd -c "App Serv - Restrict the physical memory usage to 2G" -K "rcap.max-rss=2147483648" \
     -U giri appservproj
where appservproj is the name of the project.

This appends an entry to the /etc/project file.
 % cat /etc/project
 system:0::::
 user.root:1::::
 ...
 appservproj:100:App Serv - Restrict the physical memory usage to 2G:giri::rcap.max-rss=2147483648

The -l option of the projects command lists all configured projects along with detailed information about each one.
 % projects -l
 system
  projid : 0
  comment: ""
  users : (none)
  groups : (none)
  attribs:
 user.root
  projid : 1
  comment: ""
  users : (none)
  groups : (none)
  attribs:
 ...
 ...
 appservproj
  projid : 100
  comment: "App Serv - Restrict the physical memory usage to 2G"
  users : giri
  groups : (none)
  attribs: rcap.max-rss=2147483648
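
If the cap ever needs to be changed later, the attribute can be rewritten in place with projmod instead of recreating the project. This is a hedged sketch, not part of the original steps; the 3G value is only an example, and the -s flag substitutes the existing attribute value:
 % projmod -s -K "rcap.max-rss=3221225472" appservproj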

Now associate the project appservproj with user giri, making it the user's default project, by appending the following line to the /etc/user_attr file:
        giri::::project=appservproj
 % cat /etc/user_attr
 ...
 adm::::profiles=Log Management
 lp::::profiles=Printer Management
 root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no
 giri::::project=appservproj
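
The same association should also be possible without editing the file by hand; on Solaris 10, usermod accepts user_attr keys via -K (a hedged alternative to the manual edit above):
 % usermod -K project=appservproj giri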

Finally, enable the resource capping daemon, rcapd, if it is not already running. The rcapd daemon enforces resource caps on collections of processes; it supports per-project physical memory caps, which is exactly what we need.
 % ps -ef | grep rcapd
  root 21160 21097 0 18:10:36 pts/4 0:00 grep rcapd

 % rcapadm -E

 % pgrep -l rcapd
 21164 rcapd
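
rcapadm also accepts tuning options. If the defaults are not suitable, the scan/report intervals and the memory cap enforcement threshold can be adjusted; the following is a hedged sketch with example values only (a threshold of 0% means caps are always enforced):
 % rcapadm -c 0 -i scan=15,report=5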

That's about it. When the resident set size (RSS) of the collection of processes owned by user giri exceeds the cap, rcapd takes action and reduces the total RSS of the collection to 2G; the excess memory is paged out to the swap device. The following run-time statistics indicate that the physical memory cap is effective -- observe the total RSS under project appservproj in the prstat output, and the paging activity in the vmstat output.
 % prstat -J
  PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
  21584 giri 555M 381M sleep 59 0 0:01:19 7.4% siebmtshmw/73
  21580 giri 519M 391M sleep 59 0 0:01:16 5.7% siebmtshmw/73
  21576 giri 547M 372M sleep 59 0 0:01:19 5.7% siebmtshmw/73
  21591 giri 519M 372M sleep 59 0 0:01:14 3.5% siebmtshmw/75
  21565 giri 209M 119M sleep 59 0 0:00:16 0.4% siebprocmw/9
  21549 giri 5560K 3080K sleep 49 0 0:00:00 0.1% prstat/1
  21620 giri 4776K 3728K cpu1 59 0 0:00:00 0.1% prstat/1
  21564 giri 162M 111M sleep 59 0 0:00:07 0.1% siebmtsh/10
  ...
  ...

 PROJID NPROC SIZE RSS MEMORY TIME CPU PROJECT
  100 14 3232M 2052M 26% 0:06:20 23% appservproj
  3 8 62M 28M 0.3% 0:01:08 0.1% default

 Total: 22 processes, 396 lwps, load averages: 1.56, 1.07, 0.63

 % vmstat 2
  kthr memory page disk faults cpu
  r b w swap free re mf pi po fr de sr s0 s1 s3 -- in sy cs us sy id
  0 0 0 16985488 6003688 3 6 8 5 4 0 0 1 0 0 0 340 280 230 4 1 96
  0 0 0 14698304 3809408 38 556 6523 0 0 0 0 643 164 0 0 6075 4301 4899 78 12 10
  6 0 0 14691624 3792080 25 604 5922 0 0 0 0 573 168 0 0 5726 5451 3269 93 6 0
  11 0 0 14680984 3770464 360 831 6316 0 0 0 0 882 191 0 0 7276 4352 3010 75 11 15
  7 0 0 14670192 3765472 211 747 5725 0 0 0 0 865 178 0 0 7428 4349 3628 73 13 14
  13 0 0 14663552 3778280 8 300 1493 0 0 0 0 2793 101 0 0 16703 4485 2418 68 7 25
  14 0 0 14659352 3825832 12 154 983 0 0 0 0 3202 104 0 0 18664 4147 2208 56 4 40
  20 0 0 14650432 3865952 4 157 1009 0 0 0 0 3274 116 0 0 19295 4742 1984 70 6 25
  6 0 0 14644240 3909936 2 119 858 0 0 0 0 3130 81 0 0 18528 3691 2025 54 5 42
  18 0 0 14637752 3953560 1 121 662 0 0 0 0 3284 70 0 0 18475 5327 2297 95 5 0

The rcapd daemon can be monitored with the rcapstat tool.
 % rcapstat
  id project nproc vm rss cap at avgat pg avgpg
  ...
  ...
  100 appservproj 14 2603M 1962M 2048M 0K 0K 0K 0K
  100 appservproj 14 2637M 1996M 2048M 0K 0K 0K 0K
  100 appservproj 14 2645M 2005M 2048M 0K 0K 0K 0K
  100 appservproj 14 2686M 2042M 2048M 0K 0K 0K 0K
  100 appservproj 14 2706M 2063M 2048M 24K 0K 24K 0K
  id project nproc vm rss cap at avgat pg avgpg
  100 appservproj 14 2731M 2071M 2048M 61M 0K 38M 0K
  100 appservproj 14 2739M 2001M 2048M 0K 0K 0K 0K
  100 appservproj 14 2751M 2016M 2048M 0K 0K 0K 0K
  100 appservproj 14 2771M 2036M 2048M 0K 0K 0K 0K
  100 appservproj 14 2783M 2049M 2048M 880K 0K 744K 0K
  100 appservproj 14 2796M 2054M 2048M 15M 0K 6576K 0K
  100 appservproj 14 2824M 2030M 2048M 0K 0K 0K 0K
  100 appservproj 14 2832M 2047M 2048M 0K 0K 0K 0K
  100 appservproj 14 2875M 2090M 2048M 33M 0K 21M 0K
  100 appservproj 14 2895M 1957M 2048M 21M 0K 21M 0K
  100 appservproj 14 2913M 1982M 2048M 0K 0K 0K 0K
  100 appservproj 14 2951M 2040M 2048M 0K 0K 0K 0K
  100 appservproj 14 2983M 2081M 2048M 20M 0K 1064K 0K
  100 appservproj 14 2996M 2030M 2048M 55M 0K 33M 0K
  100 appservproj 14 3013M 2052M 2048M 4208K 0K 8184K 0K
  100 appservproj 14 3051M 2100M 2048M 52M 0K 56M 0K
  100 appservproj 14 3051M 2100M 2048M 0K 0K 0K 0K
  100 appservproj 14 3064M 2078M 2048M 30M 0K 36M 0K
  100 appservproj 14 3081M 2099M 2048M 51M 0K 56M 0K
  100 appservproj 14 3119M 2140M 2048M 52M 0K 48M 0K
  ...
  ...
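
An explicit sampling interval and report count can also be passed to rcapstat (a hedged example: report every 10 seconds, 5 times). The -g option, shown next, additionally prints the global physical memory utilization and the cap enforcement threshold after each report.
 % rcapstat 10 5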

 % rcapstat -g
  id project nproc vm rss cap at avgat pg avgpg
  100 appservproj 14 3368M 2146M 2048M 842M 0K 692M 0K
 physical memory utilization: 50% cap enforcement threshold: 0%
  100 appservproj 14 3368M 2146M 2048M 0K 0K 0K 0K
 physical memory utilization: 50% cap enforcement threshold: 0%
  100 appservproj 14 3368M 2146M 2048M 0K 0K 0K 0K
 physical memory utilization: 50% cap enforcement threshold: 0%
  100 appservproj 14 3380M 2096M 2048M 48M 0K 44M 0K
  ...

To disable the rcapd daemon, run the following command:
 % rcapadm -D
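
Running rcapadm with no options should report the daemon's current configuration, including whether capping is enabled and the configured intervals (hedged; output omitted here):
 % rcapadm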

For more information and examples, see:
  1. System Administration Guide: Solaris Containers-Resource Management and Solaris Zones
  2. Brendan Gregg's Memory Resource Control demos
