Mandalika's scratchpad



Tuesday, March 31, 2015
 
Locality Group Observability on Solaris

Modern multi-socket servers exhibit NUMA characteristics that may hurt application performance if ignored. On a NUMA (Non-Uniform Memory Access) system, all memory is shared among processors. Each processor has access to its own memory (local memory) as well as to memory that is local to another processor (remote memory). However, memory access time (latency) depends on the location of the memory relative to the processor: a processor can access its local memory faster than remote memory, and these varying latencies play a big role in application performance.

Solaris organizes the hardware resources -- CPU, memory and I/O devices -- into one or more logical groups based on their proximity to each other, in such a way that all the hardware resources in a group are considered local to that group. These groups are referred to as locality groups or NUMA nodes. In other words, a locality group (lgroup) is an abstraction that describes which hardware resources are near each other on a NUMA system. Each locality group has at least one processor and possibly some associated memory and/or I/O devices. To minimize the impact of NUMA characteristics, Solaris takes the lgroup-based physical topology into account when mapping threads and data to CPUs and memory.

Note that even though Solaris attempts to provide good performance out of the box, some applications may still suffer the impact of NUMA, whether due to misconfiguration of the hardware or software or for some other reason. Solaris-provided tools and APIs can be used to observe, diagnose, control and even correct issues related to locality and latency. The rest of this post is about the tools that can be used to examine the locality of cores, memory and I/O devices.
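
Besides the command-line tools covered below, applications can query the lgroup hierarchy programmatically through the liblgrp(3LIB) observability API. Here is a minimal sketch of such a query -- the file layout and near-absent error handling are mine -- that prints the number of lgroups, the root lgroup id and the home lgroup of the calling thread; link with -llgrp.

/*
 * Minimal sketch of a liblgrp(3LIB) query. Compile with: cc ... -llgrp
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/procset.h>
#include <sys/lgrp_user.h>

int main() {

        /* snapshot of the lgroup hierarchy as seen by the OS */
        lgrp_cookie_t cookie = lgrp_init(LGRP_VIEW_OS);
        if (cookie == LGRP_COOKIE_NONE) return 1;

        printf("number of lgroups : %d\n", lgrp_nlgrps(cookie));
        printf("root lgroup id    : %d\n", (int)lgrp_root(cookie));

        /* home lgroup of the calling LWP */
        printf("home lgroup       : %d\n", (int)lgrp_home(P_LWPID, P_MYID));

        lgrp_fini(cookie);
        return 0;
}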

Sample outputs are collected from a SPARC T4-4 server.

Locality Group Hierarchy

lgrpinfo prints information about the lgroup hierarchy and its contents. It is useful for understanding the context in which the OS is trying to optimize applications for locality, and for figuring out which CPUs are close to each other, how much memory is near them, and the relative latencies between CPUs and different memory blocks.

eg.,

# lgrpinfo -a

lgroup 0 (root):
        Children: 1-4
        CPUs: 0-255
        Memory: installed 1024G, allocated 75G, free 948G
        Lgroup resources: 1-4 (CPU); 1-4 (memory)
        Latency: 18
lgroup 1 (leaf):
        Children: none, Parent: 0
        CPUs: 0-63
        Memory: installed 256G, allocated 18G, free 238G
        Lgroup resources: 1 (CPU); 1 (memory)
        Load: 0.0227
        Latency: 12
lgroup 2 (leaf):
        Children: none, Parent: 0
        CPUs: 64-127
        Memory: installed 256G, allocated 15G, free 241G
        Lgroup resources: 2 (CPU); 2 (memory)
        Load: 0.000153
        Latency: 12
lgroup 3 (leaf):
        Children: none, Parent: 0
        CPUs: 128-191
        Memory: installed 256G, allocated 20G, free 236G
        Lgroup resources: 3 (CPU); 3 (memory)
        Load: 0.016
        Latency: 12
lgroup 4 (leaf):
        Children: none, Parent: 0
        CPUs: 192-255
        Memory: installed 256G, allocated 23G, free 233G
        Lgroup resources: 4 (CPU); 4 (memory)
        Load: 0.00824
        Latency: 12

Lgroup latencies:

------------------
  |  0  1  2  3  4
------------------
0 | 18 18 18 18 18
1 | 18 12 18 18 18
2 | 18 18 12 18 18
3 | 18 18 18 12 18
4 | 18 18 18 18 12
------------------

CPU Locality

The lgrpinfo utility shown above already presents CPU locality clearly. Here is another way to retrieve the association between CPU ids and lgroups.

# echo ::lgrp -p | mdb -k

   LGRPID  PSRSETID      LOAD      #CPU      CPUS
        1         0     17873        64      0-63
        2         0     17755        64      64-127
        3         0      2256        64      128-191
        4         0     18173        64      192-255
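
And for those who prefer a programmatic view, the liblgrp API's lgrp_cpus() routine can enumerate the CPUs in each lgroup. The sketch below is only an illustration (the structure and minimal error handling are mine); it walks the children of the root lgroup and prints the CPU ids directly contained in each. Link with -llgrp.

/*
 * Sketch: enumerate the CPUs directly contained in each child of the
 * root lgroup using the liblgrp(3LIB) API. Compile with: cc ... -llgrp
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/processor.h>
#include <sys/lgrp_user.h>

int main() {

        lgrp_cookie_t cookie = lgrp_init(LGRP_VIEW_OS);
        if (cookie == LGRP_COOKIE_NONE) return 1;

        /* how many children does the root lgroup have? */
        int nchildren = lgrp_children(cookie, lgrp_root(cookie), NULL, 0);
        if (nchildren <= 0) return 1;

        lgrp_id_t *children = malloc(nchildren * sizeof (lgrp_id_t));
        lgrp_children(cookie, lgrp_root(cookie), children, nchildren);

        for (int i = 0; i < nchildren; ++i) {
                /* CPUs directly contained in this lgroup, not inherited ones */
                int ncpus = lgrp_cpus(cookie, children[i], NULL, 0,
                    LGRP_CONTENT_DIRECT);
                processorid_t *cpus = malloc(ncpus * sizeof (processorid_t));
                lgrp_cpus(cookie, children[i], cpus, ncpus, LGRP_CONTENT_DIRECT);

                printf("lgroup %d CPUs:", (int)children[i]);
                for (int j = 0; j < ncpus; ++j)
                        printf(" %d", (int)cpus[j]);
                printf("\n");

                free(cpus);
        }

        free(children);
        lgrp_fini(cookie);
        return 0;
}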

Memory Locality

The lgrpinfo utility shown above shows the total memory that belongs to each of the locality groups. However, it doesn't show exactly which memory blocks belong to which locality groups. One of mdb's debugger commands (dcmds) helps retrieve this information.

1. List memory blocks

# ldm list-devices -a memory

MEMORY
     PA                   SIZE            BOUND
     0xa00000             32M             _sys_
     0x2a00000            96M             _sys_
     0x8a00000            374M            _sys_
     0x20000000           1048064M        primary


2. Print the physical memory layout of the system

# echo ::syslayout | mdb -k

         STARTPA            ENDPA  SIZE  MG MN    STL    ETL
        20000000        200000000  7.5g   0  0      4     40
       200000000        400000000    8g   1  1    800    840
       400000000        600000000    8g   2  2   1000   1040
       600000000        800000000    8g   3  3   1800   1840
       800000000        a00000000    8g   0  0     40     80
       a00000000        c00000000    8g   1  1    840    880
       c00000000        e00000000    8g   2  2   1040   1080
       e00000000       1000000000    8g   3  3   1840   1880
      1000000000       1200000000    8g   0  0     80     c0
      1200000000       1400000000    8g   1  1    880    8c0
      1400000000       1600000000    8g   2  2   1080   10c0
      1600000000       1800000000    8g   3  3   1880   18c0
 ...
 ...

The values under the MN column (memory node) can be treated as lgroup numbers after adding 1 to them. For example, a value of zero under MN translates to lgroup 1, a value of 1 translates to lgroup 2, and so on. Better yet, the ::mnode debugger command lists the mapping of mnodes to lgroups, as shown below.

# echo ::mnode | mdb -k

           MNODE ID LGRP ASLEEP UTOTAL  UFREE UCACHE KTOTAL  KFREE KCACHE
     2075ad80000  0    1      -   249g   237g   114m   5.7g   714m      -
     2075ad802c0  1    2      -   240g   236g   288m    15g   4.8g      -
     2075ad80580  2    3      -   246g   234g   619m   9.6g   951m      -
     2075ad80840  3    4      -   247g   231g    24m     9g   897m      -
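
A related question can be answered from within a program: which lgroup holds the physical memory backing a given virtual address in the current process? The meminfo(2) system call with the MEMINFO_VLGRP request provides that. The following is a rough sketch only; consult the meminfo(2) man page for the exact meaning of the validity bits.

/*
 * Sketch: find the lgroup of the physical page backing a virtual
 * address in the current process using meminfo(2).
 */
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <sys/types.h>
#include <sys/mman.h>

int main() {

        char *buf = malloc(8192);
        if (buf == NULL) return 1;
        buf[0] = 1;     /* touch the page so physical memory is allocated */

        uint64_t inaddr   = (uint64_t)(uintptr_t)buf;
        uint_t   request  = MEMINFO_VLGRP;  /* lgroup of the backing page */
        uint64_t outdata  = 0;
        uint_t   validity = 0;

        if (meminfo(&inaddr, 1, &request, 1, &outdata, &validity) != 0)
                return 1;

        /* bit 0 of validity: address valid; bit 1: first request satisfied */
        if (validity & 2)
                printf("address %p is backed by memory in lgroup %llu\n",
                    (void *)buf, (unsigned long long)outdata);

        free(buf);
        return 0;
}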


I/O Device Locality

The -d option of the lgrpinfo utility accepts the path to an I/O device and returns the IDs of the lgroups closest to that device. Each I/O device on the system can be connected to one or more NUMA nodes, so it is not uncommon to see more than one lgroup ID returned by lgrpinfo.

eg.,

# lgrpinfo -d /dev/dsk/c1t0d0
lgroup ID : 1

# dladm show-phys | grep 10000
net4              Ethernet             up         10000  full      ixgbe0

# lgrpinfo -d /dev/ixgbe0
lgroup ID : 1

# dladm show-phys | grep ibp0
net12             Infiniband           up         32000  unknown   ibp0

# lgrpinfo -d /dev/ibp0
lgroup IDs : 1-4

NUMA IO Groups

The ::numaio_group debugger command shows information about all NUMA I/O groups.

# dladm show-phys | grep up
net0              Ethernet             up         1000   full      igb0
net12             Ethernet             up         10     full      usbecm2
net4              Ethernet             up         10000  full      ixgbe0

# echo ::numaio_group | mdb -k
            ADDR GROUP_NAME                     CONSTRAINT
    10050e1eba48 net4                  lgrp : 1
    10050e1ebbb0 net0                  lgrp : 1
    10050e1ebd18 usbecm2               lgrp : 1
    10050e1ebe80 scsi_hba_ngrp_mpt_sas1  lgrp : 4
    10050e1ebef8 scsi_hba_ngrp_mpt_sas0  lgrp : 1

Relying on prtconf is another way to find the NUMA I/O locality of an I/O device.

eg.,

# dladm show-phys | grep up | grep ixgbe
net4              Ethernet             up         10000  full      ixgbe0

== Find the device path for the network interface ==
# grep ixgbe /etc/path_to_inst | grep " 0 "
"/pci@400/pci@1/pci@0/pci@4/network@0" 0 "ixgbe"

== Find NUMA IO Lgroups ==
# prtconf -v /devices/pci@400/pci@1/pci@0/pci@4/network@0
 ...
    Hardware properties:
 ...
        name='numaio-lgrps' type=int items=1
            value=00000001
 ...

Process, Thread Locality

Examples:

# prstat -H

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU LGRP PROCESS/NLWP
  1865 root      420M  414M sleep    59    0 447:51:13 0.1%    2 java/108
  3659 oracle   1428M 1413M sleep    38    0  68:39:28 0.0%    4 oracle/1
  1814 oracle    155M  110M sleep    59    0  70:45:17 0.0%    4 gipcd.bin/9
     8 root        0K    0K sleep    60    -  70:52:21 0.0%    0 vmtasks/257
  3765 root      447M  413M sleep    59    0  29:24:20 0.0%    3 crsd.bin/43
  3949 oracle    505M  456M sleep    59    0   0:59:42 0.0%    2 java/124
 10825 oracle   1097M 1074M sleep    59    0  18:13:27 0.0%    3 oracle/1
  3941 root      210M  184M sleep    59    0  20:03:37 0.0%    4 orarootagent.bi/14
  3743 root      119M   98M sleep   110    -  24:53:29 0.0%    1 osysmond.bin/13
  3324 oracle    266M  225M sleep   110    -  19:52:31 0.0%    4 ocssd.bin/34
  1585 oracle    122M   91M sleep    59    0  18:06:34 0.0%    3 evmd.bin/10
  3918 oracle    168M  144M sleep    58    0  14:35:31 0.0%    1 oraagent.bin/28
  3427 root      112M   80M sleep    59    0  12:34:28 0.0%    4 octssd.bin/12
  3635 oracle   1425M 1406M sleep   101    -  13:55:31 0.0%    4 oracle/1
  1951 root      183M  161M sleep    59    0   9:26:51 0.0%    4 orarootagent.bi/21
Total: 251 processes, 2414 lwps, load averages: 1.37, 1.46, 1.47

== Locality group 2 is the home lgroup of the java process with pid 1865 == 

# plgrp 1865

     PID/LWPID    HOME
    1865/1        2
    1865/2        2
 ...
 ...
    1865/22       4
    1865/23       4
 ...
 ...
    1865/41       1
    1865/42       1
 ...
 ...
    1865/60       3
    1865/61       3
 ...
 ...

# plgrp 1865 | awk '{print $2}' | grep 2 | wc -l
      30

# plgrp 1865 | awk '{print $2}' | grep 1 | wc -l
      25

# plgrp 1865 | awk '{print $2}' | grep 3 | wc -l
      25

# plgrp 1865 | awk '{print $2}' | grep 4 | wc -l
      28


== Let's reset the home lgroup of the java process id 1865 to 4 ==


# plgrp -H 4 1865
     PID/LWPID    HOME
    1865/1        2 => 4
    1865/2        2 => 4
    1865/3        2 => 4
    1865/4        2 => 4
 ...
 ...
    1865/184      1 => 4
    1865/188      4 => 4

# plgrp 1865 | awk '{print $2}' | egrep "1|2|3" | wc -l
       0

# plgrp 1865 | awk '{print $2}' | grep 4 | wc -l
     108

# prstat -H -p 1865

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU LGRP PROCESS/NLWP
  1865 root      420M  414M sleep    59    0 447:57:30 0.1%    4 java/108
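
plgrp changes home lgroups from the command line. From within a program, a comparable effect can be achieved through the lgroup affinity interfaces: setting a strong affinity between a thread and an lgroup normally re-homes the thread to that lgroup. The following is a hedged sketch, with lgroup 4 chosen purely for illustration; link with -llgrp.

/*
 * Sketch: re-home the calling thread by declaring a strong affinity
 * for a chosen lgroup. Compile with: cc ... -llgrp
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/procset.h>
#include <sys/lgrp_user.h>

int main() {

        lgrp_id_t target = 4;   /* lgroup id picked purely for illustration */

        printf("home lgroup before: %d\n", (int)lgrp_home(P_LWPID, P_MYID));

        if (lgrp_affinity_set(P_LWPID, P_MYID, target, LGRP_AFF_STRONG) != 0) {
                perror("lgrp_affinity_set");
                return 1;
        }

        printf("home lgroup after : %d\n", (int)lgrp_home(P_LWPID, P_MYID));
        return 0;
}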

== List the home lgroup of all processes ==

# ps -aeH
  PID LGRP TTY         TIME CMD
    0    0 ?           0:11 sched
    5    0 ?           4:47 zpool-rp
    1    4 ?          21:04 init
    8    0 ?        4253:54 vmtasks
   75    4 ?           0:13 ipmgmtd
   11    3 ?           3:09 svc.star
   13    4 ?           2:45 svc.conf
 3322    1 ?         301:51 cssdagen
 ...
11155    3 ?           0:52 oracle
13091    4 ?           0:00 sshd
13124    3 pts/5       0:00 bash
24703    4 pts/8       0:00 bash
12812    2 pts/3       0:00 bash
 ...

== Find out the lgroups from which shared memory segments are allocated ==

# pmap -Ls 24513 | egrep "Lgrp|256M|2G"

         Address       Bytes Pgsz Mode   Lgrp Mapped File
0000000400000000   33554432K   2G rwxs-    1   [ osm shmid=0x78000047 ]
0000000C00000000     262144K 256M rwxs-    3   [ osm shmid=0x78000048 ]
0000000C10000000     524288K 256M rwxs-    2   [ osm shmid=0x78000048 ]
0000000C30000000     262144K 256M rwxs-    3   [ osm shmid=0x78000048 ]
0000000C40000000     524288K 256M rwxs-    1   [ osm shmid=0x78000048 ]
0000000C60000000     262144K 256M rwxs-    2   [ osm shmid=0x78000048 ]

== Apply MADV_ACCESS_LWP policy advice to a segment at a specific address ==

# pmap -Ls 1865 | grep anon

00000007DAC00000      20480K   4M rw---    4   [ anon ]
00000007DC000000       4096K    - rw---    -   [ anon ]
00000007DFC00000      90112K   4M rw---    4   [ anon ]
00000007F5400000     110592K   4M rw---    4   [ anon ]

# pmadvise -o 7F5400000=access_lwp 1865

# pmap -Ls 1865 | grep anon
00000007DAC00000      20480K   4M rw---    4   [ anon ]
00000007DC000000       4096K    - rw---    -   [ anon ]
00000007DFC00000      90112K   4M rw---    4   [ anon ]
00000007F5400000      73728K   4M rw---    4   [ anon ]
00000007F9C00000      28672K    - rw---    -   [ anon ]
00000007FB800000       8192K   4M rw---    4   [ anon ]
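
pmadvise applies the advice from outside the process. The same advice can be given from inside the code with madvise(3C) and the Solaris-specific MADV_ACCESS_LWP flag, which indicates that the next LWP to touch the range will access it heavily, so its memory can be placed close to that thread. A minimal sketch, with an mmap()ed region standing in for the application's real buffer:

/*
 * Sketch: ask the kernel to place this range near the next thread
 * that touches it heavily (the in-process equivalent of the advice
 * applied above with pmadvise).
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/mman.h>

int main() {

        size_t len = 4 * 1024 * 1024;

        /* madvise() needs a page-aligned range; mmap() provides one */
        caddr_t addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (addr == MAP_FAILED) return 1;

        if (madvise(addr, len, MADV_ACCESS_LWP) != 0) {
                perror("madvise");
                return 1;
        }

        /* ... hand the range over to the worker thread that will use it ... */

        munmap(addr, len);
        return 0;
}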





Saturday, February 28, 2015
 
Programming in C: Few Tidbits #5

1. Splitting a long string into multiple lines

Let's start with an example. Here is a sample string. It'd be nice to improve readability by splitting it into multiple lines.

const char *quote = "The shepherd drives the wolf from the sheep's for which the sheep thanks the shepherd as his liberator, while the wolf denounces him for the same act as the destroyer of liberty. Plainly, the sheep and the wolf are not agreed upon a definition of liberty.";

A couple of ideas are sketched below.
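
Two standard-C options, shown here with a slightly shortened quote: backslash-newline continuation (the preprocessor splices the physical lines back into one logical line) and adjacent string literal concatenation (the compiler merges the pieces, keeping indentation out of the string).

#include <stdio.h>

/* Option 1: backslash-newline continuation. The preprocessor splices
   the source lines into one; note that any indentation at the start of
   a continued line would become part of the string, hence flush left. */
const char *quote1 = "The shepherd drives the wolf from the sheep's for which \
the sheep thanks the shepherd as his liberator, while the wolf denounces him \
for the same act as the destroyer of liberty.";

/* Option 2: adjacent string literals are concatenated by the compiler.
   Indentation stays outside the literals, so this form is usually cleaner. */
const char *quote2 = "The shepherd drives the wolf from the sheep's for which "
        "the sheep thanks the shepherd as his liberator, while the wolf "
        "denounces him for the same act as the destroyer of liberty.";

int main() {
        puts(quote1);
        puts(quote2);
        return 0;
}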

2. Simultaneous writing to multiple streams

The straightforward approach is to make multiple standard I/O library function calls to write to the desired streams.

eg.,

..
fprintf(stdout, "some formatted string");
fprintf(stderr, "some formatted string");
fprintf(filepointer, "some formatted string");
..

This approach works well as long as there are only a few occurrences of such writing. However, if it needs to be repeated many times over the lifetime of a process, it can be simplified by wrapping the calls that write to the different streams in a single function, so that one call to the wrapper takes care of writing to all of them. If the number of arguments in the formatted string is not constant or not known in advance, one option is to make the wrapper a variadic function so that it accepts a variable number of arguments. Here is an example.

The following example writes all messages to the log file, writes only informative messages to standard output (stdout), and writes only fatal errors to the high priority log. Without the logfmtstring() wrapper function, the same code would have needed five different standard I/O library calls rather than the three in the sample code.

% cat multstreams.c
#include <stdio.h>
#include <string.h>
#include <stdarg.h>

FILE *log, *highprioritylog;

void logfmtstring(const char *fmtstr, ...) {

        va_list args, argscopy;
        va_start(args, fmtstr);

        /* a va_list is indeterminate after vfprintf() consumes it,
           so hand each call its own copy */
        va_copy(argscopy, args);
        vfprintf(log, fmtstr, argscopy);
        va_end(argscopy);

        if (strstr(fmtstr, "[info]") != NULL) {
                va_copy(argscopy, args);
                vfprintf(stdout, fmtstr, argscopy);
                va_end(argscopy);
        }

        if (strstr(fmtstr, "[fatal]") != NULL) {
                va_copy(argscopy, args);
                vfprintf(highprioritylog, fmtstr, argscopy);
                va_end(argscopy);
        }

        va_end(args);

}

void some_function(..) {

        log = fopen("app.log", "w");
        highprioritylog = fopen("app_highpriority.log", "w");

        ...

        logfmtstring("[info] successful entries: %d. failed entries: %d\n", completecount, failedcount);
        logfmtstring("[fatal] billing system not available\n");
        logfmtstring("[error] unable to ping internal system at %s\n", ip);

        ...

        fclose(log);
        fclose(highprioritylog);

}

% cc -o mstreams multstreams.c

% ./mstreams
[info] successful entries: 52. failed entries: 7

% cat app.log
[info] successful entries: 52. failed entries: 7
[fatal] billing system not available
[error] unable to ping internal system at 10.135.42.36

% cat app_highpriority.log
[fatal] billing system not available
%

Web search keywords: C Variadic Functions

3. Declaring variables within the scope of a CASE label in a SWITCH block

It is possible to declare and use variables within the scope of a case label, with one caveat -- a label must be followed by a statement, and a declaration does not count as one. If the first thing after a case label is a declaration, the compiler throws an error during compilation such as the following.

% cat switch.c
#include <stdio.h>

int main() {
        switch(NULL) {
                default:
                        int cyear = 2015;
                        printf("\nin default ..");
                        printf("\nyear = %d", cyear);
        }
}

% cc -V -o switch switch.c
cc: Sun C 5.12 SunOS_sparc Patch 148917-08 2014/09/10
acomp: Sun C 5.12 SunOS_sparc Patch 148917-08 2014/09/10
"switch.c", line 6: syntax error before or at: int
"switch.c", line 8: undefined symbol: cyear
cc: acomp failed for switch.c

The above failure can be fixed either by moving the variable declaration so that it follows a valid statement, where possible, or by adding a dummy (null) statement right after the case label.

eg.,

1. Move the declaration from right after the case label to any place after a valid statement.

/* works */
switch(NULL) {
        default:
                printf("\nin default ..");
                int cyear = 2015;
                printf("\nyear = %d", cyear);
}

2. Add a dummy or null statement right after the case label.

/* works */
switch(NULL) {
        default:
                ; // NULL statement
                int cyear = 2015;
                printf("\nin default ..");
                printf("\nyear = %d", cyear);
}

3. Yet another option is to create a scope with curly braces ({}) for the case in which the variables are declared.

/* works too */
switch(NULL) {
        default:
        {
                int cyear = 2015;
                printf("\nin default ..");
                printf("\nyear = %d", cyear);
        }
}

Also see: Keyword – switch, case, default





Saturday, January 31, 2015
 
Programming in C: Few Tidbits #4

1. Using Wildcards in Filename Pattern Matching

Relying on the *stat() APIs is not much of an option when using wildcards to match a filename pattern. Some of the alternatives involve traversing a directory and checking each file for a match using fnmatch(), or using the system() function to execute an equivalent shell command. Another option that is well suited for this task is the glob() API, which is part of the standard C library functions. (I believe glob() depends on fnmatch() for finding matches.)

Here is a simple example that displays the number of matches found for the pattern "/tmp/lint_*", along with a listing of the matches.

% ls -1 /tmp/lint_*
/tmp/lint_AAA.21549.0vaOfQ
/tmp/lint_BAA.21549.1vaOfQ
/tmp/lint_CAA.21549.2vaOfQ
/tmp/lint_DAA.21549.3vaOfQ
/tmp/lint_EAA.21549.4vaOfQ
/tmp/lint_FAA.21549.5vaOfQ
/tmp/lint_GAA.21549.6vaOfQ


% cat match.c
#include <stdio.h>
#include <glob.h>

int main(int argc, char **argv) {

        glob_t buf;

        if (argc == 1) return 0;

        glob(argv[1], 0 , NULL , &buf);

        printf("\nNumber of matches found for pattern '%s': %d\n",
                argv[1], buf.gl_pathc);

        for (int i = 0; i < buf.gl_pathc; ++i) {
                printf("\n\t%d. %s", (i + 1), buf.gl_pathv[i]);
        }

        globfree(&buf);

        return 0;
}

% cc -o match match.c

% ./match /tmp/lint_\*

Number of matches found for pattern '/tmp/lint_*': 7

        1. /tmp/lint_AAA.21549.0vaOfQ
        2. /tmp/lint_BAA.21549.1vaOfQ
        3. /tmp/lint_CAA.21549.2vaOfQ
        4. /tmp/lint_DAA.21549.3vaOfQ
        5. /tmp/lint_EAA.21549.4vaOfQ
        6. /tmp/lint_FAA.21549.5vaOfQ
        7. /tmp/lint_GAA.21549.6vaOfQ

Please check the man page out for details -- glob(3C).

2. Microtime[stamp]

One of the old blog posts has an example that extracts the current timestamp using the time API. It shows the timestamp in the standard format month-date-year hour:min:sec. In this post, let's add microseconds to the timestamp.

Here is the sample code.

% cat microtime.c
#include <stdio.h>
#include <time.h>
#include <sys/time.h>   /* gettimeofday(), struct timeval */

int main() {

        char timestamp[80], etimestamp[80];
        struct timeval tmval;
        struct tm *curtime;

        gettimeofday(&tmval, NULL);

        curtime = localtime(&tmval.tv_sec);
        if (curtime == NULL) return 1;

        /* tv_usec is a long (suseconds_t), hence the %06ld specifier */
        strftime(timestamp, sizeof(timestamp), "%m-%d-%Y %X.%%06ld", curtime);
        snprintf(etimestamp, sizeof(etimestamp), timestamp, (long)tmval.tv_usec);

        printf("\ncurrent time: %s\n", etimestamp);
        return 0;

}

% cc -o mtime microtime.c

% ./mtime
current time: 01-31-2015 15:49:26.041111

% ./mtime
current time: 01-31-2015 15:49:34.575214

One major change from the old approach is the reliance on gettimeofday(), since it returns a structure [timeval] with a member variable [tv_usec] representing the microseconds.

strftime() fills in the date/time data in the timestamp variable as per the specifiers used in the time format (third argument). By the time strftime() completes execution, timestamp has month-date-year hr:min:sec filled out. The subsequent snprintf() call fills in the only remaining piece of time data -- the microseconds -- using the tv_usec member of the timeval structure, and writes the updated timestamp to a new variable, etimestamp.

Credit: stackoverflow user unwind.

3. Concatenating Multi-Formatted Strings

I have my doubts about this heading, so let me show an example first. The following rudimentary example attempts to construct a sentence such as "value of pi = (22/7) = 3.14". In other words, the sentence is a mixture of character strings, integers, a floating point number and special characters.

% cat fmt.c
#include <stdio.h>
#include <string.h>

int main() {

        char tstr[48];
        char pistr[] = "value of pi = ";
        int num = 22, den = 7;
        float pi = ((float)num/den);

        char snum[8], sden[8], spi[8];

        sprintf(sden, "%d", den);
        sprintf(snum, "%d", num);
        sprintf(spi, "%0.2f", pi);

        strcpy(tstr, pistr);
        strcat(tstr, "(");
        strcat(tstr, snum);
        strcat(tstr, "/");
        strcat(tstr, sden);
        strcat(tstr, ") = ");
        strcat(tstr, spi);

        puts(tstr);
        return 0;

}

% cc -o fmt fmt.c

% ./fmt
value of pi = (22/7) = 3.14

There is nothing seriously wrong with the above code. It is just that it uses a bunch of sprintf(), strcpy() and strcat() calls to construct the target string, and it overallocates the memory required for the actual string.

The same effect can be achieved with asprintf(), and the resulting code is much smaller and easier to maintain. This function also relieves the developer of the burden of allocating a buffer of the appropriate size. In general, overallocation wastes memory, while underallocation likely leads to buffer overflows, posing unnecessary security risks. asprintf() does not relieve developers of two responsibilities, though: checking the return value to see whether the call succeeded, and freeing the buffer when done with it. Ignoring those two aspects leads to program failures in the worst case, and memory leaks are almost guaranteed.

Here is the alternate version that achieves the desired effect by making use of asprintf().

% cat ifmt.c
#include <stdio.h>
#include <stdlib.h>

int main() {

        char *tstr;
        int num = 22, den = 7;
        float pi = ((float)num/den);

        int ret = asprintf(&tstr, "value of pi = (%d/%d) = %0.2f", num, den, pi);

        if (ret == -1) return 1;

        puts(tstr);
        free(tstr);
        return 0;

}

% cc -o ifmt ifmt.c

% ./ifmt
value of pi = (22/7) = 3.14

Also see: snprintf()





Tuesday, December 23, 2014
 
Solaris Studio : C/C++ Dynamic Analysis

First, a reminder - Oracle Solaris Studio 12.4 is now generally available. Check the Solaris Studio 12.4 Data Sheet before downloading the software from Oracle Technology Network.

Dynamic Memory Usage Analysis

The Code Analyzer tool in the Oracle Solaris Studio compiler suite can analyze static data, dynamic memory access data, and code coverage data collected from binaries that were compiled with the C/C++ compilers in Solaris Studio 12.3 or later. Code Analyzer is supported on Solaris and Oracle Enterprise Linux.

Refer to the static code analysis blog entry for a quick summary of the steps involved in performing static analysis. The focus of this blog entry is the dynamic portion of the analysis. In this context, dynamic analysis is the evaluation of an application during runtime for memory related errors. The main objective is to find and debug memory management errors -- robustness and security assurance are nice side effects, however limited their extent may be.

Code Analyzer relies on another primary Solaris Studio tool, discover, to find runtime errors that are often caused by memory mismanagement. discover looks for potential errors such as accesses outside the bounds of the stack or an array, reads and writes of unallocated memory, NULL pointer dereferences, memory leaks and double frees. The full list of memory management issues analyzed by Code Analyzer/discover is at: Dynamic Memory Access Issues

discover performs the dynamic analysis by instrumenting the code so that it can keep track of memory operations while the binary is running. During runtime, discover monitors the application's use of memory by interposing on standard memory allocation calls such as malloc(), calloc(), memalign(), valloc() and free(). Fatal memory access errors are detected and reported at the instant the incident occurs, so it is easy to correlate the failure with the actual source. This behavior makes it somewhat easier to detect and fix memory management problems in large applications. However, the effectiveness of this kind of analysis depends heavily on the flow of control and data during the execution of the target code -- hence it is important to exercise the application with a variety of test inputs that maximize code coverage.

High-level steps in using Code Analyzer for Dynamic Analysis

Given the enhancements and incremental improvements in analytical tools, Solaris Studio 12.4 is recommended for this exercise.

  1. Build the application with debug flags

    -g (C) or -g0 (C++) options generate debug information. It enables Code Analyzer to display source code and line number information for errors and warnings.

    • Linux users: specify -xannotate option on compile/link line in addition to -g and other options
  2. Instrument the binary with discover

    % discover -a -H <filename>.%p.html -o <instrumented_binary> <original_binary>

    where:

    • -a : write the error data to binary-name.analyze/dynamic directory for use by Code Analyzer
    • -H : write the analysis report to <filename>.<pid>.html when the instrumented binary is executed. %p expands to the process id of the application. If you prefer the analysis report in a plain text file, use -w <filename>.%p.txt instead
    • -o : write the instrumented binary to <instrumented_binary>

    Check Command-Line Options page for the full list of discover supported options.

  3. Run the instrumented binary

    .. to collect the dynamic memory access data.

    % ./<instrumented_binary> <args>

  4. Finally examine the analysis report for errors and warnings

Example

The following example demonstrates the above steps using the Solaris Studio 12.4 C compiler and the discover command-line tool. The same code was used to demonstrate the static analysis steps as well.

% cat someapp.c
#include <stdio.h>
#include <stdlib.h>

#define SIZE 3

int main() {

        int *arrX[SIZE];

        for (int i = 0; i < SIZE; ++i) {
                arrX[i] = calloc(1, sizeof(int));
                *arrX[i] = (i*5);
        }

        for (int i = 1; i <= SIZE; ++i) {
                printf("\narrX[%d] = %d", i, *arrX[i]);
                free(arrX);
        }

        return 0;
}

% cc -V
cc: Sun C 5.13 SunOS_sparc 2014/10/20

% cc -g -o someapp someapp.c

% discover -a -w dynanalysis.%p.txt -o someapp.dyn someapp
discover: (warning): /var/tmp/someapp: 94.3% of code instrumented (4 out of 5 functions).

% ./someapp.dyn

arrX[1] = 5
arrX[2] = 10Segmentation Fault (core dumped)

% cat dynanalysis.4492.txt
ERROR 1 (BFM): freeing wrong memory block "arrX" at address 0xffbffb00 at:
        main() + 0x214  <someapp.c:17>
                14:
                15:     for (int i = 1; i <= SIZE; ++i) {
                16:             printf("\narrX[%d] = %d", i, *arrX[i]);
                17:=>           free(arrX);
                18:     }
                19:
                20:     return 0;
        _start() + 0x108
ERROR 2 (UMR): accessing uninitialized data at address 0xffbffb0c (4 bytes) on the stack at:
        main() + 0x1c0  <someapp.c:16>
                13:     }
                14:
                15:     for (int i = 1; i <= SIZE; ++i) {
                16:=>           printf("\narrX[%d] = %d", i, *arrX[i]);
                17:             free(arrX);
                18:     }
                19:
        _start() + 0x108
ERROR 3 (IMR): read from invalid memory at address 0x0 (4 bytes)  at:
        main() + 0x200  <someapp.c:16>
                13:     }
                14:
                15:     for (int i = 1; i <= SIZE; ++i) {
                16:=>           printf("\narrX[%d] = %d", i, *arrX[i]);
                17:             free(arrX);
                18:     }
                19:
        _start() + 0x108


Reference & Recommended Reading:

  1. Oracle Solaris Studio 12.4 : Code Analyzer User's Guide
  2. Oracle Solaris Studio 12.4 : Discover and Uncover User's Guide





Saturday, December 20, 2014
 
Economic Effects of Inflation

Read the first part in this series: Inflation

The following are some of the consequences of inflation, at a very high level.

Source & References: Various www sources including investopedia, wikipedia, economicshelp.org





