libumem is a userland memory allocator (a library) with debugging features that enable easy identification and troubleshooting of process memory leaks and memory access errors. The target application must either be linked with the library, or the library must be preloaded into the address space of the target process, before users can take advantage of the diagnostic features offered by libumem.
Besides the diagnostic support, libumem strives to improve memory allocation performance by creating and managing multiple independent caches, each with a different buffer size. This results in good scaling due to reduced lock contention, especially in multi-threaded applications with many threads allocating and deallocating memory concurrently. The tradeoff for this improved scalability is a slight memory overhead. Keep in mind that some of this additional memory will be accounted for when examining the process for memory leaks.
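For illustration, here's a minimal sketch (not part of the original example) of the kind of workload that benefits from those per-size caches: a few threads allocating and freeing buffers of different sizes concurrently. The file name, thread count and sizes are made up; compile with something like cc -mt -o alloctest alloctest.c and preload libumem as described in the steps below.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NTHREADS 4
#define NITERS   100000

/* Each thread repeatedly allocates and frees buffers of one size.
 * With libumem in the picture, each size class is served from its own
 * cache (umem_alloc_16, umem_alloc_32, and so on), so the threads
 * rarely contend on a single allocator lock. */
static void *worker(void *arg)
{
        size_t size = *(size_t *)arg;
        int i;

        for (i = 0; i < NITERS; i++) {
                void *p = malloc(size);
                if (p != NULL)
                        free(p);
        }
        return NULL;
}

int main(void)
{
        static size_t sizes[NTHREADS] = { 16, 32, 64, 128 };
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, &sizes[i]);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);

        printf("done\n");
        return 0;
}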
The rest of this post details the steps involved in examining memory leaks, using a simple native process as an example.
High-Level Steps:
Runtime debugging features such as memory leak detection and buffer overflow detection can be controlled via the UMEM_* environment variables. Check the umem_debug(3MALLOC) man page for the complete list of environment variables along with a brief description of each.
Check whether the target application was linked with the libumem library (-lumem). If not, preload /usr/lib/libumem.so.* before running the application.
1) Executable linked with libumem library

% cc -g -o mleak -lumem leak.c
% ldd mleak
        libumem.so.1 =>  /lib/libumem.so.1
        libc.so.1 =>     /lib/libc.so.1
2) Executable not linked with libumem library

% cc -g -o mleak leak.c
% ldd mleak
        libc.so.1 =>     /lib/libc.so.1
% export LD_PRELOAD_32=/usr/lib/libumem.so.1
% export UMEM_DEBUG=default
% ./mleak
Memory leaks can be examined with the help of the modular debugger, mdb. There are a couple of possibilities:
attach a live process to the mdb debugger and run relevant dcmds such as ::findleaks

# echo ::findleaks | mdb -p `pgrep mleak`
generate a core image of the running process and examine the core file for memory leaks

# gcore `pgrep mleak`
gcore: core.7487 dumped
# echo ::findleaks | mdb core.7487
Complete Example:
% cat leak.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int *intblk()
{
        int *someint = malloc( sizeof(int) * 2 );
        return malloc( sizeof(int) );
}

void main()
{
        int i = 0;
        while ( 1 ) {
                int *ptr = intblk();
                *ptr = (++i * 2);
                if ( !(*ptr % 3) ) {
                        continue;
                }
                if ( *ptr > 500 ) {
                        abort();
                }
                free( ptr );
                sleep( 1 );
        }
}

% cc -g -o mleak leak.c
# export UMEM_DEBUG=default
# export LD_PRELOAD_32=/usr/lib/libumem.so.1
# ./mleak &
[1] 22427
High-level summary memory leak report
mdb's findleaks dcmd displays the potential memory leaks. The summary leak report shows the bufctl address along with the topmost stack frame at the point where the memory was allocated.
eg., contd.,
# mdb -p `pgrep mleak`
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 ]
> ::findleaks
CACHE     LEAKED   BUFCTL CALLER
000b0008      52 002cc780 intblk+4
000b0008      17 002cc8e8 intblk+0x10
------------------------------------------------------------------------
   Total      69 buffers, 1104 bytes
Stack trace for each leak
To get the stack trace for a memory allocation that resulted in a leak, dump the bufctl structure. The address of this structure can be obtained from the BUFCTL column in the findleaks output shown above.
eg., contd.,
> 002cc780::bufctl_audit
            ADDR  BUFADDR     TIMESTAMP THREAD  CACHE LASTLOG CONTENTS
          2cc780   2c3fe0 50963bfdaab48      1  b0008       0        0
                 libumem.so.1`umem_cache_alloc+0x148
                 libumem.so.1`umem_alloc+0x6c
                 libumem.so.1`malloc+0x28
                 intblk+4
                 main+8
                 _start+0x108

> 002cc8e8::bufctl_audit
            ADDR  BUFADDR     TIMESTAMP THREAD  CACHE LASTLOG CONTENTS
          2cc8e8   2c3f80 509643711ef91      1  b0008       0        0
                 libumem.so.1`umem_cache_alloc+0x148
                 libumem.so.1`umem_alloc+0x6c
                 libumem.so.1`malloc+0x28
                 intblk+0x10
                 main+8
                 _start+0x108
Detailed report in one shot
To obtain a detailed leak report that shows the summary along with the stack trace for each memory allocation that ended up as a leak, run the findleaks dcmd with the -d option. The -fv options provide some additional detail.
eg., contd.,
> ::findleaks -d
CACHE     LEAKED   BUFCTL CALLER
000b0008      52 002cc780 intblk+4
000b0008      17 002cc8e8 intblk+0x10
------------------------------------------------------------------------
   Total      69 buffers, 1104 bytes

umem_alloc_16 leak: 52 buffers, 16 bytes each, 832 bytes total
            ADDR  BUFADDR     TIMESTAMP THREAD  CACHE LASTLOG CONTENTS
          2cc780   2c3fe0 50963bfdaab48      1  b0008       0        0
                 libumem.so.1`umem_cache_alloc+0x148
                 libumem.so.1`umem_alloc+0x6c
                 libumem.so.1`malloc+0x28
                 intblk+4
                 main+8
                 _start+0x108

umem_alloc_16 leak: 17 buffers, 16 bytes each, 272 bytes total
            ADDR  BUFADDR     TIMESTAMP THREAD  CACHE LASTLOG CONTENTS
          2cc8e8   2c3f80 509643711ef91      1  b0008       0        0
                 libumem.so.1`umem_cache_alloc+0x148
                 libumem.so.1`umem_alloc+0x6c
                 libumem.so.1`malloc+0x28
                 intblk+0x10
                 main+8
                 _start+0x108
Histogram of the size of the non-freed allocations
[cache]::umem_malloc_info reports information about malloc() calls, by size, for the memory allocations that weren't free()'d, and hence leaked. The ::umem_malloc_info output can be used to figure out the maximum allocation size in a particular cache.
eg., contd.,
> 000b0008::umem_malloc_info -gz
CACHE            BUFSZ MAXMAL BUFMALLC  AVG_MAL  MALLOCED   OVERHEAD  %OVER
000b0008            16      8       69        7       484       5043 1041.9%

malloc size  ------------------ Distribution ------------------ count
          4 |@@@@@@@@@@@@                                           17
          8 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  52
The above output shows that there were 17 malloc( 4 ) calls (4 bytes each) at intblk+0x10 with no matching free(<addr>) found anywhere. The disassembled code near that address is shown below.
eg., contd.,
> intblk+0x10::dis
__init_kernel_mode_misaligned_data_trap_handler+8:      nop
0x10994:        illtrap   0x0
0x10998:        illtrap   0x10000
0x1099c:        illtrap   0x10000
0x109a0:        illtrap   0x10000
0x109a4:        illtrap   0x10000
intblk:         save      %sp, -0x68, %sp
intblk+4:       call      +0x1017c      <PLT=libumem.so.1`malloc>
intblk+8:       mov       0x8, %o0
intblk+0xc:     st        %o0, [%fp - 0x8]
intblk+0x10:    call      +0x10170      <PLT=libumem.so.1`malloc>
intblk+0x14:    mov       0x4, %o0
intblk+0x18:    st        %o0, [%fp - 0x4]
intblk+0x1c:    ld        [%fp - 0x4], %l0
intblk+0x20:    or        %l0, %g0, %i0
intblk+0x24:    ret
intblk+0x28:    restore
0x109d4:        illtrap   0x10000
0x109d8:        illtrap   0x10000
0x109dc:        illtrap   0x10000
0x109e0:        illtrap   0x10000
> ::quit
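Based on the above report, one way to fix the example is sketched below (my own illustration, not part of the original post): drop the unused 8-byte allocation in intblk(), which accounts for the 52 leaks at intblk+4, and free ptr before the continue statement, which accounts for the 17 leaks at intblk+0x10.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int *intblk()
{
        /* the stray malloc( sizeof(int) * 2 ) is gone -- nothing leaks here */
        return malloc( sizeof(int) );
}

void main()
{
        int i = 0;
        while ( 1 ) {
                int *ptr = intblk();
                *ptr = (++i * 2);
                if ( !(*ptr % 3) ) {
                        free( ptr );    /* this path used to skip the free() */
                        continue;
                }
                if ( *ptr > 500 ) {
                        abort();
                }
                free( ptr );
                sleep( 1 );
        }
}

With these two changes, rerunning ::findleaks against the process should report no leaks for the intblk() call sites.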
Other related mdb dcmds of interest: ::umastat
Trivia:
Some of libumem's debugging features work only for allocations that are smaller than 16 KB in size; allocations larger than 16 KB may have reduced support. Such allocations are usually referred to as oversized allocations in the memory leak reports generated by the findleaks dcmd.
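For instance, a leaked buffer bigger than 16 KB would be reported by findleaks as an oversized leak rather than under one of the regular umem_alloc_* caches. Here's a tiny made-up example (the 64 KB size and the sleep are just for illustration):

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        /* 64 KB is well above the 16 KB threshold, so this allocation is
         * handled by libumem's oversize path rather than a fixed-size cache */
        char *big = malloc( 64 * 1024 );

        (void) big;     /* intentionally never freed */
        sleep( 60 );    /* keep the process alive long enough to attach mdb */
        return 0;
}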