Mandalika's scratchpad [ Work blog @Oracle | Stock Market Notes | My Music Compositions ]



Friday, September 30, 2005
 
Setting up Webmin on Solaris 10

Webmin, a web-based interface for system administration, is bundled with Solaris 10. It gets installed by default, but we need to set it up if we want to configure and administer the system over the web. The Java-based rich client Solaris Management Console (/usr/sbin/smc) is also available on Solaris 8 and later; however, Webmin is faster and more lightweight than SMC.

Instructions for the setup:
  1. Log in as root
  2. cd /usr/sfw/lib/webmin
  3. Run the setup script
    • ./setup.sh
    • Enter 1 for the operating system, i.e., Sun Solaris. Sun Java Desktop System is listed as option 59; perhaps that is the Linux version of JDS
    • Enter 6 for the version, i.e., Sun Solaris 10
    • Enter a port number for the web server. 10000 is the default port
    • Installation completes without asking any further questions
  4. Open the URL https://localhost:10000 in a web browser. Replace 10000 with the configured port if the default value has been changed
  5. Log in as root, and have fun
  6. To uninstall Webmin, simply run /etc/webmin/uninstall.sh

Here's a screenshot of Webmin running on Solaris 10/JDS:

Note:
  1. For other versions of Solaris, download Webmin from prdownloads.sourceforge.net/webadmin/webmin-1.230.tar.gz, and follow the instructions posted here
  2. Webmin documentation is at www.webmin.com/index2.html
  3. According to the supported operating systems web page, "The best supported systems at the moment are Solaris, Linux (Redhat in particular) and FreeBSD." Webmin currently supports 59 flavors of *nix {and even Windows, I guess}
________________


Thursday, September 29, 2005
 
Troubleshooting: Solaris {vold} refuses to mount & ejects CD/DVD

Problem:

While the Volume Management Daemon (vold) is running, Solaris rejects all CD/DVD media and ejects them as soon as they are inserted in the CD/DVD drive. However, manual mounting of the same media works with no issues.

Solution:

vold needs the removable media device server, smserver, to be running in order to manage the removable media devices. So, make sure smserver is enabled and online.
# svcs -l smserver
fmri         svc:/network/rpc/smserver:default
name         removable media management
enabled      false
state        disabled
next_state   none
state_time   Thu Sep 29 20:25:19 2005
restarter    svc:/network/inetd:default
contract_id
dependency   require_all/restart svc:/network/rpc/bind (online)

smserver is not running, and the following lines from the truss output confirm that the client (vold) is not able to communicate with smserver.
/9:     open("/var/run/smedia_svc", O_RDONLY)           = 35
/9:     door_call(35, 0xD244DCA0)                       Err#9 EBADF
/9:     close(35)                                       = 0

The above series of calls can easily be mapped to the Solaris source code hosted at the OpenSolaris web site, starting from the call to smedia_get_handle in medium.c, and then tracing the calls further {in the source code}.

Comments in the smediad.c source file help in understanding the implementation a bit.

So to resolve this issue, simply bring smserver online. Let's continue with the above example.
# svcadm -v enable smserver
svc:/network/rpc/smserver:default enabled.

# svcs -p smserver
STATE    STTIME    FMRI
online   21:17:43  svc:/network/rpc/smserver:default

Now vold is able to communicate with smserver, and it does its job of mounting the removable media (CD).

Excerpt from truss output:
/10:    open("/var/run/smedia_svc", O_RDONLY)           = 36
/10:    door_call(36, 0xD244DDB8)                       = 0
/10:    close(36)                                       = 0

Here's the df output:
# df -h | grep cdrom
/vol/dev/dsk/c1t0d0/wgr614v5 239M 239M 0K 100% /cdrom/wgr614v5
________________


Thursday, September 22, 2005
 
Code Coverage Analysis with tcov

It is a good practice to do code coverage analysis in the development environment, to find the cold and hot spots of the application. Cold spots are the parts of the code that are rarely executed; hot spots are the parts that are executed very frequently. Doing this analysis during development helps in: (i) improving the performance of the application by tuning the hot spots, and (ii) removing dead code and/or rewriting or relocating the cold spots, so that the code gets executed a decent number of times in the lifetime of the program. The tricky part is creating test cases extensive enough to cover all the branches of the source code. Since we may not be able to generate test cases for every scenario that exercises every part of the code, this analysis also helps us identify new test cases we could invent to cover certain features. For example, code coverage analysis on regular builds reveals the blocks of code that were never executed; that gives us an opportunity to come up with better test cases to run against profile feedback collect builds during the training run.

Sun Microsystems bundles a tool called tcov (short for: test coverage) with the Sun Studio compiler suite, for code coverage analysis. tcov gives line-by-line information on how a program executes. Most of this information is annotated in a copy of the source file {with a .tcov extension}, when the {tcov} tool is run against the source file along with the profile data from the execution of the test cases.

Steps to do the Code Coverage Analysis with tcov
  1. Compile the program with the additional option -xprofile=tcov

  2. Run the program and all the test cases against the application. Since it was compiled with the -xprofile option, {by default} the run-time will create a directory called <executable>.profile in the directory from which the executable is run. This behavior can be overridden by setting the SUN_PROFDATA_DIR or SUN_PROFDATA environment variables.

    The <executable>.profile directory contains a plain-text file called tcovd, which holds the information about the line numbers and the execution counts.

  3. Run tcov with the -x option over each source file to generate the annotated source file
Here's an example, taken from the earlier article on profile feedback optimization.

eg.,
% cat tcovex.c
#include <stdio.h>
#include <stdlib.h>

static unsigned _sum (unsigned *a0, unsigned *a1, unsigned *a2) {

    unsigned result = 0;

    if (a0 == NULL) {
        printf("a0 == NULL");
    } else {
        result += (*a0);
    }

    if (a1 == NULL) {
        printf("a1 == NULL");
    } else {
        result += (*a1);
    }

    if (a2 == NULL) {
        printf("a2 == NULL");
    } else {
        result += (*a2);
    }

    return (result);
}

int main(int argc, const char *argv[]) {
    int i, j, niters = 1, n = 3;
    unsigned sum, answer = 0, a[3];

    niters = 1000000000;

    if (argc == 2) {
        niters = atoi(argv[1]);
    }

    for (j = 0; j < n; j++) {
        a[j] = rand();
        answer += a[j];
    }

    for (i = 0; i < niters; i++) {
        sum = _sum (a+0, a+1, a+2);
    }

    if (sum == answer) {
        printf("answer = %u\n", answer);
    } else {
        printf("error sum=%u, answer=%u", sum, answer);
    }

    return (0);
}

% cc -xO2 -xprofile=tcov -o tcovex tcovex.c

% setenv SUN_PROFDATA_DIR /tmp

% ls -ld /tmp/*.profile
No match

% ./tcovex 10000000
answer = 32709

% ls -ld /tmp/*.profile
drwxrwxrwx 2 build engr 179 Sep 22 19:25 /tmp/tcovex.profile/

% ls -lR /tmp/tcovex.profile/
/tmp/tcovex.profile/:
total 16
-rw-rw-rw- 1 build engr 318 Sep 22 19:25 tcovd

% tcov -x $SUN_PROFDATA_DIR/tcovex.profile tcovex.c

% ls -l $SUN_PROFDATA_DIR/*.tcov
-rw-rw-rw- 1 build engr 1857 Sep 22 19:27 /tmp/tcovex.c.tcov

% cat /tmp/tcovex.c.tcov
#include <stdio.h>
#include <stdlib.h>

static unsigned _sum (unsigned *a0, unsigned *a1, unsigned *a2) {

10000000 -> unsigned result = 0;

if (a0 == NULL) {
##### -> printf("a0 == NULL");
} else {
10000000 -> result += (*a0);
}

10000000 -> if (a1 == NULL) {
##### -> printf("a1 == NULL");
} else {
10000000 -> result += (*a1);
}

10000000 -> if (a2 == NULL) {
##### -> printf("a2 == NULL");
} else {
10000000 -> result += (*a2);
}

return (result);
}

int main(int argc, const char *argv[]) {
1 -> int i, j, niters = 1, n=3;
unsigned sum, answer = 0, a[3];

niters = 1000000000;

if (argc == 2) {
1 -> niters = atoi(argv[1]);
}

1 -> for (j = 0; j < n; j++) {
3 -> a[j] = rand();
answer += a[j];
}

1 -> for (i = 0; i < niters; i++) {
10000000 -> sum = _sum (a+0, a+1, a+2);
}

1 -> if (sum == answer) {
1 -> printf("answer = %u\n", answer);
} else {
##### -> printf("error sum=%u, answer=%u", sum, answer);
}

return (0);
}


Top 10 Blocks

Line        Count
6           10000000
11          10000000
14          10000000
17          10000000
20          10000000
23          10000000
45          10000000
40          3
30          1
36          1


18 Basic blocks in this file
14 Basic blocks executed
77.78 Percent of the file executed

70000009 Total basic block executions
3888889.25 Average executions per basic block
Note:
Lines with prefix "#####" were never executed.

Reference and suggested reading:
Sun Studio 10 man page of tcov
__________________




Monday, September 05, 2005
 
One Year {Blog} Anniversary

I cannot believe it has {already} been one year since I started posting content to my web log. In the last 12 months, there were around 90 posts on various operating system and C/C++ compiler topics. Since I work with Solaris and Sun Studio C/C++ compilers in my day-to-day life, most of the technical content is restricted to the Solaris operating system and Sun Studio compilers. Here's a consolidated list of the posts:

HOW TOs

Solaris
  1. Fixing sound card woes
  2. Solaris 10: USB Digital Camera HOWTO
  3. Mounting a CD-ROM manually
  4. Hijacking a function call (interposing)
  5. Installing apps/packages with pkg-get
  6. Recovering from a Runtime Linker Failure I | II | III
  7. An Odyssey to Solaris 11 on Solaris Express 17
  8. Writing a Signal Handler
  9. lofiadm - Mounting an ISO image on Solaris file system
  10. Getting rid of [English/European] bar underneath every window
  11. Setting up a DHCP client
  12. Resetting Forgotten Root Password
  13. Build a Shared Library
  14. Building ICU 2.2 & Xerces 2.1 on Solaris 10 x86
  15. Building MPlayer
  16. Splitting and Merging Files
Linux
  1. Installing Sun Java Desktop System 2.0
  2. Find the amount of free & used memory
  3. Installing Dynamic Fonts
  4. Installing Source RPM (SRPM)
  5. Recovering from Frozen XWindows
  6. JDS Linux & Sony DSC-V1/W1 Digital Camera

About

Solaris
  1. MPSS: More performance with Large Pages (Solaris 9 and higher)
  2. malloc vs mtmalloc
  3. Solaris Virtual Memory System
  4. File permissions
  5. UNIX terminology
  6. Year 2038 rollover problem
  7. Know the process resource limits
  8. CPU hog with connections in CLOSE_WAIT
  9. 32-bits , fopen() and max number of open files
  10. Running Enterprise applications on Solaris 10
  11. Binary Compatibility
  12. Undocumented Thread Environment Variables
  13. Linker: -B {static | dynamic}
  14. Initialization & Termination routines in a dynamic object
  15. Printing Stack Trace with printstack()
  16. OpenSolaris: Open for download
  17. Some useful commands
  18. Some tips
Sun Studio C/C++
  1. Linker Scoping default scope | default scope contd. | symbolic scope | symbolic scope contd. | hidden scope | adv/disadv of global scope | Benefits of Linker Scoping
  2. Profile Feedback Optimization (PFO or FBO) I | II
  3. Investigating memory leaks with Collector/Analyzer
  4. global const variables, symbol collisions & symbolic scoping
  5. Behavior of Sun C++ Compiler While Compiling Templates
  6. Annotated listing (compiler commentary) with er_src
  7. Support for UTF-16 String Literals
  8. Sun Studio 9: RR or GA?
  9. Position Independent Code (PIC)
  10. Inlining routines
  11. #pragma pack
  12. cscope - an interactive program examiner
C/C++/Java
  1. Life Cycle of a C/C++ program
  2. C/C++/Java: ++ unary operator
  3. struct vs union
  4. Functions with variable numbers of arguments
  5. Virtual functions
  6. C/C++ & Object Oriented Jargon
  7. C++ name mangling
  8. External Linkage
  9. 2s complement
  10. Conditional compilation and #if 0
  11. Assertions
Miscellaneous
  1. Oracle Server architecture
  2. Sun achieves winning Siebel benchmark
  3. Activating Comcast High Speed Internet Account
  4. Csh: Arguments too long error
  5. MPlayer: Extracting Audio from a DVD
Music
  1. Favorite Music
________________


Sunday, September 04, 2005
 
Sun Studio C/C++: Profile Feedback Optimization II

Most of the related information is already available at: Sun Studio C/C++: Profile Feedback Optimization. This blog post tries to cover the pieces of PFO (aka Feedback Based Optimization, FBO) that were missing {from the previous blog post}.

Compiling with multiple profiles

Even though it is not mentioned explicitly {in plain English} in the C++ compiler options documentation, the Sun C/C++ compilers accept multiple profiles on the compile line, via multiple -xprofile=use:<dir> options. -xprofile=use:<dir>:<dir>..<dir> results in a compilation error.

eg.,
CC -xO4 -xprofile=use:/tmp/prof1.profile -xprofile=use:/tmp/prof2.profile driver.cpp

When the compiler encounters multiple profiles on the compile line, it merges all the data before proceeding to do optimizations based on the feedback data.

Building patches contd.,

In general, it is always recommended to collect profile data whenever something changes in the source code. However, that may not be feasible when very large applications are built with feedback optimization. So, organizations tend to skip feedback data collection when the changes are limited to a very few lines (quick fixes), and to collect the data once the quick fixes become large enough to release a patch cluster (aka fix pack). Normally fix packs contain the binaries for the entire product, and all the old binaries are replaced with the new ones when the patch is applied.

It is important to know how a simple change in the source code affects feedback optimization in the presence of old profile data. Assume that an application is linked with a library, libstrimpl.so, that has implementations for string comparison (__strcmp) and for calculating the length of a string (__strlen).

eg.,
% cat strimpl.h
int __strcmp(const char *, const char *);
int __strlen(const char *);

% cat strimpl.c
#include <stdlib.h>
#include "strimpl.h"

int __strcmp(const char *str1, const char *str2) {
    int rc = 0;

    for (;;) {
        rc = *str1 - *str2;
        if (rc != 0 || *str1 == 0) {
            return (rc);
        }
        ++str1;
        ++str2;
    }
}

int __strlen(const char *str) {
    int length = 0;

    for (;;) {
        if (*str == 0) {
            return (length);
        } else {
            ++length;
            ++str;
        }
    }
}

% cat driver.c
#include <stdio.h>
#include "strimpl.h"

int main() {
    int i;

    for (i = 0; i < 50; ++i) {
        printf("\nstrcmp(pod, podcast) = %d", __strcmp("pod", "podcast"));
        printf("\nstrlen(Solaris10) = %d", __strlen("Solaris10"));
    }

    return (0);
}

Now let's assume that the driver was built with the feedback data, using the following commands:
cc -xO2 -xprofile=collect -G -o libstrimpl.so strimpl.c
cc -xO2 -xprofile=collect -lstrimpl -o driver driver.c
./driver
cc -xO2 -xprofile=use:driver -G -o libstrimpl.so strimpl.c
cc -xO2 -xprofile=use:driver -lstrimpl -o driver driver.c

For the next release of the driver, let's say the string library is extended with a routine that reverses a given string (__strreverse). Let's see what happens if we skip profile data collection for this library after integrating the code for the __strreverse routine. The new code can be added anywhere (top, middle, or bottom) in the source file.

Case 1: Assuming the routine was added at the bottom of the existing routines

% cat strimpl.c
#include <stdlib.h>
#include "strimpl.h"

int __strcmp(const char *str1, const char *str2 ) { ... }

int __strlen(const char *str) { ... }

char *__strreverse(const char *str) {
    int i, length = 0;
    char *revstr = NULL;

    length = __strlen(str);
    revstr = (char *) malloc (sizeof (char) * (length + 1));

    for (i = length; i > 0; --i) {
        *(revstr + i - 1) = *(str + length - i);
    }
    *(revstr + length) = 0;    /* NUL-terminate the reversed string */

    return (revstr);
}

% cc -xO2 -xprofile=use:driver -G -o libstrimpl.so strimpl.c
warning: Profile feedback data for function __strreverse is inconsistent. Ignored.

This (adding the new code at the bottom of the source file) is the recommended/wisest thing to do if we don't want to collect feedback data for the new code that we add. Doing so keeps the existing profile data consistent, and it is optimized as before. Since there is no feedback data available for the new code, the compiler simply optimizes it as it usually would without -xprofile.

Case 2: Assuming the routine was added somewhere in the middle of the source file

% cat strimpl.c
#include <stdlib.h>
#include "strimpl.h"

int __strcmp(const char *str1, const char *str2 ) { ... }

char *__strreverse(const char *str) {
    int i, length = 0;
    char *revstr = NULL;

    length = __strlen(str);
    revstr = (char *) malloc (sizeof (char) * (length + 1));

    for (i = length; i > 0; --i) {
        *(revstr + i - 1) = *(str + length - i);
    }
    *(revstr + length) = 0;    /* NUL-terminate the reversed string */

    return (revstr);
}

int __strlen(const char *str) { ... }

% cc -xO2 -xprofile=use:driver -G -o libstrimpl.so strimpl.c
warning: Profile feedback data for function __strreverse is inconsistent. Ignored.
warning: Profile feedback data for function __strlen is inconsistent. Ignored.

As the compiler keeps track of the routines by line numbers, introducing some code into a routine makes its profile data inconsistent. And since the positions of all the other routines underneath the newly introduced code change as well, their feedback data also becomes inconsistent; hence the compiler ignores the profile data, to avoid introducing functional errors.

The same argument holds true when the new code is added above the existing routines, but that makes things even worse, since the profile data for all of the routines in this object becomes unusable (inconsistent). Have a look at the warnings from the following example:

Case 3: Assuming the routine was added at the top of the source file

% cat strimpl.c
#include <stdlib.h>
#include "strimpl.h"

char *__strreverse(const char *str) {
    int i, length = 0;
    char *revstr = NULL;

    length = __strlen(str);
    revstr = (char *) malloc (sizeof (char) * (length + 1));

    for (i = length; i > 0; --i) {
        *(revstr + i - 1) = *(str + length - i);
    }
    *(revstr + length) = 0;    /* NUL-terminate the reversed string */

    return (revstr);
}

int __strcmp(const char *str1, const char *str2) { ... }

int __strlen(const char *str) { ... }

% cc -xO2 -xprofile=use:driver -G -o libstrimpl.so strimpl.c
warning: Profile feedback data for function __strreverse is inconsistent. Ignored.
warning: Profile feedback data for function __strcmp is inconsistent. Ignored.
warning: Profile feedback data for function __strlen is inconsistent. Ignored.

SPARC, x86/x64 compatibility

At this time, there is no compatibility between the way the profile data is generated and processed on SPARC and on x86/x64 platforms. That is, feedback data generated by the C/C++ compilers on SPARC cannot be used on x86/x64 platforms, and vice versa.

However, there appears to be a plan in place to make it compatible in the Sun Studio 12 release.

Asynchronous profile collection

The current profile data collection requires the process to terminate in order to dump the feedback data. Also, with multi-threaded processes, the profile data can be incomplete, due to lock contention between the threads. And if the process dynamically loads and unloads other libraries with the dlopen()/dlclose() calls, indirect call profiling is involved, which has its own share of data collection problems.

Asynchronous profile collection eases all of the problems mentioned above by letting a profiler thread periodically write out the profile data it is collecting. With asynchronous data collection, the probability of obtaining proper feedback data is much higher.

This feature will be available by default in Sun Studio 11; and as a patch to Sun Studio 9 & 10 compilers. Stay tuned for the exact patch numbers for Studio 9 and 10.

Notes:
  1. When -xprofile=collect is used to compile a program for profile collection and -xprofile=use is used to compile a program with profile feedback, the source files and compiler options other than -xprofile=collect and -xprofile=use must be identical in both compilations

  2. If both -xprofile=collect and -xprofile=use are specified in the same command line, the rightmost -xprofile option in the command line is applied

  3. If the code was compiled with the -g or -g0 options, the er_src utility can show us how the compiler is optimizing with the feedback data. Here's how: Sun Studio C/C++: Annotated listing (compiler commentary) with er_src
Acknowledgements:
Chris Aoki, Sun Microsystems
__________________


