Mandalika's scratchpad [ Work blog @Oracle | My Music Compositions ]


Sunday, January 27, 2019
Python Object Persistence Module, pickle — Quick Notes

Pickling or serialization is the process of converting a Python object to a byte stream; and Unpickling or deserialization is the process of re-creating the original in-memory Python object (not necessarily at the same memory address).

Python's pickle module has the necessary methods to pickle and unpickle Python object hierarchies.

pickle module:

It is possible to pickle a variety of data types including built-in types — numeric types (integer, float, complex numbers), sequence types (lists, tuples), text sequence type (strings), binary sequence types (bytes, bytearray), set types (set), mapping types (dictionary), classes and built-in functions defined at the top level of a module.
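For instance, instances of a class defined at the top level of a module round-trip through pickle as well. A minimal sketch (the Employee class and its attribute names are made up for illustration):

```python
import pickle

# Hypothetical Employee class for illustration; classes defined at the
# top level of a module are picklable, and so are their instances.
class Employee(object):
    def __init__(self, name, emp_id):
        self.name = name
        self.emp_id = emp_id

emp = Employee('Gary', 12345)

# round-trip: serialize the instance, then reconstruct it
data = pickle.dumps(emp, pickle.HIGHEST_PROTOCOL)
clone = pickle.loads(data)

assert clone is not emp                  # a brand new object ...
assert clone.__dict__ == emp.__dict__    # ... carrying identical state
```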

Any attempt to pickle an unpicklable object may trigger a PicklingError or TypeError exception.

Couple of gotchas:

    The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
    The pickle data format is Python-specific, so pickled objects cannot be exchanged with applications written in other languages.

A trivial example demonstrating the calls to pickle (save data to a binary file) and unpickle (load data from the binary file) a Python data structure.


import pickle

EMP = {}
EMP['name'] = 'Gary'
EMP['id'] = 12345

# pickle
with open('employee.db', 'wb') as f:
 pickle.dump(EMP, f, pickle.HIGHEST_PROTOCOL)

print '  Pickled data, EMP     ', EMP

# unpickle
with open('employee.db', 'rb') as f:
 EMP_REC = pickle.load(f)

print 'Unpickled data, EMP_REC ', EMP_REC, '\n'

print '(EMP_REC is EMP)? : ', (EMP_REC is EMP)
print '(EMP_REC == EMP)? : ', (EMP_REC == EMP)

Running the above code shows the following on stdout.

  Pickled data, EMP      {'name': 'Gary', 'id': 12345}
Unpickled data, EMP_REC  {'name': 'Gary', 'id': 12345} 

(EMP_REC is EMP)? :  False
(EMP_REC == EMP)? :  True

dump() method takes a serializable Python object as the first argument; and writes pickled representation of the object (serialized object) to a file. Second argument is the file handle that points to an open file. Rest of the arguments are optional.

load() method reads a pickled object representation (serialized data) from a file and returns the reconstructed object. The protocol version is detected automatically so it is not necessary to specify the protocol version during unpickling process.

In-Memory Pickling/Unpickling Operations

If persistence is not a requirement, dumps() and loads() methods in pickle module can be used to serialize (pickle) and deserialize (unpickle) a Python object in memory. This is useful when sending Python objects over network between compatible applications.


import pickle

EMP = {}
EMP['name'] = 'Gary'
EMP['id'] = 12345

# in-memory pickling
x = pickle.dumps(EMP, pickle.HIGHEST_PROTOCOL)

print '  Pickled data, EMP     ', EMP

# in-memory unpickling
EMP_REC = pickle.loads(x)

print 'Unpickled data, EMP_REC ', EMP_REC, '\n'

print '(EMP_REC is EMP)? : ', (EMP_REC is EMP)
print '(EMP_REC == EMP)? : ', (EMP_REC == EMP)

Running the above code shows output identical to the output produced by the previous code listing; the only difference is that no file is involved this time.

  Pickled data, EMP      {'name': 'Gary', 'id': 12345}
Unpickled data, EMP_REC  {'name': 'Gary', 'id': 12345} 

(EMP_REC is EMP)? :  False
(EMP_REC == EMP)? :  True


As mentioned earlier, any attempt to pickle or unpickle objects that are not appropriate for serialization fails with an exception. Therefore, it is appropriate to safeguard the code with try-except blocks to handle unexpected failures.

Here is another trivial example demonstrating a pickling exception.


import sys
import pickle

try:
 f = open('dummy.txt', 'a')
 x = pickle.dumps(f)
 print 'Pickled file handle'

except Exception, e:
 print 'Caught ', e.__class__.__name__, '-', str(e)

Running the above code throws a TypeError as shown below.

Caught  TypeError - can't pickle file objects

(Credit: Various Sources including Python Documentation)


Sunday, December 30, 2018
Python Lambda Functions - Quick Notes

Lambda functions are small, anonymous functions that are syntactically restricted to a single expression; the expression is evaluated and its value returned when the function is called. The general form:


 lambda argument(s): expression


A trivial example.

import math

def areaofcircle(radius):
 return math.pi * radius * radius

can be written as:

circlearea = lambda radius : math.pi * radius * radius

In this example, the lambda function accepts a lone argument radius; and the function evaluates the lambda expression (π * radius²) to calculate the area of a circle and returns the result to the caller.

In case of the lambda version, the identifier "circlearea" is assigned the function object the lambda expression creates, so circlearea can be called like any normal function. eg., circlearea(5)

Another trivial example that converts the first character of each word in a list to uppercase.

>>> fullname = ["john doe", "richard roe", "janie q"]
>>> fullname
['john doe', 'richard roe', 'janie q']
>>> map(lambda name : name.title(), fullname)
['John Doe', 'Richard Roe', 'Janie Q']
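Lambdas are also handy as the key argument to built-ins such as sorted(). A small sketch (by_surname is a made-up variable name) that orders the same list by surname:

```python
# Order the list by surname using a lambda as the sort key.
fullname = ["john doe", "richard roe", "janie q"]
by_surname = sorted(fullname, key=lambda name: name.split()[-1])
# by_surname: ['john doe', 'janie q', 'richard roe']
```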


Saturday, December 08, 2018
Blast from the Past : The Weekend Playlist #15 — Cirque Du Soleil Special

This edition is dedicated to Cirque du Soleil, the entertainment group with many successful live shows under their belt. Live original music accompanies almost all of Cirque's carefully choreographed live performances.

Current playlist features music from Cirque du Soleil's various live shows — Saltimbanco (1992), Mystère (1993), Alegría (1994), Dralion (1999), Varekai (2002), Zumanity (2003), Kà (2004), Corteo (2005), Koozå (2007) and Luzia (2016).

Be aware that some of the songs are in an imaginary/invented language.


Audio & Widget courtesy: Spotify

Old playlists:

    #1    #8   #14 (50s, 60s and 70s)    |    #2    #3    #4    #5 (80s)    |    #6    #7    #9 (90s)    |    #11    #12 (00s)    |    #13 (10s) |    #10 (Instrumental)


Friday, November 30, 2018
Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called using that function pointer. In other words, function pointers point to the executable code rather than data like typical pointers.

void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void).

The parentheses around the function pointer cannot be removed. Doing so makes the declaration a function that returns a void pointer.

The declaration itself won't point to anything, so a value has to be assigned to the function pointer, which is typically the address of the target function to be executed.

Assigning Function Pointers

If a function by the name dummy was already defined, the following assignment makes the func_ptr variable point to the function dummy.

void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of operator (&) is another way.

void dummy() { ; }
func_ptr = &dummy;

Above two sample assignments highlight the fact that, similar to arrays, a function's address can be obtained either by using the address operator (&) or by simply specifying the function name; hence the use of the address operator is optional. Here's an example proving that.

% cat funcaddr.c

#include <stdio.h>

void foo() { ; }

void main()
{
        printf("Address of function foo without using & operator = %p\n", foo);
        printf("Address of function foo         using & operator = %p\n", &foo);
}

% cc -o funcaddr funcaddr.c

% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo         using & operator = 10b6c

Using Function Pointers

Once we have a function pointer variable pointing to a function, we can call the function that it points to using that variable as if it were the actual function name. Dereferencing the function pointer is optional, similar to using the & operator during function pointer assignment. The dereferencing happens automatically if not done explicitly.


The following two function calls are equivalent, and exhibit the same behavior.

(*func_ptr)();   /* explicit dereferencing  */
func_ptr();      /* automatic dereferencing */
Complete Example

Here is one final example for the sake of completeness. This example demonstrates the execution of a couple of arithmetic functions using function pointers. The same example also highlights the optional use of the & operator and pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second) {
        return (first + second);
}

int multiply(int first, int second) {
        return (first * second);
}

void main()
{
        int (*func_ptr)(int, int);                      /* declaration */
        func_ptr = add;                                 /* assignment (auto func address) */
        printf("100+200 = %d\n", (*func_ptr)(100,200)); /* execution  (dereferencing) */
        func_ptr = &multiply;                           /* assignment (func address using &) */
        printf("100*200 = %d\n", func_ptr(100,200));    /* execution  (auto dereferencing) */
}

% cc -o funcptr funcptr.c

% ./funcptr
100+200 = 300
100*200 = 20000

Few Practical Uses of Function Pointers

Function pointers are convenient and useful while writing functions that sort data. Standard C Library includes qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer pointing to the comparison function.

Function pointers are useful to write callback functions where a function (executable code) is passed as an argument to another function that is expected to execute the argument (call back the function sent as argument) at some point.

In both examples above function pointers are used to pass functions as arguments to other functions.

In some cases function pointers may make the code cleaner and more readable. For example, an array of function pointers may simplify a large switch statement.

2) Printing Unicode Characters

Here's one possible way.

Following rudimentary code sample prints random currency symbols and a name in Telugu script using both printf and wprintf function calls.

% cat -n unicode.c
     1  #include <wchar.h>
     2  #include <locale.h>
     3  #include <stdio.h>
     4
     5  int main()
     6  {
     7          setlocale(LC_ALL,"en_US.UTF-8");
     8          wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
     9          wchar_t wide[4]={ 0x0C38, 0x0C30, 0x0C33, 0 };
    10          printf("\n%ls", wide);
    11          wprintf(L"\n%ls", wide);
    12          return 0;
    13  }

% cc -o unicode unicode.c

% ./unicode
€      ¥      £      ¢      ₣      ₤

Here is one website where numerical values for various Unicode characters can be found.


Saturday, September 29, 2018
Oracle SuperCluster: Brief Introduction to osc-interdom

Target audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some related information on this obscure tool to inquisitive users/customers, as they might have noticed the osc-interdom service and the namesake package and wondered what it is for at some point.

SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (eg., a PDom on M8) and, optionally, across all servers (eg., other M8 PDoms in the cluster) on the system.

SuperCluster Virtual Assistant (SVA), ssctuner, exachk and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires the osc-interdom package from the exa-family repository to be installed and enabled on all types of domains in the SuperCluster.

In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration; it is possible to exclude some domains from the comprehensive interdom directory either during initial configuration or at a later time. Also, once the interdom directory configuration is built, it can be refreshed or rebuilt at any time.

Since installing and configuring osc-interdom is automated and part of the SuperCluster installation and configuration processes, it is unlikely that anyone at the customer site needs to know about or perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics.

oidcli is a simple command line interface for the domain registry. The oidcli command line utility is located in /opt/oracle.supercluster/bin directory.

oidcli utility can be used to query interdom domain registry for data that is associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster; and each component is uniquely identified by a UUID.

The SuperCluster Domain Registry is stored on Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, it is expected to run oidcli on MCD to query the data. When running from other domains, option -a must be specified along with the management IP address of the master control domain.

Keep in mind that the data returned by oidcli is meant for other SuperCluster tools that have the ability to interpret the data correctly and coherently. Therefore humans who are looking at the same data may need some extra effort to digest and understand.


# cd /opt/oracle.supercluster/bin

# ./oidcli -h
Usage: oidcli [options] dir |  [options]   [...]
            , other component ID or 'all'
          (e.g. 'hostname', 'control_uuid') or 'all'
  NOTE: get_value must request single 

  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',') for connection
  -d          Enable debugging output
  -w W        Re-try for up to W seconds for success. Use 0 for no wait.
              Default: 1801.0.

List all components (domains)

# ./oidcli -p dir
[   ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
    ['4651ac93-924e-4990-8cf9-83be556eb667', True],
    ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
    ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains

# ./oidcli -p get_data all hostname
{   'hostname': {   'mtime': 1538089861, 'name': 'hostname', 'value': 'alpha'}}
{   'hostname': {   'mtime': 1538174309,
                    'name': 'hostname',
                    'value': 'beta'}}
List all available properties for all domains

# ./oidcli -p get_data all all
{   'banner_name': {   'mtime': 1538195164,
                       'name': 'banner_name',
                       'value': 'SPARC M7-4'},
    'comptype': {   'mtime': 1538195164, 'name': 'comptype', 'value': 'LDom'},
    'control_uuid': {   'mtime': 1538195164,
                        'name': 'control_uuid',
                        'value': 'Unknown'},
    'guests': {   'mtime': 1538195164, 'name': 'guests', 'value': None},
    'host_domain_chassis': {   'mtime': 1538195164,
                               'name': 'host_domain_chassis',
                               'value': 'AK00251676'},
    'host_domain_name': {   'mtime': 1538284541,
                            'name': 'host_domain_name',
                            'value': 'ssccn1-io-alpha'},

Query a specific property from a specific domain

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
{   'mtime': 1538143865,
    'name': 'mgmt_ipaddr',
    'value': ['', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate and up-to-date data is needed, it is recommended to query the registry with the --no-cache option.

# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
{   'mtime': 1538285043,
    'name': 'load_average',
    'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in all examples above represents a UNIX timestamp.

Debug Mode

By default, osc-interdom service runs in non-debug mode. Running the service in debug mode enables logging more details to osc-interdom service log.

In general, if the osc-interdom service is transitioning to maintenance state, switching to debug mode may provide a few additional clues.

To check if debug mode was enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally check the service log for debug messages. svcs -L osc-interdom command output points to the location of osc-interdom service log.


Similar to SuperCluster resource allocation engine, osc-resalloc, interdom framework is mostly meant for automated tools with little or no human interaction. Consequently there are no references to osc-interdom in SuperCluster Documentation Set.





Friday, August 31, 2018
Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check if a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files.

On Solaris, -l option lists out all cryptographic hash algorithms available on the system.

% digest -l

-a option can be used to specify the hash algorithm while computing the digest.

% digest -v -a sha1 /usr/lib/
sha1 (/usr/lib/ = 89a588f447ade9b1be55ba8b0cd2edad25513619

Multiline Shell Script Comments

Shell treats any line that starts with the '#' symbol as a comment and ignores such lines completely. (The line on top that starts with #! is an exception.)

From what I understand, there is no multiline comment mechanism in shell. While the # symbol is useful for marking single line comments, it becomes laborious and ill-suited when quite a number of contiguous lines need to be commented out.

One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a here-document code block. It may not be the most attractive solution, but it gets the work done.

The ':' (colon) built-in is a null command that does nothing and returns true; when a here-document is attached to it, the shell reads those lines as input to ':' and never executes them.

% cat -n
     1  #!/bin/bash
     2
     3  echo 'not a commented line'
     4  #echo 'a commented line'
     5  echo 'not a commented line either'
     6  : <<'COMMENT'
     7  echo 'beginning of a multiline comment'
     8  echo 'second line of a multiline comment'
     9  echo 'last line of a multiline comment'
    10  COMMENT
    11  echo 'yet another "not a commented line"'

% ./
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make the command line terminals look interesting. tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move cursor around the screen, get information about the status of the terminal and so on.

In addition to improving the command line experience, tput can also be used to improve the interactive experience of scripts by showing different colors and/or text effects to users.

% tput bold <= bold text
% date
Thu Aug 30 17:02:57 PDT 2018
% tput smul <= underline text
% date
Thu Aug 30 17:03:51 PDT 2018
% tput sgr0 <= turn-off all attributes (back to normal)
% date
Thu Aug 30 17:04:47 PDT 2018

Check the man page of terminfo for a complete list of capabilities to be used with tput.

Processor Marketing Name

On systems running Solaris, processor's marketing or brand name can be extracted with the help of kstat utility. cpu_info module provides information related to the processor(s) on the system.


% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      SPARC-M8

On x86/x64:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      Intel(r) Xeon(r) CPU           L5640  @ 2.27GHz

In the above example, cpu_info is the module. 1 is the instance number. cpu_info1 is the name of the section and brand is the statistic in focus. Note that cpu_info module has only one section cpu_info1. Therefore it is fine to skip the section name portion (eg., cpu_info:1::brand).

To see the complete list of statistics offered by cpu_info module, simply run kstat cpu_info:1.

Consolidating Multiple sed Commands

The sed utility allows specifying multiple editing commands on the same command line (in other words, it is not necessary to pipe multiple sed commands). The editing commands need to be separated by a semicolon (;).


The following two commands are equivalent and yield the same output.

% prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g'
Memory: 65312 MB

% prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g'
Memory: 65312 MB


Tuesday, July 31, 2018
Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS+Repository pkg

1. Work on Directory Structure

Start with organizing the package contents (files) into the same directory structure that you want on the installed system.

In the following example, the directory was organized in such a manner that when the package is installed, it results in the software being copied to the /opt/myutils directory.


# tree opt

opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html

Create a directory to hold the software in the desired layout. Let us call this "workingdir", and this directory will be specified in subsequent steps to generate the package manifest and finally the package itself. Move the top level software directory to the "workingdir".

# mkdir workingdir
# mv opt workingdir

# tree -fai workingdir/

2. Generate Package Manifest

Package manifest provides metadata such as package name, description, version, classification & category, along with the files and directories included and the dependencies, if any, that need to be installed for the target package.

The manifest for an existing package can be examined with the help of pkg contents subcommand.

pkgsend generate command generates the manifest. It takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.

# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1

# cat myutilspkg.p5m.1

3. Add Metadata to Package Manifest

Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest. However the recommended approach is to rely on pkgmogrify utility to make changes to an existing manifest.

Create a text file with the missing package attributes.

# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

The set name=variant.opensolaris.zone value=global action restricts the package installation to the global zone. To make the package installable in both global and non-global zones, either specify set name=variant.opensolaris.zone value=global value=nonglobal in the package manifest, or do not have any references to that variant at all in the manifest.

Now merge the metadata with the manifest generated in previous step.

# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2

# cat myutilspkg.p5m.2

4. Evaluate & Generate Dependencies

Generate the dependencies so they will be part of the manifest. It is recommended to rely on pkgdepend utility for this task rather than declaring depend actions manually to minimize inaccuracies.

# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.

5. Resolve Package Dependencies

This step might take a while to complete.

# pkgdepend resolve -m myutilspkg.p5m.3

6. Verify the Package

By this time the package manifest should pretty much be complete. Check and validate it manually or using pkglint utility (recommended) for consistency and any possible errors.

# pkglint myutilspkg.p5m.3.res

7. Publish the Package

For the purpose of demonstration let's go with the simplest option to publish the package, local file-based repository.

Create the local file based repository using pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally publish the target package with the help of pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res

# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS           UPDATED
mypublisher 1        online           2018-07-04T01:41:57.414014Z

8. Validate the Package

Finally validate whether the published package has been packaged properly by test installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils

# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z


Saturday, June 30, 2018
Python: Exclusive File Locking on Solaris

Solaris doesn't lock open files automatically (not just Solaris; most *nix operating systems behave this way).

In general, when a process is about to update a file, the process is responsible for checking existing locks on the target file, acquiring a lock, and releasing it after updating the file. However, given that not all processes cooperate and adhere to this mechanism (advisory locking), such non-conforming practice may lead to problems such as inconsistent or invalid data, mainly triggered by race conditions. Serialization is one possible solution, where only one process is allowed to update the target file at any time. It can be achieved with the help of the file locking mechanism on Solaris as well as on the majority of other operating systems.

On Solaris, a file can be locked for exclusive access by any process with the help of fcntl() system call. fcntl() function provides for control over open files. It can be used for finer-grained control over the locking -- for instance, we can specify whether or not to make the call block while requesting exclusive or shared lock.

The following rudimentary Python code demonstrates how to acquire an exclusive lock on a file that makes all other processes wait to get access to the file in focus.


% cat -n
     1  #!/bin/python
     2  import fcntl, time
     3  f = open('somefile', 'a')
     4  print 'waiting for exclusive lock'
     5  fcntl.flock(f, fcntl.LOCK_EX)
     6  print 'acquired lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')
     7  time.sleep(10)
     8  f.close()
     9  print 'released lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')

Running the above code in two terminal windows at the same time shows the following.

Terminal 1:

% ./
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:36
released lock at 2018-06-30 22:25:46

Terminal 2:

% ./
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:46
released lock at 2018-06-30 22:25:56

Notice that the process running in second terminal was blocked waiting to acquire the lock until the process running in first terminal released the exclusive lock.

Non-Blocking Attempt

If the requirement is not to block on exclusive lock acquisition, it can be achieved with LOCK_EX (acquire exclusive lock) and LOCK_NB (do not block when locking) operations by performing a bitwise OR on them. In other words, the statement fcntl.flock(f, fcntl.LOCK_EX) becomes fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB) so the process will either get the lock or move on without blocking.

Be aware that an IOError will be raised when a lock cannot be acquired in non-blocking mode. Therefore, it is the responsibility of the application developer to catch the exception and properly deal with the situation.
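A minimal sketch of that pattern (reusing the file name 'somefile' from the earlier sample; the acquired flag is made up for illustration):

```python
import fcntl

# Non-blocking attempt: LOCK_NB makes flock() fail fast instead of waiting.
f = open('somefile', 'a')
acquired = False
try:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    acquired = True
    # ... update the file here while holding the exclusive lock ...
    fcntl.flock(f, fcntl.LOCK_UN)
except IOError:
    # another process holds the lock; back off and retry later
    pass
finally:
    f.close()
```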

The behavior changes as shown below after the inclusion of fcntl.LOCK_NB in the sample code above.

Terminal 1:

% ./
waiting for exclusive lock
acquired lock at 2018-06-30 22:42:34
released lock at 2018-06-30 22:42:44

Terminal 2:

% ./
waiting for exclusive lock
Traceback (most recent call last):
  File "./", line 5, in 
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
IOError: [Errno 11] Resource temporarily unavailable


Thursday, May 31, 2018
Solaris 11.4: 10 Good-to-Know Features, Enhancements or Changes

  1. [Admins] Device Removal From a ZFS Storage Pool

    In addition to removing hot spares, cache and log devices, Solaris 11.4 has support for removal of top-level virtual data devices (vdev) from a zpool with the exception of a RAID-Z pool. It is possible to cancel a remove operation that's in progress too.

    This enhancement will come in handy especially when dealing with overprovisioned and/or misconfigured pools.

    Ref: ZFS: Removing Devices From a Storage Pool for examples.

  2. [Developers & Admins] Bundled Software

    Bundled software packages include Python 3.5, Oracle instant client 12.2, MySQL 5.7, Cython (C-Extensions for Python), cx_Oracle Python module, Go compiler, clang (C language family frontend for LLVM) and so on.

    cx_Oracle is a Python module that enables accessing Oracle Database 12c and 11g from Python applications. The Solaris packaged version 5.2 can be used with Python 2.7 and 3.4.

    Depending on the type of Solaris installation, not every software package may get installed by default but the above mentioned packages can be installed from the package repository on demand.


    # pkg install pkg:/developer/golang-17
    # go version
    go version devel a30c3bd1a7fcc6a48acfb74936a19b4c Fri Dec 22 01:41:25 GMT 2017 solaris/sparc64
  3. [Security] Isolating Applications with Sandboxes

Sandboxes are isolated environments where users can run applications to protect them from other processes on the system while not giving full access to the rest of the system. Put another way, application sandboxing is one way to protect users, applications and systems by limiting the privileges of an application to its intended functionality, thereby reducing the risk of system compromise.

    Sandboxing joins Logical Domains (LDoms) and Zones in extending the isolation mechanisms available on Solaris.

    Sandboxes are suitable for constraining both privileged and unprivileged applications. Temporary sandboxes can be created to execute untrusted processes. Only administrators with the Sandbox Management rights profile (privileged users) can create persistent, uniquely named sandboxes with specific security attributes.

    The unprivileged command sandbox can be used to create temporary or named sandboxes and to execute applications in a restricted environment. The privileged command sandboxadm can be used to create and manage named sandboxes.

    To install the security/sandboxing package, run either of:

    # pkg install sandboxing
    # pkg install pkg:/security/sandboxing

    Ref: Configuring Sandboxes for Project Isolation for details.

  4. New Way to Find SRU Level

    uname -v was enhanced to include the SRU level. Starting with the release of Solaris 11.4, uname -v reports the Solaris version in the format "11.<update>.<sru>.<build>.<patch>".

    # uname -v
    11.4.0.12.0

    Above output translates to Solaris 11 Update 4, SRU 0, Build 12, Patch 0.
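    For scripting, the version fields can be split on the dots. A minimal sketch, assuming the sample string 11.4.0.12.0 (on a live system, substitute the output of uname -v):

```shell
# Hypothetical sample string; on Solaris 11.4 use: v=$(uname -v)
v="11.4.0.12.0"

# Split "11.<update>.<sru>.<build>.<patch>" on the dots
IFS=. read -r release update sru build patch <<EOF
$v
EOF

echo "Update $update, SRU $sru, Build $build, Patch $patch"
```

    For the sample string this prints "Update 4, SRU 0, Build 12, Patch 0".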

  5. [Cloud] Service to Perform Initial Configuration of Guest Operating Systems

    The cloudbase-init service on Solaris helps speed up guest VM deployment in a cloud infrastructure by performing the initial configuration of the guest OS. Typical initial configuration tasks include user creation, password generation, networking configuration, SSH keys and so on.

    The cloudbase-init package is not installed by default on Solaris 11.4. Install it only into VM images that will be deployed in cloud environments by running:

    # pkg install cloudbase-init
  6. Device Usage Information

    The release of Solaris 11.4 makes it easy to identify the consumers of busy devices. Busy devices are those devices that are opened or held by a process or kernel module.

    Having access to device usage information helps with certain hotplug or fault management tasks. For example, a busy device cannot be hotplugged; knowing how a device is currently being used helps users resolve the related issues.

    On Solaris 11.4, prtconf -v shows the PIDs of the processes using different devices.


    # prtconf -v
        Device Minor Nodes:
                    spectype=blk type=minor nodetype=ddi_block:channel
                    spectype=chr type=minor nodetype=ddi_block:channel
                Device Minor Opened By:
                    proc='fmd' pid=1516
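    When only the consuming PIDs are of interest, the verbose output can be filtered. A minimal sketch using a hypothetical one-line fragment of prtconf -v output (on a live Solaris 11.4 system, pipe the real output instead: prtconf -v | grep -o "pid=[0-9]*"):

```shell
# Hypothetical fragment of 'prtconf -v' output; a real system would
# produce one such "proc=... pid=..." line per process holding a
# device minor node.
pids=$(printf "Device Minor Opened By:\n    proc='fmd' pid=1516\n" |
    grep -o "pid=[0-9]*")

echo "$pids"
```

    For the sample fragment above this prints pid=1516.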
  7. [Developers] Support for C11 (C standard revision)

    Solaris 11.4 includes support for the C11 programming language standard: ISO/IEC 9899:2011 Information technology - Programming languages - C.

    Note that the C11 standard is not yet part of the Single UNIX Specification. Solaris 11.4 supports C11 in addition to C99 to provide customers with C11 support ahead of its inclusion in a future UNIX specification. That means developers can write C programs on Solaris 11.4 (and later) using the newest available C programming language standard.

  8. pfiles on a coredump

    pfiles, a /proc debugging utility, has been enhanced in Solaris 11.4 to provide details about the file descriptors opened by a crashed process in addition to the files opened by a live process.

    In other words, "pfiles core" now works.

  9. Privileged Command Execution History

    A new command, admhist, was included in Solaris 11.4 to show, in human-readable form, the successful system administration commands that are likely to have modified the system state. It is similar to the shell built-in "history".


    The following command displays the system administration events that occurred on the system today.

    # admhist -d "today" -v
    2018-05-31 17:43:21.957-07:00 cwd=/ /usr/bin/sparcv9/python2.7 /usr/bin/64/python2.7 /usr/bin/pkg -R /zonepool/p6128-z1/root/ --runid=12891 remote --ctlfd=8 --progfd=13
    2018-05-31 17:43:21.959-07:00 cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
    2018-05-31 17:43:22.413-07:00 cwd=/ /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg install sandboxing
    2018-05-31 17:43:22.415-07:00 cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
    2018-05-31 18:59:52.821-07:00 cwd=/root /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg search cloudbase-init

    It is possible to narrow the results by date, time, zone and audit-tag.

    Ref: man page of admhist(8)

  10. [Developers] Process Control Library

    Solaris 11.4 includes a new process control library, libproc, which provides a high-level interface to the features of the /proc interface. The library also provides access to information such as symbol tables, which is useful when examining and controlling processes and threads.

    A controlling process using libproc can typically:

    • Grab another process by suspending its execution
    • Examine the state of that process
    • Examine or modify the address space of the grabbed process
    • Make that process execute system calls on behalf of the controlling process, and
    • Release the grabbed process to continue execution

    Ref: man page of libproc(3LIB) for an example and details.


Wednesday, April 25, 2018
Solaris 11.4: Three Zones Related Changes in 3 Minutes or Less

[ 1 ] Automatic Live Migration of Kernel Zones using sysadm Utility

Live migrate (evacuate) all kernel zones from a host system onto other systems, temporarily or permanently, with the help of the new sysadm(8) utility. In addition, it is possible to evacuate all zones, including kernel zones that are not running and native Solaris zones in the installed state.

  1. If the target host (that is, the host the zone will be migrated to) meets all evacuation requirements, set it as the destination host for one or more migrating kernel zones by setting the SMF service property evacuation/target.

    svccfg -s svc:/system/zones/zone:<migrating-zone> setprop evacuation/target=ssh://<dest-host>
  2. Put the source host in maintenance mode using the sysadm utility to prevent non-running zones from attaching, booting, or migrating in from other hosts.

    sysadm maintain <options>
  3. Migrate the zones to their destination host(s) by running sysadm's evacuate subcommand.

    sysadm evacuate <options>
  4. Complete the system maintenance work, then end maintenance mode on the source host

    sysadm maintain -e
  5. Optionally bring back evacuated zones to the source host

Please refer to Evacuating Oracle Solaris Kernel Zones for detailed steps.

[ 2 ] Moving Solaris Zones across Different Storage URIs

Starting with the release of Solaris 11.4, zoneadm's move subcommand can be used to change the zonepath without moving the Solaris zone installation. In addition, the same subcommand can be used to move a zone from one storage URI to another (for example, from local storage to shared storage).

[ 3 ] ZFS Dataset Live Zone Reconfiguration

Live Zone Reconfiguration (LZR) is the ability to make changes to a running Solaris native zone configuration permanently or temporarily. In other words, LZR avoids rebooting the target zone.

Solaris 11.3 already supports reconfiguring resources such as dedicated CPUs, capped memory and automatic networks (anets). Solaris 11.4 extends LZR support to ZFS datasets.

With the release of Solaris 11.4, privileged users can add or remove ZFS datasets dynamically to and from a Solaris native zone without having to reboot the zone.

# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   1 tstzone          running     /zonepool/tstzone           solaris    excl

    Add a ZFS filesystem to the running zone, tstzone

# zfs create zonepool/testfs

# zonecfg -z tstzone "info dataset"

# zonecfg -z tstzone "add dataset; set name=zonepool/testfs; end; verify; commit"

# zonecfg -z tstzone "info dataset"
        name: zonepool/testfs
        alias: testfs

# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Adding dataset name=zonepool/testfs
zone 'tstzone': Applying the changes

# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist

# zlogin tstzone "zpool import testfs"

# zlogin tstzone "zfs list testfs"
testfs   31K  1.63T    31K  /testfs

    Remove a ZFS filesystem from the running zone, tstzone

# zonecfg -z tstzone "remove dataset name=zonepool/testfs; verify; commit"

# zonecfg -z tstzone "info dataset"

# zlogin tstzone "zpool export testfs"

# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Removing dataset name=zonepool/testfs
zone 'tstzone': Applying the changes

# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist

# zfs destroy zonepool/testfs

A summary of LZR support for resources and properties in native and kernel zones can be found on this page.


