Mandalika's scratchpad [ Work blog @Oracle | My Music Compositions ]



Saturday, December 08, 2018
 
Blast from the Past : The Weekend Playlist #15 — Cirque Du Soleil Special

This edition is dedicated to Cirque du Soleil, the entertainment group with many successful live shows under its belt. Live original music accompanies almost all of Cirque's carefully choreographed performances.

Current playlist features music from Cirque du Soleil's various live shows — Saltimbanco (1992), Mystère (1993), Alegría (1994), Dralion (1999), Varekai (2002), Zumanity (2003), Kà (2004), Corteo (2005), Koozå (2007) and Luzia (2016).

Be aware that some of the songs are sung in an imaginary/invented language.

Enjoy!

Audio & Widget courtesy: Spotify

Old playlists:

    #1    #8   #14 (50s, 60s and 70s)    |    #2    #3    #4    #5 (80s)    |    #6    #7    #9 (90s)    |    #11    #12 (00s)    |    #13 (10s) |    #10 (Instrumental)





Friday, November 30, 2018
 
Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as a pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called through that function pointer. In other words, function pointers point to executable code rather than to data, unlike typical pointers.

eg.,
void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void).

The parentheses around the function pointer cannot be removed. Without them, the declaration becomes that of a function returning a void pointer.

The declaration by itself does not point to anything, so the function pointer has to be assigned a value: typically the address of the target function to be executed.


Assigning Function Pointers

If a function named dummy has already been defined, the following assignment makes the func_ptr variable point to the function dummy.

eg.,
void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of operator (&) is another way.

eg.,
void dummy() { ; }
func_ptr = &dummy;

The above two sample assignments highlight the fact that, similar to arrays, a function's address can be obtained either by using the address-of operator (&) or by simply specifying the function name; hence the use of the address-of operator is optional. Here's an example demonstrating that.

% cat funcaddr.c

#include <stdio.h>

void foo() { ; }

int main(void)
{
        printf("Address of function foo without using & operator = %p\n", (void *)foo);
        printf("Address of function foo         using & operator = %p\n", (void *)&foo);
        return 0;
}

% cc -o funcaddr funcaddr.c

% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo         using & operator = 10b6c

Using Function Pointers

Once a function pointer variable points to a function, we can call that function using the pointer variable as if it were the actual function name. Dereferencing the function pointer is optional, similar to the use of the & operator during assignment; the dereference happens automatically if not done explicitly.

eg.,

The following two function calls are equivalent, and exhibit the same behavior.

func_ptr();
(*func_ptr)();

Complete Example

Here is one final example for the sake of completeness. This example demonstrates the execution of a couple of arithmetic functions using function pointers. The same example also highlights the optional use of the & operator and of pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second) {
        return (first + second);
}

int multiply(int first, int second) {
        return (first * second);
}

int main(void)
{
        int (*func_ptr)(int, int);                      /* declaration */
        func_ptr = add;                                 /* assignment (auto func address) */
        printf("100+200 = %d\n", (*func_ptr)(100,200)); /* execution  (dereferencing) */
        func_ptr = &multiply;                           /* assignment (func address using &) */
        printf("100*200 = %d\n", func_ptr(100,200));    /* execution  (auto dereferencing) */
        return 0;
}

% cc -o funcptr funcptr.c

% ./funcptr
100+200 = 300
100*200 = 20000

Few Practical Uses of Function Pointers

Function pointers are convenient and useful when writing functions that sort data. The Standard C Library includes the qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer pointing to the comparison function.

Function pointers are useful for writing callback functions, where a function (executable code) is passed as an argument to another function, which is expected to execute it (call back the function passed as argument) at some point.

In both of the examples above, function pointers are used to pass functions as arguments to other functions.

In some cases function pointers can make code cleaner and more readable. For example, an array of function pointers may simplify a large switch statement.

2) Printing Unicode Characters

Here's one possible way.

The following rudimentary code sample prints random currency symbols and a name in Telugu script using both the printf and wprintf function calls. (Strictly speaking, mixing byte-oriented (printf) and wide-oriented (wprintf) output on the same stream is undefined behavior; it happens to work in this environment.)

% cat -n unicode.c
     1  #include <wchar.h>
     2  #include <locale.h>
     3  #include <stdio.h>
     4
     5  int main()
     6  {
     7          setlocale(LC_ALL,"en_US.UTF-8");
     8          wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
     9          wchar_t wide[4]={ 0x0C38, 0x0C30, 0x0C33, 0 };
    10          printf("\n%ls", wide);
    11          wprintf(L"\n%ls", wide);
    12          return 0;
    13  }

% cc -o unicode unicode.c

% ./unicode
€      ¥      £      ¢      ₣      ₤
సరళ
సరళ

Here is one website where numerical values for various Unicode characters can be found.





Saturday, September 29, 2018
 
Oracle SuperCluster: Brief Introduction to osc-interdom

Target audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some information about this obscure tool to inquisitive users/customers who might have noticed the osc-interdom service and the namesake package and wondered what they are for.

The SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides the means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (eg., a PDom on M8) and, optionally, across all servers (eg., other M8 PDoms in the cluster) on the system.

SuperCluster Virtual Assistant (SVA), ssctuner, exachk and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires the osc-interdom package from the exa-family repository to be installed and enabled on all types of domains in the SuperCluster.

In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration; it is possible to exclude some domains from the comprehensive interdom directory either during initial configuration or at a later time. Also, once the interdom directory configuration has been built, it can be refreshed or rebuilt at any time.

Since installing and configuring osc-interdom is automated and part of the SuperCluster installation and configuration processes, it is unlikely that anyone at a customer site needs to know how to perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics.

oidcli is a simple command line interface for the domain registry. The oidcli command line utility is located in /opt/oracle.supercluster/bin directory.

The oidcli utility can be used to query the interdom domain registry for data associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster, and each component is uniquely identified by a UUID.

The SuperCluster Domain Registry is stored on the Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, oidcli is expected to be run on the MCD to query the data. When running it from other domains, the -a option must be specified along with the management IP address of the master control domain.

Keep in mind that the data returned by oidcli is meant for other SuperCluster tools, which have the ability to interpret the data correctly and coherently. Humans looking at the same data may therefore need some extra effort to digest and understand it.

eg.,

# cd /opt/oracle.supercluster/bin

# ./oidcli -h
Usage: oidcli [options] dir | [options] <cmd> <uuid> [<key> ...]
  <cmd>:  invalidate|get_data|get_value
  <uuid>: component UUID, other component ID or 'all'
  <key>:  property name (e.g. 'hostname', 'control_uuid') or 'all'
  NOTE: get_value must request a single <key>

Options:
  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',<port>') for connection
  -d          Enable debugging output
  -w W        Re-try for up to W seconds for success. Use 0 for no wait.
              Default: 1801.0.

List all components (domains)

# ./oidcli -p dir
[   ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
    ['4651ac93-924e-4990-8cf9-83be556eb667', True],
 ..
    ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
    ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains

# ./oidcli -p get_data all hostname
db8c979d-4149-452f-8737-c857e0dc9eb0:
{   'hostname': {   'mtime': 1538089861, 'name': 'hostname', 'value': 'alpha'}}
3cfc9039-2157-4b62-ac69-ea3d85f2a19f:
{   'hostname': {   'mtime': 1538174309,
                    'name': 'hostname',
                    'value': 'beta'}}
...
List all available properties for all domains

# ./oidcli -p get_data all all
db8c979d-4149-452f-8737-c857e0dc9eb0:
{   'banner_name': {   'mtime': 1538195164,
                       'name': 'banner_name',
                       'value': 'SPARC M7-4'},
    'comptype': {   'mtime': 1538195164, 'name': 'comptype', 'value': 'LDom'},
    'control_uuid': {   'mtime': 1538195164,
                        'name': 'control_uuid',
                        'value': 'Unknown'},
    'guests': {   'mtime': 1538195164, 'name': 'guests', 'value': None},
    'host_domain_chassis': {   'mtime': 1538195164,
                               'name': 'host_domain_chassis',
                               'value': 'AK00251676'},
    'host_domain_name': {   'mtime': 1538284541,
                            'name': 'host_domain_name',
                            'value': 'ssccn1-io-alpha'},
 ...

Query a specific property from a specific domain

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
mgmt_ipaddr:
{   'mtime': 1538143865,
    'name': 'mgmt_ipaddr',
    'value': ['xx.xxx.xxx.xxx', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate, up-to-date data is needed, it is recommended to query the registry with the --no-cache option.

eg.,
# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
load_average:
{   'mtime': 1538285043,
    'name': 'load_average',
    'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in the examples above represents a UNIX timestamp.

Debug Mode

By default, osc-interdom service runs in non-debug mode. Running the service in debug mode enables logging more details to osc-interdom service log.

In general, if the osc-interdom service is transitioning to the maintenance state, switching to debug mode may provide a few additional clues.

To check if debug mode was enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally, check the service log for debug messages. The output of the svcs -L osc-interdom command points to the location of the osc-interdom service log.

Documentation

Similar to SuperCluster resource allocation engine, osc-resalloc, interdom framework is mostly meant for automated tools with little or no human interaction. Consequently there are no references to osc-interdom in SuperCluster Documentation Set.


Acknowledgments/Credit:

  Tim Cook





Friday, August 31, 2018
 
Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check whether a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files.

On Solaris, the -l option lists all cryptographic hash algorithms available on the system.

eg.,
% digest -l
sha1
md5
sha224
sha256
..
sha3_224
sha3_256
..

The -a option can be used to specify the hash algorithm when computing the digest.

eg.,
% digest -v -a sha1 /usr/lib/libc.so.1
sha1 (/usr/lib/libc.so.1) = 89a588f447ade9b1be55ba8b0cd2edad25513619

Multiline Shell Script Comments

The shell treats any line that starts with the '#' symbol as a comment and ignores it completely. (The line at the top that starts with #! is an exception.)

From what I understand, there is no multiline comment mechanism in shell. While the # symbol is useful for marking single-line comments, it becomes laborious and ill suited when commenting out a large number of contiguous lines.

One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a here-document code block. It may not be the most attractive solution, but it gets the job done.

The ':' (colon) built-in does nothing and returns true; the here-document that follows it is read but never executed.

eg.,
% cat -n multiblock_comment.sh
     1  #!/bin/bash
     2
     3  echo 'not a commented line'
     4  #echo 'a commented line'
     5  echo 'not a commented line either'
     6  : <<'MULTILINE-COMMENT'
     7  echo 'beginning of a multiline comment'
     8  echo 'second line of a multiline comment'
     9  echo 'last line of a multiline comment'
    10  MULTILINE-COMMENT
    11  echo 'yet another "not a commented line"'

% ./multiblock_comment.sh
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make the command line terminals look interesting. tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move cursor around the screen, get information about the status of the terminal and so on.

In addition to improving the command line experience, tput can also be used to improve the interactive experience of scripts by showing different colors and/or text effects to users.

eg.,
% tput bold <= bold text
% date
Thu Aug 30 17:02:57 PDT 2018
% tput smul <= underline text
% date
Thu Aug 30 17:03:51 PDT 2018
% tput sgr0 <= turn-off all attributes (back to normal)
% date
Thu Aug 30 17:04:47 PDT 2018

Check the man page of terminfo for a complete list of capabilities to be used with tput.

Processor Marketing Name

On systems running Solaris, the processor's marketing or brand name can be extracted with the help of the kstat utility. The cpu_info module provides information about the processor(s) on the system.

eg.,
On SPARC:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      SPARC-M8

On x86/x64:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      Intel(r) Xeon(r) CPU           L5640  @ 2.27GHz

In the above example, cpu_info is the module, 1 is the instance number, cpu_info1 is the name of the section, and brand is the statistic in focus. Note that the cpu_info module has only one section, cpu_info1, so it is fine to skip the section name portion (eg., cpu_info:1::brand).

To see the complete list of statistics offered by cpu_info module, simply run kstat cpu_info:1.

Consolidating Multiple sed Commands

The sed utility allows specifying multiple editing commands on the same command line (in other words, it is not necessary to pipe multiple sed invocations). The editing commands need to be separated with a semicolon (;).

eg.,

The following two commands are equivalent and yield the same output.

% prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g'
Memory: 65312 MB

% prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g'
Memory: 65312 MB





Tuesday, July 31, 2018
 
Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS+Repository pkg


1. Work on Directory Structure


Start with organizing the package contents (files) into the same directory structure that you want on the installed system.

In the following example, the directory is organized such that installing the package results in the software being copied to the /opt/myutils directory.

eg.,

# tree opt

opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html
    |-- mylib.py
    |-- util1.sh
    |-- util2.sh
    `-- util3.sh

Create a directory to hold the software in the desired layout. Let us call this directory "workingdir"; it will be specified in subsequent steps to generate the package manifest and finally the package itself. Move the top-level software directory into "workingdir".

# mkdir workingdir
# mv opt workingdir

# tree -fai workingdir/
workingdir
workingdir/opt
workingdir/opt/myutils
workingdir/opt/myutils/docs
workingdir/opt/myutils/docs/README.txt
workingdir/opt/myutils/docs/util_description.html
workingdir/opt/myutils/mylib.py
workingdir/opt/myutils/util1.sh
workingdir/opt/myutils/util2.sh
workingdir/opt/myutils/util3.sh

2. Generate Package Manifest


The package manifest provides metadata such as the package name, description, version, classification & category, along with the files and directories included and the dependencies, if any, that need to be installed for the target package.

The manifest for an existing package can be examined with the help of pkg contents subcommand.

pkgsend generate command generates the manifest. It takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.

# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1

# cat myutilspkg.p5m.1


3. Add Metadata to Package Manifest


Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest; however, the recommended approach is to rely on the pkgmogrify utility to make changes to an existing manifest.

Create a text file with the missing package attributes.

eg.,
# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

set name=variant.opensolaris.zone value=global action restricts the package installation to global zone. To make the package installable in both global and non-global zones, either specify set name=variant.opensolaris.zone value=global value=nonglobal action in the package manifest, or do not have any references to variant.opensolaris.zone variant at all in the manifest.

Now merge the metadata with the manifest generated in previous step.

# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2

# cat myutilspkg.p5m.2


4. Evaluate & Generate Dependencies


Generate the dependencies so they become part of the manifest. It is recommended to rely on the pkgdepend utility for this task rather than declaring depend actions manually, to minimize inaccuracies.

eg.,
# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.


5. Resolve Package Dependencies


This step might take a while to complete.

eg.,
# pkgdepend resolve -m myutilspkg.p5m.3

6. Verify the Package


By this time the package manifest should be more or less complete. Check and validate it manually or, preferably, using the pkglint utility, for consistency and any possible errors.

# pkglint myutilspkg.p5m.3.res

7. Publish the Package


For the purpose of demonstration, let's go with the simplest option to publish the package: a local file-based repository.

Create the local file based repository using pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally publish the target package with the help of pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res
pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
PUBLISHED

# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS           UPDATED
mypublisher 1        online           2018-07-04T01:41:57.414014Z

8. Validate the Package


Finally, validate that the published package has been packaged properly by test installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils

# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z





Saturday, June 30, 2018
 
Python: Exclusive File Locking on Solaris

Solaris doesn't lock open files automatically (and not just Solaris: most *nix operating systems behave this way).

In general, when a process is about to update a file, the process is responsible for checking existing locks on the target file, acquiring a lock, and releasing it after updating the file. However, given that not all processes cooperate and adhere to this (advisory) locking mechanism, such non-conforming practice may lead to problems such as inconsistent or invalid data, mainly triggered by race conditions. Serialization, where only one process is allowed to update the target file at any time, is one possible way to prevent this. It can be achieved with the help of the file locking mechanisms available on Solaris as well as on the majority of other operating systems.

On Solaris, a file can be locked for exclusive access by any process with the help of the fcntl() system call, which provides control over open files. It can be used for finer-grained control over locking; for instance, we can specify whether or not the call should block while requesting an exclusive or shared lock.

The following rudimentary Python code demonstrates how to acquire an exclusive lock on a file that makes all other processes wait to get access to the file in focus.

eg.,

% cat -n xflock.py
     1  #!/bin/python
     2  import fcntl, time
     3  f = open('somefile', 'a')
     4  print 'waiting for exclusive lock'
     5  fcntl.flock(f, fcntl.LOCK_EX)
     6  print 'acquired lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')
     7  time.sleep(10)
     8  f.close()
     9  print 'released lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')

Running the above code in two terminal windows at the same time shows the following.

Terminal 1:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:36
released lock at 2018-06-30 22:25:46

Terminal 2:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:46
released lock at 2018-06-30 22:25:56

Notice that the process running in the second terminal was blocked waiting to acquire the lock until the process running in the first terminal released its exclusive lock.

Non-Blocking Attempt

If the requirement is not to block on exclusive lock acquisition, combine the LOCK_EX (acquire exclusive lock) and LOCK_NB (do not block when locking) operations with a bitwise OR. In other words, the statement fcntl.flock(f, fcntl.LOCK_EX) becomes fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB), so the process will either get the lock or move on without blocking.

Be aware that an IOError is raised when a lock cannot be acquired in non-blocking mode. It is therefore the responsibility of the application developer to catch the exception and deal with the situation appropriately.

The behavior changes as shown below after the inclusion of fcntl.LOCK_NB in the sample code above.

Terminal 1:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:42:34
released lock at 2018-06-30 22:42:44

Terminal 2:

% ./xflock.py
waiting for exclusive lock
Traceback (most recent call last):
  File "./xflock.py", line 5, in 
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
IOError: [Errno 11] Resource temporarily unavailable





Thursday, May 31, 2018
 
Solaris 11.4: 10 Good-to-Know Features, Enhancements or Changes

  1. [Admins] Device Removal From a ZFS Storage Pool

    In addition to removing hot spares, cache, and log devices, Solaris 11.4 supports removal of top-level virtual data devices (vdev) from a zpool, with the exception of RAID-Z pools. It is also possible to cancel an in-progress remove operation.

    This enhancement will come in handy especially when dealing with overprovisioned and/or misconfigured pools.

    Ref: ZFS: Removing Devices From a Storage Pool for examples.

  2. [Developers & Admins] Bundled Software

    Bundled software packages include Python 3.5, Oracle instant client 12.2, MySQL 5.7, Cython (C-Extensions for Python), cx_Oracle Python module, Go compiler, clang (C language family frontend for LLVM) and so on.

    cx_Oracle is a Python module that enables accessing Oracle Database 12c and 11g from Python applications. The Solaris packaged version 5.2 can be used with Python 2.7 and 3.4.

    Depending on the type of Solaris installation, not every software package may get installed by default but the above mentioned packages can be installed from the package repository on demand.

    eg.,

    # pkg install pkg:/developer/golang-17
    
    # go version
    go version devel a30c3bd1a7fcc6a48acfb74936a19b4c Fri Dec 22 01:41:25 GMT 2017 solaris/sparc64
    
  3. [Security] Isolating Applications with Sandboxes

    Sandboxes are isolated environments where users can run applications protected from other processes on the system, without giving those applications full access to the rest of the system. Put another way, application sandboxing is one way to protect users, applications, and systems by limiting the privileges of an application to its intended functionality, thereby reducing the risk of system compromise.

    Sandboxing joins Logical Domains (LDoms) and Zones in extending the isolation mechanisms available on Solaris.

    Sandboxes are suitable for constraining both privileged and unprivileged applications. Temporary sandboxes can be created to execute untrusted processes. Only administrators with the Sandbox Management rights profile (privileged users) can create persistent, uniquely named sandboxes with specific security attributes.

    The unprivileged command sandbox can be used to create temporary or named sandboxes to execute applications in a restricted environment. The privileged command sandbox can be used to create and manage named sandboxes.

    To install security/sandboxing package, run:

    # pkg install sandboxing
    
    -OR-
    
    # pkg install pkg:/security/sandboxing
    

    Ref: Configuring Sandboxes for Project Isolation for details.

  4. New Way to Find SRU Level

    uname -v was enhanced to include the SRU level. Starting with the release of Solaris 11.4, uname -v reports the Solaris patch version in the format "11.<update>.<sru>.<build>.<patch>".

    # uname -v
    11.4.0.12.0
    

    The above output translates to Solaris 11 Update 4 SRU 0 Build 12 Patch 0.

  5. [Cloud] Service to Perform Initial Configuration of Guest Operating Systems

    cloudbase-init service on Solaris will help speed up the guest VM deployment in a cloud infrastructure by performing initial configuration of the guest OS. Initial configuration tasks typically include user creation, password generation, networking configuration, SSH keys and so on.

    cloudbase-init package is not installed by default on Solaris 11.4. Install the package only into VM images that will be deployed in cloud environments by running:

    # pkg install cloudbase-init
    
  6. Device Usage Information

    The release of Solaris 11.4 makes it easy to identify the consumers of busy devices, i.e., devices that are opened or held by a process or kernel module.

    Having access to device usage information helps with certain hotplug or fault management tasks. For example, a busy device cannot be hotplugged; knowing how a device is currently being used helps users resolve the related issue(s).

    On Solaris 11.4, prtconf -v shows pids of processes using different devices.

    eg.,

    # prtconf -v
     ...
        Device Minor Nodes:
            dev=(214,72)
                dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a
                    spectype=blk type=minor nodetype=ddi_block:channel
                    dev_link=/dev/dsk/c2t0d0s0
                dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a,raw
                    spectype=chr type=minor nodetype=ddi_block:channel
                    dev_link=/dev/rdsk/c2t0d0s0
                Device Minor Opened By:
                    proc='fmd' pid=1516
                        cmd='/usr/lib/fm/fmd/fmd'
                        user='root[0]'
     ...
    
  7. [Developers] Support for C11 (C standard revision)

    Solaris 11.4 includes support for the C11 programming language standard: ISO/IEC 9899:2011 Information technology - Programming languages - C.

    Note that the C11 standard is not yet part of the Single UNIX Specification. Solaris 11.4 supports C11 in addition to C99, giving customers access to C11 ahead of its inclusion in a future UNIX specification. That means developers can write C programs using the newest available C language standard on Solaris 11.4 (and later).

  8. pfiles on a coredump

    pfiles, a /proc debugging utility, has been enhanced in Solaris 11.4 to provide details about the file descriptors opened by a crashed process in addition to the files opened by a live process.

    In other words, "pfiles core" now works.

  9. Privileged Command Execution History

    A new command, admhist, was added in Solaris 11.4 to display, in human-readable form, successful system administration commands, that is, those likely to have modified the system state. It is similar to the shell built-in "history".

    eg.,

    The following command displays the system administration events that occurred on the system today.

    # admhist -d "today" -v
    ...
    2018-05-31 17:43:21.957-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/python2.7 /usr/bin/64/python2.7 /usr/bin/pkg -R /zonepool/p6128-z1/root/ --runid=12891 remote --ctlfd=8 --progfd=13
    2018-05-31 17:43:21.959-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
    2018-05-31 17:43:22.413-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg install sandboxing
    2018-05-31 17:43:22.415-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
    2018-05-31 18:59:52.821-07:00 root@pitcher.dom.com cwd=/root /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg search cloudbase-init
    ..
    

    It is possible to narrow the results by date, time, zone and audit-tag.

    Ref: man page of admhist(8)

  10. [Developers] Process Control Library

    Solaris 11.4 includes a new process control library, libproc, which provides a high-level interface to the features of /proc. The library also provides access to information such as symbol tables, which is useful when examining and controlling processes and threads.

    A controlling process using libproc can typically:

    • Grab another process by suspending its execution
    • Examine the state of that process
    • Examine or modify the address space of the grabbed process
    • Make that process execute system calls on behalf of the controlling process, and
    • Release the grabbed process to continue execution

    Ref: man page of libproc(3LIB) for an example and details.
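
    Several /proc tools that ship with Solaris (pstack, pmap, pargs, pfiles) are themselves built on this grab/examine/release pattern. For a quick feel (output varies by process):

```shell
pargs $$      # command-line arguments of the current shell
pmap -x $$    # address-space mappings of the grabbed process
pstack $$     # thread stack traces; the process is released afterwards
```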





Wednesday, April 25, 2018
 
Solaris 11.4: Three Zones Related Changes in 3 Minutes or Less

[ 1 ] Automatic Live Migration of Kernel Zones using sysadm Utility

Live migrate (evacuate) all kernel zones from a host system onto other systems, temporarily or permanently, with the help of the new sysadm(8) utility. In addition, it is possible to evacuate all zones, including kernel zones that are not running and native Solaris zones in the installed state.

  1. If the target host (that is, the host the zone will be migrated to) meets all evacuation requirements, set it as the destination host for one or more migrating kernel zones by setting the SMF service property evacuation/target.

    svccfg -s svc:/system/zones/zone:<migrating-zone> setprop evacuation/target=ssh://<dest-host>
    
  2. Put the source host in maintenance mode using the sysadm utility to prevent non-running zones from attaching or booting, and to block zones migrating in from other hosts.

    sysadm maintain <options>
    
  3. Migrate the zones to their destination host(s) by running sysadm's evacuate subcommand.

    sysadm evacuate <options>
    
  4. Complete the system maintenance work and end maintenance mode on the source host.

    sysadm maintain -e
    
  5. Optionally, bring the evacuated zones back to the source host.

Please refer to Evacuating Oracle Solaris Kernel Zones for detailed steps.
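
Put together, an end-to-end evacuation might look like the sketch below. The zone and host names are made up and the exact option letters are assumptions; check them against sysadm(8):

```shell
# 1. point the migrating kernel zone at its destination host
svccfg -s svc:/system/zones/zone:kz1 setprop evacuation/target=ssh://dest-host
# 2. enter maintenance mode on the source host
sysadm maintain -s
# 3. evacuate the zones, then carry out the maintenance work
sysadm evacuate
# 4. end maintenance mode when done
sysadm maintain -e
```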

[ 2 ] Moving Solaris Zones across Different Storage URIs

Starting with the Solaris 11.4 release, zoneadm's move subcommand can be used to change the zonepath without moving the Solaris zone installation. In addition, the same subcommand can move a zone from one storage location to another, across different storage URIs.
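
For example, relocating a zone's zonepath is a single move subcommand. The sketch below is illustrative (pool and zone names assumed; see zoneadm(8) for the supported storage URI forms):

```shell
# move the installed zone's zonepath to a new location
zoneadm -z tstzone move /newpool/tstzone
```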

[ 3 ] ZFS Dataset Live Zone Reconfiguration

Live Zone Reconfiguration (LZR) is the ability to make changes to a running Solaris native zone's configuration, permanently or temporarily, without rebooting the target zone.

Solaris 11.3 already supports reconfiguring resources such as dedicated CPUs, capped memory and automatic networks (anets). Solaris 11.4 extends LZR support to ZFS datasets.

With the release of Solaris 11.4, privileged users can add or remove ZFS datasets dynamically to and from a Solaris native zone without rebooting the zone.

eg.,
# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   1 tstzone          running     /zonepool/tstzone           solaris    excl

    Add a ZFS filesystem to the running zone, tstzone

# zfs create zonepool/testfs

# zonecfg -z tstzone "info dataset"

# zonecfg -z tstzone "add dataset; set name=zonepool/testfs; end; verify; commit"

# zonecfg -z tstzone "info dataset"
dataset:
        name: zonepool/testfs
        alias: testfs

# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Adding dataset name=zonepool/testfs
zone 'tstzone': Applying the changes

# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist

# zlogin tstzone "zpool import testfs"

# zlogin tstzone "zfs list testfs"
NAME    USED  AVAIL  REFER  MOUNTPOINT
testfs   31K  1.63T    31K  /testfs

    Remove a ZFS filesystem from the running zone, tstzone

# zonecfg -z tstzone "remove dataset name=zonepool/testfs; verify; commit"

# zonecfg -z tstzone "info dataset"

# zlogin tstzone "zpool export testfs"

# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Removing dataset name=zonepool/testfs
zone 'tstzone': Applying the changes

# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist

# zfs destroy zonepool/testfs
#

A summary of LZR support for resources and properties in native and kernel zones can be found in this page.





Sunday, March 25, 2018
 
Solaris 11.4: Brief Introduction to Solaris Analytics

This is something I can take some credit for even though I haven't contributed in any significant way other than filing a timely enhancement request. :-)

Overview

At a high level: Solaris has quite a few observability and diagnostic tools and utilities such as vmstat, mpstat, iostat, prstat, pgstat, lockstat and dtrace to observe and diagnose CPU, core, memory, disk I/O and network utilization, locks, busy processes and threads, interrupts and so on. However, apart from power users, the majority of normal users and application & system administrators are not very familiar with those tools, or savvy enough to read man pages and documentation to figure out the best way to extract the diagnostic or performance information they want or need. (This is likely the case across all operating environments, not just Solaris.)

Solaris 11.4 attempts to improve the usability of these tools and utilities by providing an interactive browser user interface (BUI) called "Oracle Solaris Analytics". Solaris Analytics gathers event information and data samples from a variety of system and application sources. A consolidated view of the statistics, faults and administrative change requests is presented in a simple, easy-to-digest manner, and users are guided through health and performance analysis to diagnose problems.

Ultimately, OS users and application and system administrators benefit from the visual representation of performance and diagnostic data and system events. For instance, with the help of Solaris Analytics, users can view historical information about system performance, contrast it with current performance, and correlate statistics and events from multiple sources.

Check Using Oracle Solaris 11.4 Analytics for more information and details.

Accessing the Analytics BUI

Analytics services are enabled by default, and the Solaris Web UI can be accessed on ports 443 and 6787.

Access Analytics BUI at https://<s11.4host>:<port>/solaris/ where "s11.4host" is the hostname of the system running Solaris 11.4 and port=[443|6787].
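
A quick way to confirm from a client that the BUI endpoint is reachable is a HEAD request; the hostname below is a placeholder, and -k skips certificate verification since the server may present a self-signed certificate:

```shell
curl -kIs https://s11.4host:6787/solaris/ | head -1
```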

Log in as any Solaris user that is configured to log into "s11.4host".

Those who are familiar with the Oracle ZFS Storage Appliance BUI may find some similarities between these two browser interfaces.

Troubleshooting: Unable to access Analytics BUI?

Make sure that the Web UI SMF services are online (for example, svc:/system/webui/server), and that the chosen port (443 or 6787) is not blocked between the client and the server.

Screenshots

Note that the system was almost idle, so the screenshots do not show much interesting data. Click on each image to see it in its original size.

Default Dashboard Home

Solaris Analytics BUI

Available Views

Solaris Analytics BUI

Sample View - SMF Services

Solaris Analytics BUI





Tuesday, February 27, 2018
 
Steps to Upgrade from Solaris 11.3 to 11.4 Beta

Recently I updated one of our lab systems running Solaris 11.3 SRU 16 to Solaris 11.4 beta. I just wanted to share my experience, along with the steps I ran and the stdout/stderr messages I captured. I followed the Updating Your Operating System to Oracle Solaris 11.4 document in the Solaris 11.4 documentation library, and the instructions worked without a hitch.

My target system has one non-global zone running, and it took a little over an hour to complete the upgrade from start to finish. I recommend setting aside at least a couple of hours for this upgrade, as various factors, such as the current Solaris 11.3 SRU, the number of non-global zones running, and how the Solaris 11.4 Beta packages are accessed, have a direct impact on the overall time the upgrade takes.

Step 1: Prepare the System for Upgrade

Oracle recommends that the target system be at least at the Solaris 11.3 SRU 23 level for the upgrade to succeed, so the first step is to make sure the system is running Solaris 11.3 SRU 23 or later.

eg.,
# pkg info entire | grep -i branch
           Branch: 0.175.3.16.0.3.0

The above output indicates that my lab system is running Solaris 11.3 SRU 16, so I have no choice but to move to a later 11.3 SRU first. Those with systems already at 11.3 SRU 23 or later can skip to Step 2: Get access to Solaris 11.4 Beta packages.
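
The SRU number is simply the fourth dotted field of the Branch string (0.175.3.16.0.3.0 is SRU 16). A one-liner can pull it out; the branch value below is hard-coded from the output above, and on a live system you would substitute the pkg info output:

```shell
# Branch format: 0.175.<update>.<SRU>..., e.g. 0.175.3.16.0.3.0 -> SRU 16
branch="0.175.3.16.0.3.0"
echo "Solaris 11.3 SRU $(echo "$branch" | cut -d. -f4)"
# prints: Solaris 11.3 SRU 16
```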

The following listing indicates that the existing/configured publishers allow me to upgrade 11.3 to the latest SRU (which is 29).

# pkg list -af entire@0.5.11-0.175.3
NAME (PUBLISHER)             VERSION                    IFO
entire (solaris)             0.5.11-0.175.3.29.0.5.0    ---
entire (solaris)             0.5.11-0.175.3.28.0.4.0    ---
...

An attempt to update my system to the latest SRU failed, however.

# pkg update --be-name s11.3.sru29 pkg:/entire@0.5.11-0.175.3.29.0.5.0
            Packages to remove:  37
           Packages to install:  12
            Packages to update: 510
       Create boot environment: Yes
Create backup boot environment:  No

Planning linked: 0/1 done; 1 working: zone:some-ngz
Linked progress: /pkg: update failed (linked image exception(s)):

A 'sync-linked' operation failed for child 'zone:some-ngz' with an 
unexpected return value of 1 and generated the following output:

pkg sync-linked: One or more client key and certificate files have 
expired. Please update the configuration for the publishers or origins 
listed below:

Publisher: solarisstudio
  Origin URI:
    https://pkg.oracle.com/solarisstudio/release/
    ...

The root cause of this failure is that a separate repository for Solaris Studio had been configured in the non-global zone. I fixed it by removing the associated publisher from the non-global zone.

root@some-ngz:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris        (syspub)     origin   online T 
solarisstudio  (syspub)     origin   online T 
solarisstudio  (syspub)     origin   online F https://pkg.oracle.com/solarisstudio/release/

root@some-ngz:~# pkg set-publisher -G https://pkg.oracle.com/solarisstudio/release/ solarisstudio

The SRU update ran without a problem afterward.

# pkg update --be-name s11.3.sru29 pkg:/entire@0.5.11-0.175.3.29.0.5.0
            Packages to remove:  37
           Packages to install:  12
            Packages to update: 510
       Create boot environment: Yes
Create backup boot environment:  No

Planning linked: 0/1 done; 1 working: zone:some-ngz
Linked image 'zone:some-ngz' output:
|  Packages to remove:  25
| Packages to install:  12
|  Packages to update: 237
|  Services to change:   9
`
...
Updating image state                            Done
Creating fast lookup database                   Done
Executing linked: 0/1 done; 1 working: zone:some-ngz
Executing linked: 1/1 done
Updating package cache                           3/3

A clone of s11.3.sru.16.3.0 exists and has been updated and activated.
On the next boot the Boot Environment s11.3.sru29 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

Now that the system has been upgraded to 11.3 SRU 29, it is time to boot the new boot environment (BE) in preparation for the subsequent upgrade to 11.4 beta.

# beadm list | grep sru29
s11.3.sru29        R     -          98.05G  static 2018-02-26 17:13

# shutdown -y -g 30 -i6

# pkg info entire | grep -i branch
           Branch: 0.175.3.29.0.5.0

Step 2: Get access to Solaris 11.4 Beta packages

If the target system has connectivity to the public pkg.oracle.com repository, probably the simplest way is to use the public package repository that Oracle set up to support the 11.4 beta. If the preference is a local repository for any reason (no external connectivity, or to control what gets installed, for example), download the Solaris 11.4 Beta package repository file and follow the instructions in the README to set up the beta repository locally.

The rest of this section shows the steps involved in getting access to the public repository.
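
In essence, the steps point the solaris publisher at the beta repository. An illustrative sketch follows; the origin URL and any key/certificate requirements are assumptions, so follow the beta program's README for the real values:

```shell
# replace existing origins for the solaris publisher with the beta repo
pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/beta/ solaris
pkg publisher solaris     # verify the new origin took effect
```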

Step 3: Perform the Upgrade

Optionally, perform a dry-run update to check for any issues.

pkg update -nv

Finally, perform the actual update to Solaris 11.4. This is the most time-consuming step of the whole exercise.

# pkg update --accept --be-name 11.4.0 --ignore-missing \
  --reject system/input-method/ibus/anthy \
  --reject system/input-method/ibus/pinyin \
  --reject system/input-method/ibus/sunpinyin \
  --reject system/input-method/library/m17n/contrib entire@latest
            ...
            Packages to remove: 190
           Packages to install: 304
            Packages to update: 716
           Mediators to change:   8
       Create boot environment: Yes
Create backup boot environment:  No
           ...

Reboot the system to boot into the new 11.4 boot environment and verify the OS version (the first uname below shows the pre-reboot 11.3 version).

# uname -v
11.3

# beadm list | grep R
11.4.0             R     -          113.20G static 2018-02-26 20:10

# shutdown -y -g 30 -i6

# uname -v
11.4.0.12.0

# pkg info entire | grep Version
          Version: 11.4 (Oracle Solaris 11.4.0.0.0.12.1)






2004-2018 
