Mandalika's scratchpad [ Work blog @Oracle | My Music Compositions ]



Saturday, June 29, 2019
 
Parsing JSON on Command Line or in a Shell Script

A couple of options:

1. If you do not have the flexibility to install new utilities, or simply prefer working with existing tools, one option to consider is to make use of Python's json module.

One or more statements can be executed as a unit with the help of the Python interpreter's -c option.

eg.,

The following example fetches Coordinated Universal Time with the help of an API published on the web, and parses the returned JSON with the help of Python's json module to extract the current timestamp.

$ curl -s -X GET  -H "Content-Type: application/json" http://worldclockapi.com/api/json/utc/now | json_reformat
{
    "$id": "1",
    "currentDateTime": "2019-05-21T00:00Z",
    "utcOffset": "00:00:00",
    "isDayLightSavingsTime": false,
    "dayOfTheWeek": "Tuesday",
    "timeZoneName": "UTC",
    "currentFileTime": 132028704477448626,
    "ordinalDate": "2019-141",
    "serviceResponse": null
}

% UTC_DATA=$(curl -s -X GET  -H "Content-Type: application/json" http://worldclockapi.com/api/json/utc/now)
% CURTIMESTAMP=$(python -c "import sys, json; props = json.loads('$UTC_DATA'); print props['currentDateTime']")

% echo $CURTIMESTAMP
2019-05-21T00:01Z

Here's another example that extracts the values of more than one attribute in a shell script.

UTC_ATTR=$(python -c "import sys, json; props = json.loads('$UTC_DATA'); print 'TIMESTAMP={0} TZ={1} DAY={2} DayLightSavings={3}' \
   .format(props['currentDateTime'], props['timeZoneName'], props['dayOfTheWeek'], props['isDayLightSavingsTime'])")

UTC_ATTR_ARR=($UTC_ATTR)

$ for ATTR in "${UTC_ATTR_ARR[@]}"
> do
>  echo $ATTR
> done
TIMESTAMP=2019-05-21T00:01Z
TZ=UTC
DAY=Tuesday
DayLightSavings=False
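
If you'd rather avoid interpolating $UTC_DATA into a shell one-liner (which breaks if the JSON ever contains single quotes), the same extraction can be sketched in pure Python. The sample document below simply mirrors the API response shown above:

```python
import json

# Sample document mirroring the worldclockapi.com response shown above
UTC_DATA = '''{
    "$id": "1",
    "currentDateTime": "2019-05-21T00:00Z",
    "utcOffset": "00:00:00",
    "isDayLightSavingsTime": false,
    "dayOfTheWeek": "Tuesday",
    "timeZoneName": "UTC",
    "currentFileTime": 132028704477448626,
    "ordinalDate": "2019-141",
    "serviceResponse": null
}'''

props = json.loads(UTC_DATA)

# Extract multiple attributes in one pass
attrs = 'TIMESTAMP={0} TZ={1} DAY={2} DayLightSavings={3}'.format(
    props['currentDateTime'], props['timeZoneName'],
    props['dayOfTheWeek'], props['isDayLightSavingsTime'])

print(attrs)
```

A shell-friendly variant of the same idea is to pipe the curl output into python and read it with json.load(sys.stdin), which sidesteps shell quoting altogether.
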
To be continued ..





Saturday, May 25, 2019
 
Blast from the Past : The Weekend Playlist #16

Audio & Widget courtesy: Spotify

Old playlists:

    #1    #8   #14 (50s, 60s and 70s)    |    #2    #3    #4    #5 (80s)    |    #6    #7    #9 (90s)    |    #11    #12 (00s)    |    #13 (10s) |    #10 (Instrumental) |    #15 (Cirque Du Soleil)





Thursday, February 28, 2019
 
Perl: Enabling Features

By default, new features are not enabled in Perl, mainly to retain backward compatibility. Starting with release 5.10, newer features have to be enabled explicitly before they can be used. Here are a few ways to enable Perl features.

  1. Enable new features with the help of -E switch.

    • Introduced in v5.10, -E is similar to the -e switch except that -E also enables all optional features. In other words, the -E switch on the Perl command line enables the feature bundle for that version of Perl in the main compilation unit. (See #4 below for enabling feature bundles explicitly.)

    eg.,

    % perl -e 'print $^V;'
    v5.16.3
    
    % perl -e "print q(Hello)"
    Hello% ⏎
    % perl -E "print q(Hello)"
    Hello% ⏎
    % perl -e "say q(Hello)"
    syntax error at -e line 1, near "say q(Hello)"
    Execution of -e aborted due to compilation errors.
    % perl -E "say q(Hello)"
    Hello
    %
    
  2. Enable one or more features selectively with the help of use feature pragma.

    • The use feature pragma minimizes the risk of breaking existing programs, as the specified feature will only be enabled within the scope of the pragma.

    eg.,

    % cat -n enable_say.pl
         1  #!/bin/perl
         2  say("Hello")
    
    % ./enable_say.pl
    Undefined subroutine &main::say called at ./enable_say.pl line 2.
    
    % cat -n enable_say.pl
         1  #!/bin/perl
         2  use feature say;
         3  say("Hello")
    
    % ./enable_say.pl
    Hello
    

    To enable multiple features, separate each one with spaces. eg., use feature qw(say state switch unicode_strings).

  3. Enable all features available in the requested version with the help of use <VERSION> directive.

    • use <VERSION> disables any features not in the requested version's feature bundle. (See #4 below for enabling feature bundles explicitly.) This is also useful to enforce a minimum Perl version before using library modules that won't work with older versions of Perl.

    eg.,

    % cat -n enable_say.pl
         1  #!/bin/perl
         2  say("Hello")
    
    % ./enable_say.pl
    Undefined subroutine &main::say called at ./enable_say.pl line 2.
    
    % cat -n enable_say.pl
         1  #!/bin/perl
         2  use v5.10;
         3  say("Hello")
    
    % ./enable_say.pl
    Hello
    
  4. Similar to the use <VERSION> directive, feature bundles help load multiple features together. Any feature bundle can be enabled with the help of the use feature ":<FEATURE_BUNDLE>" directive.

    The colon that prefixes the feature bundle differentiates the bundle from an actual feature.

    eg.,

    To enable say, state, switch, unicode_strings, unicode_eval, evalbytes, current_sub and fc features, enable feature bundle 5.16 as shown below.

    #!/bin/perl
    ..
    use feature ":5.16";
    ..
    

    Perl features available in various feature bundles are listed here.

Credit: various sources





Sunday, January 27, 2019
 
Python Object Persistence Module, pickle — Quick Notes

Pickling or serialization is the process of converting a Python object to a byte stream; and Unpickling or deserialization is the process of re-creating the original in-memory Python object (not necessarily at the same memory address).

Python's pickle module has the necessary methods to pickle and unpickle Python object hierarchies.

pickle module:

It is possible to pickle a variety of data types including built-in types — numeric types (integer, float, complex numbers), sequence types (lists, tuples), text sequence type (strings), binary sequence types (bytes, bytearray), set types (set), mapping types (dictionary), classes and built-in functions defined at the top level of a module.

Any attempt to pickle an unpicklable object raises a PicklingError exception.

A couple of gotchas: the pickle data format is Python-specific, so pickled objects cannot be exchanged with programs written in other languages; and unpickling data from an untrusted source is a security risk, since unpickling can execute arbitrary code.

eg.,

A trivial example demonstrating the calls to pickle (save data to a binary file) and unpickle (load data from the binary file) a Python data structure.

#!/usr/bin/python

import pickle

EMP = {}
EMP['name'] = 'Gary'
EMP['id'] = 12345

# pickle
with open('employee.db', 'wb') as f:
 pickle.dump(EMP, f, pickle.HIGHEST_PROTOCOL)

print '  Pickled data, EMP     ', EMP

# unpickle
with open('employee.db', 'rb') as f:
 EMP_REC = pickle.load(f)

print 'Unpickled data, EMP_REC ', EMP_REC, '\n'

print '(EMP_REC is EMP)? : ', (EMP_REC is EMP)
print '(EMP_REC == EMP)? : ', (EMP_REC == EMP)

Running the above code shows the following on stdout.

  Pickled data, EMP      {'name': 'Gary', 'id': 12345}
Unpickled data, EMP_REC  {'name': 'Gary', 'id': 12345} 

(EMP_REC is EMP)? :  False
(EMP_REC == EMP)? :  True

The dump() method takes a serializable Python object as the first argument, and writes the pickled representation of the object (serialized object) to a file. The second argument is the file handle pointing to an open file. The rest of the arguments are optional.

The load() method reads a pickled object representation (serialized data) from a file and returns the reconstructed object. The protocol version is detected automatically, so it is not necessary to specify the protocol version during the unpickling process.

In-Memory Pickling/Unpickling Operations

If persistence is not a requirement, the dumps() and loads() methods in the pickle module can be used to serialize (pickle) and deserialize (unpickle) a Python object in memory. This is useful when sending Python objects over a network between compatible applications.

eg.,
#!/usr/bin/python

import pickle

EMP = {}
EMP['name'] = 'Gary'
EMP['id'] = 12345

# in-memory pickling
x = pickle.dumps(EMP, pickle.HIGHEST_PROTOCOL)

print '  Pickled data, EMP     ', EMP

# in-memory unpickling
EMP_REC = pickle.loads(x)

print 'Unpickled data, EMP_REC ', EMP_REC, '\n'

print '(EMP_REC is EMP)? : ', (EMP_REC is EMP)
print '(EMP_REC == EMP)? : ', (EMP_REC == EMP)

Running the above code shows output identical to that produced by the previous code listing, except that no file is involved this time.

  Pickled data, EMP      {'name': 'Gary', 'id': 12345}
Unpickled data, EMP_REC  {'name': 'Gary', 'id': 12345} 

(EMP_REC is EMP)? :  False
(EMP_REC == EMP)? :  True
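
dumps() and loads() work on class instances too, as long as the class is defined at the top level of a module. A minimal sketch (the Employee class is made up for illustration):

```python
import pickle

# A top-level class; instances of such classes are picklable
class Employee(object):
    def __init__(self, name, emp_id):
        self.name = name
        self.emp_id = emp_id

emp = Employee('Gary', 12345)

# Round-trip the instance through an in-memory pickle
emp_copy = pickle.loads(pickle.dumps(emp, pickle.HIGHEST_PROTOCOL))

print(emp_copy.name, emp_copy.emp_id)
```

Keep in mind that pickle stores only the instance attributes; the class definition itself must be importable at unpickling time.
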

Exceptions

As mentioned earlier, any attempt to pickle or unpickle objects that are not appropriate for serialization fails with an exception. Therefore, it is a good idea to safeguard the code with try-except blocks to handle unexpected failures.

Here is another trivial example demonstrating a pickling exception.

#!/usr/bin/python

import sys
import pickle

try:

 f = open('dummy.txt', 'a')
 x = pickle.dumps(f)
 print 'Pickled file handle'

except Exception, e:
 print 'Caught ', e.__class__.__name__, '-',  str(e)

Running the above code throws a TypeError as shown below.

Caught  TypeError - can't pickle file objects

(Credit: Various Sources including Python Documentation)





Sunday, December 30, 2018
 
Python Lambda Functions - Quick Notes

Lambda functions are small anonymous functions restricted to a single expression; the expression is evaluated and its result returned when the function is called.

Syntax:

 lambda argument(s): expression

eg.,

A trivial example.

def areaofcircle(radius):
 return math.pi * radius * radius

can be written as:

circlearea = lambda radius : math.pi * radius * radius

In this example, the lambda function accepts a lone argument, radius; the function evaluates the lambda expression (π * radius * radius) to calculate the area of a circle and returns the result to the caller.

In the case of the lambda function, the identifier "circlearea" is assigned the function object the lambda expression creates, so circlearea can be called like any normal function. eg., circlearea(5)

Another trivial example that converts the first character of each word in a list to uppercase.

>>> fullname = ["john doe", "richard roe", "janie q"]
>>> fullname
['john doe', 'richard roe', 'janie q']
>>> map(lambda name : name.title(), fullname)
['John Doe', 'Richard Roe', 'Janie Q']
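
Lambda functions are also handy as the key argument of sorting routines. A small sketch that sorts the same list by last name:

```python
fullname = ["john doe", "richard roe", "janie q"]

# Sort by last name using a lambda as the sort key
by_last = sorted(fullname, key=lambda name: name.split()[-1])

print(by_last)
```
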





Saturday, December 08, 2018
 
Blast from the Past : The Weekend Playlist #15 — Cirque Du Soleil Special

This edition is dedicated to Cirque du Soleil, the entertainment group with many successful live shows under their belt. Live original music accompanies almost all of Cirque's carefully choreographed live performances.

Current playlist features music from Cirque du Soleil's various live shows — Saltimbanco (1992), Mystère (1993), Alegría (1994), Dralion (1999), Varekai (2002), Zumanity (2003), Kà (2004), Corteo (2005), Koozå (2007) and Luzia (2016).

Be aware that some of the songs are in an imaginary/invented language.

Enjoy!

Audio & Widget courtesy: Spotify

Old playlists:

    #1    #8   #14 (50s, 60s and 70s)    |    #2    #3    #4    #5 (80s)    |    #6    #7    #9 (90s)    |    #11    #12 (00s)    |    #13 (10s) |    #10 (Instrumental)





Friday, November 30, 2018
 
Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called using that function pointer. In other words, function pointers point to the executable code rather than data like typical pointers.

eg.,
void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void).

The parentheses around the function pointer cannot be removed. Doing so makes the declaration a function that returns a void pointer.

The declaration itself won't point to anything, so a value has to be assigned to the function pointer, which is typically the address of the target function to be executed.


Assigning Function Pointers

If a function by the name dummy was already defined, the following assignment makes the func_ptr variable point to the function dummy.

eg.,
void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of / address operator (&) is another way.

eg.,
void dummy() { ; }
func_ptr = &dummy;

The above two sample assignments highlight the fact that, as with arrays, a function's address can be obtained either by using the address operator (&) or by simply specifying the function name - hence the use of the address operator is optional. Here's an example proving that.

% cat funcaddr.c

#include <stdio.h>

void foo() { ; }

void main()
{
        printf("Address of function foo without using & operator = %p\n", foo);
        printf("Address of function foo         using & operator = %p\n", &foo);
}

% cc -o funcaddr funcaddr.c

% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo         using & operator = 10b6c

Using Function Pointers

Once we have a function pointer variable pointing to a function, we can call the function that it points to using that function pointer variable as if it is the actual function name. Dereferencing the function pointer is optional similar to using & operator during function pointer assignment. The dereferencing happens automatically if not done explicitly.

eg.,

The following two function calls are equivalent, and exhibit the same behavior.

func_ptr();
(*func_ptr)();

Complete Example

Here is one final example for the sake of completeness. This example demonstrates the execution of a couple of arithmetic functions using function pointers. The same example also highlights the optional use of the & operator and pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second) {
        return (first + second);
}

int multiply(int first, int second) {
        return (first * second);
}

void main()
{
        int (*func_ptr)(int, int);                      /* declaration */
        func_ptr = add;                                 /* assignment (auto func address) */
        printf("100+200 = %d\n", (*func_ptr)(100,200)); /* execution  (dereferencing) */
        func_ptr = &multiply;                           /* assignment (func address using &) */
        printf("100*200 = %d\n", func_ptr(100,200));    /* execution  (auto dereferencing) */
}

% cc -o funcptr funcptr.c

% ./funcptr
100+200 = 300
100*200 = 20000

Few Practical Uses of Function Pointers

Function pointers are convenient and useful when writing functions that sort data. The Standard C Library includes the qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer pointing to the comparison function.

Function pointers are useful to write callback functions where a function (executable code) is passed as an argument to another function that is expected to execute the argument (call back the function sent as argument) at some point.

In both examples above, function pointers are used to pass functions as arguments to other functions.

In some cases function pointers may make the code cleaner and more readable. For example, an array of function pointers may simplify a large switch statement.
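
For comparison only: in languages where functions are first-class values, the same table-driven dispatch needs no pointer syntax at all. A Python sketch mirroring the add/multiply example above, with a dictionary of functions standing in for the array of function pointers:

```python
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

# Dispatch table: operation name -> function, replacing a switch statement
dispatch = {'add': add, 'multiply': multiply}

print(dispatch['add'](100, 200))       # same result as the C example
print(dispatch['multiply'](100, 200))
```
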

2) Printing Unicode Characters

Here's one possible way.

The following rudimentary code sample prints random currency symbols and a name in Telugu script using both printf and wprintf function calls.

% cat -n unicode.c
     1  #include <wchar.h>
     2  #include <locale.h>
     3  #include <stdio.h>
     4
     5  int main()
     6  {
     7          setlocale(LC_ALL,"en_US.UTF-8");
     8          wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
     9          wchar_t wide[4]={ 0x0C38, 0x0C30, 0x0C33, 0 };
    10          printf("\n%ls", wide);
    11          wprintf(L"\n%ls", wide);
    12          return 0;
    13  }

% cc -o unicode unicode.c

% ./unicode
€      ¥      £      ¢      ₣      ₤
సరళ
సరళ

Here is one website where numerical values for various Unicode characters can be found.





Saturday, September 29, 2018
 
Oracle SuperCluster: Brief Introduction to osc-interdom

Target audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some information about this obscure tool to inquisitive users/customers who might have noticed the osc-interdom service and the namesake package and wondered what it is for.

SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides the means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (eg., a PDom on M8) and, optionally, across all servers (eg., other M8 PDoms in the cluster) on the system.

SuperCluster Virtual Assistant (SVA), ssctuner, exachk and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires the osc-interdom package from the exa-family repository to be installed and enabled on all types of domains in the SuperCluster.

In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration. It is possible to exclude some domains from the comprehensive interdom directory either during initial configuration or at a later time. Also, once the interdom directory configuration has been built, it can be refreshed or rebuilt at any time.

Since installing and configuring osc-interdom is automated and part of the SuperCluster installation and configuration processes, it is unlikely that anyone at the customer site needs to know about or perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics.

oidcli is a simple command line interface for the domain registry. The oidcli command line utility is located in /opt/oracle.supercluster/bin directory.

The oidcli utility can be used to query the interdom domain registry for data associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster; and each component is uniquely identified by a UUID.

The SuperCluster Domain Registry is stored on the Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, it is expected that oidcli is run on the MCD to query the data. When running from other domains, the -a option must be specified along with the management IP address of the master control domain.

Keep in mind that the data returned by oidcli is meant for other SuperCluster tools that have the ability to interpret the data correctly and coherently. Therefore, humans looking at the same data may need some extra effort to digest and understand it.

eg.,

# cd /opt/oracle.supercluster/bin

# ./oidcli -h
Usage: oidcli [options] dir |  [options]   [...]
           invalidate|get_data|get_value
            , other component ID or 'all'
          (e.g. 'hostname', 'control_uuid') or 'all'
  NOTE: get_value must request single 

Options:
  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',') for connection
  -d          Enable debugging output
  -w W        Re-try for up to  seconds for success. Use 0 for no wait.
              Default: 1801.0.

List all components (domains)

# ./oidcli -p dir
[   ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
    ['4651ac93-924e-4990-8cf9-83be556eb667', True],
 ..
    ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
    ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains

# ./oidcli -p get_data all hostname
db8c979d-4149-452f-8737-c857e0dc9eb0:
{   'hostname': {   'mtime': 1538089861, 'name': 'hostname', 'value': 'alpha'}}
3cfc9039-2157-4b62-ac69-ea3d85f2a19f:
{   'hostname': {   'mtime': 1538174309,
                    'name': 'hostname',
                    'value': 'beta'}}
...
List all available properties for all domains

# ./oidcli -p get_data all all
db8c979d-4149-452f-8737-c857e0dc9eb0:
{   'banner_name': {   'mtime': 1538195164,
                       'name': 'banner_name',
                       'value': 'SPARC M7-4'},
    'comptype': {   'mtime': 1538195164, 'name': 'comptype', 'value': 'LDom'},
    'control_uuid': {   'mtime': 1538195164,
                        'name': 'control_uuid',
                        'value': 'Unknown'},
    'guests': {   'mtime': 1538195164, 'name': 'guests', 'value': None},
    'host_domain_chassis': {   'mtime': 1538195164,
                               'name': 'host_domain_chassis',
                               'value': 'AK00251676'},
    'host_domain_name': {   'mtime': 1538284541,
                            'name': 'host_domain_name',
                            'value': 'ssccn1-io-alpha'},
 ...

Query a specific property from a specific domain

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
mgmt_ipaddr:
{   'mtime': 1538143865,
    'name': 'mgmt_ipaddr',
    'value': ['xx.xxx.xxx.xxx', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate and up-to-date data is needed, it is recommended to query the registry with the --no-cache option.

eg.,
# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
load_average:
{   'mtime': 1538285043,
    'name': 'load_average',
    'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in all examples above represents the UNIX timestamp.

Debug Mode

By default, osc-interdom service runs in non-debug mode. Running the service in debug mode enables logging more details to osc-interdom service log.

In general, if the osc-interdom service is transitioning to the maintenance state, switching to debug mode may provide a few additional clues.

To check if debug mode was enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally check the service log for debug messages. svcs -L osc-interdom command output points to the location of osc-interdom service log.

Documentation

Similar to SuperCluster resource allocation engine, osc-resalloc, interdom framework is mostly meant for automated tools with little or no human interaction. Consequently there are no references to osc-interdom in SuperCluster Documentation Set.


Acknowledgments/Credit:

  Tim Cook





Friday, August 31, 2018
 
Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check if a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files.

On Solaris, the -l option lists all the cryptographic hash algorithms available on the system.

eg.,
% digest -l
sha1
md5
sha224
sha256
..
sha3_224
sha3_256
..

The -a option can be used to specify the hash algorithm while computing the digest.

eg.,
% digest -v -a sha1 /usr/lib/libc.so.1
sha1 (/usr/lib/libc.so.1) = 89a588f447ade9b1be55ba8b0cd2edad25513619
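
The digest utility is Solaris-specific. Where it is unavailable, a comparable check can be sketched with Python's standard hashlib module (the file name below is made up for illustration):

```python
import hashlib

# Write a small sample file, then compute its SHA-256 digest
with open('sample.txt', 'wb') as f:
    f.write(b'hello, world\n')

h = hashlib.sha256()
with open('sample.txt', 'rb') as f:
    # Read in fixed-size chunks so large files don't have to fit in memory
    for chunk in iter(lambda: f.read(65536), b''):
        h.update(chunk)

print(h.hexdigest())
```
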

Multiline Shell Script Comments

The shell treats any line that starts with the '#' symbol as a comment and ignores such lines completely. (The line on top that starts with #! is an exception.)

From what I understand, there is no multiline comment mechanism in shell. While the # symbol is useful for marking single-line comments, it becomes laborious and is not well suited for commenting out a large number of contiguous lines.

One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a Here-Document code block. It may not be the most attractive solution but it gets the work done.

The ':' (colon) built-in does nothing and returns true; the quoted here-document that follows it is read but never executed, so the enclosed lines are effectively ignored.

eg.,
% cat -n multiblock_comment.sh
     1  #!/bin/bash
     2
     3  echo 'not a commented line'
     4  #echo 'a commented line'
     5  echo 'not a commented line either'
     6  : <<'MULTILINE-COMMENT'
     7  echo 'beginning of a multiline comment'
     8  echo 'second line of a multiline comment'
     9  echo 'last line of a multiline comment'
    10  MULTILINE-COMMENT
    11  echo 'yet another "not a commented line"'

% ./multiblock_comment.sh
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make command line terminals look interesting. tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move the cursor around the screen, get information about the status of the terminal, and so on.

In addition to improving the command line experience, tput can also be used to improve the interactive experience of scripts by showing different colors and/or text effects to users.

eg.,
% tput bold <= bold text
% date
Thu Aug 30 17:02:57 PDT 2018
% tput smul <= underline text
% date
Thu Aug 30 17:03:51 PDT 2018
% tput sgr0 <= turn-off all attributes (back to normal)
% date
Thu Aug 30 17:04:47 PDT 2018

Check the man page of terminfo for a complete list of capabilities to be used with tput.

Processor Marketing Name

On systems running Solaris, the processor's marketing or brand name can be extracted with the help of the kstat utility. The cpu_info module provides information related to the processor(s) on the system.

eg.,
On SPARC:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      SPARC-M8

On x86/x64:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      Intel(r) Xeon(r) CPU           L5640  @ 2.27GHz

In the above example, cpu_info is the module, 1 is the instance number, cpu_info1 is the name of the section, and brand is the statistic in focus. Note that the cpu_info module has only one section, cpu_info1, so it is fine to skip the section name portion (eg., cpu_info:1::brand).

To see the complete list of statistics offered by cpu_info module, simply run kstat cpu_info:1.

Consolidating Multiple sed Commands

The sed utility allows specifying multiple editing commands on the same command line; in other words, it is not necessary to pipe the output of one sed command into another. The editing commands need to be separated with a semicolon (;).

eg.,

The following two commands are equivalent and yield the same output.

% prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g'
Memory: 65312 MB

% prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g'
Memory: 65312 MB





Tuesday, July 31, 2018
 
Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS+Repository pkg


1. Work on Directory Structure


Start with organizing the package contents (files) into the same directory structure that you want on the installed system.

In the following example, the directory is organized in such a manner that, when the package is installed, the software is copied to the /opt/myutils directory.

eg.,

# tree opt

opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html
    |-- mylib.py
    |-- util1.sh
    |-- util2.sh
    `-- util3.sh

Create a directory to hold the software in the desired layout. Let us call this "workingdir"; this directory will be specified in subsequent steps to generate the package manifest and, finally, the package itself. Move the top level software directory into "workingdir".

# mkdir workingdir
# mv opt workingdir

# tree -fai workingdir/
workingdir
workingdir/opt
workingdir/opt/myutils
workingdir/opt/myutils/docs
workingdir/opt/myutils/docs/README.txt
workingdir/opt/myutils/docs/util_description.html
workingdir/opt/myutils/mylib.py
workingdir/opt/myutils/util1.sh
workingdir/opt/myutils/util2.sh
workingdir/opt/myutils/util3.sh

2. Generate Package Manifest


The package manifest provides metadata such as the package name, description, version, classification & category, along with the files and directories included and the dependencies, if any, that need to be installed for the target package.

The manifest for an existing package can be examined with the help of pkg contents subcommand.

pkgsend generate command generates the manifest. It takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.

# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1

# cat myutilspkg.p5m.1


3. Add Metadata to Package Manifest


Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest. However, the recommended approach is to rely on the pkgmogrify utility to make changes to an existing manifest.

Create a text file with the missing package attributes.

eg.,
# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

set name=variant.opensolaris.zone value=global action restricts the package installation to global zone. To make the package installable in both global and non-global zones, either specify set name=variant.opensolaris.zone value=global value=nonglobal action in the package manifest, or do not have any references to variant.opensolaris.zone variant at all in the manifest.

Now merge the metadata with the manifest generated in previous step.

# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2

# cat myutilspkg.p5m.2


4. Evaluate & Generate Dependencies


Generate the dependencies so they will be part of the manifest. It is recommended to rely on pkgdepend utility for this task rather than declaring depend actions manually to minimize inaccuracies.

eg.,
# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.


5. Resolve Package Dependencies


This step might take a while to complete.

eg.,
# pkgdepend resolve -m myutilspkg.p5m.3

6. Verify the Package


By this time the package manifest should be pretty much complete. Check and validate it manually, or by using the pkglint utility (recommended), for consistency and any possible errors.

# pkglint myutilspkg.p5m.3.res

7. Publish the Package


For the purpose of demonstration, let's go with the simplest option to publish the package: a local file-based repository.

Create the local file based repository using pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally publish the target package with the help of pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res
pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
PUBLISHED

# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS           UPDATED
mypublisher 1        online           2018-07-04T01:41:57.414014Z

8. Validate the Package


Finally, validate whether the published package has been packaged properly by test-installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils

# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z






2004-2019 
