taT4Py | Convert AutoSys Job Attributes into Python Dictionary

If you ever look at the definition of a specific AutoSys job, you will find that it contains attribute-value pairs, one per line, delimited by a colon ':'. I thought it would be cool to parse the job definition by building a Python dictionary from these attribute-value pairs.

Let us take a look at a sample job definition:

$> cat sample_jil
insert_job: A0001
command: echo "Hi"
condition: s(B0001, 03\:00) & v(SRVR) = "UP"
std_out_file: >/home/nvarun/outfile
std_err_file: >/home/nvarun/errfile
group: NV
$>

Getting Started

To convert this into a Python dictionary, execute the following command:

$> sed "s/^\([^:]*\):\(.*\)$/'\1':'\2'/" sample_jil > sample_pydict
$> cat sample_pydict
'insert_job':' A0001'
'command':' echo "Hi"'
'condition':' s(B0001, 03\:00) & v(SRVR) = "UP"'
'std_out_file':' >/home/nvarun/outfile'
'std_err_file':' >/home/nvarun/errfile'
'group':' NV'
$>

We are halfway through. To complete the conversion, run the following Python script to populate the dictionary:

jobDefn = {}
with open('sample_pydict', 'r') as f:
    for line in f.read().splitlines():
        colon = line.find(':')              # position of the first colon
        key = line[:colon].replace("'", "")  # drop the quotes around the key
        val = line[colon+1:].strip("' ")     # drop surrounding quotes and whitespace from the value
        jobDefn[key] = val
print(jobDefn)

Summary

  1. f.read() reads the input file in one go, and splitlines() splits the input into a list of lines to iterate over.
  2. The for loop iterates over each line; the position of the first occurrence of the colon is found and used to slice out the key and value.
  3. After the loop, the dictionary object jobDefn is printed.
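As an aside, the sed preprocessing step can be skipped entirely. Here is a minimal sketch that builds the dictionary straight from the raw JIL lines; the helper name parse_jil and the inlined sample are illustrative, not part of the original script:

```python
import re

def parse_jil(lines):
    """Build a dict from 'attribute: value' lines, splitting on the first colon."""
    jobDefn = {}
    for line in lines:
        match = re.match(r'^([^:]*):(.*)$', line)
        if match:
            jobDefn[match.group(1)] = match.group(2).strip()
    return jobDefn

sample = [
    'insert_job: A0001',
    'command: echo "Hi"',
    'group: NV',
]
print(parse_jil(sample))
```

This mirrors the sed expression `^\([^:]*\):\(.*\)$`, so both approaches split on the first colon only, which keeps conditions like `s(B0001, 03\:00)` intact.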

Hope this helps.

code4Nix | Extract code blocks with ease

Many times I come across a scenario where I need to split a given file into sections and either write them to different files or process them dynamically, as and when those sections are found.

I did solve this problem earlier, although I found my solution rather inefficient, so I thought of rewriting it and came up with the following generic function. It can be used for scenarios like extracting function blocks from shell/Perl/Python scripts, extracting diff blocks from git diff output, etc.

Function Specification

function fn__extrt_block {

  test $# -ne 2 && return 1
  test -f "$2" && fv_iputFile=$2 || return 2
  fv_srchPtrn="$1"

  # note: pass the input file as an absolute path, since we cd below
  fv_funcWorkDir=$HOME/${0}
  mkdir -p "${fv_funcWorkDir}" && cd "${fv_funcWorkDir}"

  grep -n "${fv_srchPtrn}" "${fv_iputFile}" | cut -d':' -f1 > outfile

This grep command finds the line numbers that mark the beginning of each block; the following while loop uses them to extract the separate blocks.

  while true
  do
    # 'read' at the end of a pipeline runs in a subshell in some shells,
    # so assign via positional parameters instead
    set -- $(head -2 outfile); LB=$1; UB=$2
    test ! -z "${UB}" && UB=$((UB - 1))

LB and UB represent the lower and upper bounds of a block, i.e. the beginning of the current block and the beginning of the next one. For instance, if the grep -n output looks like this;

1:Hi Varun
15:Hi Nischal
24:Hi Team

then outfile contains the line numbers 1, 15 and 24, one per line. Every iteration reads the first two lines; the first iteration assigns 1 to LB and 15 to UB. The next line of code decrements UB by 1, so as to mark the end of the block correctly.

    echo "${LB},${UB:-\$}p" > sed_scpt
    sed -n -f sed_scpt ${fv_iputFile} > file__${LB}_${UB:-$}

Once LB and UB are set, extracting the block from the input file with sed -n becomes easy, as shown above. To continue iterating, the first line of outfile must be removed after every successful iteration.

    sed '1d' outfile > outfile.n
    mv outfile.n outfile

As this is an infinite loop, it is important to break out once outfile has been completely consumed.

    test ! -s outfile && break
  done
}

Usage

Let us say you generate a diff between two git commits using the following command:

$> git diff versOne versTwo > ~/output__versOne_versTwo.diff

Then, execute the earlier-defined function as follows:

$> fn__extrt_block '^diff' ~/output__versOne_versTwo.diff

This will generate block-specific files matching the pattern file__*
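For comparison, the same splitting idea can be sketched in a few lines of Python; the function name extract_blocks and the sample lines are illustrative, not part of the shell function above:

```python
import re

def extract_blocks(lines, pattern):
    """Group lines into blocks, starting a new block at each line matching pattern."""
    blocks, current = [], []
    for line in lines:
        if re.match(pattern, line) and current:
            blocks.append(current)
            current = []
        current.append(line)
    if current:
        blocks.append(current)
    return blocks

text = ['diff a', 'line 1', 'diff b', 'line 2']
print(extract_blocks(text, r'^diff'))
```

Unlike the shell version, this keeps any lines before the first match in the first block, which may or may not be what you want.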

GitHub Gist Code Reference

https://gist.github.com/nvarun/9575155

Hope this helps.

taT4Nix | Useful Git Aliases

A couple of months back, I mentioned that I started using Git at my workplace, thanks to my colleague. So far, I have been creating local repositories at work and, in the process, have spent a lot of time exploring as much as I can.

This made me use the command line extensively, and I ended up writing the following 3-letter aliases, which saved a lot of time.

alias gbr='git branch'
alias gco='git checkout'
alias gci='git commit'
alias gcm='git checkout master'
alias glo='git log --oneline'
alias gst='git status'

These would help only if you are using the command line on a Linux/Unix server.

taT4Py | Extract words from Input String and Operate Functions

Working on AIX servers with limited grep features sometimes makes particular scenarios difficult. For instance, I want to split lines (read from STDIN or a file) into words, precisely; however, without the grep -o option, I was clueless about how to get the desired results. For the past few months, I have been investing time in learning Python and using its features to complement the text-processing tasks in the shell scripts I often write for automating tasks and creating productivity tools.

Code Snippet #1

import sys, re
for line in sys.stdin.readlines():
    listofwords = [word for word in re.split(r'\W', line) if word]
    print(listofwords)

Looking at the above code snippet, four lines of code did the trick. The features and constructs Python provides for scenarios like this make it look really cool. Let us understand it quickly, before we operate functions on those words.

  1. Imports the sys module for environment-related capabilities and the re module for regular-expression capabilities.
  2. sys.stdin.readlines() reads the input from STDIN until an EOF character is entered.
    • After which, the for loop iterates over the input lines, one by one.
  3. The construct on the right side of the assignment operator is a list comprehension, which here plays the role of the built-in function filter() [1].
    • re.split() generates a list of words, based on the pattern given as its first argument [2].
    • The for clause of the comprehension iterates over the generated list, and the if clause does the filtering.
    • This makes sure there are no empty strings in the generated list.
    • filter(None, re.split(r'\W', line)) can be used as an alternative; with None as the first argument, it discards the empty strings by default, as they evaluate to false.
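To see that the two forms agree, here is a small self-contained check; the sample sentence is made up:

```python
import re

line = 'Hello, world! Hi'
# list comprehension with an if clause doing the filtering
comp = [word for word in re.split(r'\W', line) if word]
# built-in filter() with None as the predicate
filt = list(filter(None, re.split(r'\W', line)))
print(comp)           # ['Hello', 'world', 'Hi']
print(comp == filt)   # True
```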

Code Snippet #2

What if a line contains the word "Hi" and I want to replace all occurrences of "Hi" with "Hey" while the list is being generated with the above approach? To make that possible, the code snippet below invokes the string method replace() to do the needful.

import sys, re
for line in sys.stdin.readlines():
    listofwords = [word.replace('Hi', 'Hey') for word in re.split(r'\W', line) if word]
    print(listofwords)

This might look like a not-so-useful variant; however, one gets the idea to explore further and try various possibilities. Hope this helps.
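The same logic can be tried on a literal string, without reading from STDIN; the sample text here is made up:

```python
import re

line = 'Hi there, said Hi again'
# split into words, drop empty strings, and replace 'Hi' with 'Hey' in one pass
listofwords = [word.replace('Hi', 'Hey') for word in re.split(r'\W', line) if word]
print(listofwords)   # ['Hey', 'there', 'said', 'Hey', 'again']
```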

In case you feel any correction is required, let me know and I will do the needful.

References

[1] http://docs.python.org/2/library/functions.html#filter
[2] http://docs.python.org/2/library/re.html#re.split

taT4Nix | convert EBCDIC to ASCII and vice versa

So far, I have encountered this on data-warehousing projects, though it probably happens in other domains too. Anyway, suppose you have an EBCDIC file, most likely retrieved from a Mainframe system. You would then like to convert it to ASCII, to make modifications using text editors on UNIX servers like AIX.

I have used the following command several times for converting a file from EBCDIC to ASCII and vice versa. So, this is how it's done:

dd if=<ebcdic-file> of=<ascii-file> conv=ascii

Now you can start modifying the ASCII version; once done, you may convert it back to EBCDIC for use by your application.

dd if=<ascii-file> of=<ebcdic-file> conv=ebcdic
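If dd is not at hand, Python's built-in codecs can sketch the same conversion. Note that cp037 below is just one common EBCDIC code page; which codec is correct depends on the code page your Mainframe actually uses, so treat this as an assumption to verify:

```python
# Round-trip a string through EBCDIC (code page 037) and back.
text = 'Hello'
ebcdic_bytes = text.encode('cp037')      # text -> EBCDIC bytes
restored = ebcdic_bytes.decode('cp037')  # EBCDIC bytes -> text
print(restored == text)   # True
```

For whole files, the same encode/decode pair can be applied to the file contents read in binary mode.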

If you are just replacing a particular number of bytes with an equivalent number of bytes containing different characters, the conversion will be smooth and the application reading the file should not have any issues.

However, I had some issues after adding/deleting records in the ASCII version, as I found when converting back to EBCDIC: the file was unreadable by the application, and I had difficulty reverting without a backup of the EBCDIC version.

Hope this helps :)