vbin

Vbin is a collection of command line tools for Unix. They are designed to combine with standard Unix Bash commands to form an arsenal of tools.

Source: https://github.com/dlink/vbin

Topics

10/27/2014: Not yet fully documented. See GitHub.

Installation, clip, col_sum, csv, lshead, psgrep, revver, sav, savdiff, take, unsav

SQL Utilities

12/18/2014: Not yet fully documented. See GitHub.

Usage Intro, count, desc, explain, selectall, show_tables, sql_pretty, t2c

Installation

In a nutshell here it is:

   $ git clone [email protected]:dlink/vbin.git
   $ export PATH=$PATH:$HOME/vbin

You can add the PATH command to your .bash_profile.
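The PATH change can be made permanent with a one-liner; this sketch (the function name add_vbin_to_profile is just for illustration) assumes vbin was cloned to $HOME/vbin:

```shell
# Append the vbin PATH update to a profile file.  The default target is
# ~/.bash_profile; pass another file to test or customize.
add_vbin_to_profile() {
  profile=${1:-$HOME/.bash_profile}
  echo 'export PATH=$PATH:$HOME/vbin' >> "$profile"
}
```

After sourcing the profile (or opening a new shell), the vbin commands should be on your PATH.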

Site Install

A good place for a site-wide install is /usr/local/vbin, created as root. Then add it to the Unix PATH in /etc/profile.d so it is available to everyone.

Here is an example install as root:

   $ sudo su -
   # cd /usr/local
   # git clone [email protected]:dlink/vbin.git

   # cd /etc/profile.d
   # cat > vbin
   PATH=$PATH:/usr/local/vbin
   ^D  (Ctrl-D)

Test it:
   $ lshead

See Also: topics

clip

Clip the output on the screen horizontally, to avoid word wrap. In effect, you can page right and left through very wide output.

    SOMECOMMAND | clip [long] [<PAGE_NUM>]

Clip defaults to 80 columns.

Example: For 80 column screens:

   $ cat log | clip      # show first page, i.e. cut -c1-80
   $ cat log | clip 2    # show second page, i.e. cut -c81-160
   $ cat log | clip 3    # show third page, i.e. cut -c161-240

For wider screens
   $ cat log | clip long      # show first long page, i.e. cut -c1-158
   $ cat log | clip long 2    # show second long page, i.e. cut -c159-316
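
If clip itself is unavailable, the same paging can be approximated with cut(1) alone; page N of width W covers columns (N-1)*W+1 through N*W. A minimal sketch (clip_page is a hypothetical name, not part of vbin):

```shell
# clip_page FILE [PAGE] [WIDTH] -- approximate `clip` with cut(1).
# Page N of width W covers columns (N-1)*W+1 through N*W.
clip_page() {
  file=$1; page=${2:-1}; width=${3:-80}
  start=$(( (page - 1) * width + 1 ))
  end=$(( page * width ))
  cut -c"${start}-${end}" "$file"
}
```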

See Also: topics

col_sum

Add up a column of numbers. Works well with tables and csv files.

   <STDOUT> | col_sum [ <COL> ]

	COL - Number of the column you want to sum up.
              Zero-based. Default is 0.
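
Where col_sum is not installed, awk can approximate it; this sketch assumes whitespace-separated columns and follows col_sum's zero-based COL convention (col_sum_awk is a hypothetical name):

```shell
# col_sum_awk [COL] -- sum a column of stdin with awk(1).
# COL is zero-based (default 0), matching col_sum's convention.
col_sum_awk() {
  col=${1:-0}
  awk -v c=$((col + 1)) '{ s += $c } END { print s }'
}
```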

Examples:

Sum available disk space across all drives, in MB:

   $ df -m | col_sum +3

Count the number of classes across all Python programs:

   $ grep -c class *.py | cut -d':' -f2 | col_sum

See Also: topics, csv

csv

csv file utility. Process a csv file, treating it like a database table.

   csv [OPTIONS] <filename>
   csv [OPTIONS] -

   optional arguments:
     -               Read <STDIN> instead of <filename>
     -h, --help      show this help message and exit
     -c <colspec>    List or range of column numbers.
                     They can be in any order.  E.g.: 2,1,4-6
     -d <delimiter>  Set delimiter. Default is comma (,)
     -p              Pretty-Print output
     -s              show column headers

Example:

Show file header
   $ csv -s books.csv 
   1. id
   2. isbn
   3. name
   4. publicated
   5. type

List out contents in pretty form:
   $ csv -p books.csv
   id, isbn,          name,                                    publicated, type
   1,  9781557427960, The Picture of Dorian Gray,              1891,       Novel
   2,  9780140283297, On the Road,                             1957,       Novel
   3,  9781851243969, Frankenstein; or, The Modern Prometheus, 1818,       Novel
   4,  9780345347954, Childhood's End,                         1966,       Novel
   5,  9780451457998, A Clockwork Orange,                      1962,       Novel
   6,  9780440184621, Tai-Pan,                                 1982,       Novel
   7,  9780486266848, Another Turn of the Screw,               1898,       Novel
   8,  9780486280615, Adventures of Huckleberry Finn,          1884,       Novel
   9,  9780143104889, A Princess of Mars,                      1917,       Novel
   13, 9781614270621, The Prophet,                             1923,       Poetry
   21, 9780374528379, Brothers Karamazov,                      1880,       Novel

List publication year and book title where the publication year is in the 1800s:
   $ csv -c4,3 books.csv |grep ^18 |sort -n
   1818,Frankenstein; or, The Modern Prometheus
   1880,Brothers Karamazov
   1884,Adventures of Huckleberry Finn
   1891,The Picture of Dorian Gray
   1898,Another Turn of the Screw

List certain columns in pretty format:
   $ csv -c2,3,5 books.csv -p
   isbn,          name,                                    type
   9781557427960, The Picture of Dorian Gray,              Novel
   9780140283297, On the Road,                             Novel
   9781851243969, Frankenstein; or, The Modern Prometheus, Novel
   9780345347954, Childhood's End,                         Novel
   9780451457998, A Clockwork Orange,                      Novel
   9780440184621, Tai-Pan,                                 Novel
   9780486266848, Another Turn of the Screw,               Novel
   9780486280615, Adventures of Huckleberry Finn,          Novel
   9780143104889, A Princess of Mars,                      Novel
   9781614270621, The Prophet,                             Poetry
   9780374528379, Brothers Karamazov,                      Novel
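
For quick one-offs without the csv tool, a crude awk stand-in can reorder columns of simple files. This sketch (pick_cols is a hypothetical name) handles neither quoted commas nor ranges like 4-6, both of which the real tool does:

```shell
# pick_cols COLSPEC FILE -- crude stand-in for `csv -c` on files with
# no quoted commas.  COLSPEC is a comma-separated list of column
# numbers, in any order; ranges are not supported.
pick_cols() {
  spec=$1; file=$2
  awk -F, -v OFS=, -v spec="$spec" '
    BEGIN { n = split(spec, cols, ",") }
    { out = $cols[1]
      for (i = 2; i <= n; i++) out = out OFS $cols[i]
      print out }' "$file"
}
```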

See Also: topics

lshead

List the top 10 most recently edited files in a subdirectory.
   lshead [<directory>]

The command simply does an ls -lt <directory> | head. It is an excellent way to see what was being worked on recently.
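
As a rough stand-in, the same listing can be produced directly (lshead_sketch is a hypothetical name; the real tool may differ in detail):

```shell
# Newest files first, top of the listing -- an approximation of lshead
# using plain ls and head.
lshead_sketch() {
  ls -lt "${1:-.}" | head
}
```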

Example:

   $ lshead
   total 232
   -rwxrwxr-x  1 dlink fwk 55136 Sep 26 11:07 #bookserver.py#
   -rw-rw-r--  1 dlink fwk 29863 Sep 26 10:47 overlay.pyc
   -rw-rw-r--  1 dlink fwk 40520 Sep 26 10:19 bookserver.pyc
   drwxrwxr-x  2 dlink fwk  4096 Sep 26 10:19 alter_xmls
   -rwxrwxr-x  1 dlink fwk 55136 Sep 26 10:19 bookserver.py
   drwxrwxr-x  2 dlink fwk  4096 Sep 26 10:19 elements
   -rwxrwxr-x  1 dlink fwk 39698 Sep 16 16:46 overlay.py
   -rwxrwxr-x  1 dlink fwk  9567 Sep 16 08:09 bss.py
   -rw-rw-r--  1 dlink fwk  4214 Sep 16 07:53 fupload.pyc

See Also: topics

psgrep

Search all running processes. It is a combination of ps and grep.
   psgrep PATTERN

The command simply does a ps -ef | grep PATTERN. It is an excellent way to see what's running. Note that the grep process itself will always appear in the results as well.
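
A plain-shell approximation, for systems without vbin (psgrep_sketch is a hypothetical name):

```shell
# ps -ef | grep in one step.  As with the real psgrep, the grep
# process itself tends to show up in the results.
psgrep_sketch() {
  ps -ef | grep "$1"
}
```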

Examples:

Is MySQL running?:

   $ psgrep mysql
   mysql      917     1  0 Sep28 ?        00:26:44 /usr/sbin/mysqld
   dlink    16116 16114  0 22:53 pts/2    00:00:00 grep mysql

Yes.

Is Plague online?

   $ psgrep plague  
   plague      29510 29508  0 Sep26 ?        00:00:00 -bash -c cd /asphyxia/lib && ./asphyx.py 
   plague      29539 29510  0 Sep26 ?        02:50:51 /usr/local/bin/python ./asphx.py
   plague      29577 29576  0 Sep26 ?        00:00:00 -bash -c cd /asphyxia/lib && ./workflowd.py 
   plague      29606 29577  0 Sep26 ?        02:38:57 python ./workflowd.py

No, but he's running some daemons.

See Also: topics

revver

Roll revision copies of a file with postfixes: _1, _2, etc. It is inspired by VMS O/S.

   revver [<options>] <filename>

        options:
           -q  quiet mode
           -u  Unroll revisions

Example:

  $ revver book.xml
  mv -f book.xml book.xml_1

Running the same command a second time (after recreating the original):
  $ revver book.xml
  mv -f book.xml_1 book.xml_2
  mv -f book.xml book.xml_1

A maximum of 10 revisions will be kept.
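
The rolling behavior can be sketched in plain shell. This is a guess at the mechanism, not vbin's actual implementation, and it does not prune revisions beyond the tenth:

```shell
# revver_sketch FILE -- roll FILE to FILE_1, shifting any existing
# numbered revisions (_1, _2, ...) up by one.  A simplified guess at
# revver's logic.
revver_sketch() {
  f=$1
  for i in 9 8 7 6 5 4 3 2 1; do
    if [ -f "${f}_$i" ]; then
      mv -f "${f}_$i" "${f}_$((i + 1))"
    fi
  done
  mv -f "$f" "${f}_1"
}
```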

The -u option unrolls revisions. It also makes a "_bak" copy of the original before it is replaced with the "_1" revision:

  $ revver -u book.xml
  mv -f book.xml book.xml_bak
  mv -f book.xml_1 book.xml
  mv -f book.xml_2 book.xml_1

See Also: topics, sav, savdiff, unsav

sav

Create a copy of a file, preserving timestamp and permissions. The new file will have the '.sav' file extension.

   sav <filename>

Example:

   $ sav professors.csv
   cp -p professors.csv professors.csv.sav

See Also: topics, savdiff, unsav, revver

savdiff

Compare a saved copy of a file with the original. This command assumes you have run the sav command previously.

   savdiff <filename>

Example:

   $ savdiff professors.csv
   diff professors.csv.sav professors.csv
   ...

See Also: topics, sav, unsav, revver

take

"Take" some lines out of a file between two given line numbers (<FROM> and <TO>). This is useful for cutting groups of lines out of long files.

   <STDOUT> | take <FROM> <TO>

Example:

Return lines 1001 through 2000 of ids.csv (1000 lines):

   $ cat ids.csv | take 1001 2000

Display and process the second set of 1000 ids:

   $ cat ids.csv | take 1001 2000 | while read i; do echo $i ; ./myprocess.py $i ; done
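
take is essentially a line-range filter, so sed can stand in for it where vbin is absent (take_range is a hypothetical name):

```shell
# take_range FROM TO -- print lines FROM through TO of stdin,
# mirroring `take FROM TO`.
take_range() {
  sed -n "${1},${2}p"
}
```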

See Also: topics

unsav

Copy a saved file back to the original. This is the reverse of the sav command.

   unsav <filename>

Example:

   $ unsav professors.csv
   cp -p professors.csv.sav professors.csv

See Also: topics, sav, savdiff, revver

SQL Utilities

SQL Utilities Intro

The main idea of these utilities is to be able to work in SQL on the Unix command line.

Unix Command Line + SQL = Unbeatable Data Analysis and Data Discovery.

You pass SQL commands directly to your databases on the command line.

Like this:

   echo "select * from books where book_id = 5 \G" | devdb -t

And like this:

   cat books.sql | devdb -t

 
Database Aliases

First you must set up aliases for your database connections. You can put them in a file such as ~/bin/database_aliases.sh, and then source it in your ~/.bash_profile.

Something like this:

/home/dlink/bin/database_aliases.sh:

   alias devdb='mysql -udlink -pbojangles -hlocalhost --database=library'
   alias stagedb='mysql -uzimuser -psecrete -hstagedb.acme.com --database=library'
   alias proddb='mysql -uzimuser -psecrete -hdb01.izimbra.com --database=library'
   alias proddbro='mysql -uzimuser -psecrete -hdb02.izimbra.com --database=library'

You may need to install the mysql client first. You can get more information on the MySQL Command-Line Tool page.

Notice the use of --database=<databasename> rather than simply <databasename>. Both work, but the first form allows us to tack other options on the end, like -t (table format output).

Putting it to use

Now we can go to work exploring our database. Here are some examples:

1. What tables are there in the database?

   $ show_tables | devdb -t

2. Is the status table singular or plural?

   $ show_tables | devdb |grep -i status

3. What fields are in the books table?

   $ desc books | devdb -t

4. How many author records are there?

   $ count authors | devdb -t

5. Show me one record from the books table

   $ selectsome books id 10 | devdb -t

6. What's the greatest id in the books table?

   $ selectmax books id | devdb -t

7. Put all the status data into a csv file

   $ selectall statuses | devdb | t2c > statuses.csv

8. Show me a frequency histogram of which status_ids are used most frequently in the books table

  $ tally books status_id | devdb -t

9. What indexes does the books table have?

  $ show_indexes books | devdb -t

10. What constraints does the books table have?

  $ show_create books | devdb -t

11. Why is this query so slow?

  $ explain customers_by_provider.sql | proddb -t

12. What tables have foreign key references to the authors table?

  $ fk_usage library authors | devdb -t
 
Classes of SQL Utilities

To get Schema related information about your data see:

    desc, fk_usage, show_create, show_indexes, show_tables

To query your data see:

    count, count_distinct, select5, selectall, selectmax, selectmin, selectsome, tally

Miscellaneous:

    explain, sql_pretty


count

Return the number of records in a given table.

   count <tablename> | <dbalias>

Examples:

   $ count pics | vpicsdb  -t

   +-------+
   | count |
   +-------+
   |    69 |
   +-------+

Create a list of all database tables and their row counts:

   $ show_tables | vrdb | grep -v Tables | \
   >    while read t; do echo -n "$t: "; count $t | vrdb | grep -v count ;  done

   City: 4079
   Country: 239
   CountryLanguage: 984
   authors: 10
   book_authors: 11
   book_genres: 22
   books: 25
   countries: 3
   genres: 17
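
The source on GitHub is the authority on how count is implemented; one plausible sketch, assuming it simply emits SQL on stdout for the database alias to consume (count_sketch is a hypothetical name):

```shell
# count_sketch TABLE -- print a count query on stdout, in the spirit of
# `count TABLE | dbalias`.  A guess at the mechanism, not vbin's code.
count_sketch() {
  echo "select count(*) as count from $1;"
}
```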

See Also: SQL Utils Intro, topics, show_tables

desc

Run the mysql desc (describe) command for a given table.

   desc <tablename> | <dbalias>

Examples:

   $ desc pages | vpicsdb -t

+--------------+------------------+------+-----+-------------------+-----------------------------+
| Field        | Type             | Null | Key | Default           | Extra                       |
+--------------+------------------+------+-----+-------------------+-----------------------------+
| page_id      | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
| name         | varchar(30)      | NO   | UNI | NULL              |                             |
| last_updated | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| created      | datetime         | YES  |     | NULL              |                             |
+--------------+------------------+------+-----+-------------------+-----------------------------+

What date columns are in this table?

   $ desc providers | devdb | grep -i date

   start_dated   datetime   YES      NULL   
   last_updated   timestamp   NO      CURRENT_TIMESTAMP   on update CURRENT_TIMESTAMP
   created   datetime   YES      NULL   

See Also: SQL Utils Intro, topics

explain

Run the MySQL explain plan for a given SQL statement. You run this on a SQL query stored in a file.

   explain <filename> | <dbalias>

Examples:

Why is my query slow? The following MySQL explain plan shows that no index is being used on the i2 table:

   $ explain bad_customers.sql | penergy -t

+----+-------------+-------+------+----------------------+------------+---------+-------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys        | key        | key_len | ref   | rows     | Extra                                        |
+----+-------------+-------+------+----------------------+------------+---------+-------+----------+----------------------------------------------+
|  1 | SIMPLE      | i     | ref  | customerId,TransDate | customerId | 5       | const |   392878 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | i2    | ALL  | customerId           | NULL       | NULL    | NULL  | 10225890 | Using where; Using join buffer               |
+----+-------------+-------+------+----------------------+------------+---------+-------+----------+----------------------------------------------+

See Also: SQL Utils Intro, topics

selectall

Run a select * from query on a given table.

   selectall <tablename> | <dbalias>

Examples:

   $ selectall genres | vrdb -t

+----+-----------------------+
| id | name                  |
+----+-----------------------+
|  1 | Philosophical         |
|  2 | Gothic                |
|  3 | Speculative           |
|  4 | Roman à clef         |
|  5 | Prose Poetry          |
|  6 | Science Fiction       |
|  7 | Historical Fiction    |
|  8 | Romance               |
|  9 | Horror                |
| 10 | Novella               |
| 11 | Ghost story           |
| 12 | Satire                |
| 13 | Adventure             |
| 14 | Children's literature |
| 15 | Humour                |
| 16 | Fantasy               |
| 17 | Sword and planet      |
+----+-----------------------+

See Also: SQL Utils Intro, topics

show_tables

Run the mysql show tables command.

   show_tables | <dbalias>

Examples:

   $ show_tables | wpdb -t

+-----------------------+
| Tables_in_wordpress   |
+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+
11 rows in set (0.00 sec)


What is the exact spelling of the status tables?

   $ show_tables | nadb -t | grep -i status

customer_statuses
order_statues
statuses

See Also: SQL Utils Intro, topics

sql_pretty

Format SQL statements nicely.
   sql_pretty [OPTIONS] <filename>

       OPTIONS: -l  - lowercase keywords
                -h  - HTML Output

Examples:

Here is a query that lists Countries and their number of large population Cities:
   $ sql_pretty country_cities_by_population.sql
   select
      co.name as country, count(*) as count
   from
      City ci
      join Country co on ci.CountryCode = co.Code
   where
      ci.population > 5000000
   group by
      1
   order by
      2 desc;

See Also: SQL Utils Intro, topics

t2c

Table-to-CSV. This command takes tab-delimited files and converts them to csv files. Credit to Ignacio Vazquez-Abrams for the implementation.

This is very useful when working with the mysql client's default tab-delimited output.

Example:

   echo "select id as book_id, title from books where read='Y'" | nadb | t2c > books_read.csv

See Also: SQL Utils Intro, topics