
Sprucing Up Your Shells, prepared and presented by Stephen Corbesero for Bucks County DevOps


Long, long ago, before the first Python script hatched, before the first Ruby gem was mined, even before the first Perl module was encapsulated, there was the shell. Early Unix administrators and developers made the most of their shell scripts by exploiting many useful utilities that adhered to the Unix philosophy -- do one thing and do it well.

This presentation will demonstrate a set of these utilities in conjunction with shell programming best practices that will help you get the most out of your scripts.

About Our Speaker

Mr. Corbesero, aka "Flash", is currently on the DevOps team at Synchronoss Technologies working with cloud infrastructures and configuration management. He graduated from Lehigh University with degrees in Computer Engineering and Computer Science. Previously, in addition to various consulting projects, he has held the positions of systems manager and adjunct lecturer at Lehigh University in the EECS department, Associate Professor of Computer Science at Moravian College, and systems administrator and systems programmer at PenTeleData.

Michael J. Smalley

August 20, 2014

Transcript

  1. Overview 1. Introduction 2. Brief History of the Shell 3.

    Fundamentals 4. Tools 5. Power Tools 6. Best Good Practices 7. Closing
  2. Introduction • Synchronoss Technologies ◦ DevOps Engineer ◦ Performance Engineer

    • PenTeleData ◦ Systems Administrator ◦ Systems Programmer • Moravian College ◦ Associate Professor of Computer Science • Lehigh University ◦ Systems Administrator ◦ Adjunct Lecturer
  3. A Brief History of Shell ❖ sh ➢ Bourne ➢

    Unix V7 (1977) ❖ csh ➢ BSD Unix ❖ ksh ➢ Korn (1983) ❖ bash ➢ GNU (1989)
  4. Shell Features ❖ Interactive ➢ Command Line Editing ➢ Aliases

    ➢ Job Control ❖ Programmable ➢ Structured ➢ Functions and Parameter Passing ➢ Looping ➢ Test ➢ Process Control ❖ Globbing (filename patterns) ❖ Variable Expansion ❖ Quoting ❖ I/O Redirection
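A minimal sketch tying several of the listed features together (the function and file names are invented for illustration):

```shell
#!/bin/bash
# functions with parameter passing and local variables
greet() {
    local name="$1"        # $1 is the first argument to the function
    echo "hello, $name"
}

msg=$(greet world)         # command substitution captures the output
echo "$msg"                # variable expansion inside double quotes

for f in /etc/pass*; do    # globbing: expands to matching file names
    echo "glob matched: $f"
done > /dev/null           # I/O redirection: discard the loop's output
```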
  5. Variable Expansion ❖ $x or ${x} ❖ “colon” substitutions ➢

    ${x:-value} ➢ ${x:=value} ❖ Special Variables ➢ $0 $1..$9 $* $@ $# ➢ $? $$ $! Example #!/bin/bash TOOLSHOME=${TOOLSHOME:-/usr/local/tools} #!/bin/bash echo "TOOLSHOME is ${TOOLSHOME:=/usr/local/tools}"
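A short sketch of the difference between the two "colon" substitutions above: `:-` supplies a default without touching the variable, while `:=` also assigns it.

```shell
# the difference between :- and := (illustrative):
unset x
echo "${x:-default}"    # prints "default"; x itself stays unset
echo "x is now '${x}'"  # still empty
echo "${x:=default}"    # prints "default" AND assigns it to x
echo "x is now '${x}'"  # x now holds "default"
```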
  6. Quoting ❖ backslash \ ❖ single '$x' ❖ double "$x"

    ❖ backquotes `cmd` or $( cmd ) echo '$x is' $x echo "\$x is " $x echo "$x" echo "now is" `date`
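A runnable version of the examples above, assuming x has been set (x=5 is invented for illustration):

```shell
x=5
echo '$x is' $x     # single quotes suppress expansion: prints "$x is 5"
echo "\$x is " $x   # backslash escapes the $ inside double quotes
echo "$x"           # double quotes allow expansion: prints "5"
now=$(date)         # $( ) form of command substitution; nests more cleanly
echo "now is $now"  # than backquotes
```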
  7. Shell Functions ❖ parameter passing ❖ local variables ❖ must

    be defined before they are called ❖ hard to use getopts with a shell function function foo() { #$1=something $2=another local x="$1" local y="$2" local z # … return 1 } bar() { foo a b echo "xyzzy" }
  8. Pattern Matching case “$var” in abc*def ) … ;; *[0-9])

    … ;; *) … ;; esac if [[ $var == abc*def ]] ; then … elif [[ $var == *[0-9] ]] ; then … else … fi
  9. Redirection and I/O Plumbing ❖ Basic ➢ < > >>

    ❖ Advanced ➢ | ➢ << ▪ a "here" document ▪ convenient way to embed a file in a script ▪ with or without expanded content ❖ Plumbing ➢ tee ➢ cat echo `date "+%F %T"` "Begin" | tee $LOGFILE cat $files | …
  10. Here Documents with << #!/bin/bash VERSION=1.0.0 VERBOSE=0 usage() { cat

    <<EOF usage: $0 [ opts] infile outfile -h help -v verbose [ $VERBOSE ] Version $VERSION EOF … if [ $# -ne 2 ]; then usage exit 1 fi
  11. Tools 1. getopts 2. expr and test 3. bc 4.

    tr 5. logger 6. grep, egrep, fgrep
  12. Option Processing # ... while getopts :d:vh OPT do case

    "$OPT" in d) dir="$OPTARG" ;; v) VERBOSE=1 ;; h) usage ; exit 0 ;; [?]) usage ; exit 1 ;; esac done shift $(( OPTIND - 1 ))
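The fragment above, filled out into a minimal runnable sketch (the option names and usage() text are illustrative, not from the deck):

```shell
#!/bin/bash
VERBOSE=0
dir=.

usage() { echo "usage: $0 [-d dir] [-v] [-h]"; }

parse_opts() {
    local OPT OPTIND=1 OPTARG     # reset OPTIND so getopts starts fresh
    while getopts :d:vh OPT; do
        case "$OPT" in
            d) dir="$OPTARG" ;;
            v) VERBOSE=1 ;;
            h) usage; return 0 ;;
            \?) usage; return 1 ;;
        esac
    done
    shift $(( OPTIND - 1 ))       # discard the processed options
}

parse_opts -v -d /tmp
echo "VERBOSE=$VERBOSE dir=$dir"
```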
  13. Expressions ❖ numeric ➢ `expr …` ➢ $(( … ))

    ▪ echo "1+1 is $(( 1 + 1 ))" ❖ test operator ➢ test … or [ ] ➢ string comparison: = != ➢ string existence: -z -n ➢ numeric comparisons: -eq -ne -lt -le -gt -ge ➢ regular expressions: match with expr's : operator ➢ file tests: -r -d -x …
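A small sketch exercising the operators listed above (the variable names are invented):

```shell
n=7
# numeric: arithmetic expansion, the modern form of `expr`
sum=$(( n + 1 ))
echo "n+1 is $sum"

# test operator: numeric comparison (-gt) and string existence (-n)
if [ "$sum" -gt 5 ] && [ -n "$sum" ]; then
    echo "sum is nonempty and greater than 5"
fi

# file tests
[ -d /tmp ] && echo "/tmp is a directory"
```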
  14. Regular Expressions # match label value regex match() { if

    expr "$2" : "$3\$" > /dev/null ; then : else echo >&2 "$0:error: $2 for $1 does not match $3" exit 1 fi } match "user range start" "$start" '[0-9][0-9]*'
  15. Fixed Point Arithmetic MATH_LIB_SH_SCALE=4 export MATH_LIB_SH_SCALE=4 fpdiv() { # do

    a floating point division local dividend=$1 local divisor=$2 local result=`echo "scale=$MATH_LIB_SH_SCALE; $dividend / $divisor" | bc -q` echo $result } fpmult() { # do a floating point multiplication local multiplicand=$1 local multiplier=$2 local result=`echo "scale=$MATH_LIB_SH_SCALE; $multiplicand * $multiplier " | bc -q` echo $result } ans=$( fpmult 3.1415 2 )
  16. tr -- transliterate ❖ translate, delete, or squeeze characters %

    # uppercase % echo 'hello' | tr 'a-z' 'A-Z' HELLO % # remove all but letters and numbers % echo "Hello, World!" | tr -d -c 'a-zA-Z0-9' HelloWorld%
  17. Logging ❖ syslog(3), rsyslog, syslog-ng ➢ standard logging framework ➢

    configurable ➢ provides local and remote logging ➢ facilities ▪ auth, authpriv, cron, daemon, ftp, kern, lpr, mail, syslog ▪ user, local0, …, local7 ➢ levels ▪ alert, crit, debug, emerg, err, info, notice, warning ➢ many scripting languages and libraries provide wrappers to syslog ▪ logger ➢ distributed loggers like splunk, flume, and logstash all provide standard collectors for syslog
  18. Simple Logger Usage ❖ SYNOPSIS ➢ logger [-is] [-f file]

    [-p pri] [-t tag] [--] [message ...] ➢ options ▪ -i ▪ -s ▪ -p facility.level logger "hello, world" logger -s "hello, world" logger -p user.emerg -t "reboot" "Goodbye, World" cmd | logger -t cmd -p local2.notice
  19. Logger Shell Functions #!/bin/bash TAG="`basename $0 .sh`[$$]" FACILITY=local1 loginfo() {

    logger -s -t $TAG -p ${FACILITY}.info -- "$*" } logerror() { logger -s -t $TAG -p ${FACILITY}.err -- "$*" } logwarn() { logger -s -t $TAG -p ${FACILITY}.warning -- "$*" }
  20. Using The Logger Functions #!/bin/bash . /usr/local/lib/logger.shlib FACILITY=local3 loginfo "starting...."

    if [ … ]; then logerror "my bad!" fi last-cmd logwarn "goodbye, rc=$?"
  21. Logger Output # to stderr logger[24250]: starting.... logger[24250]: my bad!

    logger[24250]: goodbye, rc=0 # in /var/log/messages Aug 18 22:03:58 frodo logger[24250]: starting.... Aug 18 22:03:58 frodo logger[24250]: my bad! Aug 18 22:03:58 frodo logger[24250]: goodbye, rc=0
  22. Power Tools ❖ sed ❖ awk ❖ m4, a powerful

    macro processor ❖ jq, sed and awk for JSON ❖ flex, a lexical analyzer ❖ bison, a parser generator
  23. sed ❖ Features ➢ stream editor ➢ good for intra-line

    text processing and regular expressions ➢ for each input line, execute a series of editor commands ➢ a wrapper around the regex library
  24. sed Commands ❖ common ➢ absolute and context addressing ➢

    s (substitute) ➢ d (delete) ➢ p (print) ➢ i (insert) and a (append) ➢ c (change) ❖ other ➢ y (transliterate) ➢ q (quit) ➢ b (branch) ➢ t (test) and T (Test)
  25. sed examples sed '1d' infile > outfile sed -e '1,5d'

    -e '$d' infile > outfile sed -n '/^cat/,/dog$/p' sed 's/cat/dog/' infile sed 's/\(\([a-z]\+\)[[:space:]]\+\2\)/\2/g' infile
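The last example above is the interesting one: the backreference \2 lets sed collapse a doubled word into a single copy. A hedged demo (GNU sed's \+ syntax assumed; the input string is invented):

```shell
# collapse doubled words: "the the" -> "the", "sat sat" -> "sat"
dedup=$(echo "the the cat sat sat down" | \
        sed 's/\(\([a-z]\+\)[[:space:]]\+\2\)/\2/g')
echo "$dedup"
```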
  26. sed substitution script #!/bin/bash … VAR1=..... VAR2=..... # remove leading

    and trailing whitespace, and empty lines # perform variable substitution for f in $files; do sed -i.bak -e 's/^[[:space:]]\+//' \ -e 's/[[:space:]]\+$//' \ -e 's/##.*//' \ -e '/^$/d' \ -e "s/{VAR1}/$VAR1/g" -e "s/{VAR2}/$VAR2/g" \ $f done
  27. awk ❖ Features ➢ small C interpreter optimized for string

    processing ➢ understands fields on input lines ➢ like sed, execute a series of commands for each input line ➢ automatic variable initialization ➢ associative arrays! ➢ many builtin special variables like NR and NF ➢ many builtin functions
  28. awk examples # cutting a field ps awx | awk

    '/ssh/ {print $1}' # quick stats for a list of numbers # cat stats.awk NR==1 { min=$1; max=$1} $1<min { min=$1 } $1>max { max=$1 } { sum+=$1} END { printf "min=%.2f\n", min; printf "max=%.2f\n", max; printf "sum=%.2f\n", sum; printf "avg=%.2f\n", sum/NR }
  29. awk stats output % awk -f stats.awk 1 2 3

    4 5 min=1.00 max=5.00 sum=15.00 avg=3.00
  30. awk in a script #!/bin/bash usage() { cat << EOF

    usage: cluster-uptime host [ … ] EOF } [ $# -eq 0 ] && { usage; exit 1; } for f in $*; do ssh $f uptime done | tr -d ',' | \ awk ' { min1+=$10; min5+=$11; min15+=$12 } END { printf "%.2f %.2f %.2f\n", min1/NR, min5/NR, min15/NR}'
  31. output from cluster-uptime % for x in localhost faramir ;

    do ssh $x uptime ; done 22:42:36 up 25 days, 2:28, 2 users, load average: 0.68, 0.55, 0.56 10:42PM up 79 days, 11:37, 2 users, load averages: 0.00, 0.00, 0.00 % ./cluster-uptime localhost faramir 0.39 0.28 0.28
  32. awk Associative Array #!/bin/bash # print the number of processes

    by cpu time descending ps ax | awk ' NR==1 { next } $4 != "0:00" { time[$4]++ } END { for (t in time) printf "%010s %5d\n", t, time[t] }' | \ sort -r | \ sed 's/^0*//' | \ awk '{printf "%10s %5d\n", $1, $2}'
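The essence of the script above is the associative array time[]. A minimal sketch of the same idiom with made-up input, easier to trace by eye:

```shell
# associative arrays in miniature: count how many times each key appears;
# iteration order of "for (k in n)" is unspecified, hence the sort
counts=$(printf 'a\nb\na\nc\na\n' | \
         awk '{ n[$1]++ } END { for (k in n) print k, n[k] }' | \
         sort)
echo "$counts"
```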
  33. ps ax PID TTY STAT TIME COMMAND 1 ? Ss

    0:00 init [5] 9 ? S< 0:01 [events/1] 16 ? S< 0:02 [kblockd/1] 208 ? S 0:04 [pdflush] 209 ? S 0:12 [pdflush] 210 ? S< 0:03 [kswapd0] 452 ? S< 0:02 [kjournald] 1674 ? S< 6:41 [kjournald] 1680 ? S< 0:01 [kjournald] 2163 ? Ss 1:13 syslogd 2222 ? Ss 0:13 irqbalance 2265 ? S< 2:00 [rpciod/1] 2322 ? Ss 3:02 dbus-daemon --system 2363 ? Ss 1:00 hald 2380 ? S 0:02 hald-addon-keyboard: listening on /dev/input/event0 2471 ? Ssl 0:27 automount --pid-file /var/run/autofs.pid 2698 ? Ss 0:02 avahi-daemon: running [frodo.local] 2842 tty7 Ss+ 439:13 /usr/bin/Xorg :0 -br -audit 0 -auth ...
  34. % ./psbytime 439:23 1 68:47 1 65:44 1 60:05 1

    9:03 1 6:41 1 3:02 1 2:23 1 2:08 1 2:05 1 2:00 1 1:56 1 1:13 1 1:06 1 1:03 1 1:00 1 :57 1 :48 1 :44 1 :27 2 :16 1 :13 1 :12 1 :09 1 :08 1 :07 1 :04 1 :03 2 :02 8 :01 5 psbytime output
  35. m4 input define(`BW_LOW', `250') define(`BW_MOD', `500') define(`BW_HIGH', `900') define(`I_MEDIUM', BW_MOD)

    define(`I_LARGE', BW_MOD) define(`I_XLARGE', BW_HIGH) define(`I_2XLARGE', BW_HIGH) define(`HI_WM_FRAC',`4/5') # 80% is the upper limit define(`LO_WM_FRAC',`1/5') # 20% is the lower limit define(`HIGH_WM', `eval($1 * HI_WM_FRAC / 8 * 60)000000') define(`LOW_WM', `eval($1 * LO_WM_FRAC / 8 * 60)000000') "BandwidthAlarmsPerInstanceType" : { "m3.medium" : { "high" : "HIGH_WM(I_MEDIUM)", "low" : "LOW_WM(I_MEDIUM)" }, "m3.large" : { "high" : "HIGH_WM(I_LARGE)", "low" : "LOW_WM(I_LARGE)" }, "m3.xlarge" : { "high" : "HIGH_WM(I_XLARGE)", "low" : "LOW_WM(I_XLARGE)" }, "m3.2xlarge" : { "high" : "HIGH_WM(I_2XLARGE)", "low" : "LOW_WM(I_2XLARGE)" } }
  36. m4 output "BandwidthAlarmsPerInstanceType": { "m3.medium": { "high": "3000000000", "low": "720000000"

    }, "m3.large": { "high": "3000000000", "low": "720000000" }, "m3.xlarge": { "high": "5400000000", "low": "1320000000" }, "m3.2xlarge": { "high": "5400000000", "low": "1320000000" } }
  37. Best Good Practices ❖ consistency ❖ include usage() and getopts

    processing ❖ configure via environment variables and command line overrides ❖ overquoting is usually safer than underquoting ❖ define CONSTANTS ❖ use logger ❖ ensure reentrancy with $$ (pid) in temp file names ❖ choose good exit codes ❖ use good software development practices ➢ comments ➢ version control ➢ modularity ❖ know when to switch to Perl/Python/Ruby/… or even C/C++ ❖ use Emacs to edit scripts
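Two of the practices above, sketched concretely: a $$-based temp file name for reentrancy and a trap so the file is cleaned up on every exit path (the file name is invented for illustration):

```shell
#!/bin/bash
# reentrancy: $$ (the shell's pid) in the temp file name means two
# concurrent runs of the script never clobber each other's scratch files
TMPFILE=/tmp/sprucing-demo.$$
trap 'rm -f "$TMPFILE"' EXIT       # clean up no matter how the script exits

echo "scratch data" > "$TMPFILE"
status=$?                          # 0 on success; pick exit codes deliberately
echo "wrote $TMPFILE (status=$status)"
```

mktemp(1), where available, is a safer modern alternative to hand-built $$ names.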