% latex generated by the script '/home/project-web/bumble/cgi-bin/text2latex.cgi'
% with help from the sed script 'text2latex.sed'
\documentclass[11pt]{article}
\usepackage[hmargin=2cm, vmargin=2cm]{geometry}
\usepackage{url}
\usepackage{hyperref}
\title{The Awk Text Processing Language}
\author{bumble.sourceforge.net}
\begin{document}
\maketitle
\tableofcontents

\section{Introduction}

Awk is a Unix tool, or programming language, designed to process and
transform text files which are arranged in a series of `fields'
(chunks of text separated by spaces, or any other delimiter) and
records. This document is mainly about the `mawk' variant of awk.

Find out the version of mawk:
\begin{verbatim}
mawk -W version
\end{verbatim}

Helpful man pages for awk:
\begin{itemize}
\item man gawk - the GNU awk man page
\item man ed - contains regular expression examples
\item man mawk - contains good examples
\item man regex - regular expression syntax
\end{itemize}

More awk one-liners: \url{http://sparky.rice.edu/awk.html}

\section{Gotchas}

The so-called ``gotchas'' are small but potentially frustrating
problems which arise and which stop a program from working, or which
make the awk program work in an unexpected way. ``Gotcha'' derives
from a contraction of the English phrase ``got you''.

On a Unix system, writing an awk script in double quotes does not
work as expected, because the Unix (bash) shell expands, or
``interpolates'', the \verb|$1| variable before awk sees it. It is
necessary to write the script in single quotes:
\begin{verbatim}
awk '{print $1}'
\end{verbatim}

\verb|BEGIN| blocks and built-in variables such as \verb|FS| must be
uppercase:
\begin{verbatim}
begin{FS=","}{print $2}   ##(No!! this doesnt work)
BEGIN{FS=","}{print $2}   ##(correct)
\end{verbatim}

\section{Simple usage}

Simple usage of awk on different operating systems:
\begin{verbatim}
Unix:     awk '/pattern/ {print "$1"}'    # standard Unix shells
DOS/Win:  awk '/pattern/ {print "$1"}'    # compiled with DJGPP, Cygwin
          awk "/pattern/ {print \"$1\"}"  # GnuWin32, UnxUtils, Mingw
\end{verbatim}

Users of MS-DOS or Microsoft Windows must remember that the percent
sign (\%) is used to indicate environment variables, so this symbol
must be doubled (\%\%) to yield a single percent sign visible to awk.

Run an awk script:
\begin{verbatim}
cat file1 | awk -f a.awk > file2
awk -f a.awk file1 > file2    ##(the same)
\end{verbatim}

\section{Strings}

\subsection{Concatenation of strings}

Concatenation is the term for joining two strings (bits of text)
together. In awk, writing two strings next to each other concatenates
them; there is no explicit operator.

Print the first two columns of the space/tab-delimited file `data.txt':
\begin{verbatim}
awk '{print $1 $2}' data.txt    ##($1 and $2 are printed with no space between)
awk '{print $1$2}' data.txt     ##(the same, at least on my mawk version)
awk '{print $1 $2;}' data.txt   ##(the same again)
awk '{print $1 "" $2}' data.txt ##(the same again, but why would you?)
\end{verbatim}

Awk does not have variable `interpolation' in strings:
\begin{verbatim}
awk '{print "$1 ..."}' data.txt ##(this prints '$1 ...' literally)
\end{verbatim}

Print the first column of `data.txt' with 3 dots `...' appended to it:
\begin{verbatim}
awk '{print $1 "..."}' data.txt
\end{verbatim}

Append a string to itself (string concatenation):
\begin{verbatim}
s = s "xxx";   ##(this appends 3 x's to the end of the string 's')
\end{verbatim}

\subsection{Matching patterns}

Determine if the variable \verb|s| contains the letter `r':
\begin{verbatim}
s ~ /r/
\end{verbatim}

Print the first field of each line if it does \emph{not} contain `a' or `b':
\begin{verbatim}
$1 !~ /(a|b)/ { print $1 }
$1 !~ /[ab]/ { print $1 }   ##(the same)
\end{verbatim}

Add an `X' between every letter of every line:
\begin{verbatim}
{ gsub(//, "X") ; print }
\end{verbatim}

Split the string \verb|s| into the array \verb|A| using the pattern \verb|r|:
\begin{verbatim}
split(s, A, r)
\end{verbatim}

\subsection{Printing strings}
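A plain print separates its arguments with OFS and ends each record with ORS; printf gives explicit control of widths and rounding. A minimal sketch, run from the shell (the sample input is invented for illustration):

```shell
# %-6s pads the first field to 6 characters, left justified;
# %.2f rounds the second field to 2 decimal places.
# printf adds no newline by itself, so \n is given in the format string.
printf 'pi 3.14159\n' | awk '{ printf "%-6s %.2f\n", $1, $2 }'
```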
Make a multiline string by escaping the ends of the lines:
\begin{verbatim}
print "\
\n\
\n\
"
\end{verbatim}

Print multiple expressions:
\begin{verbatim}
print "variable a is " a "."
\end{verbatim}

It is not possible to break a print statement across several lines:
\begin{verbatim}
print "variable a is"
      a ".";   ##(this doesnt work, at least not with mawk 1.3.3)
\end{verbatim}

\subsection{Newlines}

Display the file `days.txt' with all newline characters removed:
\begin{verbatim}
awk '{ printf "%s", $0 }' days.txt
cat days.txt | awk '{ printf "%s", $0 }'   ##(the same)
\end{verbatim}

Display `days.txt' with newline characters replaced with spaces:
\begin{verbatim}
awk '{ printf "%s ", $0 }' days.txt
cat days.txt | awk '{ printf "%s ", $0 }'
\end{verbatim}

\section{Arrays}

Delete an array called `record':
\begin{verbatim}
delete record
\end{verbatim}

Assign a value to an associative style array:
\begin{verbatim}
a["cars"] = 3
\end{verbatim}

\section{Regular expressions}

The regular expression meta-characters are:
\begin{verbatim}
^ $ . [ ] | ( ) * + ?
\end{verbatim}

Print all lines which start with an awk identifier:
\begin{verbatim}
BEGIN { identifier = "[_a-zA-Z][_a-zA-Z0-9]*" }
$0 ~ "^" identifier
\end{verbatim}

\subsection{Case insensitive matching}

Use \verb|tolower|:
\begin{verbatim}
tolower($0) ~ /bhp/ {print $0}
\end{verbatim}

Set the \verb|IGNORECASE| variable (this works in gawk, but not in mawk):
\begin{verbatim}
BEGIN {IGNORECASE=1} /bhp/ {print $0}
\end{verbatim}

\section{Loops}

Loop through each field of each record:
\begin{verbatim}
awk '{ for(i = 1 ; i <= NF ; i++) print $i }' /usr/share/dict/words
\end{verbatim}

Print each element of an array:
\begin{verbatim}
for ( i in aa ) print aa[i]
\end{verbatim}

\section{Splitting data fields}

The field separator variable \verb|FS| is interpreted as a regular
expression.

Split fields with any character followed by a colon `:' character:
\begin{verbatim}
BEGIN {FS=".:"}
\end{verbatim}

Split quoted comma-delimited fields (csv):
\begin{verbatim}
BEGIN {FS="\" *, *\""}
\end{verbatim}

\section{Built-in variables}

\begin{itemize}
\item ARGC - number of command line arguments.
\item ARGV - array of command line arguments, 0..ARGC-1.
\item CONVFMT - format for conversion of numbers to string, default "\%.6g".
\item ENVIRON - array indexed by environment variables. An environment string, var=value, is stored as ENVIRON[var] = value.
\item FILENAME - name of the current input file.
\item FNR - current record number in FILENAME.
\item FS - splits records into fields as a regular expression.
\item NF - number of fields in the current record.
\item NR - current record number in the total input stream.
\item OFMT - format for printing numbers; initially "\%.6g".
\item OFS - inserted between fields on output, initially " ".
\item ORS - terminates each record on output, initially "\textbackslash n".
\item RLENGTH - length set by the last call to the built-in function match().
\item RS - input record separator, initially "\textbackslash n".
\item RSTART - index set by the last call to match().
\item SUBSEP - used to build multiple array subscripts, initially "\textbackslash 034".
\end{itemize}

\section{Range of fields}

Awk has no simple way to print a range of fields such as \verb|$[1-4]|;
a `for' loop must be used to loop through the range and print each
field. Alternatively, one may use `cut'.

Use `cut' to print fields 1 to 5 from a comma-delimited file:
\begin{verbatim}
cut -d, -f1-5
\end{verbatim}

\section{Awk one-line recipes}

These one-line scripts were taken from
\url{http://www.pement.org/awk/awk1line.txt} (version 0.27, 30 April
2008, by Eric Pement - eric [at] pement.org), which is the latest
version of the Eric Pement one-line scripts (in English). A Chinese
translation is at \url{http://ximix.org/translation/awk1line_zh-CN.txt}.

\subsection{File spacing}

Double space a file:
\begin{verbatim}
awk '1;{print ""}'
awk 'BEGIN{ORS="\n\n"};1'   ##(another way)
\end{verbatim}

Double space a file which already has blank lines in it, so that the
output file contains no more than one blank line between lines of text.
Note: on Unix systems, DOS lines which have only CRLF (\verb|\r\n|) are
often treated as non-blank, and thus `NF' alone will return TRUE.
\begin{verbatim}
awk 'NF{print $0 "\n"}'
\end{verbatim}

Triple space a file:
\begin{verbatim}
awk '1;{print "\n"}'
\end{verbatim}

\subsection{Summing numeric columns}

Sum up all the numbers in column 2 and print the total at the end:
\begin{verbatim}
awk '{ a+=$2 } END { print "total=" a }' data.txt
\end{verbatim}

Sum a column between 2 lines in a file (with help from sed):
\begin{verbatim}
sed -n '/#1/,/#2/p' data.txt | awk -F, '{a+=$2; print $2, a}' | less
\end{verbatim}

\subsection{Line numbering}

Precede each line by its line number FOR THAT FILE (left alignment).
Using a tab (\verb|\t|) instead of a space will preserve margins.
\begin{verbatim}
awk '{print FNR "\t" $0}' files*
\end{verbatim}

Precede each line by its line number FOR ALL FILES TOGETHER, with a tab:
\begin{verbatim}
awk '{print NR "\t" $0}' files*
\end{verbatim}

Number each line of a file (number on left, right-aligned).
Double the percent signs if typing from the DOS command prompt.
\begin{verbatim}
awk '{printf("%5d : %s\n", NR, $0)}'
\end{verbatim}

Number each line of a file, but only print numbers if the line is not
blank (remember the caveats about the Unix treatment of \verb|\r|
mentioned above):
\begin{verbatim}
awk 'NF{$0=++a " :" $0};1'
awk '{print (NF? ++a " :" :"") $0}'
\end{verbatim}

Count lines (emulates "wc -l"):
\begin{verbatim}
awk 'END{print NR}'
\end{verbatim}

Print the sums of the fields of every line:
\begin{verbatim}
awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'
\end{verbatim}

Add all fields in all lines and print the sum:
\begin{verbatim}
awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'
\end{verbatim}

Print every line after replacing each field with its absolute value:
\begin{verbatim}
awk '{for (i=1; i<=NF; i++) if ($i < 0) $i = -$i; print }'
awk '{for (i=1; i<=NF; i++) $i = ($i < 0) ? -$i : $i; print }'
\end{verbatim}

Print the total number of fields ("words") in all lines:
\begin{verbatim}
awk '{ total = total + NF }; END {print total}' file
\end{verbatim}

Print the total number of lines that contain "Beth":
\begin{verbatim}
awk '/Beth/{n++}; END {print n+0}' file
\end{verbatim}

Print the largest first field and the line that contains it
(intended for finding the longest string in field \#1):
\begin{verbatim}
awk '$1 > max {max=$1; maxline=$0}; END{ print max, maxline}'
\end{verbatim}

\subsection{The number of fields}

Print the number of fields in each line, followed by the line:
\begin{verbatim}
awk '{ print NF ":" $0 }'
\end{verbatim}

Print the last field of each line:
\begin{verbatim}
awk '{ print $NF }'
\end{verbatim}

Print the last field of the last line:
\begin{verbatim}
awk '{ field = $NF }; END{ print field }'
\end{verbatim}

Print every line with more than 4 fields:
\begin{verbatim}
awk 'NF > 4'
\end{verbatim}

Print every line where the value of the last field is greater than 4:
\begin{verbatim}
awk '$NF > 4'
\end{verbatim}

\subsection{String creation}

Create a string of a specific length (e.g. generate 513 spaces):
\begin{verbatim}
awk 'BEGIN{while (a++<513) s=s " "; print s}'
\end{verbatim}

Insert a string of a specific length at a certain character position
(for example, insert 49 spaces after column \#6 of each input line):
\begin{verbatim}
gawk --re-interval 'BEGIN{while(a++<49)s=s " "};{sub(/^.{6}/,"&" s)};1'
\end{verbatim}

\subsection{Array creation}

These next two entries are not one-line scripts, but the technique is
so handy that it merits inclusion here.

Create an array named "month", indexed by numbers, so that month[1]
is `Jan', month[2] is `Feb', month[3] is `Mar' and so on:
\begin{verbatim}
split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ")
\end{verbatim}

Create an array named "mdigit", indexed by strings, so that
mdigit["Jan"] is 1, mdigit["Feb"] is 2, etc.
(Requires the "month" array above.)
\begin{verbatim}
for (i=1; i<=12; i++) mdigit[month[i]] = i
\end{verbatim}

\subsection{Text conversion and substitution}

In a Unix environment: convert DOS newlines (CR/LF) to Unix format:
\begin{verbatim}
awk '{sub(/\r$/,"")};1'   ##(assumes EACH line ends with Ctrl-M)
\end{verbatim}

In a Unix environment: convert Unix newlines (LF) to DOS format:
\begin{verbatim}
awk '{sub(/$/,"\r")};1'
\end{verbatim}

In a DOS environment: convert Unix newlines (LF) to DOS format:
\begin{verbatim}
awk 1
\end{verbatim}

In a DOS environment: convert DOS newlines (CR/LF) to Unix format.
This cannot be done with DOS versions of awk, other than gawk:
\begin{verbatim}
gawk -v BINMODE="w" '1' infile >outfile
\end{verbatim}

Use "tr" instead:
\begin{verbatim}
tr -d \r <infile >outfile   ##(GNU tr version 1.22 or higher)
\end{verbatim}

Delete leading whitespace (spaces, tabs) from the front of each line
(aligns all text flush left):
\begin{verbatim}
awk '{sub(/^[ \t]+/, "")};1'
\end{verbatim}

Delete trailing whitespace (spaces, tabs) from the end of each line:
\begin{verbatim}
awk '{sub(/[ \t]+$/, "")};1'
\end{verbatim}

Delete BOTH leading and trailing whitespace from each line:
\begin{verbatim}
awk '{gsub(/^[ \t]+|[ \t]+$/,"")};1'
awk '{$1=$1};1'   ##(also removes extra space between fields)
\end{verbatim}

Insert 5 blank spaces at the beginning of each line (make a page offset):
\begin{verbatim}
awk '{sub(/^/, "     ")};1'
\end{verbatim}

Align all text flush right on a 79-column width:
\begin{verbatim}
awk '{printf "%79s\n", $0}' file*
\end{verbatim}

Center all text on a 79-character width:
\begin{verbatim}
awk '{l=length();s=int((79-l)/2); printf "%"(s+l)"s\n",$0}' file*
\end{verbatim}

Substitute (find and replace) "foo" with "bar" on each line:
\begin{verbatim}
awk '{sub(/foo/,"bar")}; 1'           ##(replace only 1st instance)
gawk '{$0=gensub(/foo/,"bar",4)}; 1'  ##(replace only 4th instance)
awk '{gsub(/foo/,"bar")}; 1'          ##(replace ALL instances in a line)
\end{verbatim}

Substitute "foo" with "bar" ONLY on lines which contain "baz":
\begin{verbatim}
awk '/baz/{gsub(/foo/, "bar")}; 1'
\end{verbatim}

Substitute "foo" with "bar" EXCEPT on lines which contain "baz":
\begin{verbatim}
awk '!/baz/{gsub(/foo/, "bar")}; 1'
\end{verbatim}

Change "scarlet" or "ruby" or "puce" to "red":
\begin{verbatim}
awk '{gsub(/scarlet|ruby|puce/, "red")}; 1'
\end{verbatim}

Reverse the order of lines (emulates "tac"):
\begin{verbatim}
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file*
\end{verbatim}

If a line ends with a backslash, append the next line to it (fails if
there are multiple consecutive lines ending with a backslash):
\begin{verbatim}
awk '/\\$/ {sub(/\\$/,""); getline t; print $0 t; next}; 1' file*
\end{verbatim}

Print and sort the login names of all users:
\begin{verbatim}
awk -F ":" '{print $1 | "sort" }' /etc/passwd
\end{verbatim}

\subsection{Rearranging fields or columns}

Print the first 2 fields, in opposite order, of every line:
\begin{verbatim}
awk '{print $2, $1}' file
\end{verbatim}

Switch the first 2 fields of every line:
\begin{verbatim}
awk '{temp = $1; $1 = $2; $2 = temp; print}' file
\end{verbatim}

Print every line, deleting the second field of that line:
\begin{verbatim}
awk '{ $2 = ""; print }'
\end{verbatim}

Print the fields of every line in reverse order:
\begin{verbatim}
awk '{for (i=NF; i>0; i--) printf("%s ",$i); print ""}' file
\end{verbatim}

Concatenate every 5 lines of input, using a comma separator between fields:
\begin{verbatim}
awk 'ORS=NR%5?",":"\n"' file
\end{verbatim}

\subsection{Selective printing of certain lines}

Print the first 10 lines of a file (emulates the behavior of "head"):
\begin{verbatim}
awk 'NR < 11'
\end{verbatim}

Print the first line of a file (emulates "head -1"):
\begin{verbatim}
awk 'NR>1{exit};1'
\end{verbatim}

Print the last 2 lines of a file (emulates "tail -2"):
\begin{verbatim}
awk '{y=x "\n" $0; x=$0};END{print y}'
\end{verbatim}

Print the last line of a file (emulates "tail -1"):
\begin{verbatim}
awk 'END{print}'
\end{verbatim}

Print only lines which match a regular expression (emulates "grep"):
\begin{verbatim}
awk '/regex/'
\end{verbatim}

Print only lines which do NOT match a regex (emulates "grep -v"):
\begin{verbatim}
awk '!/regex/'
\end{verbatim}

Print any line where field \#5 is equal to "abc123":
\begin{verbatim}
awk '$5 == "abc123"'
\end{verbatim}

Print only those lines where field \#5 is NOT equal to "abc123"
(this will also print lines which have fewer than 5 fields):
\begin{verbatim}
awk '$5 != "abc123"'
awk '!($5 == "abc123")'
\end{verbatim}

Match a field against a regular expression:
\begin{verbatim}
awk '$7  ~ /^[a-f]/'   ##(print line if field #7 matches the regex)
awk '$7 !~ /^[a-f]/'   ##(print line if field #7 does NOT match)
\end{verbatim}

Print the line immediately before a regex, but not the line containing
the regex:
\begin{verbatim}
awk '/regex/{print x};{x=$0}'
awk '/regex/{print (NR==1 ? "match on line 1" : x)};{x=$0}'
\end{verbatim}

Print the line immediately after a regex, but not the line containing
the regex:
\begin{verbatim}
awk '/regex/{getline;print}'
\end{verbatim}

Grep for AAA and BBB and CCC (in any order on the same line):
\begin{verbatim}
awk '/AAA/ && /BBB/ && /CCC/'
\end{verbatim}

Grep for AAA and BBB and CCC (in that order):
\begin{verbatim}
awk '/AAA.*BBB.*CCC/'
\end{verbatim}

Print only lines of 65 characters or longer:
\begin{verbatim}
awk 'length > 64'
\end{verbatim}

Print only lines of less than 65 characters:
\begin{verbatim}
awk 'length < 65'
\end{verbatim}

Print a section of a file from a regular expression to end of file:
\begin{verbatim}
awk '/regex/,0'
awk '/regex/,EOF'
\end{verbatim}

Print a section of a file based on line numbers (lines 8-12, inclusive):
\begin{verbatim}
awk 'NR==8,NR==12'
\end{verbatim}

Print line number 52:
\begin{verbatim}
awk 'NR==52'
awk 'NR==52 {print;exit}'   ##(more efficient on large files)
\end{verbatim}

Print a section of a file between two regular expressions (inclusive):
\begin{verbatim}
awk '/Iowa/,/Montana/'   ##(case sensitive)
\end{verbatim}

\subsection{Selective deletion of certain lines}

Delete ALL blank lines from a file (same as "grep '.'"):
\begin{verbatim}
awk NF
awk '/./'
\end{verbatim}

Remove duplicate, consecutive lines (emulates "uniq"):
\begin{verbatim}
awk 'a !~ $0; {a=$0}'
\end{verbatim}

Remove duplicate, nonconsecutive lines:
\begin{verbatim}
awk '!a[$0]++'                  ##(most concise script)
awk '!($0 in a){a[$0];print}'   ##(most efficient script)
\end{verbatim}

\subsection{Pipe awk output to the shell}

This technique allows each line generated by an awk script to be
executed by the shell.

Move files to the "iraf" folder and add ".dat" to the names:
\begin{verbatim}
ls junk* | awk '{print "mv "$0" ../iraf/"$0".dat"}' | sh
\end{verbatim}

\subsection{More one-line examples}

Print the first two fields in opposite order:
\begin{verbatim}
awk '{ print $2, $1 }' file
\end{verbatim}

Print lines longer than 72 characters:
\begin{verbatim}
awk 'length > 72' file
\end{verbatim}

Print the length of the string in the 2nd column:
\begin{verbatim}
awk '{print length($2)}' file
\end{verbatim}

Add up the first column, print the sum and average:
\begin{verbatim}
{ s += $1 }
END { print "sum is", s, " average is", s/NR }
\end{verbatim}

Print fields in reverse order:
\begin{verbatim}
awk '{ for (i = NF; i > 0; --i) print $i }' file
\end{verbatim}

Print the last line:
\begin{verbatim}
{line = $0}
END {print line}
\end{verbatim}

Print the total number of lines that contain the word Pat:
\begin{verbatim}
/Pat/ {nlines = nlines + 1}
END {print nlines}
\end{verbatim}

Print all lines between start/stop pairs:
\begin{verbatim}
awk '/start/, /stop/' file
\end{verbatim}

Print all lines whose first field is different from the previous one:
\begin{verbatim}
awk '$1 != prev { print; prev = $1 }' file
\end{verbatim}

Print column 3 if column 1 is greater than column 2:
\begin{verbatim}
awk '$1 > $2 {print $3}' file
\end{verbatim}

Print the line if column 3 is greater than column 2:
\begin{verbatim}
awk '$3 > $2' file
\end{verbatim}

Count the number of lines where col 3 is greater than col 1:
\begin{verbatim}
awk '$3 > $1 {print i + "1"; i++}' file
\end{verbatim}

Print a sequence number and then column 1 of the file:
\begin{verbatim}
awk '{print NR, $1}' file
\end{verbatim}

Print every line after erasing the 2nd field:
\begin{verbatim}
awk '{$2 = ""; print}' file
\end{verbatim}

Print "hi" 28 times:
\begin{verbatim}
yes | head -28 | awk '{ print "hi" }'
\end{verbatim}

Print hi.0010 to hi.0099 (NOTE IRAF USERS!):
\begin{verbatim}
yes | head -90 | awk '{printf("hi00%2.0f \n", NR+9)}'
\end{verbatim}

Print out 4 random numbers between 0 and 1:
\begin{verbatim}
yes | head -4 | awk '{print rand()}'
\end{verbatim}

Print out 40 random integers modulo 5:
\begin{verbatim}
yes | head -40 | awk '{print int(100*rand()) % 5}'
\end{verbatim}

Replace every field by its absolute value:
\begin{verbatim}
{ for (i = 1; i <= NF; i=i+1) if ($i < 0) $i = -$i; print }
\end{verbatim}

\section{The field delimiter}

The field delimiter, or separator, determines how awk divides up each
line of the text file into `fields' or `columns', which can then be
accessed with the \verb|$1|, \verb|$2|, \ldots{} variables. The
delimiter can be a regular expression (unlike with `cut', for
example). The default awk field delimiter is whitespace: any run of
spaces or tabs.

Use `|' as the field delimiter and print the 4th field:
\begin{verbatim}
awk -F"|" '{print $4}' filename
awk -F'|' '{print $4}' filename            ##(the same)
awk -F\| '{print $4}' filename             ##(should work)
awk 'BEGIN {FS="|"} {print $4}' filename   ##(the same)
\end{verbatim}

Set the field delimiter to be a comma followed by a space, and print
the 2nd field:
\begin{verbatim}
awk -F', ' '{print $2}' data.txt
\end{verbatim}

Set the field delimiter to be a comma followed by any number of spaces:
\begin{verbatim}
awk -F', *' '{print $2}' data.txt
awk 'BEGIN{FS=", *"}{print $2}' data.txt     ##(the same)
awk 'BEGIN{FS=", *";};{print $2;}' data.txt  ##(the same again)
\end{verbatim}

Set the field delimiter to be the double quote character:
\begin{verbatim}
awk -F'"' '{print $2}' data.txt
awk -F\" '{print $2}' data.txt   ##(the same)
\end{verbatim}

Set the field delimiter to be one or more `+' plus signs:
\begin{verbatim}
awk -F'\++' '{print $2}' data.txt
\end{verbatim}

Set the field delimiter to a space followed by one or more `*' star signs:
\begin{verbatim}
awk -F' \*+' '{print $2}' data.txt
\end{verbatim}

\section{Some looping commands}

Remove a bunch of print jobs from the queue:
\begin{verbatim}
BEGIN {
    for (i=875;i>833;i--) {
        printf "lprm -Plw %d\n", i
    }
    exit
}
\end{verbatim}

\section{Printf}

Example format strings for `printf', e.g.
\begin{verbatim}
printf("howdy %-8s What it is bro. %.2f\n", $1, $2*$3)
\end{verbatim}
\begin{itemize}
\item \%s - string
\item \%-8s - 8 character string, left justified
\item \%.2f - number with 2 places after the decimal point
\item \%6.2f - field 6 characters wide, with 2 places after the decimal point
\item \textbackslash n - newline
\item \textbackslash t - tab
\end{itemize}

Find the maximum and minimum values present in column 1:
\begin{verbatim}
NR == 1 {m=$1 ; p=$1}
$1 >= m {m = $1}
$1 <= p {p = $1}
END { print "Max = " m, " Min = " p }
\end{verbatim}

Example of defining variables, and multiple commands on one line:
\begin{verbatim}
NR == 1 {prev=$4; preva = $1; prevb = $2; n=0; sum=0}
$4 != prev {print preva, prevb, prev, sum/n;
            n=0; sum=0; prev = $4; preva = $1; prevb = $2}
$4 == prev {n++; sum=sum+$5/$6}
END {print preva, prevb, prev, sum/n}
\end{verbatim}

Example of defining and using a function, inserting values into an
array, and doing integer arithmetic mod(n). This script finds the
number of days elapsed since Jan 1, 1901 (from
\url{http://www.netlib.org/research/awkbookcode/ch3}):
\begin{verbatim}
function daynum(y, m, d,    days, i, n)
{   # 1 == Jan 1, 1901
    split("31 28 31 30 31 30 31 31 30 31 30 31", days)
    # 365 days a year, plus one for each leap year
    n = (y-1901) * 365 + int((y-1901)/4)
    if (y % 4 == 0)   # leap year from 1901 to 2099
        days[2]++
    for (i = 1; i < m; i++)
        n += days[i]
    return n + d
}
{ print daynum($1, $2, $3) }
\end{verbatim}

Example of using substrings: substr(\$2,9,7) picks out characters 9
through 15 of column 2.
\begin{verbatim}
{print "imarith", substr($2,1,7) " - " $3, "out."substr($2,5,3)}
{print "imarith", substr($2,9,7) " - " $3, "out."substr($2,13,3)}
{print "imarith", substr($2,17,7) " - " $3, "out."substr($2,21,3)}
{print "imarith", substr($2,25,7) " - " $3, "out."substr($2,29,3)}
\end{verbatim}

\section{Password generation with awk}

A useful capability of awk may be in the generation of password
`dictionary' files, for use with security auditing programs
(`password crackers') such as "john" and "aircrack-ng". Awk may be
used to amplify and multiplex a text password dictionary file.

Print each line joined to the preceding line:
\begin{verbatim}
awk '{print last $0; last=$0}' /usr/share/dict/words | less
\end{verbatim}

As above, but with all punctuation removed:
\begin{verbatim}
awk '{gsub(/[[:punct:]]/,""); print last $0; last=$0}' /usr/share/dict/words | less
\end{verbatim}

\section{Software written with awk}

\url{http://www.soimort.org/translate-shell/} - a very good command
line script to access Google translate.

\section{Alternatives to awk}

Awk is particularly good at handling `csv' (comma separated values)
data and other tabular-style data.
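As a quick illustration of that, the quoted-csv field-separator trick shown in the section on splitting data fields can be tested directly from the shell (the sample record is invented for illustration):

```shell
# FS is the regex  " *, *"  (closing quote, comma, opening quote, with
# optional spaces around the comma), so the quoted values become the
# fields; the gsub trims the outer quotes from the whole record.
printf '"ford", "falcon", "1970"\n' |
    awk 'BEGIN { FS="\" *, *\"" } { gsub(/^"|"$/, ""); print $2 }'
```

Note that assigning to \$0 (which gsub does here) makes awk re-split the record using the new FS.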
Alternative tools include:
\begin{itemize}
\item cut
\item perl
\item sed
\end{itemize}

\section{Books about awk}

\begin{itemize}
\item ``sed \& awk'', 2nd edition, by Dale Dougherty and Arnold Robbins (O'Reilly, 1997)
\item ``UNIX Text Processing'', by Dale Dougherty and Tim O'Reilly (Hayden Books, 1987)
\item ``GAWK: Effective awk Programming'', 3rd edition, by Arnold D. Robbins (O'Reilly, 2003), also at \url{http://www.gnu.org/software/gawk/manual/}
\item ``Mastering Regular Expressions'', 3rd edition, by Jeffrey Friedl (O'Reilly, 2006)
\end{itemize}

The info and manual ("man") pages on Unix systems may be helpful (try
"man awk", "man nawk", "man gawk", "man regexp", or the section on
regular expressions in "man ed").

\section{Awk contributors}

Peter S. Tillier (U.K.); Daniel Jana; Yisu Dong

\section{History}

Awk was created by Alfred Aho, Peter Weinberger and Brian Kernighan.
Mr Aho has written some dense books about computer science. Brian
Kernighan was an important figure in the early development of Unix:
he co-authored the first C book, ``The C Programming Language'', and
has also maintained nroff (used for man pages) as well as other
obscure Unix tools.

\section{Notes}

Convert numbers to SI notation:
\begin{verbatim}
awk '{ split(sprintf("%1.3e", $1), b, "e");
       p = substr("yzafpnum_kMGTPEZY", (b[2]/3)+9, 1);
       o = sprintf("%f", b[1] * (10 ^ (b[2]%3)));
       gsub(/\./, p, o);
       print substr( gensub(/_[[:digit:]]*/, "", "g", o), 1, 4); }' < test.dat
\end{verbatim}

\section{Document notes}

This section contains information about the document and will not
normally be printed.
\\# A small (16x16) icon image to identify the book document-icon: \\# A larger image to identify or illustrate the title page document-image: \\# what sort of document is this document-type: book \\# in what kind of state (good or bad) is this document document-quality: \\# when was this document last updated last-revision: \\# who wrote this document-history: * 2009 book begun in a desultory fashion * 1 april 2010 upgraded the notes meta section * April 2015 some revision for the 'gx' script authors: mjbishop at fastmail dot fm \\# a short description of the contents, possibly used for doc lists short-description: \\# A computer language which is contained in the document, if any code-language: awk \\# the script which will be used to produce html (a webpage) make-html: ./book-html.sh \\# the script which will produce 'LaTeX' output (for printing, pdf etc) make-latex: ./booktolatex.cgi NOTES * print the size of the largest file or folder in the directory >> du -s * | awk '\\$1 > x \{x=\\$1\}END\{print \\$x\}' * transpose a file >> awk '\{ for (f = 1; f <= NF; f++) a[NR, f] = \\$f \} NF > nf \{ nf = NF \} END \{ for (f = 1; f <= nf; f++) for (r = 1; r <= NR; r++) printf a[r, f] (r==NR ? 
RS : FS) \}' * print all except first column >> awk '\{\\$1=""; print\}' * Fibonacci numbers with awk >> seq 50| awk 'BEGIN \{a=1; b=1\} \{print a; c=a+b; a=b; b=c\}' * reverse-print contents of a file >> nawk '\{line[NR]=\\$0\} END\{for (; NR>=1; NR--)\{print line[NR]\}\}' FILENAME >> awk '\{if (NR \\% 2 == 1) print \\$0\}' file.txt >> awk '\{if (NR == 3) print\}' * Awk: Perform a rolling average on a column of data >> awk 'BEGIN\{size=5\} \{mod=NR\\%size; if(NR<=size)\{count++\}else\{sum-=array[mod]\};sum+=\\$1;array[mod]=\\$1;print sum/count\}' file.dat HANDY ONE-LINE SCRIPTS FOR AWK Compiled by Eric Pement - eric [at] pement.org version 0.27 Latest version of this file (in English) is usually at: \url{http://www.pement.org/awk/awk1line.txt} USAGE: Unix: awk '/pattern/ \{print "\\$1"\}' \\# standard Unix shells DOS/Win: awk '/pattern/ \{print "\\$1"\}' \\# compiled with DJGPP, Cygwin awk "/pattern/ \{print \\$\\backslash\\$"\\$1\\$\\backslash\\$"\}" \\# GnuWin32, UnxUtils, Mingw If a script will not need to be quoted in Unix, DOS, or CMD, then I normally omit the quote marks. If an example is peculiar to GNU awk, the command 'gawk' will be used. Please notify me if you find errors or new commands to add to this list (total length under 65 characters). I usually try to put the shortest script first. To conserve space, I normally use '1' instead of '\{print\}' to print each line. Either one will work. FILE SPACING: \\# double space a file awk '1;\{print ""\}' awk 'BEGIN\{ORS="\\$\\backslash\\$n\\$\\backslash\\$n"\};1' \\# double space a file which already has blank lines in it. Output file \\# should contain no more than one blank line between lines of text. \\# NOTE: On Unix systems, DOS lines which have only CRLF (\\$\\backslash\\$r\\$\\backslash\\$n) are \\# often treated as non-blank, and thus 'NF' alone will return TRUE. 
awk 'NF\{print \\$0 "\\$\\backslash\\$n"\}' \\# triple space a file awk '1;\{print "\\$\\backslash\\$n"\}' NUMBERING AND CALCULATIONS: \\# precede each line by its line number FOR THAT FILE (left alignment). \\# Using a tab (\\$\\backslash\\$t) instead of space will preserve margins. awk '\{print FNR "\\$\\backslash\\$t" \\$0\}' files* \\# precede each line by its line number FOR ALL FILES TOGETHER, with tab. awk '\{print NR "\\$\\backslash\\$t" \\$0\}' files* \\# number each line of a file (number on left, right-aligned) \\# Double the percent signs if typing from the DOS command prompt. awk '\{printf("\\%5d : \\%s\\$\\backslash\\$n", NR,\\$0)\}' \\# number each line of file, but only print numbers if line is not blank \\# Remember caveats about Unix treatment of \\$\\backslash\\$r (mentioned above) awk 'NF\{\\$0=++a " :" \\$0\};1' awk '\{print (NF? ++a " :" :"") \\$0\}' \\# count lines (emulates "wc -l") awk 'END\{print NR\}' \\# print the sums of the fields of every line awk '\{s=0; for (i=1; i<=NF; i++) s=s+\\$i; print s\}' \\# add all fields in all lines and print the sum awk '\{for (i=1; i<=NF; i++) s=s+\\$i\}; END\{print s\}' \\# print every line after replacing each field with its absolute value awk '\{for (i=1; i<=NF; i++) if (\\$i < 0) \\$i = -\\$i; print \}' awk '\{for (i=1; i<=NF; i++) \\$i = (\\$i < 0) ? 
-\\$i : \\$i; print \}' \\# print the total number of fields ("words") in all lines awk '\{ total = total + NF \}; END \{print total\}' file \\# print the total number of lines that contain "Beth" awk '/Beth/\{n++\}; END \{print n+0\}' file \\# print the largest first field and the line that contains it \\# Intended for finding the longest string in field \\#1 awk '\\$1 > max \{max=\\$1; maxline=\\$0\}; END\{ print max, maxline\}' \\# print the number of fields in each line, followed by the line awk '\{ print NF ":" \\$0 \} ' \\# print the last field of each line awk '\{ print \\$NF \}' \\# print the last field of the last line awk '\{ field = \\$NF \}; END\{ print field \}' \\# print every line with more than 4 fields awk 'NF > 4' \\# print every line where the value of the last field is > 4 awk '\\$NF > 4' STRING CREATION: \\# create a string of a specific length (e.g., generate 513 spaces) awk 'BEGIN\{while (a++<513) s=s " "; print s\}' \\# insert a string of specific length at a certain character position \\# Example: insert 49 spaces after column \\#6 of each input line. gawk --re-interval 'BEGIN\{while(a++<49)s=s " "\};\{sub(/\\^.\{6\}/,"\\&" s)\};1' ARRAY CREATION: \\# These next 2 entries are not one-line scripts, but the technique \\# is so handy that it merits inclusion here. \\# create an array named "month", indexed by numbers, so that month[1] \\# is 'Jan', month[2] is 'Feb', month[3] is 'Mar' and so on. split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ") \\# create an array named "mdigit", indexed by strings, so that \\# mdigit["Jan"] is 1, mdigit["Feb"] is 2, etc. 
Requires "month" array for (i=1; i<=12; i++) mdigit[month[i]] = i TEXT CONVERSION AND SUBSTITUTION: \\# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format awk '\{sub(/\\$\\backslash\\$r\\$/,"")\};1' \\# assumes EACH line ends with Ctrl-M \\# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format awk '\{sub(/\\$/,"\\$\\backslash\\$r")\};1' \\# IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format awk 1 \\# IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format \\# Cannot be done with DOS versions of awk, other than gawk: gawk -v BINMODE="w" '1' infile >outfile \\# Use "tr" instead. tr -d \\$\\backslash\\$r outfile \\# GNU tr version 1.22 or higher \\# delete leading whitespace (spaces, tabs) from front of each line \\# aligns all text flush left awk '\{sub(/\\^[ \\$\\backslash\\$t]+/, "")\};1' \\# delete trailing whitespace (spaces, tabs) from end of each line awk '\{sub(/[ \\$\\backslash\\$t]+\\$/, "")\};1' \\# delete BOTH leading and trailing whitespace from each line awk '\{gsub(/\\^[ \\$\\backslash\\$t]+|[ \\$\\backslash\\$t]+\\$/,"")\};1' awk '\{\\$1=\\$1\};1' \\# also removes extra space between fields \\# insert 5 blank spaces at beginning of each line (make page offset) awk '\{sub(/\\^/, " ")\};1' \\# align all text flush right on a 79-column width awk '\{printf "\\%79s\\$\\backslash\\$n", \\$0\}' file* \\# center all text on a 79-character width awk '\{l=length();s=int((79-l)/2); printf "\\%"(s+l)"s\\$\\backslash\\$n",\\$0\}' file* \\# substitute (find and replace) "foo" with "bar" on each line awk '\{sub(/foo/,"bar")\}; 1' \\# replace only 1st instance gawk '\{\\$0=gensub(/foo/,"bar",4)\}; 1' \\# replace only 4th instance awk '\{gsub(/foo/,"bar")\}; 1' \\# replace ALL instances in a line \\# substitute "foo" with "bar" ONLY for lines which contain "baz" awk '/baz/\{gsub(/foo/, "bar")\}; 1' \\# substitute "foo" with "bar" EXCEPT for lines which contain "baz" awk '!/baz/\{gsub(/foo/, "bar")\}; 1' \\# change "scarlet" or 
\\# "ruby" or "puce" to "red"
awk '\{gsub(/scarlet|ruby|puce/, "red")\}; 1'

\\# reverse order of lines (emulates "tac")
awk '\{a[i++]=\\$0\} END \{for (j=i-1; j>=0;) print a[j--] \}' file*

\\# if a line ends with a backslash, append the next line to it (fails if
\\# there are multiple lines ending with backslash...)
awk '/\\$\\backslash\\$\\$\\backslash\\$\\$/ \{sub(/\\$\\backslash\\$\\$\\backslash\\$\\$/,""); getline t; print \\$0 t; next\}; 1' file*

\\# print and sort the login names of all users
awk -F ":" '\{print \\$1 | "sort" \}' \texttt{/etc/passwd}

\\# print the first 2 fields, in opposite order, of every line
awk '\{print \\$2, \\$1\}' file

\\# switch the first 2 fields of every line
awk '\{temp = \\$1; \\$1 = \\$2; \\$2 = temp; print\}' file

\\# print every line, deleting the second field of that line
awk '\{ \\$2 = ""; print \}'

\\# print in reverse order the fields of every line
awk '\{for (i=NF; i>0; i--) printf("\\%s ",\\$i); print ""\}' file

\\# concatenate every 5 lines of input, using a comma separator
\\# between fields
awk 'ORS=NR\\%5?",":"\\$\\backslash\\$n"' file

SELECTIVE PRINTING OF CERTAIN LINES:

\\# print the first 10 lines of a file (emulates behavior of "head")
awk 'NR < 11'

\\# print the first line of a file (emulates "head -1")
awk 'NR>1\{exit\};1'

\\# print the last 2 lines of a file (emulates "tail -2")
awk '\{y=x "\\$\\backslash\\$n" \\$0; x=\\$0\};END\{print y\}'

\\# print the last line of a file (emulates "tail -1")
awk 'END\{print\}'

\\# print only lines which match regular expression (emulates "grep")
awk '/regex/'

\\# print only lines which do NOT match regex (emulates "grep -v")
awk '!/regex/'

\\# print any line where field \\#5 is equal to "abc123"
awk '\\$5 == "abc123"'

\\# print only those lines where field \\#5 is NOT equal to "abc123"
\\# This will also print lines which have less than 5 fields.
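A quick demonstration of the caveat just noted: on a line with fewer than 5 fields, \\$5 is the empty string, so the inequality test succeeds and the line is printed anyway. The sample input below is mine, not from the original FAQ:

```shell
# Demo: a 3-field line passes the '$5 != "abc123"' test,
# because $5 is "" on that line and "" != "abc123".
printf 'one two three\n' | awk '$5 != "abc123"'
# prints: one two three
```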
awk '\\$5 != "abc123"'
awk '!(\\$5 == "abc123")'

\\# matching a field against a regular expression
awk '\\$7 \\$\\verb|~| \texttt{/\\^[a-f]/'}    \\# print line if field \\#7 matches regex
awk '\\$7 !\\$\\verb|~| \texttt{/\\^[a-f]/'}   \\# print line if field \\#7 does NOT match regex

\\# print the line immediately before a regex, but not the line
\\# containing the regex
awk '/regex/\{print x\};\{x=\\$0\}'
awk '/regex/\{print (NR==1 ? "match on line 1" : x)\};\{x=\\$0\}'

\\# print the line immediately after a regex, but not the line
\\# containing the regex
awk '/regex/\{getline;print\}'

\\# grep for AAA and BBB and CCC (in any order on the same line)
awk '/AAA/ \\&\\& \texttt{/BBB/} \\&\\& \texttt{/CCC/'}

\\# grep for AAA and BBB and CCC (in that order)
awk '/AAA.*BBB.*CCC/'

\\# print only lines of 65 characters or longer
awk 'length > 64'

\\# print only lines of less than 65 characters
awk 'length < 65'

\\# print section of file from regular expression to end of file
awk '/regex/,0'
awk '/regex/,EOF'

\\# print section of file based on line numbers (lines 8-12, inclusive)
awk 'NR==8,NR==12'

\\# print line number 52
awk 'NR==52'
awk 'NR==52 \{print;exit\}'   \\# more efficient on large files

\\# print section of file between two regular expressions (inclusive)
awk '/Iowa/,/Montana/'   \\# case sensitive

SELECTIVE DELETION OF CERTAIN LINES:

\\# delete ALL blank lines from a file (same as "grep '.' ")
awk NF
awk '/./'

\\# remove duplicate, consecutive lines (emulates "uniq")
awk 'a !\\$\\verb|~| \\$0; \{a=\\$0\}'

\\# remove duplicate, nonconsecutive lines
awk '!a[\\$0]++'                      \\# most concise script
awk '!(\\$0 in a)\{a[\\$0];print\}'   \\# most efficient script

CREDITS AND THANKS:

Special thanks to the late Peter S. Tillier (U.K.) for helping me with
the first release of this FAQ file, and to Daniel Jana, Yisu Dong, and
others for their suggestions and corrections.
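As a quick check of the duplicate-removal idiom '!a[\\$0]++' from the SELECTIVE DELETION entries above (sample input is mine, not from the original FAQ):

```shell
# Demo: '!a[$0]++' is true only the first time a given line is seen,
# so nonconsecutive duplicates are dropped while order is preserved.
printf 'red\nblue\nred\ngreen\nblue\n' | awk '!a[$0]++'
# prints: red, blue, green (one per line)
```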
For additional syntax instructions, including the way to apply editing
commands from a disk file instead of the command line, consult:

"sed \\& awk, 2nd Edition," by Dale Dougherty and Arnold Robbins
(O'Reilly, 1997)

"UNIX Text Processing," by Dale Dougherty and Tim O'Reilly (Hayden
Books, 1987)

"GAWK: Effective awk Programming," 3rd edition, by Arnold D. Robbins
(O'Reilly, 2003), or at \url{http://www.gnu.org/software/gawk/manual/}

To fully exploit the power of awk, one must understand "regular
expressions." For detailed discussion of regular expressions, see
"Mastering Regular Expressions," 3rd edition, by Jeffrey Friedl
(O'Reilly, 2006). The info and manual ("man") pages on Unix systems may
be helpful (try "man awk", "man nawk", "man gawk", "man regexp", or the
section on regular expressions in "man ed").

USE OF '\\$\\backslash\\$t' IN awk SCRIPTS:

For clarity in documentation, I have used '\\$\\backslash\\$t' to
indicate a tab character (0x09) in the scripts. All versions of awk
should recognize this abbreviation.

\\#---end of file---

\end{document}
%end generated latex