Bash Tips and Pitfalls
Revision as of 05:56, 29 June 2017
Tips for Robust Scripts
Use set -u
This will detect uninitialized variables, the king of all evils!
#! /bin/bash
set -o nounset # Or "set -u"
chroot=$1
rm -r $chroot/etc # Without nounset, this would delete /etc if $1 is not given!
Use set -e
Script will exit if any command fails. But beware of the gotchas.
#! /bin/bash
set -o errexit # Or "set -e"
# Don't do
command # Will fail and exit!
if [ "$?"-ne 0]; then echo "command failed"; exit 1; fi
# But do instead:
command || { echo "command failed"; exit 1; } # Ok
# Temporarily disable the check for some code section
set +e
command1
command2
set -e
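A couple of the gotchas are worth spelling out: set -e is deliberately ignored for commands that are part of an if test or on the left side of a || / && list. A minimal sketch (check is a hypothetical always-failing command):

```shell
#!/bin/bash
set -o errexit

check() { return 1; }     # a command that always fails

if check; then :; fi      # no exit: failures inside an 'if' test are ignored
check || echo "handled"   # no exit: 'check' is on the left of a || list

echo "still running"      # reached despite the two failures above
```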
Expect space in filenames
if [ $filename = "foo" ]; # WRONG
if [ "$filename" = "foo" ]; # Correct
for i in $@; do echo $i; done # WRONG
for i in "$@"; do echo $i; done # Correct
find | xargs ls # WRONG
find -print0 | xargs -0 ls # Correct
for f in $(locate .pdf); do basename $f; done # WRONG
locate .pdf | xargs -d '\n' -n 1 basename # Correct
for f in $(ls); do basename $f; done # WRONG
for f in *; do basename "$f"; done # Correct
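A robust loop over arbitrary file names (spaces, even newlines) combines find -print0 with read -d ''. A sketch (the scratch directory is made up):

```shell
#!/bin/bash
dir=$(mktemp -d)                         # hypothetical playground directory
touch "$dir/plain.txt" "$dir/with space.txt"

count=0
while IFS= read -r -d '' f; do           # -d '' splits on the NUL bytes
    count=$((count + 1))
done < <(find "$dir" -type f -print0)    # -print0 emits NUL-separated names

echo "$count files"
rm -r "$dir"
```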
Use signals to fail cleanly
if [ ! -e $lockfile ]; then
trap "rm -f $lockfile; exit" INT TERM EXIT
touch $lockfile # !!! race-condition. gap between testing and file creation
critical-section
rm $lockfile
trap - INT TERM EXIT
else
echo "critical-section is already running"
fi
A better solution without TOCTTOU race condition:
if mkdir $lockdir; then # mkdir is atomic on all fs
trap "rmdir $lockdir; exit" INT TERM EXIT
critical-section
rmdir $lockdir
trap - INT TERM EXIT
else
echo "critical-section is already running"
fi
Create temp file and cleanup using signals
From [4]:
tempfiles=( )
cleanup() {
rm -f "${tempfiles[@]}"
}
trap cleanup EXIT
Create a temporary file with
temp_foo="$(mktemp -t foobar.XXXXXX)"
tempfiles+=( "$temp_foo" )
Beware of Race conditions
References:
- http://www.davidpashley.com/articles/writing-robust-shell-scripts.html
- http://stackoverflow.com/questions/325628/race-condition-in-the-common-lock-on-file
- https://unix.stackexchange.com/questions/22044/correct-locking-in-shell-scripts
- http://wiki.bash-hackers.org/howto/mutex
There is a race condition between testing for the file and creating it. If two processes run simultaneously, both might pass the test and believe they are running alone. To solve this, we need an operation that tests and creates the file in an atomic way.
The safest solution is to use mkdir, which is atomic on most filesystems [5]. It will fail if the directory already exists, or create it otherwise, both atomically.
lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid
if ( mkdir ${lockdir} ) 2> /dev/null; then
echo $$ > $pidfile
trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
# do stuff here
# clean up after yourself, and release your trap
rm -rf "$lockdir"
trap - INT TERM EXIT
else
echo "Lock Exists: $lockdir owned by $(cat $pidfile)"
fi
The PID of locking script is stored in a file in locked directory. This way, another script can detect stale lock (by verifying that the owner script is still running).
Note that in the example above, the trap is executed twice on exit (once for the signal, once more for EXIT). The version below resets the trap inside the handler to avoid this:
lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid
if ( mkdir ${lockdir} ) 2> /dev/null; then
echo $$ > $pidfile
trap 'trap - INT TERM EXIT; rm -rf "$lockdir"; exit $?' INT TERM EXIT
# do stuff here
# exit explicitly to call the trap
exit 0
else
echo "Lock Exists: $lockdir owned by $(cat $pidfile)"
fi
Here is a complete example of how to manage the lock directory and a stale process [6]:
#!/bin/bash
# lock dirs/files
LOCKDIR="/tmp/statsgen-lock"
PIDFILE="${LOCKDIR}/PID"
# exit codes and text
ENO_SUCCESS=0; ETXT[0]="ENO_SUCCESS"
ENO_GENERAL=1; ETXT[1]="ENO_GENERAL"
ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL"
ENO_RECVSIG=3; ETXT[3]="ENO_RECVSIG"
###
### start locking attempt
###
trap 'ECODE=$?; echo "[statsgen] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0
echo -n "[statsgen] Locking: " >&2
if mkdir "${LOCKDIR}" &>/dev/null; then
# lock succeeded, install signal handlers before storing the PID just in case
# storing the PID fails
trap 'ECODE=$?;
echo "[statsgen] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2
rm -rf "${LOCKDIR}"' 0
echo "$$" >"${PIDFILE}"
# the following handler will exit the script upon receiving these signals
# the trap on "0" (EXIT) from above will be triggered by this trap's "exit" command!
trap 'echo "[statsgen] Killed by a signal." >&2
exit ${ENO_RECVSIG}' 1 2 3 15
echo "success, installed signal handlers"
else
# lock failed, check if the other PID is alive
OTHERPID="$(cat "${PIDFILE}")"
# if cat isn't able to read the file, another instance is probably
# about to remove the lock -- exit, we're *still* locked
# Thanks to Grzegorz Wierzowiecki for pointing out this race condition on
# http://wiki.grzegorz.wierzowiecki.pl/code:mutex-in-bash
if [ $? != 0 ]; then
echo "lock failed, PID ${OTHERPID} is active" >&2
exit ${ENO_LOCKFAIL}
fi
if ! kill -0 $OTHERPID &>/dev/null; then
# lock is stale, remove it and restart
echo "removing stale lock of nonexistant PID ${OTHERPID}" >&2
rm -r "${LOCKDIR}"
if [ $? != 0 ]; then
echo "lock failed, another script is cleaning up stale lock" >&2
exit ${ENO_LOCKFAIL}
fi
echo "[statsgen] restarting myself" >&2
exec "$0" "$@"
else
# lock is valid and OTHERPID is active - exit, we're locked!
echo "lock failed, PID ${OTHERPID} is active" >&2
exit ${ENO_LOCKFAIL}
fi
fi
- Issue! There is a race condition when the lock is stale and two scripts are trying to clean up: one script could remove the stale lock and create a new one while the first script still thinks the lock is stale and removes it successfully with rm -r.
Another approach, shown in [7] and [8], is to use IO redirection and Bash's noclobber mode, which won't redirect to an existing file:
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null;
then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
# critical-section
rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Failed to acquire lockfile: $lockfile."
echo "Held by $(cat $lockfile)"
fi
The shortest solution [9]:
set -o noclobber
{ > file ; } &> /dev/null
A more thorough example below from [10]:
#!/bin/sh
# Lock (mutex) sample code for Bourne shell
#
# Stephen Thomas <flabdablet@gmail.com> 14-Oct-2009
#
# This is free software - do whatever you like with it
# except hold me accountable for any grief it causes you.
# Acquire specified lock
# Return 0 if successful, 1 if not
acquire_lock () {
local me=$(sh -c 'echo $PPID')
local owner
local shell
local status
local result
local flags=$-
set -o noclobber #make output redirection into atomic test-and-set
if echo $me $$ valid >"$1"
then
result=0
else
read owner shell status <"$1"
test "$owner $shell $status" = "$me $$ valid"
result=$?
fi 2>/dev/null
set +$- -$flags
return $result
}
# Remove specified lock if stale (valid, but neither the
# owning process nor the shell that spawned it are still
# running)
purge_stale_lock () {
local owner
local shell
local status
if
read owner shell status <"$1" &&
test "$status" = valid &&
! ps p "$shell" &&
! ps p "$owner"
then
rm -f "$1"
fi >/dev/null 2>&1
}
# Exercise locking functions
test_locking () {
local me=$(sh -c 'echo $PPID')
echo Process $me from shell $$ attempting to acquire lock $1
if acquire_lock "$1"
then
echo Process $me from shell $$ acquired lock - sleeping 5 seconds
sleep 5
echo Process $me from shell $$ attempting to re-acquire same lock
if acquire_lock "$1"
then
echo Process $me from shell $$ re-acquired same lock - sleeping 5 seconds
sleep 5
else
echo Process $me from shell $$ failed to re-acquire lock
fi
echo Process $me from shell $$ releasing lock
rm -f "$1"
else
echo Process $me from shell $$ locked out
fi
}
lock=~/test.lck
purge_stale_lock "$lock"
for i in $(seq 1 10)
do
test_locking "$lock" &
done
Alternate solution using flock:
exec 200>"$LOCK_FILE"
flock -e -n 200 || exit 1
# ...critical section...
rm "$LOCK_FILE" # Optional
Use unique variable names in functions
In Bash, changing a variable in a function changes that variable in the parent as well, even if that variable was declared local in the parent!
So to avoid conflicts, use unique variable names. Alternatively, if every function in the call chain declares its variables local, that is sufficient, but potentially unsafe.
function achild() {
A=achild
MYSCRIPT_ACHILD=achild
echo $A $MYSCRIPT_ACHILD
}
function a() {
local A=a # Name too generic. Potential name clash!
local MYSCRIPT_A=a # Unique name, using script name as prefix
echo $A $MYSCRIPT_A
achild
echo $A $MYSCRIPT_A
}
a # a a
# achild achild
# achild a
Tips for Fast Scripts
Avoid forking
Avoid calling an external program. Use Bash internal commands as much as possible. Here are some common replacements:
# DON'T:
cat $FILE | some_pgm
# DO:
<$FILE some_pgm # Don't cat, use redirection!

# DON'T:
basename $FILE
# DO:
echo ${FILE/*\/} # Remove everything up to last slash

# DON'T:
ps aux | grep ssh-agent && ...
# DO:
[[ $(ps aux) =~ ssh-agent ]] && ... # Use built-in regex engine
[[ $(ps aux) == *ssh-agent* ]] && ... # Use built-in pattern matching
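In the same spirit, basename, dirname and extension extraction can usually be done with parameter expansion alone, avoiding any fork. A sketch with a made-up path:

```shell
FILE=/usr/local/bin/script.sh
echo "${FILE##*/}"   # basename: remove longest prefix up to last '/'
echo "${FILE%/*}"    # dirname: remove shortest suffix from last '/'
echo "${FILE##*.}"   # extension: remove longest prefix up to last '.'
```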
Tips
Parsing Command-Line Option Parameters
- To ease parsing, pre-parse with executable getopt (see here for more information and examples).
#!/bin/bash
# (old version)
args=`getopt abc: $*`
if test $? != 0
then
echo 'Usage: -a -b -c file'
exit 1
fi
set -- $args
for i
do
case "$i" in
-c) shift;echo "flag c set to $1";shift;;
-a) shift;echo "flag a set";;
-b) shift;echo "flag b set";;
esac
done
$ ./g -abc "foo"
flag a set
flag b set
flag c set to foo
- Better yet, parse using Bash/sh built-in getopts (see here for more information and examples).
#!/bin/bash
while getopts "abc:" flag
do
echo "$flag" $OPTIND $OPTARG
done
shift $((OPTIND-1))
echo $@
$ ./g -abc "foo" "bar"
a 1
b 1
c 3 foo
bar
- To parse long options like --parameter=value ([11]):
until [[ ! "$*" ]]; do
if [[ ${1:0:2} = '--' ]]; then
PAIR=${1:2}
PARAMETER=$(echo ${PAIR%=*} | tr '[:lower:]-' '[:upper:]_')
eval P_$PARAMETER=${PAIR##*=}
fi
shift
done
Empty a file keeping permissions
Empty a file named filename, keeping the same permission and user/group:
>filename
Print multi-lines with echo
Print multi-lines text with echo:
$ echo -e "Some text\n...on 2 lines..." # Enable interpretation of backslash escapes (must be quoted!)
Some text
...on 2 lines...
Print multi-line variables with echo
One can save the multi-line output of a command in a variable. Later this variable can be echoed while preserving the linefeeds, provided the variable is enclosed in quotes "...":
$ mymultilinevar=$(<myfile.txt sed -n '/first line/,/last line/p')
$ echo "$mymultilinevar"
first line
second line
...
last line
Echo with colors
References:
The command echo can display colors thanks to escape sequence commands [13]:
echo -e "\033[35;1m Shocking \033[0m" #Display "shocking" in bright purple
The first character is the escape character 27 (033 in octal). One can also type directly ^[ (i.e. Ctrl-AltGr-[). The syntax is (where spaces were added for clarity)
\033 [ <command> m \033 [ <command> ; <command> m
Note that commands can be chained. The set of commands is given in the color table below:
| code | style | code | foreground | code | foreground | code | background | code | background |
|---|---|---|---|---|---|---|---|---|---|
| 0 | default colour | 30 | black | 90 | dark grey | 40 | black | 100 | dark grey |
| 1 | bold | 31 | red | 91 | light red | 41 | red | 101 | light red |
| 4 | underlined | 32 | green | 92 | light green | 42 | green | 102 | light green |
| 5 | flashing text | 33 | orange | 93 | yellow | 43 | orange | 103 | yellow |
| 7 | reverse field | 34 | blue | 94 | light blue | 44 | blue | 104 | light blue |
| | | 35 | purple | 95 | light purple | 45 | purple | 105 | light purple |
| | | 36 | cyan | 96 | turquoise | 46 | cyan | 106 | turquoise |
| | | 37 | grey | | | 47 | grey | | |
ANSI Color Code Variables
See [14]. Use e.g. echo -e "${Red}Red" to use them:
# Reset
Color_Off='\e[0m' # Text Reset
# Regular Colors
Black='\e[0;30m' # Black
Red='\e[0;31m' # Red
Green='\e[0;32m' # Green
Yellow='\e[0;33m' # Yellow
Blue='\e[0;34m' # Blue
Purple='\e[0;35m' # Purple
Cyan='\e[0;36m' # Cyan
White='\e[0;37m' # White
# Bold
BBlack='\e[1;30m' # Black
BRed='\e[1;31m' # Red
BGreen='\e[1;32m' # Green
BYellow='\e[1;33m' # Yellow
BBlue='\e[1;34m' # Blue
BPurple='\e[1;35m' # Purple
BCyan='\e[1;36m' # Cyan
BWhite='\e[1;37m' # White
# Underline
UBlack='\e[4;30m' # Black
URed='\e[4;31m' # Red
UGreen='\e[4;32m' # Green
UYellow='\e[4;33m' # Yellow
UBlue='\e[4;34m' # Blue
UPurple='\e[4;35m' # Purple
UCyan='\e[4;36m' # Cyan
UWhite='\e[4;37m' # White
# Background
On_Black='\e[40m' # Black
On_Red='\e[41m' # Red
On_Green='\e[42m' # Green
On_Yellow='\e[43m' # Yellow
On_Blue='\e[44m' # Blue
On_Purple='\e[45m' # Purple
On_Cyan='\e[46m' # Cyan
On_White='\e[47m' # White
# High Intensity
IBlack='\e[0;90m' # Black
IRed='\e[0;91m' # Red
IGreen='\e[0;92m' # Green
IYellow='\e[0;93m' # Yellow
IBlue='\e[0;94m' # Blue
IPurple='\e[0;95m' # Purple
ICyan='\e[0;96m' # Cyan
IWhite='\e[0;97m' # White
# Bold High Intensity
BIBlack='\e[1;90m' # Black
BIRed='\e[1;91m' # Red
BIGreen='\e[1;92m' # Green
BIYellow='\e[1;93m' # Yellow
BIBlue='\e[1;94m' # Blue
BIPurple='\e[1;95m' # Purple
BICyan='\e[1;96m' # Cyan
BIWhite='\e[1;97m' # White
# High Intensity backgrounds
On_IBlack='\e[0;100m' # Black
On_IRed='\e[0;101m' # Red
On_IGreen='\e[0;102m' # Green
On_IYellow='\e[0;103m' # Yellow
On_IBlue='\e[0;104m' # Blue
On_IPurple='\e[0;105m' # Purple
On_ICyan='\e[0;106m' # Cyan
On_IWhite='\e[0;107m' # White
Get file size
The different ways to extract file size in a Bash script:
SIZE=$(stat -c%s "$FILENAME") # Using stat
SIZE=$(ls -l "$FILENAME" | awk '{ print $5 }') # Using ls / awk
SIZE=$(du -b "$FILENAME" | sed 's/\([0-9]*\)\(.*\)/\1/') # Using du
SIZE=$(wc -c < "$FILENAME") # Using wc (redirection avoids a useless cat)
SIZE=$(ls -l "$FILENAME" | cut -d " " -f 6) # Using ls / cut (fragile - depends on ls column spacing)
Read file content into env variable
Read the content of a file into an environment variable:
PID=`cat $PIDFILE`
read PID < $PIDFILE
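Bash also supports the fork-free form $(< file), which behaves like $(cat file) but without spawning a process:

```shell
PIDFILE=$(mktemp)          # stand-in for a real pid file
echo "12345" > "$PIDFILE"
PID=$(< "$PIDFILE")        # same result as $(cat "$PIDFILE"), no fork
rm -f "$PIDFILE"
echo "$PID"
```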
Get the PID of a new process
Getting the pid of a new process (when other processes with same name are already running)
oldPID=`pidofproc /usr/bin/ssh`
/usr/bin/ssh -f -N -n -q -D 1080 noekeon
RETVAL=$?
newPID=`pidofproc /usr/bin/ssh`
uniqPID=`echo $oldPID $newPID|sed -e 's/ /\n/g'|sort|uniq -u`
echo $uniqPID
Get the PID of a running process
Getting the pid of a running process
pid=$(pidof -o $$ -o $PPID -o %PPID -x /bin/ssh)
Detect if a given process is running
This is actually a tricky one. Some good solutions, all giving the answer in $?:
[ -e /proc/$pid ] # PID - nice, but is it portable?
ps -p $pid >/dev/null # PID - need redirect, otherwise ps will print the process found
pgrep "^$name$" # NAME - probably the best using command-name
pkill -0 $name # NAME - ... similar & less robust (fail if process can't accept signal)
/bin/kill -0 $pid 2>/dev/null # PID - need redirect, otherwise kill will complain if no process found
# ... also works with bash built-in kill
Using ... =~ ...:
if [[ $(ps $pid) =~ $name ]]; # Test both PID and process name
Some wrong / bad solutions:
ps -aef | grep $pid # --== FAIL ==-- Will match grep process itself + $pid as ppid
ps -aef | grep $name # --== FAIL ==-- Will match grep process itself
ps -aef | grep -v grep | grep $pid # --== UGLY ==-- ... and slow. Better use ps -fp $(pgrep $pid)
ps -p $pid | grep $pid # --== SLOW ==-- better test $? immediately
Don't use this method for locking in startup scripts. Be careful with race condition. The best solution is to use a mutex, or use an atomic command (like mkdir). See for example:
- http://flabdablet.nfshost.com/linux-scripts/test-locking.sh
- http://www.davidpashley.com/articles/writing-robust-shell-scripts.html#id2326620
Launch a process in the background
Different ways to launch process in the background (unordered - might be useful one day...). The double ampersand trick comes from here.
myprocess.exe &
exec myprocess.exe
exec myprocess.exe &
( ( exec myprocess.exe & ) & )
nohup myprocess.exe &
( ( nohup myprocess.exe & ) & )
Display the name / body of functions
To list the functions declared in the current environment, or to list the body of a function:
declare -f # List all defined functions and their bodies
declare -f name # List the body of function "name"
declare -F # List name of all defined functions
Or alternatively use bash built-in type:
type name # Works with commands, builtins, functions, aliases...
Return the subnet address
Solution from [15].
/sbin/ifconfig eth0 |
grep 'inet addr' | tr .: ' ' |
(read inet addr a b c d Bcast e f g h Mask i j k l;
echo $(( $a & $i )).$(( $b & $j )).$(( $c & $k )).$(( $d & $l )) )
Remove file name extensions
FILENAME="myfile.pdf"
echo ${FILENAME%%.pdf} # only matches '.pdf', not '.PDF'
echo ${FILENAME%%.???} # only matches 3-letter extension
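More generic variants strip whatever extension is present, using % (shortest suffix match) versus %% (longest suffix match):

```shell
FILENAME="archive.tar.gz"
echo "${FILENAME%.*}"    # strip the last extension
echo "${FILENAME%%.*}"   # strip all extensions
echo "${FILENAME##*.}"   # keep only the last extension
```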
Formatted output / printing using printf
printf is a Bash built-in that prints formatted output much like the standard C printf function.
printf "%02d" 1 # outputs '01'
Delete files with special characters
find . -inum [inode] -exec rm -i {} \; # Use inode
rm -- -foo # Special case for name with a heading dash
rm ./-foo
Remove useless invocation of 'cat'
There are basically only 3 valid uses of cat:
- Show the content of a file in a terminal
- Write a "here" document or standard input to a file in a terminal
- Concatenating several files together (hence the name of cat)
However cat is frequently used for other purposes, like piping a file into a process. This is a bad habit: it is slow and adds an unnecessary process. A better alternative is to use the file redirection feature of the shell:
cat $FILE | some_pgm # WRONG - useless cat
<$FILE some_pgm # CORRECT - use redirection
Using Process Substitution
The process substitution feature of Bash takes the form <(list) or >(list). The process list is run with its input or output connected to a FIFO (named pipe) or a file in /dev/fd. The name of this file is then passed as an argument to the current command (as a result of the expansion). We can see this explicitly with the following examples:
echo >(true)
# /dev/fd/63
echo <(true)
# /dev/fd/63
This feature can be used to build some very advanced redirection [16]:
diff <(ls dir1) <(ls dir2) # Compare the content of 2 directories
sort -k 9 <(ls -l /bin) <(ls -l /usr/bin) <(ls -l /usr/X11R6/bin) # Sort content of 3 directories
tar cf >(gzip -c > file.tar.gz) $directory # Equivalent of tar czf file.tar.gz $directory
It can also be used to use variables that would otherwise be limited to some subprocess, like:
: | ((x++)) # This actually starts a subprocess
: | ( ((x++)) ) # ... like this.
echo $x # ... so 'x' is undefined here
((x++)) < <(:) # now variable 'x' remains in the main process
echo $x # x is defined
Redirecting stdout and stderr with tee and a pipe
Using tee and the standard piping mechanism, it is easy to redirect the content of stdout to both a file and stdout:
command | tee stdout.log # Keep a copy of 'command' output in file 'stdout.log'
What if we also want to do the same with stderr? In other words, can we also pipe stderr?
Yes, in Bash this is easy! We only need to use the process substitution feature (reference [17])!
command |& tee stdoutnerr.log # Pipe BOTH stdout and stderr
command 2> >(tee stderr.log) >&2 # Keep a copy of 'command' stderr in file 'stderr.log'
command > >(tee stdout.log) 2> >(tee stderr.log >&2) # Keep both a copy of stdout and stderr in separate files
Note that tee always prints the content of stdin to stdout. That's why we need the redirection >&2 to redirect it back to stderr.
To redirect stdout for current script:
#! /bin/bash
exec > >(tee foo)
To redirect both stdout and stderr for current script:
#! /bin/bash
exec > >(tee foo) 2>&1
Forcing program to read from standard input instead of file
See /proc filesystem
Finding symbolic link target
Use readlink:
target=$(readlink -n source) # Return target basename of link 'source'
target=$(readlink -nf source) # Return target fullname of link 'source'
Escape special / meta- character in a string
Use printf "%q"
to automatically escape special characters in a string, so that they can be reused as shell input:
printf "%q" 'pipe:[12345]' # Returns "pipe:\[12345\]"
safefname=$(printf "%q" "$fname") # Protects file name if it contains special character
Find intersection between 2 files
grep -f file1 file2
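Note that grep -f takes each line of file1 as a regex that may match anywhere in a line of file2; for an exact whole-line intersection, add -F (fixed strings) and -x (whole line), or use comm on sorted input. A sketch with made-up temp files:

```shell
f1=$(mktemp); f2=$(mktemp)            # hypothetical input files
printf 'a\nb\nc\n' > "$f1"
printf 'b\nc\nd\n' > "$f2"

both=$(grep -Fxf "$f1" "$f2")         # exact whole-line intersection
echo "$both"

comm -12 <(sort "$f1") <(sort "$f2")  # same idea, requires sorted input
rm -f "$f1" "$f2"
```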
Join lines with comma
pgrep -P $somepid | sed -re ':a N; s/\n/,/; b a' # With Sed
pgrep -P $somepid | perl -e '@_=<>; chomp @_; print join ",",@_' # With Perl
pgrep -P $somepid | perl -e '@_=<>; chomp @_; $,=","; print @_' # With Perl
Another example using tr
:
echo -n "$(pgrep -P $somepid)" | tr '\n' ',' # use -n "..." so that interim newline are kept, but none added at the end
echo $(pgrep -P $somepid) | tr ' ' ',' # Here echo will translate interim newlines to space
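The classic tool for this job is paste: -s serializes all input lines into one, and -d sets the delimiter:

```shell
joined=$(printf '10\n20\n30\n' | paste -s -d, -)  # stand-in for pgrep output
echo "$joined"
```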
Force single trailing slash in directory
#function single() { echo ${1%%\/*}/; } # WRONG!
function single() { A=${1%//}; echo ${A%/}/; }
for i in / // . ./ .// dir dir/ dir// /home/john; do single $i; done
# /
# /
# ./
# ./
# ./
# dir/
# dir/
# dir/
# /home/john/
Keep Color with Less
colordiff -bu file1 file2 | less -R # Use -R to preserve color with less pager
Pad with newlines
Padding with newlines is a bit tricky: we cannot use a function together with command substitution, because command substitution always removes trailing newlines no matter what. A solution is as follows:
function padln()
{
PAD=
local N=$1
while (( N-- > 0 )); do
PAD=$PAD$'\n'
done
}
padln 2
VAR=$'line1\nline2\n'$PAD
echo "$VAR" | wc # Don't forget quotes!
# 4 ...
Avoid duplicate entries in PATH
From [18]:
function addpath()
{
new_entry=$1
case ":$PATH:" in
*":$new_entry:"*) :;; # already there
*) PATH="$new_entry:$PATH";; # or PATH="$PATH:$new_entry"
esac
}
Or using ==
operator:
function addpath()
{
if ! [[ $PATH == *:$1:* ]]; then
export PATH="$1:$PATH" # or PATH="$PATH:$1"
fi
}
Another option is to use [ $(expr match ":$PATH:" ".*:$1:.*") -eq 0 ], but this spawns a process and hence is much slower.
Get directory of a sourced script
From [19].
To get the full directory name of the script:
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}" )" && pwd )"
To get the dereferenced path (all directory symlinks resolved):
DIR="$(cd -P "$(dirname "${BASH_SOURCE[0]}" )" && pwd )"
If the script is itself a symlink, and we want the location after dereference:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
Another solution I used was based on which, but that only works if the script is executable and within the path:
PROGDIRNAME=$(dirname $(which "$0"))
Detect spaces in file name
Some script-fu of mine:
if [ $(wc -w <<< $FILENAME) -eq 1 ]; then echo no spaces; else echo space found in filename; fi
Get SSH hostname from given host name
Say we have the following .ssh/config:
Host myhost
User myuser
HostName myhost.domain.com
[...]
We want to get the HostName corresponding to myhost:
#First pre-process ssh config file, only keeping lines of the form "host xxx yyy hostname zzz"
SSH_CONFIG="$(< ~/.ssh/config sed -rn 's/#.*//; s/ +/ /g; s/[hH]ost/host/; s/[nN]ame/name/; /host |hostname/p'|sed -r ':a /host/N; /hostname/!b a; {s/\n *hostname/ hostname/; p; d}')"
NAME="myhost"
$(echo "$SSH_CONFIG" | perl -lne 'print for / '"$NAME"' .*hostname +(.*)/g')
String and path manipulation
- Echo first word in a space-separated list:
make="/usr/bin/make -r --no-print-directory -j 2"
# Using array
words=($make)
echo $words # $words same as ${words[0]}
# Using suffix matching
echo ${make% *}
# Using pattern matching
echo ${make/ */}
- Replace a folder name within a path (i.e. in the middle, neither leading nor trailing).
FILE=/foobar/bar/foobar.txt
echo ${FILE/\/bar//fuu} # We *must* escape first /, but 2nd can be as-is.
echo ${FILE//bar//fuu} # WRONG. Will replace *all* occurences of "bar" with "/fuu"
Avoid eval like the plague, use declare and ${!ref}
From [20]:
declare is a far safer option. It does not evaluate data as bash code like eval does, and as such it does not allow arbitrary code injection quite so easily.
Do NOT write:
eval "array_$index=$value" # Indirect var decl.
local ref="${array}_$index"
eval \$$ref # Var. indirection
...write this instead:
declare "array_$index=$value" # Indirect var decl.
local ref="${array}_$index"
echo "${!ref}" # Var. indirection
There is a caveat though: any variable declared with declare is local to the function, so there is no way to modify a global array with declare.
Also, ${!ref} only works in Bash since v2. For a more portable script (e.g. compatible with sh), eval is needed.
Use if ... =~ pattern instead of if ( ... | grep ... )
Constructs like if ( ... | grep ... ) spawn 2 processes and are thus inefficient (in particular on Cygwin).
if ( ps aux | grep ssh-agent ); then echo ssh-agent found; fi # NOT EFFICIENT, 2 processes spawned (and grep may match its own process line)
if [[ $(ps aux) =~ ssh-agent ]]; then echo ssh-agent found; fi # BETTER!!!
Test whether a variable is set/defined/unset/empty
One can use the rich parameter expansion possibilities:

| expression | meaning |
|---|---|
| echo ${VAR:-word} | Use Default Values — (expansion of) word if VAR is unset or null; $VAR otherwise |
| echo ${VAR-word} | Use Default Values — (expansion of) word if VAR is unset; $VAR otherwise |
| echo ${VAR:+word} | Use Alternate Values — nothing if VAR is unset or null; (expansion of) word otherwise |
| echo ${VAR+word} | Use Alternate Values — nothing if VAR is unset; (expansion of) word otherwise |

So one can test whether VAR is set with (note the quotes: without them, an empty expansion degenerates to the always-true one-argument test [ -n ]):

| test | result |
|---|---|
| [ -n "${VAR:+1}" ] | true if VAR set and not null; false if unset or null |
| [ -n "${VAR+1}" ] | true if VAR set or null; false if unset |
If we want to test that a set of variables are defined, we can use indirect expansion:
REFS="FOO BAR[0] BAR[1]"
for refs in $REFS; do
[ -n "${!refs}" ] || echo "Variable '$refs' is NOT defined"
done
As we see it also works nicely with arrays!
Alternatively type echo $VAR then press TAB; Bash will add a space if VAR is set or empty.
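Since Bash 4.2 there is also a dedicated -v test, which checks whether a variable is set (a null value counts as set) without any expansion trick:

```shell
unset VAR
[[ -v VAR ]] && echo "set" || echo "unset"   # unset
VAR=
[[ -v VAR ]] && echo "set" || echo "unset"   # set (null still counts as set)
VAR=hello
[[ -v VAR ]] && echo "set" || echo "unset"   # set
```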
Use sponge to easily modify a file in place
sponge is part of the package moreutils. It can be used to easily edit a file in-place:
sed -r '...' FILE | grep ... | sponge FILE # Sponge soaks its full input before creating output file
Use auto-complete with command starting with 'sudo'
Just add to .bashrc ([21]):
if [ "$PS1" ]; then
complete -cf sudo
fi
Test if a directory is empty
From [22]:
$ [ "$(ls -A /tmp)" ] && echo "Not Empty" || echo "Empty"
# OR
if [ "$(ls -A /tmp)" ]; then
echo "Not Empty"
else
echo "Empty"
fi
A solution that does not invoke a sub-shell [23]:
shopt -s nullglob
shopt -s dotglob # To include hidden files
files=(/some/dir/*)
if [ ${#files[@]} -gt 0 ]; then echo "huzzah"; fi
shopt -u nullglob dotglob
Be more efficient with Bash console
- use Alt-. to insert the last argument of the previous command.
$ cd mydirectory
bash: cd: mydirectory: No such file or directory
$ mkdir Alt-.
- use !! to recall the last command. Very handy for:
$ apt-get install package
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
$ sudo !!
Sum integers, one per line?
From stackoverflow.com
awk '{s+=$1} END {print s}' mydatafile
awk '{s+=$1} END {printf "%.0f", s}' mydatafile # To avoid 2^31 overflow in some version of awk
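A pure-Bash alternative avoids the awk fork entirely (here the input file is simulated with printf):

```shell
sum=0
while IFS= read -r n; do
    (( sum += n ))                     # arithmetic evaluation, no external tool
done < <(printf '1\n2\n3\n40\n')       # stand-in for mydatafile
echo "$sum"
```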
Test existence of an array index or key
We find the following solution on stackoverflow.com
[ ${array[key]+abc} ] && echo "exists"
We can extend the solution. For instance, say we want to return a default key if a given key is not found:
read -p "enter key" key
echo "Value for key $key is ${array[$key]:-array[default]} ]" # Will print value for $key, or for defaultkey if not found
How to detect if a script is being sourced
This is a tough question, see stackoverflow for details [24].
The best solution, if Bash supports BASH_SOURCE:
[[ "${BASH_SOURCE[0]}" != "${0}" ]] && echo "script ${BASH_SOURCE[0]} is being sourced ..."
The following solution is portable between Bash and Korn:
[[ $_ != $0 ]] && echo "Script is being sourced" || echo "Script is a subshell"
Get ip address of local host / remote host
Remote host:
getent hosts remotehost | awk '{ print $1; exit }'
dig +short remotehost | head -n 1
local host:
hostname -I | awk '{ print $1 }' # awk because might have several ip address
Expand tilde ~ in variables
The simplest [25]:
var="${var/#\~/$HOME}" # If var contains a single file name, var="~/myfile"
var="${var//\~/$HOME}" # If var contains several file names, var="~/myfile1 ~/myfile2"
DO NOT USE eval. Using eval is not safe if applied without safeguard (the variable could eval to rm -rf $HOME).
Run a command when a file changes
Easiest solution is to use entr:
find -name '*.c' | entr make
Alternatively, use inotifywait or the script sleep_until_modified.sh [26].
Remove CRLF and trailing whitespace in text files
Using ack-grep:
# Convert CRLF to LF (2x to get rid of CRCRLF)
ack-grep -f --text --print0 | xargs -0 dos2unix
ack-grep -f --text --print0 | xargs -0 dos2unix
# Convert CR to LF
ack-grep -f --text --print0 | xargs -0 mac2unix
# Remove trailing blanks/tabs
ack-grep -f --text --print0 | xargs -0 sed -ri 's/[ \t]+$//'
Using ag:
# Convert CRLF to LF (2x to get rid of CRCRLF)
ag -lt0 | xargs -0 dos2unix # or 'ag --files-with-matches --all-text --print0 ...'
ag -lt0 | xargs -0 dos2unix
# Convert CR to LF
ag -lt0 | xargs -0 mac2unix
# Remove trailing blanks/tabs
ag -lt0 | xargs -0 sed -ri 's/[ \t]+$//'
Using find to restrict to some extensions:
# Convert CRLF to LF (2x to get rid of CRCRLF)
find -type f -regex ".*\.\(c\|h\|cpp\|hpp\)" -print0 | xargs -0 dos2unix
find -type f -regex ".*\.\(c\|h\|cpp\|hpp\)" -print0 | xargs -0 dos2unix
# Convert CR to LF
find -type f -regex ".*\.\(c\|h\|cpp\|hpp\)" -print0 | xargs -0 mac2unix
# Remove trailing blanks/tabs
find -type f -regex ".*\.\(c\|h\|cpp\|hpp\)" -print0 | xargs -0 sed -ri 's/[ \t]+$//'
Detect if script redirected through pipe
From stackoverflow.com:
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
# terminal
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
# not a terminal
Try running a program until it succeeds
This is typically useful for cron scripts. From StackExchange:
#!/bin/sh
# Check to see if this is already running from some other day
mkdir /tmp/lock || exit 1
while ! command-to-execute-until-succeed; do
# Wait 30 seconds between successive runs of the command
sleep 30
done
rmdir /tmp/lock
Infinite wait in Bash
From SO:
#! /bin/bash
trap 'trap - INT TERM EXIT; rm -f mypipe; exit $?' INT TERM EXIT
mkfifo mypipe
while : ; do
read S <mypipe
case "$S" in
*EXIT*)
>&2 echo "Got EXIT."
break
;;
*)
>&2 echo "Signal '$S' not supported."
;;
esac
done
exit 0
Only drawback: the source process writing to the fifo will block until the sink process starts to read the fifo again. See SO again for ftee, a tee-like clone that can pipe to a fifo without blocking.
Pits
A list of frequent gotchas!
Space! - Don't forget to add spaces whenever necessary, in particular around the braces in a function definition, or inside the brackets of an if test condition:
if -space- [ -space- -f /etc/foo -space- ]; then ...
Quote - Always quote parameters and variables passed to test in if ... then ... else:
if [ "$name" -eq 5 ]; then ...
For loops with files - Simply use * to list files in for loops, not `ls *`:
for file in *; do cat "$file"; done # SUCCEEDS, even with white space
for file in `ls *`; do cat "$file"; done # FAILS miserably
Incorrect variable definition - it is MYVAR=value:
srcDir = $1 # WRONG - spaces around = sign
$srcDir=$1 # WRONG - $ prefix
maxW= $(sed -r '/^$/Q' myfile.txt) # WRONG - space after the = sign!
srcDir=$1 # CORRECT
srcDir="$1" # BEST
Semi-colon in find - Semi-colons in find commands must be escaped!
find . -exec echo {} ; # WRONG - semi-colon not escaped
find . -exec echo {} \; # CORRECT
Using a bash built-in instead of external program - Bash built-in commands override external commands with the same name (e.g. kill and echo):
$ type kill # kill is a shell builtin
$ type /bin/kill # /bin/kill is /bin/kill
$ /bin/kill -v # kill (cygwin) 1.14
Wrong redirection order:
read pid < $PID_FILE 2> /dev/null # WRONG - error msg if $PID_FILE doesn't exist
read pid 2> /dev/null < $PID_FILE # CORRECT
Variable not kept outside parentheses (subshell):
( read pid < $PID_FILE ) 2> /dev/null # WRONG - var pid not kept
read pid 2> /dev/null < $PID_FILE # CORRECT
Read and piping:
echo "1 2 3" | read a b c; echo $a $b $c # WRONG - read runs in a subshell
echo "1 2 3" | (read a b c; echo $a $b $c) # CORRECT - echo in the same subshell
set -- $(echo "1 2 3"); echo $1, $2, $3 # BETTER
Don't quote tilde ... nor the following slash!
if [ -a "~/bin/my file" ]; then echo found; fi # WRONG
if [ -a ~/bin/"my file" ]; then echo found; fi # CORRECT
export FOO=~"/foo bar" # WRONG
export FOO=~/"foo bar" # CORRECT
Quoting is needed when echoing a variable with embedded newlines, because without quotes the shell splits the expansion on blanks (including newlines) before echo sees it. Moreover, command substitution always removes trailing newlines no matter what:
HEADER=$(sed -r '/^$/Q' myfile.txt)
echo "$HEADER" # CORRECT
echo $HEADER # WRONG - newlines are removed
VAR=$'\n\n'; echo "$VAR" # CORRECT, newlines are kept
VAR="$(echo; echo)"; echo "$VAR" # WRONG, trailing newlines stripped!
VAR="$(echo; echo; echo x)"; VAR=${VAR%x}; echo "$VAR" # FIXED
Always append to /dev/stderr, or better use >&2 instead. The construct ls >/dev/stderr is wrong because if stderr was redirected to a file, > /dev/stderr will overwrite the file content. Better use ls >>/dev/stderr, or best >&2:
sample() {
echo "foo" >/dev/stderr
echo "bar" >/dev/stderr
}
#REFERENCE:
sample # both lines
#WRONG:
sample 2> foobar.txt
cat foobar.txt # Only last line
#FIX USING PROCESS SUBSTITUTION:
sample 2> >(cat >foobar.txt)
cat foobar.txt # both lines
The exit status of a pipeline is the status of the last step in the pipeline. Use the PIPESTATUS array to get the status of each step separately:
# WRONG - $? will return exit status of 'tee'
make | tee make.log
status=$?
# CORRECT
make | tee make.log
exit ${PIPESTATUS[0]}
read with default options strips leading/trailing blanks and interprets backslashes:
# WRONG - read with default options
read -p "password: " passwd
echo "$passwd"
# CORRECT - Use IFS= and -r to keep blanks / backslashes
IFS= read -r -p "password: " passwd
echo "$passwd"
Do not add extra quotes in pattern matching, and use the [[ ]] block:
# WRONG - Extra quotes or wrong block
if [[ $NAME == "*.c" ]]; then mv $NAME src/; fi
if [ $NAME == *.c ]; then mv $NAME src/; fi
# CORRECT - Use [[ ]] and no extra quotes
if [[ $NAME == *.c ]]; then mv $NAME src/; fi
There are no local variables in bash. Variables modified in a child function also affect the parent function, even if the parent uses the keyword local. A parent function can't prevent children from modifying its variables. It is the opposite: by using the keyword local, a function avoids modifying the variable in its own parent.
function b() {
SRC=overwritten-$1
echo $SRC
}
function a() {
local SRC=$1 # WRONG! what if fct. b redefines SRC?
local MYSCRIPT_SRC=$1 # CORRECT. Use unique variable names
b $SRC
echo $SRC $MYSCRIPT_SRC
}
local absorbs the return status of any command called within:
local OUT=$(foo BAR)
local RC=$? # WRONG! $? will always be 0 (the status of 'local' itself)
local OUT; OUT=$(foo BAR)
local RC=$? # CORRECT - declare first, assign separately