Bash Tips and Pitfalls



Tips for Robust Scripts

Reference: [1], [2], [3].

Use set -u

This will detect uninitialized variables, the king of all evils!

#! /bin/bash
set -o nounset                          # Or "set -u"

chroot=$1
rm -r $chroot/etc                       # Will delete /etc if $1 is not given!!!
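
With set -u, expanding an unset positional parameter aborts the script, so optional parameters should be expanded with a default or an explicit error message. A minimal sketch (the parameter meanings are illustrative):

#! /bin/bash
set -o nounset

chroot=${1:?usage: $0 <chroot-dir>}     # Abort with a clear message if $1 is missing
verbose=${2:-0}                         # Optional parameter: use a default instead of aborting

rm -r "$chroot/etc"                     # Safe: we cannot reach this line with an empty $chroot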

Use set -e

Script will exit if any command fails.

#! /bin/bash
set -o errexit                                               # Or "set -e"

# Don't do
command                                                      # Will fail and exit!
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi
# But do instead:
command || { echo "command failed"; exit 1; }                # Ok

# Temporarily disable the check for some code section
set +e
command1
command2
set -e
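
Note that set -e alone does not catch a failure in the middle of a pipeline: a pipeline's status is that of its last command. Adding set -o pipefail (a sketch) makes the whole pipeline fail as soon as any stage fails:

set -o errexit
set -o pipefail                          # A failure anywhere in a pipeline now fails the whole pipeline

grep pattern missing-file | sort         # Without pipefail the status is that of 'sort' (success),
                                         # so errexit does not trigger; with pipefail the script exits here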

Expect spaces in filenames

if [ $filename = "foo" ];                      # WRONG
if [ "$filename" = "foo" ];                    # Correct

for i in $@; do echo $i; done                  # WRONG
for i in "$@"; do echo "$i"; done              # Correct

find | xargs ls                                # WRONG
find -print0 | xargs -0 ls                     # Correct

for i in $(locate .pdf); do basename $i; done  # WRONG
locate .pdf | xargs -d '\n' -n 1 basename      # Correct
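
When the file names must be processed by the shell itself rather than by xargs, a NUL-delimited read loop is a robust alternative (note that the loop body runs in a subshell because of the pipe):

find . -name '*.pdf' -print0 |
while IFS= read -r -d '' file; do       # -d '' makes read stop at the NUL separator
    echo "Found: $file"                 # Quotes keep spaces (and even newlines) intact
done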

Use signals to fail cleanly

if [ ! -e "$lockfile" ]; then
   trap 'rm -f "$lockfile"; exit' INT TERM EXIT
   touch "$lockfile"
   critical-section
   rm "$lockfile"
   trap - INT TERM EXIT
else
   echo "critical-section is already running"
fi

Beware of Race conditions

There is a race condition between the test of the file and its creation: if 2 processes run simultaneously, they might both pass the test and believe they are running alone. To solve it, we need an operation that tests and creates the file atomically. An example in [4] and [5] is to use IO redirection and Bash's noclobber mode, which refuses to redirect to an existing file:

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; 
then
   trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

   # critical-section
   
   rm -f "$lockfile"
   trap - INT TERM EXIT
else
   echo "Failed to acquire lockfile: $lockfile." 
   echo "Held by $(cat $lockfile)"
fi

Another solution is to use mkdir (see [6]): mkdir is atomic; it fails if the directory already exists and creates it otherwise.

LOCKDIR="$HOME/.$(basename "$0").lock"
if ! mkdir "$LOCKDIR" 2>/dev/null; then echo "Could not lock..."; exit 1; fi
# "locking" successful
do_stuff
rmdir "$LOCKDIR"

A more thorough example can be found in [7].

Alternate solutions using flock:

exec 200>"$LOCK_FILE"
flock -e -n 200 || exit 1
# ...critical section...
rm "$LOCK_FILE"                   # Optional
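
A variant commonly seen with flock wraps the critical section in a subshell, so the lock is released automatically when the subshell exits (same $LOCK_FILE as above; a sketch):

(
    flock -e -n 9 || { echo "Failed to acquire lock"; exit 1; }
    # ...critical section...
) 9>"$LOCK_FILE"                  # fd 9 is closed, and the lock released, when the subshell ends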

Tips for Fast Scripts

Avoid forking

Avoid calling an external program. Use Bash internal commands as much as possible. Here are some common replacements (the external command first, its Bash replacement below):

cat $FILE | some_pgm
<$FILE some_pgm            # Don't cat, use redirection!

basename $FILE
echo ${FILE/*\/}           # Remove everything up to the last slash
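
A few more replacements in the same spirit, relying on parameter expansion instead of forking an external tool (corner cases such as trailing slashes behave slightly differently from the real commands):

dirname $FILE
echo ${FILE%/*}            # Strip the last path component (dirname-like)

expr length "$STRING"
echo ${#STRING}            # String length without forking expr

echo $STRING | tr a-z A-Z
echo ${STRING^^}           # Upper-case conversion (Bash 4+)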

Tips

Parsing Command-Line Option Parameters

Old version, using the external getopt command:

#!/bin/bash
# (old version)
args=`getopt abc: $*`
if test $? != 0
  then
    echo 'Usage: -a -b -c file'
    exit 1
fi
set -- $args
for i
do
  case "$i" in
    -c) shift;echo "flag c set to $1";shift;;
    -a) shift;echo "flag a set";;
    -b) shift;echo "flag b set";;
  esac
done

Sample run:

$ ./g -abc "foo"
flag a set
flag b set
flag c set to foo

New version, using the getopts built-in:

#!/bin/bash
while getopts "abc:" flag
do
  echo "$flag" $OPTIND $OPTARG
done
shift $((OPTIND-1))
echo $@

Sample run:

$ ./g -abc "foo" "bar"
a 1
b 1
c 3 foo
bar
  • To parse options of the form --value=name ([8]); an eval-free variant is sketched after the loop:
until [[ ! "$*" ]]; do
  if [[ ${1:0:2} = '--' ]]; then
    PAIR=${1:2}
    PARAMETER=$(echo ${PAIR%=*} | tr '[:lower:]-' '[:upper:]_')
    eval P_$PARAMETER=${PAIR##*=}
  fi
  shift
done
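
A sketch of the same parsing without eval, using a case statement and ${1#*=} to split --parameter=value pairs (the option names are only illustrative):

while [ $# -gt 0 ]; do
  case "$1" in
    --value=*)  value=${1#*=} ;;          # Keep everything after the first '='
    --verbose)  verbose=1 ;;
    --)         shift; break ;;           # Explicit end of options
    *)          echo "unknown option: $1" >&2; exit 1 ;;
  esac
  shift
done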

Empty a file keeping permissions

Empty a file named filename, keeping the same permission and user/group:

>filename

Print multi-lines with echo

Print multi-lines text with echo:

$ echo -e "Some text\n...on 2 lines..."                    # Enable interpretation of backslash escapes (must be quoted!)
Some text
...on 2 lines...

Print multi-line variables with echo

One can save the multi-line output of a command in a variable. Later this variable can be echoed with the linefeeds preserved, provided it is enclosed in double quotes "...":

$ mymultilinevar=$(<myfile.txt sed -n '/first line/,/last line/p')
$ echo "$mymultilinevar"
first line
second line
...
last line

Echo with colors


The command echo can display colors thanks to escape sequence commands [10]:

echo -e "\033[35;1m Shocking \033[0m"       #Display "shocking" in bright purple

The first character is the escape character 27 (033 in octal). One can also type it directly as ^[ (i.e. Ctrl-AltGr-[). The syntax is as follows (spaces added for clarity):

\033 [ <command> m
\033 [ <command> ; <command> m

Note that commands can be chained. The set of commands is given in the color table below:

code  style           code  foreground   code  foreground     code  background   code  background
0     default colour                      90    dark grey      40    black        100   dark grey
1     bold            31    red           91    light red      41    red          101   light red
4     underlined      32    green         92    light green    42    green        102   light green
5     flashing text   33    orange        93    yellow         43    orange       103   yellow
7     reverse field   34    blue          94    light blue     44    blue         104   light blue
                      35    purple        95    light purple   45    purple       105   light purple
                      36    cyan          96    turquoise      46    cyan         106   turquoise
                      37    grey                               47    grey

ANSI Color Code Variables

See [11]. Use echo -e "${Red}Red" to use them:

# Reset
Color_Off='\e[0m'       # Text Reset

# Regular Colors
Black='\e[0;30m'        # Black
Red='\e[0;31m'          # Red
Green='\e[0;32m'        # Green
Yellow='\e[0;33m'       # Yellow
Blue='\e[0;34m'         # Blue
Purple='\e[0;35m'       # Purple
Cyan='\e[0;36m'         # Cyan
White='\e[0;37m'        # White

# Bold
BBlack='\e[1;30m'       # Black
BRed='\e[1;31m'         # Red
BGreen='\e[1;32m'       # Green
BYellow='\e[1;33m'      # Yellow
BBlue='\e[1;34m'        # Blue
BPurple='\e[1;35m'      # Purple
BCyan='\e[1;36m'        # Cyan
BWhite='\e[1;37m'       # White

# Underline
UBlack='\e[4;30m'       # Black
URed='\e[4;31m'         # Red
UGreen='\e[4;32m'       # Green
UYellow='\e[4;33m'      # Yellow
UBlue='\e[4;34m'        # Blue
UPurple='\e[4;35m'      # Purple
UCyan='\e[4;36m'        # Cyan
UWhite='\e[4;37m'       # White

# Background
On_Black='\e[40m'       # Black
On_Red='\e[41m'         # Red
On_Green='\e[42m'       # Green
On_Yellow='\e[43m'      # Yellow
On_Blue='\e[44m'        # Blue
On_Purple='\e[45m'      # Purple
On_Cyan='\e[46m'        # Cyan
On_White='\e[47m'       # White

# High Intensity
IBlack='\e[0;90m'       # Black
IRed='\e[0;91m'         # Red
IGreen='\e[0;92m'       # Green
IYellow='\e[0;93m'      # Yellow
IBlue='\e[0;94m'        # Blue
IPurple='\e[0;95m'      # Purple
ICyan='\e[0;96m'        # Cyan
IWhite='\e[0;97m'       # White

# Bold High Intensity
BIBlack='\e[1;90m'      # Black
BIRed='\e[1;91m'        # Red
BIGreen='\e[1;92m'      # Green
BIYellow='\e[1;93m'     # Yellow
BIBlue='\e[1;94m'       # Blue
BIPurple='\e[1;95m'     # Purple
BICyan='\e[1;96m'       # Cyan
BIWhite='\e[1;97m'      # White

# High Intensity backgrounds
On_IBlack='\e[0;100m'   # Black
On_IRed='\e[0;101m'     # Red
On_IGreen='\e[0;102m'   # Green
On_IYellow='\e[0;103m'  # Yellow
On_IBlue='\e[0;104m'    # Blue
On_IPurple='\e[0;105m'  # Purple
On_ICyan='\e[0;106m'    # Cyan
On_IWhite='\e[0;107m'   # White

Get file size

Different ways to extract the size of a file in a Bash script:

SIZE=$(stat -c%s "$FILENAME")                              # Using stat
SIZE=$(ls -l "$FILENAME" | awk '{ print $5 }')             # Using ls / awk
SIZE=$(du -b "$FILENAME" | sed 's/\([0-9]*\)\(.*\)/\1/')   # Using du
SIZE=$(cat "$FILENAME" | wc -c)                            # Using cat / wc
SIZE=$(ls -l "$FILENAME" | cut -d " " -f 5)                # Using ls / cut (fragile: fields shift with column padding)
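
Note that the stat invocation above uses the GNU coreutils syntax; on BSD or macOS the option differs (a sketch, assuming the BSD stat):

SIZE=$(stat -f%z "$FILENAME")           # BSD / macOS stat: %z is the size in bytes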

Read file content into env variable

Read the content of a file into an environment variable:

PID=`cat $PIDFILE`
read PID < $PIDFILE
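
A third variant avoids forking cat altogether, since Bash expands $(<file) itself (see also the section on useless cat below):

PID=$(<"$PIDFILE")                      # Command substitution with a bare redirection, no cat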

Get the PID of a new process

Getting the PID of a new process (when other processes with the same name are already running):

oldPID=`pidofproc /usr/bin/ssh`
/usr/bin/ssh -f -N -n -q -D 1080 noekeon
RETVAL=$?
newPID=`pidofproc /usr/bin/ssh`
uniqPID=`echo $oldPID $newPID|sed -e 's/ /\n/g'|sort|uniq -u`
echo $uniqPID

Get the PID of a running process

Getting the PID of a running process:

pid=$(pidof -o $$ -o $PPID -o %PPID -x /bin/ssh)

Detect if a given process is running

This is actually a tricky one. Some good solutions, all returning the answer in $?:

[ -e /proc/$pid ]               # PID  - nice, but is it portable?
ps -p $pid >/dev/null           # PID  - need redirect, otherwise ps will print the process found
pgrep "^$name$"                 # NAME - probably the best using command-name
pkill -0 $name                  # NAME - ... similar & less robust (fail if process can't accept signal)
/bin/kill -0 $pid 2>/dev/null   # PID  - need redirect, otherwise kill will complain if no process found
                                #        ... also works with bash built-in kill

Using ... =~ ...:

if [[ $(ps $pid) =~ $name ]];   # Test both PID and process name

Some wrong / bad solutions:

ps -aef | grep $pid                   # --== FAIL ==-- Will match grep process itself + $pid as ppid
ps -aef | grep $name                  # --== FAIL ==-- Will match grep process itself
ps -aef | grep -v grep | grep $pid    # --== UGLY ==-- ... and slow. Better use ps -fp $(pgrep $pid)
ps -p $pid | grep $pid                # --== SLOW ==-- better test $? immediately

Don't use this method for locking in startup scripts. Be careful with race conditions. The best solution is to use a mutex or an atomic command (like mkdir); see for example the race conditions section above.

Launch a process in the background

Different ways to launch a process in the background (unordered - might be useful one day...). The double-background trick ( ( cmd & ) & ) makes the process a grandchild of the shell, so it is not tracked as a job of the current shell.

myprocess.exe &
exec myprocess.exe
exec myprocess.exe &
( ( exec myprocess.exe & ) & )
nohup myprocess.exe &
( ( nohup myprocess.exe & ) & )

Display the name / body of functions

To list the functions declared in the current environment, or to list the body of a function:

declare -f                    # List all defined functions and their bodies
declare -f name               # List the body of function "name"
declare -F                    # List name of all defined functions

Or alternatively use bash built-in type:

type name                     # Works with commands, builtins, function, aliases...

Return the subnet address

Solution from [12] (it relies on the old ifconfig output format with 'inet addr:').

/sbin/ifconfig eth0 |
grep 'inet addr' | tr .: '  ' |
(read inet addr a b c d Bcast e f g h Mask i j k l;
echo $(( $a & $i )).$(( $b & $j )).$(( $c & $k )).$(( $d & $l )) )

Remove file name extensions

FILENAME="myfile.pdf"
echo ${FILENAME%%.pdf}          # only matches '.pdf', not '.PDF'
echo ${FILENAME%%.???}          # only matches 3-letter extension
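
Generic forms that do not hard-code the extension, using only parameter expansion:

echo ${FILENAME%.*}             # Strip the last extension, whatever it is
echo ${FILENAME##*.}            # Keep only the extension (text after the last dot)
echo ${FILENAME%%.*}            # Strip everything from the first dot (e.g. for .tar.gz)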

Formatted output / printing using printf

printf is a Bash built-in command that prints formatted output, much like the C printf function.

printf "%02d" 1                  # outputs '01'
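
A few more illustrative uses, including printf -v, which stores the result directly in a variable without any fork:

printf "%-10s|%5d|\n" name 42    # outputs 'name      |   42|' (left- and right-aligned fields)
printf "%x\n" 255                # outputs 'ff' (hexadecimal conversion)
printf -v padded "%05d" 7        # stores '00007' in the variable 'padded'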

Delete files with special characters

find . -inum [inode] -exec rm -i {} \;     # Use inode
rm -- -foo                                 # Special case for name with a heading dash
rm ./-foo

Remove useless invocation of 'cat'

There are basically only 3 valid uses of cat:

  • Show the content of a file in a terminal
  • Write a "here" document or standard input to a file in a terminal
  • Concatenating several files together (hence the name of cat)

However cat is frequently used for other purposes, like piping a file into a process. This is a bad habit: it is slow and adds an unnecessary process. A better alternative is to use the file redirection feature of the shell:

Correct use:

cat file                # Correct
cat <<EOF >file         # Correct
cat file1 file2         # Correct

Bad use (and fix):

cat file | myprocess    # Bad
<file myprocess         # Correct

$(cat file)             # Bad
$(< file)               # Correct

Using Process Substitution

The process substitution feature of Bash takes the form <(list) or >(list). The process list is run with its input or output connected to a FIFO (named pipe) or a file in /dev/fd. The name of this file is then passed as an argument to the current command (as a result of the expansion). We can see this explicitly with the following examples:

echo >(true)
# /dev/fd/63
echo <(true)
# /dev/fd/63

This feature can be used to build some very advanced redirection [13]:

diff <(ls dir1) <(ls dir2)                                         # Compare the content of 2 directories
sort -k 9 <(ls -l /bin) <(ls -l /usr/bin) <(ls -l /usr/X11R6/bin)  # Sort content of 3 directories
tar cf >(gzip -c > file.tar.gz) $directory                         # Equivalent of tar czf file.tar.gz $directory

It can also be used to keep variables that would otherwise be confined to a subshell, like:

: | ((x++))           # This actually starts a subprocess
: | ( ((x++)) )       # ... like this.
echo $x               # ... so 'x' is undefined here

((x++)) < <(:)        # Now variable 'x' stays in the main process
echo $x               # x is defined
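
A typical application is a while read loop fed by process substitution, so that variables modified inside the loop survive (compare with the read-and-pipe gotcha in the Pits section below):

count=0
while IFS= read -r line; do
    count=$((count + 1))              # Runs in the main shell, not in a subshell
done < <(ls /etc)
echo "$count"                         # The counter is still visible here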

Redirecting stdout and stderr with tee and a pipe

Using tee and the standard piping mechanism, it is easy to keep a copy of stdout in a file while still writing it to stdout:

command | tee stdout.log           # Keep a copy of 'command' output in file 'stdout.log'

What if we also want to do the same with stderr? In other words, can we also pipe stderr?
Yes, in Bash this is easy! We only need to use the process substitution feature (reference [14])!

command 2> >(tee stderr.log) >&2                     # Keep a copy of 'command' stderr in file 'stderr.log'
command > >(tee stdout.log) 2> >(tee stderr.log >&2) # Keep both a copy of stdout and stderr in separate files

Note that tee always prints the content of stdin to stdout. That's why we need the redirection >&2 to send it back to stderr.

Forcing program to read from standard input instead of file

See /proc filesystem

Finding symbolic link target

Use readlink:

target=$(readlink -n source)       # Return target basename of link 'source'
target=$(readlink -nf source)      # Return target fullname of link 'source'

Escape special / meta- character in a string

Use printf "%q" to automatically escape special characters in a string, so that they can be reused as shell input:

printf "%q" 'pipe:[12345]'         # Returns "pipe:\[12345\]"
safefname=$(printf "%q" "$fname")  # Protects file name if it contains special character

Find intersection between 2 files

grep -f file1 file2
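
Note that grep -f treats each line of file1 as a pattern, not as a literal line. For an exact, whole-line intersection these variants may be preferable (a sketch):

grep -Fxf file1 file2                   # -F: literal strings, -x: whole-line matches
comm -12 <(sort file1) <(sort file2)    # comm needs sorted input; prints only the common lines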

Join lines with comma

pgrep -P $somepid | sed -re ':a N; s/\n/,/; b a'                    # With Sed
pgrep -P $somepid | perl -e '@_=<>; chomp @_; print join ",",@_'    # With Perl
pgrep -P $somepid | perl -e '@_=<>; chomp @_; $,=","; print @_'     # With Perl

Another example using tr:

echo -n "$(pgrep -P $somepid)" | tr '\n' ','                        # use -n "..." so that interim newline are kept, but none added at the end
echo $(pgrep -P $somepid) | tr ' ' ','                              # Here echo will translate interim newlines to space
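
paste can do the same join in a single step (it only inserts the delimiter between lines, never at the end):

pgrep -P $somepid | paste -sd, -        # -s joins all input lines, -d, uses a comma as delimiter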

Force single trailing slash in directory

#function single() { echo ${1%%\/*}/; }             # WRONG!
function single() { A=${1%//}; echo ${A%/}/; }

for i in / // . ./ .// dir dir/ dir// /home/john; do single $i; done
# /
# /
# ./
# ./
# ./
# dir/
# dir/
# dir/
# /home/john/

Keep Color with Less

colordiff -bu file1 file2 | less -R            # Use -R to preserve color with less pager

Pad with newlines

Padding with newlines is a bit tricky: we cannot use a function together with command substitution, because command substitution always removes the trailing newlines, no matter what. A solution is to let the function set a variable instead:

function padln()
{
    PAD=
    local N=$1
    while (( N-- > 0 )); do 
        PAD=$PAD$'\n'
    done
}
padln 2
VAR=$'line1\nline2'$PAD
echo "$VAR" | wc                   # Don't forget quotes!
#     4 ...                        (2 text lines + 2 padding newlines)

Avoid duplicate entries in PATH

From [15]:

function addpath()
{
  new_entry=$1
  case ":$PATH:" in
    *":$new_entry:"*) :;; # already there
    *) PATH="$new_entry:$PATH";; # or PATH="$PATH:$new_entry"
  esac
}

Or simpler:

function addpath()
{
  if [ $(expr match ":$PATH:" ".*:$1:.*") -eq 0 ]; then
    export PATH="$PATH:$1"
  fi
}

Get directory of a sourced script

From [16].

To get the full directory name of the script:

DIR="$(cd "$(dirname "${BASH_SOURCE[0]}" )" && pwd )"

To get the dereferenced path (all directory symlinks resolved):

DIR="$(cd -P "$(dirname "${BASH_SOURCE[0]}" )" && pwd )"

If the script is itself a symlink, and we want the location after dereference:

SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"

Another solution I used was based on which, but it only works if the script is executable and in the PATH:

PROGDIRNAME=$(dirname $(which "$0"))

Detect spaces in file name

Some script-fu of mine:

if [ $(wc -w <<< $FILENAME) -eq 1 ]; then echo no spaces; else echo space found in filename; fi
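
A simpler test that does not fork anything uses Bash pattern matching:

if [[ "$FILENAME" == *" "* ]]; then echo "space found in filename"; else echo "no spaces"; fi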

Get SSH hostname from given host name

Say we have the following .ssh/config:

Host myhost
    User        myuser
    HostName    myhost.domain.com

[...]

We want to get the HostName corresponding to myhost:

#First pre-process ssh config file, only keeping lines of the form "host xxx yyy hostname zzz"
SSH_CONFIG="$(< ~/.ssh/config sed -rn 's/#.*//; s/ +/ /g; s/[hH]ost/host/; s/[nN]ame/name/; /host |hostname/p'|sed -r ':a /host/N; /hostname/!b a; {s/\n *hostname/ hostname/; p; d}')"

NAME="myhost"
echo "$SSH_CONFIG" | perl -lne 'print for / '"$NAME"' .*hostname +(.*)/g'
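
On reasonably recent OpenSSH versions, ssh -G prints the fully resolved configuration for a host, which avoids parsing the file by hand (check first that your ssh supports -G):

ssh -G myhost | awk '/^hostname /{print $2}'    # Resolved HostName for the alias 'myhost'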

String manipulation

  • Echo first word in a space-separated list:
make="/usr/bin/make -r --no-print-directory -j 2"

# Using array
words=($make)
echo $words                  # $words same as ${words[0]}

# Using suffix matching
echo ${make%% *}

# Using pattern matching
echo ${make/ */}

Avoid eval like the plague, use declare and ${!ref}

From [17]:

declare is a far safer option. It does not evaluate data as bash code like eval does, and as such it does not allow arbitrary code injection quite so easily

So do not write this (using eval):

eval "array_$index=$value"      # Indirect var decl.
local ref="${array}_$index"
eval echo \$$ref                # Var. indirection

...but write this instead (using declare and ${!ref}):

declare "array_$index=$value"   # Indirect var decl.
local ref="${array}_$index"
echo "${!ref}"                  # Var. indirection

There is a caveat though: variables created with declare inside a function are local to that function, so there is no way to modify a global array this way (Bash 4.2 adds declare -g for that purpose).

Also, ${!ref} only works in Bash version 2 and later. For more portable scripts (e.g. sh compatibility), eval is needed.

Use if ... =~ pattern instead of if ( ... | grep ... )

Constructs like if ( ... | grep ... ) spawn two processes and are therefore inefficient (in particular on Cygwin).

if ( ps aux | grep ssh-agent ); then echo ssh-agent found; fi    # NOT EFFICIENT, 2 processes spawn
 
if [[ $(ps aux) =~ ssh-agent ]]; then echo ssh-agent found; fi   # BETTER!!!

Test whether a variable is set/defined/unset/empty

One can use the rich parameter expansion possibilities:

echo ${VAR:-word}      # Use Default Value:   (expansion of) word if VAR is unset or null; $VAR otherwise
echo ${VAR-word}       # Use Default Value:   (expansion of) word if VAR is unset; $VAR otherwise
echo ${VAR:+word}      # Use Alternate Value: nothing if VAR is unset or null; (expansion of) word otherwise
echo ${VAR+word}       # Use Alternate Value: nothing if VAR is unset; (expansion of) word otherwise

So one can test if VAR is unset with:

[ -n "${VAR:+1}" ]     # true if VAR is set and not null; false if unset or null
[ -n "${VAR+1}" ]      # true if VAR is set (even if null); false if unset

Alternatively, in an interactive shell, type echo $VAR and press Tab: Bash completes the variable name and appends a space if VAR is set (even if empty).

Use sponge to easily modify a file in place

sponge is part of the moreutils package. It can be used to easily edit a file in place:

sed -r '...' FILE | grep ... | sponge FILE                   # Sponge soaks its full input before creating output file

Use auto-complete with command starting with 'sudo'

Just add to .bashrc ([18]):

if [ "$PS1" ]; then
    complete -cf sudo            
fi

Test if a directory is empty

From [19]:

$ [ "$(ls -A /tmp)" ] && echo "Not Empty" || echo "Empty"
# OR
if [ "$(ls -A /tmp)" ]; then
    echo "Not Empty"
else
    echo "Empty"
fi

A solution that does not invoke a sub-shell [20]:

shopt -s nullglob
shopt -s dotglob # To include hidden files
files=(/some/dir/*)
if [ ${#files[@]} -gt 0 ]; then echo "huzzah"; fi
shopt -u nullglob dotglob

Pits

A list of frequent gotchas!

Space! - Don't forget to add spaces wherever necessary, in particular around braces in function definitions, or in test conditions of if statements.

if -space- [ -space- -f /etc/foo -space- ]; then ...
function myfunc() { -space- echo Hello, World!; }

Quote - Always quote parameters, variables passed to test in if ... then ... else:

if [ "$name" -eq 5 ]; then ...

For loops with file - Use simply * to list files in for loops, not `ls *`:
for file in *; do cat "$file"; done       # SUCCEEDS, even with white space
for file in `ls *`; do cat "$file"; done  # FAILS miserably
Incorrect variable definition
  • NO space around equal sign
    var = val is interpreted as command var with param = val
  • No dollar $ prefix!!!

So it is MYVAR=value and not MYVAR= value !!!

srcDir = $1                         # WRONG - spaces around = sign
$srcDir=$1                          # WRONG - $ prefix
maxW= $(sed -rn '/^$/Q' myfile.txt) # WRONG - SPACE!
srcDir=$1                           # CORRECT
srcDir="$1"                         # BEST
Semi-colon in find - Semi-colon in find commands must be escaped !
find . -exec echo {} ;        # WRONG - semi-colon not escaped
find . -exec echo {} \;       # CORRECT
Using a bash built-in instead of external program
Bash built-in commands override external commands with same name (eg. kill and echo)
$ type kill                 # kill is a shell builtin
$ type /bin/kill            # /bin/kill is /bin/kill
$ /bin/kill -v              # kill (cygwin) 1.14
Wrong redirection order
read pid < $PID_FILE 2> /dev/null  # WRONG - error msg if $PID_FILE
                                   #   doesn't exist
read pid 2> /dev/null < $PID_FILE  # CORRECT
Variables set inside parentheses (a subshell) are not visible outside
( read pid < $PID_FILE ) 2> /dev/null   # WRONG - var pid not kept
read pid 2> /dev/null < $PID_FILE       # CORRECT
Read and piping
  • Don't pipe to read command, or use parens to preserve subshell!
  • Better yet, use set
echo "1 2 3" | read a b c; echo $a $b $c    # WRONG - subshell
echo "1 2 3" | (read a b c; echo $a $b $c)  # CORRECT - same subshell
set -- $(echo "1 2 3"); echo $1, $2, $3     # BETTER
Don't quote tilde in if test block
if [ -a ~/bin/"my file" ]; then echo found; fi # CORRECT
if [ -a "~/bin/my file" ]; then echo found; fi # WRONG
Quoting is needed when echoing a variable with embedded newlines.
Without quotes, word splitting turns the newlines (like any blanks) into argument separators.
Moreover, command substitution always removes the trailing newlines, no matter what.
HEADER=$(sed -rn '/^$/Q' myfile.txt)
echo "$HEADER"                   # CORRECT
echo $HEADER                     # WRONG - newline are removed
VAR=$'\n\n'; echo "$VAR"         # CORRECT, newlines are kept
VAR="$(echo; echo)"; echo "$VAR" # WRONG, trailing newlines stripped!
VAR="$(echo; echo; echo x)"; VAR=${VAR%x}; echo "$VAR" # FIXED - sentinel 'x' protects trailing newlines
Always append to /dev/stderr or use >&2 instead.
The construct ls >/dev/stderr is wrong, because if stderr was redirected to a file, then > /dev/stderr will overwrite the file content. Better use ls >>/dev/stderr, or best, ls >&2.


In the same way, never redirect the stderr of such a command to a file directly; redirect it to a process using Bash's process substitution instead, to prevent the undesired file truncation.

sample() {
  echo "foo" >/dev/stderr
  echo "bar" >/dev/stderr
}

#REFERENCE:
sample                # both lines
#WRONG:
sample 2> foobar.txt
cat foobar.txt        # Only last line
#FIX USING PROCESS SUBSTITUTION:
sample 2> >(cat >foobar.txt)
cat foobar.txt        # both lines

The exit status of a pipeline is the status of its last step. Use the PIPESTATUS array to get the status of each step separately.

# WRONG - $? will return exit status of 'tee'
make | tee make.log
status=$?
# CORRECT
make | tee make.log
status=${PIPESTATUS[0]}

read does not preserve spaces and backslashes by default.

# WRONG - Use read with default option
read -p "password: " passwd
echo "$passwd"
# CORRECT - Use IFS= to keep blanks and -r to keep backslashes
IFS= read -r -p "password: " passwd
echo "$passwd"