
Why write an entire bash script in functions?

Solutions:


Readability is one thing, but there is more to modularisation than that. ("Semi-modularisation" is perhaps the more accurate term for functions.)

In functions you can keep some variables local, which increases reliability and decreases the chance of things getting messed up.
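A minimal sketch of that point (the function name is invented for illustration):

    count_lines () {
        local count                 # visible only inside this function
        count=$(wc -l < "$1")
        echo "$count"
    }

    count=42                        # a global with the same name
    count_lines /etc/hosts          # prints the file's line count
    echo "$count"                   # still 42 – the global was untouched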

Another pro of functions is re-usability. Once a function is coded, it can be applied multiple times in the script. You can also port it to another script.

Your code may be linear now, but in the future you may enter the realm of multi-threading or multi-processing in the Bash world. Once you learn to do things in functions, you will be well equipped for the step into the parallel.
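E.g., a minimal sketch (the function names and bodies are invented): each function becomes a background job of its own:

    slow_task_one () { sleep 2; echo "task one done"; }
    slow_task_two () { sleep 2; echo "task two done"; }

    slow_task_one &                 # launch each function as a separate job
    slow_task_two &
    wait                            # block until both jobs have finished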

One more point to add. As Etsitpab Nioliv notes in the comment below, it’s easy to redirect from a function as a coherent entity. But there’s one more aspect of redirection with functions: the redirection can be set as part of the function definition. E.g.:

f () { echo something; } > log

Now no explicit redirection is needed when calling the function:

$ f

This may spare many repetitions, which again increases reliability and helps keep things in order.
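One caveat worth noting: > truncates the log on every call, so if repeated calls should accumulate, append instead:

    f () { echo something; } >> log     # >> appends on each call

    f; f; f                             # three lines end up in log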

See also

  • https://unix.stackexchange.com/a/483304/181255

I’ve started using this same style of bash programming after reading Kfir Lavi’s blog post “Defensive Bash Programming”. He gives quite a few good reasons, but personally I find these the most important:

  • procedures become descriptive: it’s much easier to figure out what a particular part of the code is supposed to do. Instead of a wall of code, you see “Oh, the find_log_errors function reads that log file for errors.” Compare that with finding a whole lot of awk/grep/sed lines that use god-knows-what regex in the middle of a lengthy script – you have no idea what it’s doing there unless there are comments.

  • you can debug functions by enclosing them in set -x and set +x. Once you know the rest of the code works alright, you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of a script, but what if it’s a lengthy portion? It’s easier to do something like this:

     set -x
     parse_process_list
     set +x
    
  • printing usage with cat <<- EOF . . . EOF. I’ve used it quite a few times to make my code look much more professional. In addition, a parse_args() function built around getopts is quite convenient. Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It’s also convenient to reuse these (see the sketch below).
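A minimal sketch of both patterns together (the script name and option letters are invented for illustration):

    print_usage () {
        # the here-document lines below start with a tab, which <<- strips
        cat <<- EOF
	Usage: myscript [-v] [-f FILE]
	  -v    enable verbose output
	  -f    read input from FILE
	EOF
    }

    parse_args () {
        while getopts "vf:" opt; do
            case "$opt" in
                v) verbose=1 ;;
                f) input_file=$OPTARG ;;
                *) print_usage; exit 1 ;;
            esac
        done
    }

    parse_args "$@"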

And obviously, this is much more readable for someone who knows C, Java, or Vala, but has limited bash experience. As far as efficiency goes, there’s not a lot you can do – bash itself isn’t the most efficient language, and people prefer perl and python when it comes to speed and efficiency. However, you can nice a function – note that nice is an external command and can’t see shell functions directly, so the function has to be exported to a child shell first:

export -f resource_hungry_function
nice -n 10 bash -c resource_hungry_function

Compared to calling nice on each and every line of code, this saves a whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority.

Running functions in the background also helps, in my opinion, when you want a whole bunch of statements to run in the background.
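E.g., a sketch (the function name and path are invented) – grouping the statements into a function turns them into a single background job:

    cleanup_old_logs () {
        find /var/log/myapp -name '*.gz' -mtime +30 -delete
        echo "old logs purged"
    }

    cleanup_old_logs &              # one background job instead of many
    wait $!                         # reap it before the script exits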

Some of the examples where I’ve used this style:

  • https://askubuntu.com/a/758339/295286
  • https://askubuntu.com/a/788654/295286
  • https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh

In my comment, I mentioned three advantages of functions:

  1. They are easier to test and verify correctness.

  2. Functions can be easily reused (sourced) in future scripts.

  3. Your boss likes them.

And, never underestimate the importance of number 3.
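A quick sketch of point 2 (the file and function names are invented): keep the functions in their own file and source it from future scripts:

    # utils.sh – shared helper functions
    die () { echo "$0: $*" >&2; exit 1; }

    # in any future script:
    . ./utils.sh                        # source the library
    [ -r "$1" ] || die "cannot read $1"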

I would like to address one more issue:

… so being able to arbitrarily swap the run order isn’t something we
would generally be doing. For example, you wouldn’t suddenly want to
put declare_variables after walk_into_bar, that would break things.

To get the benefit of breaking code into functions, one should try to make the functions as independent as possible. If walk_into_bar requires a variable that is not used elsewhere, then that variable should be defined in and made local to walk_into_bar. The process of separating the code into functions and minimizing their inter-dependencies should make the code clearer and simpler.

Ideally, functions should be easy to test individually. If, because of interactions, they are not easy to test, then that is a sign that they might benefit from refactoring.
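A sketch of what that might look like, reusing the question’s walk_into_bar (the body is invented):

    walk_into_bar () {
        local drink=$1                  # used nowhere else, so keep it local
        echo "ordering ${drink:-beer}"
    }

    # no outside dependencies, so the function is trivial to test on its own:
    walk_into_bar stout                 # -> ordering stout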
