GNU/Linux Command Line Quick Reference Guide

More concise than the title

This is a gradually (but most likely not ever fully) evolving quick reference guide for the GNU/Linux CLI. A collection of various topics on how to use the terminal, bash, and the available programs - or at least a subset of these - mostly for those who have just started out with the terminal or use it only occasionally, so they always forget how they did things the last time and need a quick reminder - like I am with certain tools. Some parts of this might be useful only for those on Debian-based distributions. The page was designed with command line browsers, such as w3m or Lynx, in mind - or perhaps that's just an excuse not to have fancy CSS or a gram of JS or any backend programming. Just kidding: if those aren't necessary, why add 'em?

Should be noted: what I write here is not authoritative, and with time it could get outdated. So while I thank you for reading, in the hope it will be enlightening, I encourage everyone to read the man pages and the works of people who (should) know what they are doing. The author is not responsible for you causing WWIII or something based on the information you read here.

I recommend that you take a look at these:

Table of Contents

Commands, tricks, bits and pieces

a simple list

Just a list of often used and/or handy commands and applications. The descriptions don't necessarily cover the whole usage, but probably how I used them first, or use 'em most frequently.

man - manual pager, lets you read the manual of commands and programs, use like: 'man ls', should try 'man man'

apropos - search for keywords in the man pages

help - display info about built-in commands, just issue 'help' to list these, some of them have their own manual, some don't

info - some programs have Info documents (such as 'tar'), this one lets you read them

pwd - print working directory (where we are at the moment)

ls - list files, contents of directory

cd - change directory, use it to move around in the filesystem

mkdir - make a new directory

touch - create new, empty files, actually: changing timestamps of files

rm - remove files and directories

echo - display some text

printf - display some text, can format it, has its own syntax

read - reads a line of input (e.g. from the keyboard) and stores it in a variable

awk - a whole programming language to process text, won't write about this in length anytime soon

cat - display file's contents, actually for concatenating files

more - paging through long text files (or text output), one screen at a time

less - like 'more' just more, has additional features

sed - edit files without text editor, good when working with lotsa files

tr - change or delete characters

head - display the front of files

tail - display the end of files

sort - sort lines of text files

uniq - reduce redundancy by filtering out adjacent duplicate lines (sort first for true uniqueness)

du - space usage, size of directories

df - display used/free space on partitions

free - display used/free memory

ps - print processes, all or a subset of them

top - real-time view of processes, can be tweaked

lsblk - list disks and partitions (block devices)

lsusb - list usb devices

lscpu - display cpu information

mount - mounting partitions

file - determining the type of a file (remember, on Linux everything is a file)

wc - display info about files, word count, newlines, byte counts

stat - display status of files, such as size, number of blocks, access rights, timestamps...

find - search for files and directories, not just by name

grep - search for anything, based on patterns in files (remember, on Linux everything is a file)

watch - repeat a command periodically

date - print out the date and time

cal - calendar

at - schedule tasks to run at a certain time once

crontab - schedule tasks to run periodically

clear - clear the terminal screen

Back to Contents

the home directory

It's your directory, named after your username, inside the '/home' directory, which itself sits under the filesystem root, noted as '/'. All the user home directories are created in /home. These are working spaces for each user, the place where users have full rights over their files, since they own them. Such common directories as Documents or Desktop are stored here. Some examples:

/home/Joe

/home/JoeSmith

/home/Krtecek

/home/nondescriptusername

Every user (well, almost every user) has their own home directory. This could complicate things from the point of view of system administration, scripting, programming, or even just explaining things - I don't know your username, therefore I don't know the path of your home folder. But there is a shorthand (or rather, an alias) for noting the home directory of a user:

~

Furthermore the path of your home directory is stored in the '$HOME' environment variable, you can check it with:

echo $HOME

So I can refer to stuff as:

~/Documents

~/.vimrc

~/.local/bin/custombashscript

Or:

$HOME/Documents

$HOME/.vimrc

$HOME/.local/bin/custombashscript

...and you can copypaste it too. This will take you to your Documents folder, if that exists:

cd ~/Documents

The root user also has its own home directory: /root.
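A tiny check you can run to convince yourself that '~' and '$HOME' point to the same place - a sketch that should work in bash and most shells:

```shell
# compare the expansion of '~' with the $HOME variable
[ "$(echo ~)" = "$HOME" ] && echo "same place"
```

If they differ, something has redefined HOME in your environment.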

traversing the filesystem

Moving around in the directory hierarchy is simple but essential information for newcomers. For changing to any directory we use the...

cd

...command. Issued by itself, it will take you to your home folder from anywhere. The same can be achieved with:

cd ~

Or, as noted in the first article, with:

cd $HOME

But why would anyone want to type that much? Before we move on to moving about, we have to discuss something else. To know how to move somewhere, first we have to know where we are. The go-to command is:

pwd

/home/username

...should be the output. If we issued it as root, we would get:

/root

There is a 'tree' application which shows the tree structure of the directory hierarchy; depending on your distro you may have to install it.

Beyond these two, the prompt should generally give this info. The prompt is the line where you type the commands. Usually the format of the prompt is:

username@hostname:~$

...and a blinking cursor after that. From the above the '$' signifies that this is a user account, if it's a '#' instead, then we are logged in as root. And the '~' is the aforementioned home folder, and shows that you are in that directory at the moment. Issue:

cd Documents

The prompt should change to something like:

username@hostname:~/Documents$

The '~/Documents' is the directory where you are now. Check it with 'pwd':

/home/username/Documents

And we have arrived at the difference between absolute and relative paths. A path is the list of directories from a starting point to a file. On GNU/Linux each level of the directory hierarchy is separated by a '/', and we note the filesystem's root directory with the same sign:

/

Do not mistake the filesystem's root directory for the root user's home folder!

Absolute paths will always start at the filesystem's root and follow the hierarchy from there. So when pwd returned:

/home/username/Documents

..., that was an absolute path. When we issue the 'cd' command we can use absolute path:

cd /home/username/Documents

cd /boot/grub/fonts

cd /var/log/apt

A relative path starts at the current working directory and then follows the hierarchy from there. So when we issued:

cd Documents

..., we gave a relative path to the command as a parameter. A longer example would be:

cd Documents/books/manuals

If that path existed. From the current directory we can move upwards in the hierarchy with the help of a shorthand. Let's say we are in that 'manuals' directory above, but we want to move to the 'cookbooks' directory next to it. We can issue:

cd ../cookbooks

The '..' is shorthand for one directory above ie. the parent directory. Let's say we have a 'comics' folder in the 'Documents' directory next to 'books' and we want to move there from 'cookbooks'.

cd ../../comics

And from there we want to check on the system logs:

cd ../../../../var/log

Okay, it would have been simpler to use the absolute path in this case. But you get the point, we move up to the parent of the parent of the parent of the parent, then down to 'var' and 'log'.

To finish this up: I mentioned the '..' shorthand for parent directory, I have to mention another:

.

Means the current directory. It has no role in traversing the directory hierarchy, since if we use a relative path, the system automatically uses the working directory as the point of reference. However, these examples are legitimate commands (imagine we are in the ~/Documents directory):

cd ./books

cd ./../comics

The '.' is just unnecessary.
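The whole walk above can be replayed in a scratch directory under /tmp - the tree below ('demo', 'books', 'comics'...) is made up just for this exercise:

```shell
# build the example hierarchy, then hop around it with relative paths
mkdir -p /tmp/demo/Documents/books/manuals \
         /tmp/demo/Documents/books/cookbooks \
         /tmp/demo/Documents/comics
cd /tmp/demo/Documents/books/manuals
cd ../cookbooks && pwd    # /tmp/demo/Documents/books/cookbooks
cd ../../comics && pwd    # /tmp/demo/Documents/comics
```

Every step only describes where to go relative to where you already are.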

Back to Contents

getting rudimentary info

Gathering foundational information about the machine, the system, the host you are logged into. If you have 'neofetch' installed, run it and see what I mean. If not, well, stuff like the names of the logged-in users, the name of the host, the installed distribution, RAM, and some more. We'll check a collection of tools. I suggest trying each one as you read. First is...

who

..., which shows the users logged in at that moment. Obviously, in a multi-user environment it can be used to monitor users, which could come in handy both in defensive and offensive operations - which we most likely won't do. But you can quickly check the userid you logged in with - uh, in case you are forgetful or something. Add the '-H' option to display the column headers, '-a' to display all data, or combine both:

who -aH

Perhaps surprising feature, can display the time when the machine was booted:

who -b

Similar to this can be achieved with the...

uptime

...command. It's a bit more detailed: it also outputs the load averages, which is the load on the CPUs over the past 1, 5, and 15 minutes. On a 1-core CPU, 1 means 100% usage, 0.5 half, 2 means double load. On an 8-core CPU a full load will be 8, half is 4, 16 is 200%, and 0.5 is basically nothing. You can figure out the rest. Use the '-s' option to get the boot time only.

Quick checking userid can be done with...

whoami

...too. And to print all the users logged in at the moment:

users

These two above have no other uses. Just simple output. Good for scripts, me thinks. If you want to overwhelm yourself with users issue:

cat /etc/passwd

This has not much to do with passwords - well, it did at one point, but not anymore. This file contains all the users the system has and generates to perform tasks. The normal 'human' users tend to be at the end of the file. The 'x' marks the place of the password back in the old days; the two numbers are the user ID (UID) and group ID (GID). Some other data can be found here, such as full name (if given at user creation), contact info (if given), home folder path, and login shell. Each field is delimited by a ':'. Note: encrypted passwords are in the /etc/shadow file.
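Since the fields are ':'-delimited, tools like 'cut' can pick them apart. A sketch on a made-up passwd-style line (the user 'joe' is hypothetical):

```shell
# field 1 is the username, field 7 the login shell
line='joe:x:1000:1000:Joe Smith:/home/joe:/bin/bash'
echo "$line" | cut -d ':' -f 1,7    # joe:/bin/bash
```

The same works on the real file: 'cut -d ':' -f 1,7 /etc/passwd'.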

The next one is about the system, and in its full form prints out a long string of info:

uname -a

The '-a' option again means all. By itself, 'uname' just returns the first item of the string - according to the man page, it's the kernel name (and equals 'uname -s'). There is an option for printing each item. For example you can get the hostname ('nodename', as the man page calls it) with:

uname -n

To check out the rest of the possible options, I'll leave it up to you. Let's move onto the ...

hostname

...command, which will result in the hostname getting printed. The aforementioned nodename. Notable use is getting the IP address(es) of the host:

hostname -I

How to check what distribution is installed? And what version of it? First look for a specific file:

cat /etc/os-release

This will print out a bunch of things. The Linux Standard Base version utility could help too:

lsb_release -a

Again each item of the list can be displayed with various options. See the manual. Similar can be achieved by issuing:

cat /etc/lsb-release

...offering a perhaps more script friendly output. Another possibility to get similar result is:

cat /etc/issue

Or:

cat /etc/issue.net
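Since /etc/os-release consists of plain KEY=value shell assignments, you can source it and read single fields - a sketch (field names like NAME and VERSION_ID are standard, but not every distro fills in all of them):

```shell
# source the file in a subshell so its variables don't leak into our session
( . /etc/os-release && echo "$NAME" )
```

Handy in scripts that need to branch on the distribution.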

An average desktop user probably won't find this very helpful, but getting IDs - user and group - could come up once in a while for everyone.

id

Done. Interesting options are '-G' and '-Gn', first lists just the group IDs, the second the name of those instead of numbers. The next tool is:

pinky

This will list all the logged in users with some details. Each column has a handy header to know what's what. By default it uses the short format. The long format can only be used on specific users. Test it with the $USER environment variable:

pinky -l $USER

And that's about it. There are options to filter various data from the output.

Has it ever happened to you that you opened up a document - perhaps with your favourite command line text editor - and in place of every second or third letter there was just some pictogram representing a missing character? Well, you were missing the specific "locale" of that specific language. You can check the localization settings with:

locale

This results in a list of environmental variables which store the information for programs to help them display correct character sets, local time and date, monetary values, address, telephone number conventions and the like. On a "bare metal" install of a distro the list could be short, but in a typical desktop installation the result can be rich.

Here I won't explain locales, but here's the tip to fix the missing characters issue: the missing locale needs to be generated. Open the '/etc/locale.gen' file in a text editor, and uncomment the line that contains the locale you want - delete the '#' in front of the line. Then issue (as root):

locale-gen

Done. Check it with:

locale -a

It should be in the list.

Let's take a look at networking. Our go to tool will be 'ip'. While this can do more, we just use it now to get info. First we check our interfaces and IP addresses:

ip addr

This lists all the interfaces, from the loopback to various virtual ones - all of them. If you've never seen anything like this, it'll look like gibberish. You should recognize the IPs at least. Now check the routing table:

ip route

Similar can be achieved with the...

route

...command. Table format, bit easier on the eyes perhaps. On desktops NetworkManager is usually installed, again more than just a report tool, you can (mis)configure your network settings with it. For now we are just gathering data, so run:

nmcli

Again you'll get info about interfaces and IP addresses, and even the hardware used. If you scroll down the output might just suggest to use two commands. First:

nmcli connection show

Then:

nmcli device show

I would suggest to try one more:

nmcli device status

...for a more succinct, table-like output.

To finish this segment up, we rake together some info about the hardware. Most of these commands will follow the same pattern. First we check the CPU:

lscpu

The rest are: 'lsblk' = block devices, 'lspci' = PCI devices - such as the bridges of the motherboard, graphics, ethernet, audio controllers... - and 'lsusb' = USB devices. Lastly we can check the available RAM with:

free

Back to Contents

input, output, error

All programs receive input from somewhere, produce output to somewhere, and have an error handling facility. All programs have a standard input (stdin), which is the keyboard for most apps in a Linux shell; a standard output (stdout), usually the screen; and a standard error (stderr), which is also the screen. Since on Linux everything is a file, all three have a so-called 'file descriptor':

  • stdin = 0
  • stdout = 1
  • stderr = 2

These files (stdin, stdout, stderr) can be found on the filesystem in the /dev directory.
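You can poke at these files directly. A small sketch (it sneaks in a pipe, covered a bit later, just to feed the standard input):

```shell
# /dev/stdin is a real path, so handing it to cat as a "file"
# behaves exactly like reading the standard input
echo "hello" | cat /dev/stdin    # hello
```

Which is a nice little proof that the input stream really is reachable as a file.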

Let's demonstrate with the 'cat' command. Issue:

cat

This will wait for input from the keyboard, and whatever you type it will repeat it on the screen after hitting Enter.

example

example

If you just hit Enter without typing anything then it will just print out a new line.

From the behaviour of cat we can see it takes input from the keyboard (stdin), and pass the output to the screen (stdout).

And since on Linux everything is a file, all programs receive input from a file, pass output to a file, and report errors to a file. It is also possible to direct each to a different file. This will help us demonstrate stderr. First we give 'cat' a "regular" file as input - make sure you are in your home folder.

cat .bashrc

This will output something like...

# ~/.bashrc: executed by bash(1) for non-login shells.

# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)

# for examples

...and other gibberish - the contents of the .bashrc file. So 'cat' takes a "regular" file as input. Why am I writing "regular" file? Because the keyboard and screen are also files on GNU/Linux! They have file descriptors, just as I wrote above! So I'm referring to the files we're used to thinking of as files as "regular" ones.

Now let's make a typo when issuing the same command:

cat bashrc

cat: bashrc: No such file or directory

Here we got an error message, we successfully made the program print to stderr, ie. the screen.

Back to Contents

pipes

One technique to redirect output and input is piping. It simply chains commands after each other, making one's output another's input. The command centipede. Sequencing the commands is done with a so-called 'control operator', in this case the '|' - the pipe.

history | head

This will pipe the output of 'history' into the 'head' command in order to display the first 10 lines from it.

Piping can result into quite long commands. Here's an example for a moderate one:

history | cut -d " " -f 4 | sort | uniq -c | sed "s/^[ \t]*//" | sort -n | tail

This will print out the ten most used commands (disregarding their parameters) and their number of occurrences. I've seen people use "awk '{print $2}'" instead of 'cut -d " " -f 4', which might be a better idea, but my version works well enough for our purpose here.
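The counting part of the pipeline can be tried on a tiny, made-up list of commands fed in with 'printf':

```shell
# sort groups the duplicates, uniq -c counts them, sort -n orders by count
printf 'ls\ncd\nls\npwd\nls\n' | sort | uniq -c | sort -n
```

The last line of the output should be 'ls' with a count of 3 - the most frequent "command" in the sample.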

Back to Contents

redirection

Next step in redirection is the use of '<' and '>' operators - one for the in and one for the out. We can feed the 'cat' a file with that too.

cat < .bashrc

Kinda redundant since cat takes files as parameters, but for our purpose it will suit. Now redirect the output.

cat .bashrc > outfile

Now we just duplicated '.bashrc' with a different name. You can 'cat' the created new file.

cat outfile

Or with file redirection:

cat < outfile

And we can see the same gibberish .bashrc has.

We can redirect errors too. For that we need to use the file descriptor combined with the '>'. So make some typo in the name of the input file.

cat bashrc 2> error

Note that this won't produce the error message to the screen since we just redirected it into a file named 'error'. Right? Now display the contents of this new file.

cat error

cat: bashrc: No such file or directory

Nice.

As we have seen, the '<' and '>' operators can be used with file descriptors. In fact these have '0' and '1' as defaults - this is why we don't have to add them; we could write '0<' and '1>', it's just unnecessary. And this is why we had to add the '2' to '>' to change the default file descriptor.

If we want to append the error to a file (for example to create a log) instead of overwriting the existing file, we should just double the '>' like this:

cat bashrc 2>> error

It is possible to direct output to one file, and error to another:

cat .bashrc 1> outfile 2> error

Also possible to direct both to one:

cat .bashrc > outerror 2>&1

Or with a shorthand:

cat .bashrc &> outerror

Try with something that will throw an error:

cat bashrc &> outerror

There is some other weird rune magic with these operators, and there are other file descriptors as well. That depth is a bit too much for this guide.
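Everything above can be rehearsed with one compound command that writes to both streams at once - a sketch (the file names under /tmp are arbitrary):

```shell
# the braces group two echoes into one command;
# the first goes to stdout, the second to stderr
{ echo "all good"; echo "oops" >&2; } > /tmp/out.log 2> /tmp/err.log
cat /tmp/out.log    # all good
cat /tmp/err.log    # oops
```

Each message lands in its own file, proving the two streams really are separate.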

Back to Contents

screen

The program 'screen' is a screen manager that allows launching multiple virtual terminals, shells. The user can switch between the instances; the session can also be detached, put into the background, to return to the "real" shell. The most obvious use is to run a bunch of programs in parallel in the foreground.

It operates with keyboard shortcuts, all based on the 'Ctrl-a' key combination. First hold down 'Ctrl', push 'a', lift both - this tells screen to wait for a command; then push a key or another key combination to issue the command.

Ctrl-a c - create a new shell and switch to it

Ctrl-a Ctrl-a - switch back to the last used instance (pushing it repeatedly switches back and forth between the two), don't need to release Ctrl between the two, just hold it and push 'a' twice

Ctrl-a " - list the open terminals, the '"' is Shift+2 on my keyboard layout

Ctrl-a 'digit' - 'digit' is a number 0-9, all new shells get a number, jump to a specific one

Ctrl-a d - detach screen, send it to the background

After detaching to return to it, issue the...

screen -r

...command.

To exit from screen, type...

exit

...in each instance. If you opened lots of instances, you're gonna type 'exit' for a while.

Back to Contents

history

The issued commands are saved by the GNU History Library in a file which is ~/.bash_history by default. To check which file it is, just print out the $HISTFILE environment variable:

echo $HISTFILE

Of course the variable can be changed and another file can be used for this purpose. For example:

export HISTFILE=~/new_history

We can move between the history items using the up and down arrows... or, if we want a more seamless typing experience and don't want to move our hands to the arrow keys, we should use the Ctrl+p (previous) and Ctrl+n (next) combos.

The history itself can be viewed with the...

history

...command.

Probably will result in lotsa lines, and some redundancies. To reduce it to the last few, try:

history | tail

To run a history item type '!' and the number in front of the specific command.

Couple neat tricks are tied to the exclamation mark, their pet name is 'Event Designators'. Typing...

!!

...will repeat the previously issued command. Important: won't just print it out, but will execute it too! If we type ! and couple of starting characters of a command, it will repeat the latest one that starts with those characters. Let's say we have a list:

cd Documents

cat file1.txt

cd /tmp

ls

Then we issue:

!cd

Will result in:

cd /tmp

Lastly we can add a question mark after that '!' and we can search expressions within the rest of the command, not just at the beginning. So if we wanted to rerun the same command above, we could type...

!?tmp

...as well. Of course these examples are pretty rudimentary commands - not much more effort to just type them out, and easy to remember - but when we want to reuse a more complex command that we issued some time ago, it's pretty damn useful.

We can customize what commands history saves. Or rather what it excludes from saving. Not much point in tracking all the 'cd' and 'ls' we issue, is there. Edit the ~/.bashrc file, just append something like this to the end of it:

HISTIGNORE="ls:cd*:history:pwd"

Note: the patterns are separated by a ':'. The '*' after 'cd' matches anything and everything that follows it; without the '*', HISTIGNORE would only ignore a bare 'cd', and 'cd' commands with arguments would still appear in the history. We could add the '*' to basically all of them, since all can accept options and parameters, such as 'ls -l' or 'history | tail -5' or 'pwd -P'.

The history can be cleared by...

history -c

Back to Contents

aliases

We can give aliases to commands. Great for shortening long but frequently used ones. Should keep in mind that bash scripts don't know aliases.

Edit the ~/.bashrc file. By default it should have a section with couple of predefined, usually commented out. Just follow that syntax but here's an example:

alias lah='ls -alh'
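You can try it right in the current shell and then ask the shell what the alias stands for - a sketch (note the 'alias' keyword in front of the definition):

```shell
# define the alias, then print its stored definition back
alias lah='ls -alh'
alias lah
```

With no arguments, plain 'alias' lists every alias currently defined.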

Back to Contents

* or globbing

In bash, pattern matching is called globbing. The patterns are sometimes referred to as wildcards. The most common is the '*' character, which simply matches 'all'. Globbing is simpler than Regular Expressions, but it can be extended with 'extglob' - I won't go into that here. Simple globbing offers a number of possibilities.

* - matches all which is 0 or more characters

? - matches one character

[...] - matches one character in a specified range of characters

Range examples

[cgp] - match either c, g, or p

[!cgp] - match anything but c, g, or p; the ! is negation here, reverses the pattern, can be used within any globbing

[d-h] - match anything between these two letters of the alphabet

[[:digit:]] - match digits, numbers

[[:space:]] - match any whitespace

There are many other character classes like the last two.
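A sandbox to try the patterns on - the file names below are invented for the demo:

```shell
# create a scratch directory with a few files to glob against
mkdir -p /tmp/glob-demo && cd /tmp/glob-demo
touch note1.txt note2.txt notes.md chapter5.txt
ls note?.txt        # matches note1.txt and note2.txt
ls *.md             # matches notes.md only
ls *[[:digit:]]*    # matches every name containing a digit
```

Remember: the shell expands the pattern first, and 'ls' only sees the resulting file names.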

Back to Contents

checksums

Generate a hash value of anything, a string input, a file. A number of programs and hashing algorithms can be used, such as cksum, md5sum or sha256sum.

cksum .bashrc

2721016318 3985 .bashrc

The first, longer string of numbers is the hash value, the second is the byte count (basically the filesize/character count) and lastly the name of the file you just checked.

The hash value of a file is unique in practice, so it's a good way to validate the integrity of files - that they aren't corrupted or tampered with. For example, it is customary to provide the md5 checksum for the downloadable ISOs of Linux distributions. After you've downloaded one, you can check whether it turned out to be a carbon copy of the original:

md5sum cbpp-12.0-amd64-20230611.iso

71d02a8e55627ce43e217724dc6de6c5 cbpp-12.0-amd64-20230611.iso

Taking input from the keyboard, hashing a string is a bit tricky. You can't just

sha256sum "cli is awesome"

sha256sum: 'cli is awesome': No such file or directory

These checksum tools read from standard input: run the program and you get a prompt where you can type - in other words, it initiates a "read call". But you can't just hit Enter to finish, because that adds another line. So when you've finished typing, hit Ctrl+D. Twice. Except if you hit Enter first - then once is enough. Ctrl+D is a "special character", the so-called "EOT" ("End of Transmission"), which terminates the read call by sending the input off. Important: if you hit Enter first, the hash will be different, since that adds a "newline" at the end of the string.

Let's try both. Without 'enter', Ctrl+D twice:

sha256sum

cli is awesome9d7a14bb3d692972d5ccd0d95d1f3048928c580f12bd7145f1b3f7f4570d041d -

Again, but now after you typed what you typed, hit 'enter' then Ctrl+D

sha256sum

cli is awesome

d6fea9f2eae4012b23ae690f21a77d3c21e9b7e870bf5d56251d45a89dbd2653 -

Another way of getting the checksum of a string: pipe it into the hashing algorithm.

echo -n "cli is awesome" | sha256sum

9d7a14bb3d692972d5ccd0d95d1f3048928c580f12bd7145f1b3f7f4570d041d -

The '-n' parameter above tells 'echo' not to add the newline character. Now try it without the '-n':

echo "cli is awesome" | sha256sum

d6fea9f2eae4012b23ae690f21a77d3c21e9b7e870bf5d56251d45a89dbd2653 -

There are other possibilities, like using 'printf' or adding the string to a file then check that file, but I think we had enough fun with this.
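One more trick worth knowing: these tools can also verify files against a previously saved checksum with the '-c' option - a sketch with throwaway files under /tmp:

```shell
# save a checksum to a file, then let sha256sum re-check the file against it
echo "cli is awesome" > /tmp/demo.txt
sha256sum /tmp/demo.txt > /tmp/demo.txt.sha256
sha256sum -c /tmp/demo.txt.sha256    # /tmp/demo.txt: OK
```

This is exactly how distro ISOs are usually verified: download the .sha256 (or md5) file next to the image and run the checker on it.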

Back to Contents

SSH

OpenSSH is a remote login tool generally available on GNU/Linux. It consists of a client and a server. The server needs to be installed where you want to log in, on the machine you want to reach remotely. It also has facilities for transferring files: sftp and scp.

The very first time you connect to a host, it will ask you to verify and accept the host's fingerprint. If the host later changes in some way - say it was reinstalled - it will ask again.

To connect we're gonna issue a command that'll look like this:

ssh username@ipaddress

The 'ipaddress' is the IP of the host/server you wish to connect to, and username is the username of the account you have on that host/server. Let's say you have a machine on your LAN you want to reach:

ssh bestusernameever@192.168.201.51

If you have to connect on a different port, you have to add an option with the exact port.

ssh -oPort=54326 bestusernameever@192.168.201.51

Details of login can be stored in the 'ssh_config' file, in the /etc/ssh/ directory, such as username, or port, or host definitions. These aren't necessary, and basically no configuration is needed to work with it.
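Host definitions can also go in a per-user file, ~/.ssh/config, so you don't have to retype the address and port every time. A sketch - the alias 'homeserver' and the values are made up to match the examples below:

```
Host homeserver
    HostName 192.168.201.51
    User bestusernameever
    Port 54326
```

After that, 'ssh homeserver' is enough to connect, and sftp/scp honour the same aliases.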

Another feature of ssh we should take a look at: we can run commands on a remote machine without actually logging in. Or run local scripts on a remote machine, or direct output to a local file from a remotely executed command. Let's say you want to check the available space on your server. Run:

ssh bestusernameever@192.168.201.51 df -h

Direct the output of the same command into a local file:

ssh bestusernameever@192.168.201.51 df -h > ~/Documents/spess

Direct the output of the very same command into a file on the remote host:

ssh bestusernameever@192.168.201.51 df -h \> spayes

Now about the server.

When you look up how to configure an SSH server, people will advise you to move the listening port from the default 22 to some high number, for security reasons, thinking that obscurity will give protection. These people aren't aware that the potential intruder is just one little nmap scan away from finding it. And literally all 1337 h4XX0r2 start their career learning about port scans. They could say that according to their threat model it will still protect from a number of intruders who are not knowledgeable enough to use nmap. Sure. But lamers won't crack your ssh login anyway, so why bother?

On the other hand, for you, moving the port will mean complexity. Not only do you have to add the port number to your ssh/sftp/scp commands, but applications and services might rely on it being the default, so they have to be changed, and the people whom you want to allow to connect to your server have to be told about the change too. So: very little to gain, in exchange for more than enough pain.

Now that we are at ssh server config, for a very basic setup just disable root login, and perhaps you might wanna use SSH-PPKP (Private Public Key-Pair) as authentication method instead of password and disable password authentication as a whole. Btw the configuration for the server daemon is at

/etc/ssh/sshd_config

We're gonna use couple of commands to set up the SSH-PPKP. As the name suggests it generates a key pair: a private key and a public key, in two files (the public will have '.pub' extension). The latter has to be uploaded to the server and at login the private key will be matched to it.

The first command is 'ssh-keygen'. It takes a couple of parameters which might need setting: the filename, the type of key, and the size/length of the key. The length perhaps should be set to 4096 bits, although the man page says the default 3072 should be enough. This applies to the rsa type, which may also have to be set explicitly (with '-t rsa'), with the caveat that the old ssh-rsa signature algorithm is considered weak - modern OpenSSH signs RSA keys with rsa-sha2-512 instead. Here's an example of the command:

ssh-keygen -f ~/.ssh/example_rsa -b 4096

This sets the filename to 'example_rsa' in the '.ssh' folder in your home directory, and the size to 4096 bits. Don't forget to use the filename you want and suits the case.

If you don't specify the filename in the command, the program will ask for one during the key generation process, but specifying it as we did above might be more flexible.

During key generation it will also ask to set a passphrase. It's optional. If you set it, ssh will ask it every time you try connecting with this key - it's a local authentication. So this is for security in case someone sits to your computer with the intention to use it. You can skip the passphrase during generation just by pressing 'Enter'. Then again. If it's not set, it won't ask you for it when you try to connect.

After all this, we have to upload the public key to the server. Perhaps this is an oversimplification of the process, but I feel it grabs the essence. For this to work, password authentication still should be enabled on the server. Issue:

ssh-copy-id -i ~/.ssh/example_rsa.pub bestusernameever@192.168.201.51

This will ask for your password. And done. The next ssh login will use the private/public key pair for authentication. You might wanna disable password logins on your server/host (in '/etc/ssh/sshd_config').
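If you decide to do that, the relevant lines in '/etc/ssh/sshd_config' for such a setup look something like the following ('man sshd_config' has the details on each directive):

```
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
```

On Debian-based systems the service is usually restarted with 'systemctl restart ssh' afterwards. Make sure your key login actually works before closing the session you set it up in, or you can lock yourself out.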

As for file transfer, OpenSSH has 'sftp', an interactive program where you can browse the directories of both the server/host and the client machine, and exchange files with the 'put' and 'get' commands. Orienting in the filesystems is done with basic commands:

cd

pwd

ls

mkdir

...on the remote host, and with their counterparts:

lcd

lpwd

lls

lmkdir

...on the local machine. Note: all the local commands are preceded by an 'l', as in LLLocal machine. They do what you'd expect them to do. But first things first, connecting is done by:

sftp username@ipaddress

In our example, it will be:

sftp bestusernameever@192.168.201.51

Then uploading files to the host is done by:

put filename

In case of directories:

put -r filename

Downloading is done with:

get filename

In case of directories:

get -r filename

If you wanna quit, issue either 'quit', 'exit', or 'bye'.

The other facility is 'scp'. It's a one-liner command, a secure copy to and from a remote host/server - and probably even between two remote hosts, though I haven't tried that yet. It goes much like 'cp': you tell the command what you want to copy and where.

Copying a file to remote host:

scp filepath username@ipaddress:path

scp ~/Documents/random.txt bestusernameever@192.168.201.51:/tmp/

This copies 'random.txt' from the Documents folder up to the specified host's /tmp directory.

Now the reverse, copying a file from remote host to your local machine:

scp filepath username@ipaddress:path

scp bestusernameever@192.168.201.51:/tmp/modnar.txt ~/Documents/

Here scp will copy the 'modnar.txt' file from the /tmp directory of the specified remote host to the local Documents folder.

If you want to copy a whole directory, issue 'scp' with the '-r' option:

scp -r ~/Documents/books bestusernameever@192.168.201.51:~/Documents/

If you use a custom port (let's say port 54326), then change the command - note that scp takes the port with a capital '-P', unlike ssh:

scp -r -P 54326 ~/Documents/books bestusernameever@192.168.201.51:~/Documents/

If you use password authentication then scp will ask for it, just like ssh and sftp. If you have a passphrase on your key, it'll ask for that.

Back to Contents

watch

With the program 'watch' we can periodically re-execute another command. It can be used to monitor something: how a process is going, what changes occur. Perhaps see the logs updating, follow disk space usage, or file changes in a directory. Let's watch how the last ten lines of the system log file change:

watch tail /var/log/syslog

This might be slow depending on what's going on on your machine. When you've had enough of it - or at least saw it change once - quit watching by pressing Ctrl+C. We could also monitor the memory usage; this time we'll highlight the changing parts with the '-d' option:

watch -d free

Again stop with ctrl+c when you've seen enough.

Other useful options are '-n', which sets the interval between reruns of the command (the default is 2 seconds), and '-g', which makes the program quit when the output changes - useful if you are waiting for an event to happen or a process to finish.

Back to Contents

top

Top is a system monitoring tool: you get an interactive interface where you can follow the running processes, CPU and memory usage in real time. The displayed information is customizable to some extent, and you can stop and kill processes as well. The controls are keyboard shortcuts. Pressing the 'h' key will show you the help, and pressing 'q' quits the program. Similarly to watch, the refresh interval can be set, with the 's' key. Change the unit of measurement with 'e'; by default it displays memory in KiB, which tends to be too small a unit. Sort by CPU usage with 'P', by memory with 'M', and reverse the sort order with 'R'.

Saving configuration should be possible with 'W'.

Some other similar interactive process viewers exist, such as 'atop' or 'htop'.

Back to Contents

archiving, tar, zip and such

Even if you yourself don't create archives, it's sure you'll come across some, download some. Archiving is a good way to wrap a number of files into one package, for storage or for data transfer - perhaps you want to get all the logs off a remote machine so you can examine them locally. On GNU/Linux the most common archives are .tar and .tar.gz. We're gonna use the program 'tar' for both. The name is short for tape archiver and it has been around for a long time. The two basic operations are creating an archive and extracting one.

For creating archives we have to use the -c (create) and -f (file) options. To be honest, in almost all cases -f has to be used, to specify the tar archive we want to work with. After the options we have to give the name of the archive we are creating, then comes the list of files we want to include in the archive. Gonna just use a glob here.

tar -cf logs.tar /var/log/auth*

Let's look into the archive we just created:

tar -tf logs.tar

This will output something like:

var/log/auth.log

var/log/auth.log.1

var/log/auth.log.2.gz

var/log/auth.log.3.gz

var/log/auth.log.4.gz

Now extract it:

tar -xf logs.tar

You can specify the path you want the files to be extracted to with the '-C' option:

tar -xf logs.tar -C ~/Documents/subfolder1/moreDirectory/anotherlevel/

Note: the options of tar should work without the '-' in front of them.
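The whole create-list-extract round trip can be tried safely in a scratch directory; note the '-C' option, which tells tar which directory to work in. The file and directory names here are made up for the demonstration:

```shell
# some throwaway files to archive
mkdir -p scratch/in scratch/out
echo "first file" > scratch/in/a.txt
echo "second file" > scratch/in/b.txt

# create the archive ('-C scratch/in' stores the names without the path),
# list its contents, then extract into another directory
tar -cf scratch/files.tar -C scratch/in a.txt b.txt
tar -tf scratch/files.tar
tar -xf scratch/files.tar -C scratch/out
cat scratch/out/a.txt    # prints: first file
```

Because of the '-C' at creation time, the listed members are plain 'a.txt' and 'b.txt' instead of a full path like the /var/log example above.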

If you give a .tar or a .tar.gz archive to your normie Windows user pals they might think you want to give them viruses. And even if you do, they can't extract it anyway. So these two formats are a no go. Then perhaps they send you an archive, or you download something from the net; it will very likely be zip, 7zip (7z), or rar. I haven't seen .ace in ages, so those three are the most typical. Tar can't do much with them, but other packagers are available, some (or all, depending on the distribution you use) might have to be installed first. These programs are: zip/unzip (yes, these are two separate programs), 7z (the package is perhaps called p7zip), and rar/unrar (also separate, but rar can extract by itself).

Zipping up files doesn't need any options (I think), but for a directory with subdirectories we need the -r (recursive) option.

zip -r doc.zip ~/Documents/

Now extract:

unzip doc.zip

7zip uses single-letter commands without '-':

7z a doc.7z ~/Documents/

Now extract:

7z x doc.7z

Rar commands don't use '-' either.

rar a doc.rar ~/Documents/

Now extract:

rar x doc.rar

unrar x doc.rar

cpio - "copy in and out"

This is another archival tool, similarly from the times of tape archives. It is less used than tar - I for one had not used it before writing this guide. Perhaps the Red Hat branch of GNU/Linux distributions applies it more often.

Let's talk about the question of compression.

The difference between .tar and .tar.gz is that the first one is just the archive, while the .gz is a compressed archive. Compressed by how much? Depends on the contents; in many cases I wouldn't expect a real difference. Tar has a number of compression options which make it filter the archive through certain compression algorithms, such as bzip2, xz, lzip, lzma, lzop, zstd, and gzip. The man page contains the option for each. Again... do they really matter? Depends on the contents: some files can be compressed better than others, some not at all. Supposedly the speed and compression ranking goes something like gzip > bzip2 > xz - where gzip is the fastest but least efficient, and xz the slowest but strongest compression. At least this is what I read elsewhere.

I did couple of brief tests for the sake of this article.

On a ~3GB game installer file (an .sh), they did not make any difference. All had the same size: the original size of the file.

However, the experiment on the full contents of the /var/log directory resulted in this:

log.tar - 1.2 GB

log.tar.gz - 176 MB

log.tar.bz2 - 156 MB

log.tar.xz - 90 MB

They did really make a difference! And it correlates with what I read online.

Now a third test, on a small, 61 MB directory of mp3 files: all resulted in a 61 MB archive.

So as I said, it depends on the files we want to pack smaller. The logs are a bunch of text files, while the installer is an already compressed self-extracting archive. And mp3 is also a type of compression, so it was unlikely that these algorithms would yield results in their case. Would they fare better in case of '.wav's...?
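That guess is easy to check without hunting for media files: highly repetitive text compresses dramatically, while random bytes (standing in for already-compressed data) barely shrink at all. A quick sketch - the file names and sizes are arbitrary:

```shell
mkdir -p ctest

# ~660 KB of very repetitive text
yes "all work and no play makes jack a dull boy" | head -n 15000 > ctest/text

# ~600 KB of random bytes, which no algorithm can squeeze much
head -c 600000 /dev/urandom > ctest/rand

tar -czf ctest/text.tar.gz -C ctest text
tar -czf ctest/rand.tar.gz -C ctest rand

ls -l ctest    # text.tar.gz is a few KB, rand.tar.gz stays around 600 KB
```

Same compressor, wildly different results - it really is all about the contents.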

Note: files can be compressed without archiving them. But that's a different topic.

Back to Contents

scheduling, cron and at

For repeating scheduled tasks we have to create and run 'cronjobs'. Cron is the service running in the background taking care of things, and the users can use 'crontab' to do the scheduling. The command is:

crontab -e

'crontab' itself is a textfile at '/var/spool/cron/crontabs/username', where username is your username. All the other users on the host have their own crontabs in the same folder, so the directory can contain files named "JoeSmith", "Krtecek", "nondescriptiveusername", etc. For viewing that directory you will need root privileges, and you can't edit the files in the usual ways, not even your own. This is what the 'crontab -e' command is for. Note: root has its own under /etc/crontab, which can be edited "normally".

After you issue the command your crontab will open in a text editor. It's vim for me; probably your default editor is used (issue 'echo $EDITOR' to see which that is). The file will contain instructions on how to record cronjobs into it.

There are six columns to fill. The sixth will be the command itself to run; the first five regulate the timing. Each column is a value:

The first is the minutes: 0-59

The second is the hours: 0-23

The third is the day of the month: 1-31

The fourth is the month: 1-12

The fifth is the day of the week (dow): 0-7

This one needs some explanation. 0 and 7 are both Sunday. 1 is Monday, 2 is Tuesday etc. This value also accepts abbreviations of the English names of the days: Sun, Mon, Tue...

In all of the columns the values can be substituted with * wildcard, which means all the possible numbers.

Let's learn from example. In each case I'll use a hypothetical job.sh script in the .local/bin directory in our home folder. Issue the:

crontab -e

...command, then add this to the end of the document (do not start the line with '#', that would just comment it out):

* * * * * $HOME/.local/bin/job.sh

After saving and exiting we get a notification:

crontab: installing new crontab

The line we added will execute the 'job.sh' script every minute, of every hour, of every day, of every month (on all the days of the week). I find this a great baseline to start with and build from. If we set:

0 * * * * $HOME/.local/bin/job.sh

Then this will execute only once every hour, on the hour (at 0:00, 1:00, 2:00, 3:00 etc.), on every day of every month... But this:

15 * * * * $HOME/.local/bin/job.sh

...will execute it at 15 minutes past every hour (at 0:15, 1:15, 2:15, 3:15 etc.). Let's say we want to run the script each day at 2:15 pm, then:

15 14 * * * $HOME/.local/bin/job.sh

But let's say we want the script to run only on weekdays, at the same time. We leave the days and the months alone, and set the DoW as a range between 1 and 5:

15 14 * * 1-5 $HOME/.local/bin/job.sh

If we wanted the weekend instead, we could issue 6-7.

Lastly we can schedule with increments. For example, to make the job run every other day:

15 14 */2 * * $HOME/.local/bin/job.sh

Ofc we can do this with the other values too.
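Putting the fields together, an installed crontab could collect lines like these (the scripts are hypothetical; comments go on their own lines):

```
# m h dom mon dow command
# every 5 minutes:
*/5 * * * * $HOME/.local/bin/poll.sh
# weekdays at 6:30 am:
30 6 * * 1-5 $HOME/.local/bin/backup.sh
# at midnight on the first day of every month:
0 0 1 * * $HOME/.local/bin/report.sh
```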

Two commands are left to be introduced. The first lists the contents of your crontab - basically does a 'cat' to stdout:

crontab -l

And this one removes your crontab:

crontab -r

If you need to run a program once, then use 'at'.

The easiest way to demonstrate how this works is the following. Issue:

at now + 2 minutes

This will start a read call, opening an interactive prompt where type:

echo "EPIC WIN" > win.txt

Hit enter and press Ctrl+D - which will print:

<EOT>

And this will close the prompt noting the job is scheduled. I wrote about Ctrl+D earlier, as noted it means End-of-Transmission, and shuts down the read call.

Quickly check the pending tasks with:

atq

The 'atq' command lists all the tasks waiting for at to execute. Anyway, look for that win.txt after the 2 minutes have expired; it should be in the directory where you executed the 'at' command.

'at' has various ways of handling time: you can type things like '13:37', '6am', or 'midnight'. You can specify a day like 'tomorrow' (protip: 'yesterday' won't work), or give exact dates in various formats (MMDD[CC]YY or [CC]YY-MM-DD, etc.), for example '2024-12-25'. And similarly to 'now + X minutes' we can use hours, days, and weeks.

'at' does not forget jobs if you turn the host off or reboot it. The 'atd' service is running in the background handling all those, and if it misses one, it'll run it on the next day at the same time.

Note: the commands are executed via /bin/sh, not bash (if bash is what you use), and not attached to your terminal, so don't expect outputs to show up there.

Back to Contents

dd

disk dump. And a nice cup size. As the man page says, it's for converting and copying files - going by the name, specifically for cloning disks. Which still means just copying files, since we know on Linux everything is a file. It can convert character encoding (from EBCDIC to ASCII and back, not sure how handy that is these days) or uppercase to lowercase for example, make a copy of the contents of one disk onto another, create iso images of disks, or, the other way around, copy an iso to a device such as a USB stick. It can copy a specified part of a file, or - by omitting the output file - display (copy to stdout) a specific part of a file.

How useful is 'dd' for our purposes? I use it quite rarely, and I imagine oldschool and old sysadmins are its main userbase. The first thing that comes to mind is creating bootable usb sticks from the iso of a Linux distro - though that can be achieved with other CLI tools such as 'cp' and 'cat'. It is still worth talking about, for its syntax looks quite irregular.

Let's see a generic example, which should be harmless if you try it yourself - "according to my readings" an awful lot can go wrong in case of 'dd'. But then, the same mistakes can be made with 'cp', and I don't see huge warnings about that.

dd if=.bashrc of=testcopy bs=1024 conv=noerror,sync status=progress

This is an unusual syntax! Each option takes a parameter, with no '-' or '--' anywhere. To be honest, in the line above the last three options are unnecessary - we just duplicate a tiny file - but they are the often used ones, better get used to them, and they demonstrate well how the options should be written. Note 'conv', which we supplied with two parameters separated by a comma. See the man page for the complete list of parameters and "symbols" - as it calls the parameters.

So how does 'dd' differ from other tools, beyond the syntax? It can do nifty things, such as read from the keyboard and output to the screen. If you issue...

dd

...in its naked self, you get a prompt where you can type stuff, hit Enter, type some more, and again, and perhaps one more time, then press Ctrl+D (EOT = End of Transmission), and lo, it will repeat everything you wrote and close with a report of records (in and out), bytes copied, time elapsed, and copy speed. Okay, this is not very interesting or useful in itself, but it's a feature. Issue:

dd if=.bashrc

See, we can print files to the screen as well. But we can print part of a file too:

dd if=.bashrc bs=1024 count=1

Maybe this is more useful if we know which part of the file we need.

So, 'bs' is the block size - the default is 512 bytes - literally how many bytes are read and written by 'dd' at a time. Not just plain byte counts can be used: values like 'bs=4k', 'bs=4K', 'bs=4kB', 'bs=8MB', etc. are valid as well. See the man page of 'dd', and take note that some of these multipliers are decimal (1000x1000) and some are based on 1024 (binary).

The 'count' option means how many of those blocks, from the start of the file, will be read and written - how many blocks of data we work with. This opens up some interesting possibilities for us, because, well, you know, everything is a file on Linux. For the next command - and the following iterations of it - you need elevated rights, so 'su -' or 'sudo'. We're gonna print some data from the very beginning of our first disk, where the master boot record resides.

dd if=/dev/sda bs=1024 count=5

If this doesn't print out some gibberish, try changing the count: start at 1 and increase it in steps of 1 until you get something. All of this should be binary data, so don't expect anything recognisable. If the terminal starts to look a bit wonky or too much data is displayed, try paging through the result by piping the output into 'less':

dd if=/dev/sda bs=1024 count=20 | less

Don't forget: this is 1 KiB of data 20 times (so 20 KiB, right?). If your home folder is on a separate partition - you can check that with the 'lsblk' command -, let's say on 'sda5' or 'sda6', then you could try:

dd if=/dev/sda6 bs=1024 count=20 | less

Somewhere at the beginning it should clearly say:

/home

I tried to come up with another way of reading the home partition, so I added the directory path. I got an error message:

dd: error reading '/home/bestuserever/': Is a directory

So this is a no go. But we could try something else to get something readable. Let's go back to sda's master boot record. We're gonna pipe the output into 'od', which stands for "octal dump" if we go by the man page - elsewhere I read "octal display". The '-a' option will "translate" the result into a readable format.

dd if=/dev/sda bs=1024 count=1 | od -a

In theory the command we pipe the output into should be:

od -a -

The last hyphen makes 'od' take its input from stdin instead of a file, but I did not see any difference in the output in this case. Anyway.

Another feature of 'dd' is that it can skip a given amount of 'bs'-sized blocks of the input:

dd if=/dev/sda bs=1024 count=1 skip=3 | od -a -

Again, this is more useful if we know what we are looking for.
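The interplay of 'bs', 'count', and 'skip' is easiest to verify on a tiny file with known contents; here we cut the middle four bytes out of twelve (the file name is a throwaway for the demonstration):

```shell
# twelve bytes in three known four-byte blocks
printf 'AAAABBBBCCCC' > blocks.bin

# read one 4-byte block, skipping the first one: bytes 5-8
# ('2>/dev/null' just hides the statistics report)
dd if=blocks.bin bs=4 skip=1 count=1 2>/dev/null    # prints: BBBB
```

The same arithmetic applies on a disk device, only with much bigger numbers.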

These are short tasks, but cloning a whole disk can take a while. 'dd' has a handy progress display so we can know things are happening; we just need to be patient. If you are following this by typing each command and checking what it does, you should stop now: the next one is a fictitious command. We're returning to specifying an output file, and add one option from the first example.

dd if=/dev/sda1 of=/dev/sdb1 bs=4k status=progress

This will print transfer statistics, updating every second: the bytes copied, the elapsed time, and the speed of the operation in MB/s. There are two other 'status' settings: 'none' and 'noxfer'. The latter suppresses the final statistics report you could observe if you issued any of the commands above; the first suppresses everything.

Now, about the conversion possibilities. Quite a lot, so we'll just take a look at that lower/upper case conversion. Let's go back to .bashrc:

dd if=.bashrc count=1 conv=ucase

My result:

# ~/.BASHRC: EXECUTED BY BASH(1) FOR NON-LOGIN SHELLS.

# SEE /USR/SHARE/DOC/BASH/EXAMPLES/STARTUP-FILES (IN THE PACKAGE BASH-DOC)

# FOR EXAMPLES

Stop screaming, dude! Jesus!

All right, enough of the fun. We printed to stdout again, and omitted the block size - the default 512 is enough. But we wanted to print just the first block, so we added that 'count' option. The 'conv=ucase' told the command to convert everything to uppercase, surprisingly.
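Since 'dd' reads stdin when no input file is given, the conversion also works on anything piped into it:

```shell
# dd as a (very roundabout) uppercase filter;
# 2>/dev/null hides the statistics report
printf 'stop screaming, dude\n' | dd conv=ucase 2>/dev/null    # prints: STOP SCREAMING, DUDE
```

'tr' would be the usual tool for this, but it shows the conversion works independently of the copying.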

Perhaps we should note how to "burn" an iso to a USB stick. It is not much different from the things above.

dd if=debian-11.1.0-amd64-DVD.iso of=/dev/sdb bs=1M status=progress

See how I changed the block size. In theory, using larger blocks makes the process faster - up to a certain block size. It's quicker to copy larger chunks fewer times, at the cost of a bit higher RAM usage. There are some tests published online (for example on Stack Exchange) suggesting that around 64M the effectiveness starts to wane. If you stick to 4 megabytes every time, it won't be a mistake.

Now that I mentioned Stack Exchange, I remembered a comment there about how it is possible to create sparse files with 'dd'. A "sparse file" is a file whose contents don't fill its stated size, so the space actually used is smaller than the reported filesize. You'll understand in a sec. First create a directory for this experiment, then add some files to have some content there. Here I'll use 'dd' for that.

mkdir test

dd if=/dev/random of=test/content bs=64k count=218

Now check the size of the directory, with 'du':

du -d 0 -h test

We should get:

14M test

Now the command. What I saw was this:

dd if=/dev/zero of=test/sparse-file bs=1 count=1 seek=10GB

The block size is 1 byte, and we write it only once. The 'seek' option tells dd to skip a certain amount of blocks - in our case 10GB worth - from the start of the output file before writing. So essentially we appended 1 byte of data at the end of 10GB of nothing. List the files in the 'test' directory:

ls -alh test

There should be a line saying a 'sparse-file' of 9,4G exists in that directory. Now check the directory size with du again:

du -d 0 -h test

14M test

Nice!
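Another way to confirm a file really is sparse is 'stat': '%s' prints the apparent size, '%b' the number of blocks actually allocated (of '%B' bytes each); on a sparse file the allocation is a tiny fraction of the apparent size. A smaller-scale sketch - the file name is arbitrary, and this assumes a filesystem with sparse file support, which ext4 and friends have:

```shell
# one byte written 1 MiB into an otherwise empty file
dd if=/dev/zero of=sparse-demo bs=1 count=1 seek=1M 2>/dev/null

# apparent size will be 1048577 bytes; the allocated blocks only a handful
stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' sparse-demo
```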

Okay. I think we saw enough examples to get familiar with the syntax, and to see some possibilities in the usage.

Note: the suggestion that you can wipe data from a disk with 'dd' - for example in case you wish to sell a hard drive - is not entirely true. Even after several passes of the wiping process the data can be recoverable. One can f*ck up his system by overwriting something important, but as far as secure data destruction goes... 'dd' is a no go.

Back to Contents

partitioning

The whole idea of filesystems and partitions on Linux is wildly different from how Windows does things. I won't go into explanations about them here. There is also Logical Volume Management, another interesting and kinda cool topic, but that's a thing of its own. This will be about how to partition a hard drive in the Linux CLI. We're gonna use 'lsblk', 'fdisk', 'mkfs', and perhaps 'e2label'.

In this hypothetical situation we had one disk we used and loved (this we'll call 'sda'), and installed another, for one can never have enough space. So let's see our block devices, our hard drives.

lsblk

This will print out what it prints out: a bunch of useful stuff about sda1, sda2, sda5, sda6, sda9001... - for example the size and the "mount point", which essentially means where each partition is mounted in the filesystem, if mounted at all. There should be a '/' at least, and maybe others like 'swap' or '/home' or '/var' or '/boot'. At the bottom of the table we'll see an 'sdb' and nothing else. That's the new disk: connected, powered. We're gonna make a couple of partitions on it. You'll probably need root privileges; if you are sudoing, then sudo away. Issue:

fdisk /dev/sdb

'fdisk' is an interactive application. We're greeted and told that the device has no recognized partition table, and it gives us a prompt, waiting for us to issue commands. Each command is just one key, 'm' is help. Issue that.

m

This will list the available commands. Very helpful. For now the notable ones:

F - free unpartitioned space

n - create new partition

p - print the partition table

w - write changes to disk, then exit - this will finalize what we did, up to this point everything is reversible, gives time to think things through in case we work with a disk containing data.

If you want, check the free space with 'F'. After that, press:

n

...which gives us two more options; let's create a primary partition first. That's the default, so just hit Enter.

We'll select the default value for the next two question, for partition number and first sector, just hit Enter.

Then we have to give the number of the last sector. Luckily, we can give the size of the partition we wish to create instead. Let's say the hard drive has 1TB of space and we want a 100GB partition right at the beginning of the disk, perhaps for a system partition. So we type:

+100G

Then press 'Enter'. The program then lets us know that partition 1 was created. Great. We can check it with 'p', which will list the /dev/sdb1 we just created.

Let's make a logical partition too. First we need an extended. Issue:

n

Then pick:

e

Then just hit Enter for the rest: number, first sector, last sector. Use the whole space. Done. List it with 'p' again. Then again, for the logical partition:

n

It says it will make a logical partition, offering to start it at the start of the extended. Let's start there, hit Enter. Then for last sector we should just add 400GB to the first sector.

+400G

Done. List partitions with 'p'. We get a 'sdb1', 'sdb2', and 'sdb5'. And still have a space for some more partitions, or at least another one.

Now go ahead and delete the partitions if you like (nothing is fixed until written to disk as noted above), and repartition with whatever sizes you wish. When you are satisfied write the changes to the disk with:

w

Or drop everything and quit:

q

Now check the block devices again:

lsblk

This should list the partitions of 'sdb' ('sdb1', 'sdb2', 'sdb5'...). We still need to put filesystems onto these partitions. For some years now the 'ext4' type has been all the rage on Linux. We're gonna use a program called 'mkfs', which takes two arguments: the type of the filesystem it should make, and the place where it should create it, the specific partition:

mkfs -t ext4 /dev/sdb1

Done. Repeat for the other partitions you want to use - note that the extended 'sdb2' is just a container for the logical partitions, it doesn't get a filesystem of its own. After this we can mount the freshly created partitions if we wish. Very good.

This part is not necessary, but let's go through how to add labels. It's simple:

e2label /dev/sdb5 WeKillTheBatman

The e2label program takes two arguments, the partition we want to label, and the label we want to add. Now check it with:

e2label /dev/sdb5

Result should be:

WeKillTheBatman

Back to Contents

rename

'rename' is an application made specifically for batch renaming files. It uses Regular Expressions (or rather Perl Regular Expressions - Perl is a Unix scripting language), which complicates things for new users. The job can be done with a loop and 'mv', but that would complicate things for new users even more (or perhaps with 'find', which has a handy feature of calling a command on every file it finds, but that also would complicate things even more). So we should stick to this for now. It can be used for many simple renaming tasks without knowing RegEx. The syntax is very similar to sed's and goes like this:

rename "s/change-this/to-that/" files

The quotes can be either double ("") or single (''), but they are necessary. There is a difference between the two that matters to bash, but I won't explain it here; that should go into another article. The '/' is a separator. The 's' means substitute, but the man page mentions 'y' as well - gonna get back to that one later. The 'change-this' is the part of the filename you don't want, and 'to-that' is what you wanna add. If you just want to remove a part, then skip the 'to-that', leave it empty. Note: all the '/' separators are necessary, so when you leave a field empty, don't omit them.

The 'files' are the ones you wanna batch rename, all in one directory, typically a glob with an extension, such as '*.jpg', '*.mp3', or even perhaps '*.*'

Let's say you have thousands of photos named like 'DSC_1234.jpg' and wanna remove the 'DSC_' and leave only the numbers.

rename "s/DSC_//" *.jpg
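For the record, this particular job is one those that the loop-and-'mv' approach mentioned above handles without any RegEx: "${f#DSC_}" is the filename with the 'DSC_' prefix stripped by the shell itself. A sketch with made-up file names:

```shell
# a few throwaway files to rename
mkdir -p photos
touch photos/DSC_0001.jpg photos/DSC_0002.jpg photos/DSC_0003.jpg

# strip the DSC_ prefix from every matching file
( cd photos && for f in DSC_*.jpg; do mv "$f" "${f#DSC_}"; done )

ls photos    # 0001.jpg  0002.jpg  0003.jpg
```

Handy when 'rename' isn't installed, though for anything fancier than a fixed prefix 'rename' quickly becomes the simpler tool.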

Filenames sometimes contain characters that have special meaning in RegEx; for example '[' and ']' are used to define sets to look for. E.g. [abcd] or [a-d] means any one of the letters a, b, c, or d; and [1234] or [1-4] any of the numbers 1 to 4. But what if we have a literal '[1234]' in the filename which we want gone: '21[1234]34.jpg'? We can use '\' as an escape character to neutralize the special meaning of the square brackets:

rename "s/\[1234\]//" *.jpg

'rename' will remove or substitute only the very first occurrence of the expression you are looking for. This behaviour can be changed. Let's say you want to replace every ' ' (space) between characters with a '-' in the filename '01 Ride the Puppets.mp3'. Here is where 'y' comes into the picture - we switch to that from 's' - or instead we add the 'g' flag (meaning global) to the end of the middle section of the command:

rename "y/ /-/" *Puppets.mp3

rename "s/ /-/g" *Puppets.mp3

This will rename the file to '01-Ride-the-Puppets.mp3'. Note we used globbing (*) with the part of the filename, because less typing.

Perhaps we want to work on the end of a filename. Then we have to add a '$' after the expression we are looking for. Renaming all .jpeg files to .jpg is fairly simple, but what if some has 'jpeg' or '.jpeg' within the filename proper? Okay unlikely, but what if.

rename "s/jpeg$/jpg/" *.jpeg

Nice.

The example for converting uppercase to lowercase (and vice versa) is in the man page, but who knows if they'll take it out at some point (it's been in since 1992 apparently), so here we go:

rename "y/A-Z/a-z/" *.jpg

rename "y/a-z/A-Z/" *.jpg

This definitely does not work with the 's///g' syntax.

Back to Contents