Renewal of Let's Encrypt Certificates

In an earlier post I showed how to obtain a TLS certificate from the Let’s Encrypt certification authority. As those certificates are only valid for 90 days, you will have to renew them before they expire. But no worries, Let’s Encrypt will remind you of an upcoming certificate expiry by sending you a certificate expiration notice via email.

Presuming you followed my earlier post, this is how to renew your certificate:

$ cd letsencrypt
$ git pull
$ 
$ sudo service nginx stop
$ sudo ./letsencrypt-auto certonly --standalone -d layereight.de
$ sudo service nginx start

Again, if everything went well we will find our renewed certificates in /etc/letsencrypt/live/layereight.de/.
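
If you want to double-check the new expiry date, openssl can print it (assuming the default file layout under /etc/letsencrypt/live shown above):

$ sudo openssl x509 -noout -enddate -in /etc/letsencrypt/live/layereight.de/fullchain.pem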


Bon Voyage, Ian!

We mourn the passing of Ian Murdock. Our thoughts are with his family.

layer8 - running on Debian


Let's Encrypt - Free, Trusted TLS for Everybody

Let’s Encrypt is a new certification authority (CA) providing free TLS certificates. Just recently they entered Public Beta. Reason enough to give it a try… They claim to automate the main steps of obtaining a certificate, and indeed, the procedure turns out to be fairly easy. Let’s Encrypt provides client software hosted on GitHub that does all the hard work for you. The software comes with a couple of plugins that aim to obtain and install certificates into a running web server. But as it is still beta software, some of the plugins are marked as experimental or might not work. I’m running Nginx and refrained from using the experimental nginx plugin. Instead I used the standalone plugin, which starts an agent with its own built-in web server to communicate with the CA and to prove that you actually control the domain (Domain Validation). To do so, the agent’s web server will try to bind ports 80 and 443 as callback endpoints for Let’s Encrypt’s backend systems. This means you have to stop your own web server while the client software is running. If you can’t or don’t want to stop your web server, take a look at the webroot or manual plugins.

Anyway, to cut a long story short, here is what worked for me:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto --help
$ 
$ sudo service nginx stop
$ sudo ./letsencrypt-auto certonly --standalone -d layereight.de
$ sudo service nginx start

If everything went well you will find your certificates in /etc/letsencrypt/live/layereight.de/. Currently they are only valid for 90 days, probably to minimize the risk in case of abuse. Another interesting fact is that the whole process currently works only over IPv4, i.e. against the DNS A record of your domain.
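
For reference, with the client’s default layout the live directory contains symlinks to the most recent files, something like:

$ sudo ls /etc/letsencrypt/live/layereight.de/
cert.pem  chain.pem  fullchain.pem  privkey.pem

For Nginx, the ssl_certificate directive would typically point at fullchain.pem and ssl_certificate_key at privkey.pem.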


layer8 Relaunch

Hello World! (again) layer8 relaunched as a blog today. :-)


git Aliases

While the SCM integration of an IDE is convenient for most people, I prefer using the command line. The same goes for git. However, typing full git commands like git commit becomes cumbersome at a certain point (even though it’s only a few letters to type). Luckily, git comes with built-in support for command aliases you can use to abbreviate commands, so I only need to type git ci. Here are the git aliases I have configured:

git config --global alias.co checkout
git config --global alias.ci commit
git config --global alias.st status
git config --global alias.br branch
git config --global alias.di diff
git config --global alias.hist 'log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short'
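
With these in place, the short forms behave exactly like the full commands, e.g.:

$ git st                  # same as: git status
$ git ci -m "fix typo"    # same as: git commit -m "fix typo"
$ git hist                # condensed one-line history with a graph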

You may also be interested in some documentation on that topic, for example the git-config man page (git help config) or the “Git Aliases” section of the Pro Git book.


Copy bootable ISO-images to a USB drive using dd

There is a variety of tools to create bootable USB drives. Many of them come with a graphical user interface and a rich feature set, and are convenient to use. You are probably familiar with UNetbootin or the Ubuntu Startup Disk Creator. While those programs do a fine job, not all people are aware that the versatile linux core util dd is also capable of performing this task. And sometimes dd is all you have (and need).

Here is what I usually do when I want to create a bootable USB flash drive:

I use lsblk to figure out under which device name the USB flash drive is known to the system. You can also consult the kernel output using dmesg if it’s not obvious. In any case, make sure you pick the right target device, otherwise you might destroy the contents of your system hard drive. In the example below we can be pretty sure that it’s /dev/sdb.

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223,6G  0 disk 
├─sda1   8:2    0   8,0G  0 part [SWAP]
├─sda2   8:3    0 215,2G  0 part /
sdb      8:16   1   7,4G  0 disk 
├─sdb1   8:17   1   245M  0 part 
└─sdb2   8:18   1   3,1M  0 part 

Execute the dd command with the necessary parameters. My standard scenario is creating a bootable USB drive of GParted, a very powerful tool I use to copy, shrink or extend partitions.

$ sudo dd if=/path/to/gparted-live-0.24.0-2-amd64.iso of=/dev/sdb
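
By the way, dd’s default block size of only 512 bytes means a lot of small writes; a larger block size usually speeds up the copy considerably. A variant worth trying (bs=4M is just a common choice, not a requirement):

$ sudo dd if=/path/to/gparted-live-0.24.0-2-amd64.iso of=/dev/sdb bs=4M

Newer versions of GNU dd (coreutils 8.24 and later) also accept status=progress to print transfer statistics while copying.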

When your ISO image is big or the copy process is just slow, you might be interested in its progress. GNU dd prints its current transfer statistics whenever it receives the USR1 signal. In a separate shell, find out the pid of the running dd process and signal it:

$ ps awx | grep dd
32609 pts/2    D+     0:00 dd if=/path/to/gparted-live-0.24.0-2-amd64.iso of=/dev/sdb
$ sudo kill -USR1 32609

Sending dd the USR1 signal will make it print the current copy progress on the original shell, e.g.:

382777+0 records in
382777+0 records out
195981824 bytes (196 MB) copied, 36,025 s, 5,4 MB/s
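
If you would rather watch the progress periodically than request it by hand, a small loop can send the signal every ten seconds. This is just a sketch; it assumes pgrep is available and that dd is the only process with that exact name (the loop ends once dd exits and the kill fails):

$ while sudo kill -USR1 $(pgrep '^dd$'); do sleep 10; done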

After dd has finished you might want to call sync, just to make sure that the filesystem cache is flushed to the USB drive.

$ sync

That’s it. Your bootable USB drive is ready to use. By the way, the same procedure works for SD memory cards.


Speeding up sshfs with faster Cipher Algorithms

When copying data using sshfs to or from a weak system such as a NAS, you usually encounter poor throughput. On those systems the bottleneck is often not network bandwidth or hard drive performance, but the limited CPU power: ssh’s data encryption and compression algorithms slow down the overall data transfer.

Tweaking sshfs with some options might improve the situation:

$ sshfs [user@]host:[dir] mountpoint -o Ciphers=arcfour -o compression=no

Be aware that a faster cipher algorithm is usually also a weaker one! That means you are weakening your overall security, so don’t do this in unsafe environments. You might want to do this only in your home network.
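
Also note that recent OpenSSH releases have removed the arcfour ciphers altogether. If your ssh refuses the option, aes128-ctr is a still-supported cipher that is comparatively cheap, especially on CPUs with hardware AES support; a variant to try:

$ sshfs [user@]host:[dir] mountpoint -o Ciphers=aes128-ctr -o compression=no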


How to create an ISO image from a CD or DVD

This article describes how to create an ISO image file from a CD or DVD. As usual with linux, there is more than one way to accomplish the task, but as we are all big fans of the linux command line, we will be using the command line tool dd. First we need to know the device our optical drive is assigned to; lsblk can tell us.

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223,6G  0 disk 
├─sda1   8:2    0   8,0G  0 part [SWAP]
├─sda2   8:3    0 215,2G  0 part /
sr0     11:0    1 227,8M  0 rom

In our case it seems to be /dev/sr0. Often there are also some symbolic links pointing to the optical drive. We can use them too, of course.

$ ls -l /dev/cdrom /dev/dvd
lrwxrwxrwx 1 root root 3 Jan 25 13:40 /dev/cdrom -> /dev/sr0
lrwxrwxrwx 1 root root 3 Jan 25 13:40 /dev/dvd -> /dev/sr0

After we have made sure that the device is not mounted anywhere, we can start the copy process. findmnt (part of util-linux) is a quick way to check; it prints nothing if the device is not mounted:
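
$ findmnt /dev/sr0          # no output means the device is not mounted
$ sudo umount /dev/sr0      # only needed if findmnt showed a mountpoint

Now the actual copy using dd: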

$ dd if=/dev/cdrom of=/tmp/image.iso

Depending on the drive’s speed and the size of the image, the copy process might take some time. We can gain insight into its progress by sending the USR1 signal to the dd process: in a separate shell, find out the pid of the running dd process and signal it:

$ ps awx | grep dd
3359 pts/3    D+     0:00 dd if=/dev/cdrom of=/tmp/image.iso
$ kill -USR1 3359

Sending dd the USR1 signal will make it print the current copy progress on the original shell, e.g.:

161505+0 records in
161504+0 records out
82690048 bytes (83 MB) copied, 48,8422 s, 1,7 MB/s


Recursive HTTP download with wget

Downloading a lot of files from an HTTP source with many sub directories can be quite annoying. Whoever has clicked through several hierarchies of folders in a browser just to download a couple of files knows what I’m talking about. Of course there are browser extensions that help, but if you want to solve the problem in a shell, wget is your tool of choice.

wget -r -l [depth] -A '*.jpg' -p -nd -np -nv -k -P [target_folder] --http-user=[user] --http-passwd=[password] --no-check-certificate 'https://[domain]/[folder_with_subfolders]/'

The parameters explained (paraphrased from the wget manual page; some of them might be optional for your case):

-r                       download recursively
-l [depth]               maximum recursion depth
-A '*.jpg'               accept only files matching this pattern
-p                       also download page requisites such as inline images and CSS
-nd                      don't create a directory hierarchy, save all files into one folder
-np                      never ascend to the parent directory
-nv                      turn off verbose output
-k                       convert links in downloaded documents to local references
-P [target_folder]       directory prefix to save the files to
--http-user, --http-passwd   credentials for HTTP authentication
--no-check-certificate   don't validate the server's TLS certificate
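
For instance, to grab all JPEGs up to three levels deep from a public server (hypothetical URL and folder, no authentication needed):

$ wget -r -l 3 -A '*.jpg' -nd -np -nv -P ./photos 'https://example.com/albums/'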


Folder Disk Usage

I wanted to find out the total size (including sub folders) of every single sub directory inside one particular folder. Additionally, I only wanted to see folders that are at least a few megabytes in size. Here are some commands that helped:

$ du --max-depth=1 -h | grep -P "^[^MG]*[MG]\t"           # human readable
$ du --max-depth=1 -B1K | sort -nr | grep -E '^[0-9]{4,}' # sorted by size, 1K blocks
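
With a reasonably recent coreutils, sort -h understands the human-readable size suffixes, which allows a simpler sorted listing (a variant of the above, not a replacement):

$ du --max-depth=1 -h | sort -hr | head -n 20    # the 20 biggest entries first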


Hello World!

Hello World! layer8 sees the light of day.