So you either need a CD-ROM drive or a Windows installation. Now, usually it is fairly easy to
copy a bootable ISO image to a USB stick. But
the BIOS Update (Bootable CD) that Lenovo provides is an ISO image following
the El Torito Bootable CD specification, so the normal procedure won't
work here. There is an El Torito boot image inside the bootable CD image that we need to extract first. Fortunately, there is a tool called
geteltorito.pl that can do that for us.
So let’s go ahead and see how the BIOS update can be done:
download the BIOS Update (Bootable CD) from the Lenovo support page for your ThinkPad model; for me it was the file r0rur07w.iso,
about 102 MB in size
get the geteltorito.pl tool, make sure its version is >= 0.6, and use it to extract the boot image and write it to a USB drive, as shown in the sketch after this list
now you can reboot your ThinkPad with the USB drive attached and follow the steps to update your BIOS; make sure you
boot in UEFI mode!
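Extracting the boot image and writing it to the USB drive roughly looks like this (the ISO file name is the one from above; the output file name and the USB device /dev/sdb are assumptions, double-check the device on your system):

```
# extract the El Torito boot image from Lenovo's bootable CD image
perl geteltorito.pl -o bios-update.img r0rur07w.iso

# write the extracted image to the USB drive and flush the caches
sudo dd if=bios-update.img of=/dev/sdb bs=4M
sync
```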
The procedure described here is mainly taken from the English and
German versions of the ThinkWiki, so all the credit goes there.
It can probably be applied to most ThinkPad models.
Since version 2016-11-25 of Raspbian the ssh daemon is disabled by default (see the
Raspbian release notes). But
the Raspbian documentation on ssh describes how you can enable it
again. You can even do that before you copy the image to an SD card and boot for the first time. This comes in especially handy when you
run your Raspberry Pi without a monitor or keyboard attached (headless).
Here is what you need to do:
download the latest Raspbian image, e.g. 2017-02-16-raspbian-jessie-lite.img
examine the image using fdisk
in order to mount the boot partition we need to know its start offset in bytes; in our case that is 512 bytes per sector * start sector 8192 = 4194304, used in the command sketch after this list
create a temporary folder to mount Raspbian’s boot partition
mount Raspbian’s boot partition
create ssh file to tell Raspbian to enable the ssh daemon by default
unmount the partition again
copy the image file to an SD card
boot your Raspberry Pi, the ssh daemon should be enabled by default
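Put together, the steps above roughly look like this (the image file name is the one from above; the mount point and the SD card device /dev/mmcblk0 are assumptions, adjust them to your system):

```
# find the start sector of the boot partition (8192 in this case)
fdisk -l 2017-02-16-raspbian-jessie-lite.img

# mount the boot partition at the calculated byte offset (512 * 8192 = 4194304)
mkdir /tmp/raspbian-boot
sudo mount -o loop,offset=4194304 2017-02-16-raspbian-jessie-lite.img /tmp/raspbian-boot

# an empty file called "ssh" tells Raspbian to enable the ssh daemon on first boot
sudo touch /tmp/raspbian-boot/ssh

# unmount again and write the image to the SD card
sudo umount /tmp/raspbian-boot
sudo dd if=2017-02-16-raspbian-jessie-lite.img of=/dev/mmcblk0 bs=4M
sync
```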
At the beginning of 2016 the Raspberry Pi Foundation released the
Raspberry Pi 3 Model B. It was the first Raspberry Pi model
equipped with 802.11 Wireless LAN out of the box. Previous models just didn't have wifi; buying a wifi USB dongle is the only way to use
Wireless LAN with older models like the Raspberry Pi 2 Model B.
The drawback of the Raspberry Pi 3's wifi, the very popular
Edimax EW-7811Un and even the
official Raspberry Pi USB Wifi Dongle is that all of them only
operate in the 2.4 GHz band (IEEE 802.11b/g/n). If you live in a big city like me you
may have noticed that those frequencies are pretty crowded, because every neighbour runs their own access point. I'm talking about
30+ wireless networks within reach. That means a lot of interference and concurrent usage of frequencies.
In such an environment throughput is quite bad, and sometimes connections drop entirely. If it's really bad, not even switching
the channel helps. Fortunately, the IEEE 802.11 standard also specifies operation in other
frequency bands: IEEE 802.11a/n/ac do so for the 5 GHz band. Even though these parts of
the standard have existed for several years now, they are used less often, which is good news. That's why almost all of the wifi devices in
my place support IEEE 802.11a, n or ac, and in fact there is far less competition with the neighbours.
When I was looking for a wifi dongle for my Raspberry Pi 2 I was specifically searching for one supporting 5 GHz. A friend recommended the
Edimax EW-7811UTC.
So I bought one. What I didn't consider before buying it was the driver support in common
distributions for the Raspberry Pi.
I realized that while OpenELEC for instance does come with a driver for the
Edimax EW-7811UTC
out of the box, Raspbian doesn’t.
Compiling the Driver step by step
While finding an open source driver for the Edimax EW-7811UTC was easy, compiling it for Raspbian
was a bit more of a challenge. That's because Raspbian doesn't ship its kernel sources in the software repositories. After some web
research I stumbled upon rpi-source. This small tool is able to install the kernel source that was used
to build the kernel of the Raspbian image.
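A rough sketch of what that looks like on a Raspbian system; the package list, the download URL and the driver repository are examples and need to be matched to your kernel version:

```
# install build prerequisites and the rpi-source helper (URL is an example)
sudo apt-get install -y git bc
sudo wget https://raw.githubusercontent.com/notro/rpi-source/master/rpi-source \
     -O /usr/local/bin/rpi-source && sudo chmod +x /usr/local/bin/rpi-source

# fetch the kernel source matching the currently running kernel
rpi-source

# fetch, build and load an rtl8812au driver (example repository)
git clone https://github.com/diederikdehaas/rtl8812AU.git
cd rtl8812AU
make
sudo make install
sudo modprobe 8812au
```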
By the way, the driver can be used not only for the Edimax EW-7811UTC. The Realtek 8812au chipset is also built into many other wifi
products like the
D-Link DWA-171. So if you know
that your wifi dongle also uses the rtl8812au chipset, this might work for you, too. I successfully compiled the driver on the following Raspbian releases:
Raspbian Jessie release 2016-05-27 using kernel version 4.4.11-v7+
Raspbian Jessie release 2016-09-23 using kernel version 4.4.21-v7+
Raspbian Jessie release 2017-01-11 using kernel version 4.4.34-v7+
Raspbian Jessie release 2017-04-10 using kernel version 4.4.50-v7+
Raspbian Stretch release 2017-09-07 using kernel version 4.9.41-v7+
Raspbian Stretch release 2017-11-29 using kernel version 4.9.59-v7+
Raspbian Stretch (9.4) release 2018-06-27 using kernel version 4.14.50-v7+
Raspbian Buster (10) release 2019-09-26 using kernel version 4.19.93-v7+
Ansible role
A while ago I started to bootstrap all my computers using Ansible. Especially for the
Raspberry Pi I find an Ansible playbook pretty helpful, since I keep setting it up from scratch for new projects. For
compiling the rtl8812au wifi driver on Raspbian I published an Ansible role on github. Check it out:
In an earlier post I showed how to obtain a TLS certificate from the
Let's Encrypt certification authority. As those certificates are only valid for 90 days you will have to
renew them early enough. But no worries, Let’s Encrypt will remind you of an upcoming certificate expiry by sending you a
certificate expiration notice via email.
Presuming you followed the steps from my earlier post, these are the steps to
renew your certificate:
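Roughly, this boils down to re-running the client with the standalone plugin; the webserver commands and the client path are assumptions based on the earlier post, and the exact flags may differ depending on the client version:

```
# stop the webserver so the standalone plugin can bind ports 80 and 443
sudo service nginx stop

# request a new certificate with the same parameters as for the initial request
cd letsencrypt
./letsencrypt-auto certonly --standalone -d layereight.de

# start the webserver again
sudo service nginx start
```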
Again, if everything went well we will find our renewed certificates in /etc/letsencrypt/live/layereight.de/.
Let’s Encrypt is a new certification authority (CA) providing free TLS certificates.
Just recently they entered Public Beta.
Reason enough to give it a try… They claim to automate the main steps of obtaining a certificate. And indeed, the procedure turns out to
be fairly easy. Let's Encrypt provides a client software hosted on github
that does all the hard work for you. The software comes with a
couple of plugins that aim to obtain and install certificates for a
running web server. But as it is currently still beta software, some of the plugins are marked as experimental or might not work.
I’m running Nginx and refrained from using the experimental nginx plugin. Instead I used the
standalone plugin which will start an agent including a webserver
to communicate with the CA and to prove that you actually control the domain (Domain Validation). To do so, the agent's webserver will try
to bind ports 80 and 443 as callback endpoints for Let's Encrypt's backend system. This means you will have to stop your own webserver
while the client software is running. If you can't or don't want to stop your webserver you should take a look at the
webroot or manual plugins.
Anyway, to cut a long story short, here is what worked for me:
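Roughly, it boils down to something like this (Nginx assumed as the running webserver; the exact flags may differ depending on the client version you check out):

```
# stop the running webserver so the agent can bind ports 80 and 443
sudo service nginx stop

# fetch the Let's Encrypt client and request a certificate using the standalone plugin
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto certonly --standalone -d layereight.de

# start the webserver again
sudo service nginx start
```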
In case everything went well you will find your certificates in /etc/letsencrypt/live/layereight.de/. Currently they are only valid for
90 days, probably to minimize risk in case of abuse. Another interesting fact is that the whole process currently only works over IPv4,
which means with the DNS A record of your domain.
While the SCM integration of an IDE is convenient for most people, I prefer using the command line. The same goes
for git. However, typing full git commands like git commit becomes cumbersome at a certain point (even though
it's only a few letters to type). Luckily, git comes with built-in support for command aliases that you can use to
abbreviate commands, so I only need to type git ci. Here are the git aliases I have configured:
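The ci alias mentioned above is set up like this; the other aliases are common examples, not necessarily my exact configuration:

```
git config --global alias.ci commit
git config --global alias.co checkout
git config --global alias.st status
git config --global alias.br branch
```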
You may also be interested in some documentation on that topic:
There is a variety of
tools to create bootable USB drives. Many
of them come with a graphical user interface, a rich feature set and are convenient to use. You are probably familiar
with UNetbootin or the
Ubuntu Startup Disk Creator. While those programs do a fine job,
not all people are aware that the versatile linux core util dd is also capable of performing this task.
And sometimes dd is all you have (and need).
Here is what I usually do when I want to create a bootable USB flash drive:
I use lsblk to figure out under which device name the USB flash drive is known to the system. You can also consult
the kernel output using dmesg if it's not obvious. In any case you should be sure to pick the right target
device, otherwise you might destroy the contents of your system hard drive. For this example we can be pretty sure
that it's /dev/sdb.
Execute the dd command with the necessary parameters. My standard scenario is creating a bootable USB drive of
GParted, a very powerful tool I use to copy, shrink or extend partitions.
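A typical invocation, assuming the GParted live ISO as input and /dev/sdb as the target (the file name is an example):

```
sudo dd if=gparted-live.iso of=/dev/sdb bs=4M
```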
When your ISO image is big or the copy process is just slow, you might be interested in its progress. You can easily
achieve this by sending the dd process the USR1 signal. In a separate shell, find out the pid of the
running dd process and send it that signal.
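One way to do that, assuming dd was started with sudo:

```
# find the pid of the running dd process and send it the USR1 signal
pgrep -l '^dd$'
sudo kill -USR1 $(pgrep '^dd$')
```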
Sending dd the USR1 signal will make it print the current copy progress on the original shell, e.g.:
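The output is purely illustrative and will differ on your system:

```
68+1 records in
68+1 records out
286261248 bytes (286 MB) copied, 18.5297 s, 15.4 MB/s
```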
After dd has finished you might want to call sync, just to make sure that the filesystem cache is flushed to
the USB drive.
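That is as simple as:

```
sync
```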
That’s it. Your bootable USB drive is ready for being used. By the way, the same procedure is applicable to SD memory cards.
When copying data using sshfs on weak systems such as a NAS, you usually encounter poor throughput.
On those systems the bottleneck is often not network bandwidth or hard drive performance, but the limited CPU power:
ssh's data encryption and compression algorithms slow down the overall data transfer.
Tweaking sshfs with some options might improve the situation:
use a faster cipher algorithm that needs less CPU power, e.g. arcfour or blowfish-cbc
additionally you can turn off compression (see the example after this list)
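Put together, a mount command could look like this; the host, share and mount point are examples, and note that recent OpenSSH versions no longer offer the arcfour ciphers:

```
# mount the NAS share with a faster cipher and compression turned off
sshfs -o Ciphers=arcfour -o Compression=no user@nas:/share /mnt/nas
```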
Be aware, a faster cipher algorithm is usually also a weaker one! That means you will weaken your overall security!
So don’t do this in unsafe environments! You might only want to do this in your home network.
This article describes how to create an ISO image file from a CD or DVD. As usual with linux, there is more than one way
to accomplish a task. But as we are all big fans of the linux command line, we will be using the command line tool dd.
But first we need to know the device our optical drive is assigned to. We can use lsblk to figure that out.
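For example; the output is purely illustrative, the line with type rom is the optical drive:

```
lsblk
# NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
# sda      8:0    0 238.5G  0 disk
# └─sda1   8:1    0 238.5G  0 part /
# sr0     11:0    1   4.1G  0 rom
```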
In our case it seems to be /dev/sr0. Often there are also some symbolic links pointing to the optical drive. We can use them too,
of course.
After we made sure that the device is not mounted anywhere, we can actually start the copy process using dd.
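A minimal sketch; the output file name is an example:

```
# read the disc sector by sector into an ISO image file
sudo dd if=/dev/sr0 of=cd-image.iso bs=2048
```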
Depending on the drive’s speed and the size of the image, the copy process might take some time. We can gain insight into
the program’s progress by sending the USR1 signal to the dd process. In a separate shell we can find out the pid of the
running dd process and send it the USR1 signal.
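For example, assuming dd was started with sudo:

```
pgrep -l '^dd$'
sudo kill -USR1 $(pgrep '^dd$')
```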
Sending dd the USR1 signal will make it print the current copy progress on the original shell, e.g.:
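Again, the output is purely illustrative:

```
154232+0 records in
154232+0 records out
315867136 bytes (316 MB) copied, 185.811 s, 1.7 MB/s
```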
Downloading a lot of files from an HTTP source with many sub directories can be quite annoying. Whoever has clicked through several
folders in their browser just to download a couple of files knows what I'm talking about, especially when there are several levels of sub
directories. Of course there are browser extensions that help, but if you want to solve the problem in a shell, wget is your tool of
choice.
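A command putting most of these options together might look like this; the URL, credentials, file pattern and target directory are examples, the options themselves are explained below:

```
wget -r -l 10 -A "*.mp3" -p -nd -np -nv -k -P ./download \
     --http-user=user --http-password=secret \
     --no-check-certificate \
     https://www.example.com/music/
```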
The parameters explained, taken from the wget manual page (some of them might be optional in your case):
-r, --recursive: Turn on recursive retrieving. The default maximum depth is 5.
-l depth, --level=depth: Specify recursion maximum depth level depth.
-A acclist, --accept acclist: Specify comma-separated lists of file name suffixes or patterns to accept. Note that if any of the wildcard characters *, ?, [ or ] appear in an element of acclist, it will be treated as a pattern rather than a suffix. In this case, you have to enclose the pattern in quotes to prevent your shell from expanding it, like in -A "*.mp3" or -A '*.mp3'.
-p, --page-requisites: This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
-nd, --no-directories: Do not create a hierarchy of directories when retrieving recursively.
-np, --no-parent: Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
-nv, --no-verbose: Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.
-k, --convert-links: After the download is complete, convert the links in the document to make them suitable for local viewing.
-P prefix, --directory-prefix=prefix: Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).
--http-user=user, --http-password=password: Specify the username user and password password on an HTTP server. According to the type of the challenge, Wget will encode them using either the "basic" (insecure), the "digest", or the Windows "NTLM" authentication scheme.
--no-check-certificate: Don't check the server certificate against the available certificate authorities. Also don't require the URL host name to match the common name presented by the certificate.
I wanted to find out the total size (including sub folders) of every single sub directory inside one particular folder.
Additionally, I only wanted to see folders that are at least a few megabytes in size. Here are some commands that helped:
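For example; the path and the 100 MB threshold are just examples:

```
# total size of each direct sub directory, human readable, sorted by size
du -h --max-depth=1 /path/to/folder | sort -h

# only show sub directories of at least 100 MB
du -m --max-depth=1 /path/to/folder | awk '$1 >= 100' | sort -n
```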