Jan 09 2017

To run boxbackup on a system which uses systemd, use this little systemd service file (taken from here and modified for Ubuntu):

# /etc/systemd/system/boxclient.service
# This service file runs a Box Backup daemon that performs backups on demand.

[Unit]
Description=Box Backup Client
After=network.target

[Service]
ExecStart=/usr/bin/bbackupd -F -c /etc/boxclient/bbackupd.conf

[Install]
WantedBy=multi-user.target


Reload systemd so it picks up the new unit file, then enable and start it:

systemctl daemon-reload
systemctl enable boxclient.service
systemctl start boxclient.service

 Posted at 17:40
Jun 13 2016
Ansible - Thoughts

One draw of Ansible is the ability to use ready-made roles written by others. Similar to CPAN, Docker Hub and other places where users share their work. Ansible’s solution is Ansible Galaxy, where you find roles which you can (in theory) just use like this:

ansible-galaxy install resmo.ntp -p ~/ansible/roles/
mv ~/ansible/roles/resmo.ntp ~/ansible/roles/ntp
cat <<_EOF_ >ntp-server.yaml
- hosts: ntpserver.lan
  become: yes
  roles:
    - role: ntp
      ntp_config_server: [ntp1.jst.mfeed.ad.jp, ntp2.jst.mfeed.ad.jp]
_EOF_
ansible-playbook -i hosts.ini ntp-server.yaml

What this does is:

  • Install resmo’s ntp role
  • Apply this to the host ntpserver.lan

The first run will install the ntp package, configure ntp.conf and start the ntp daemon. Subsequent runs should ideally do nothing; in practice this particular implementation restarts the ntp daemon regardless.
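Such an unconditional restart usually comes from a task or handler that always fires. The idempotent pattern restarts the daemon only via notify when the configuration actually changed. A minimal sketch of such a role layout (the paths and file contents are illustrative, not resmo.ntp’s actual code):

```shell
# Sketch of an idempotent restart: the handler only fires when
# the template task reports "changed" (illustrative file layout).
mkdir -p /tmp/ntp-role/tasks /tmp/ntp-role/handlers
cat <<'_EOF_' >/tmp/ntp-role/tasks/main.yml
- name: Install ntp configuration
  template:
    src: ntp.conf.j2
    dest: /etc/ntp.conf
  notify: restart ntp
_EOF_
cat <<'_EOF_' >/tmp/ntp-role/handlers/main.yml
- name: restart ntp
  service:
    name: ntp
    state: restarted
_EOF_
```

A role written this way leaves the daemon alone on a second run, which is what makes repeated playbook runs safe.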

The danger here is that roles run as root and can do unexpected things, so reading the roles you find on Ansible Galaxy before using them is critical. It’s also a nice learning exercise in how to organize things.

In the end, making another server an ntp server is now a snap. And once I have some more of those roles, installing a WordPress blog and a MySQL server, including configuring it, is as easy as using Docker containers. Except Docker containers don’t touch an existing system and Ansible (by design) does…well, you can’t have everything I guess.

 Posted at 20:21
Apr 23 2016
Vagrant - First Impressions

When you need a VM, you just build one manually. When you need two, you build two manually. When you need three or more, with changing network topologies, you start to wonder whether there is a better way to do this.

I needed 4 VMs: 1 doing NAT/acting as a router, and 3 for typical cluster stuff like etcd, Cassandra, object storage etc.
So something better than a manual build was needed. OpenStack is a possible solution, but it turned out to be surprisingly complicated and overkill. Vagrant looked better.

Things I learned so far:

  • Using Vagrant for a simple VM “out of the box” (“box” is basically the template) is very simple.
  • You connect to the single VM by “vagrant ssh”
  • Similar to Docker, you can import files (e.g. config files) and run scripts inside the VM
config.vm.provision "shell", inline: <<-SHELL
    date >/tmp/f0
  SHELL
  • Similar to Docker, you can mount local directories into the VM:
 config.vm.synced_folder "./data", "/vagrant"
  • Modifying VM parameter is not difficult:
config.vm.provider "virtualbox" do |vb|
  # Display the VirtualBox GUI when booting the machine
  # vb.gui = true
  # Customize the amount of memory on the VM:
  vb.memory = "512"
  vb.cpus = 1
  vb.linked_clone = true
  vb.customize ["modifyvm", :id, "--vram", "9"]
end
  • To modify an existing box or to make a new one, you can do it manually or via Packer. The manual way is easy:
    • Enter an existing box (easiest way: the console, account vagrant, password vagrant)
    • Do your stuff
    • Outside the box run:
vagrant package --base VMNAME
mv package.box some-more-sensible-name.box
vagrant box add --name some-name some-more-sensible-name.box
  • You can now use “some-name” as box name for new VMs
  • Keep the box files. And the Vagrantfile. That’s all beside replicated/copied/mounted directories.
  • Multiple VMs are easily created and need some extra network configuration, like this:
config.vm.define "web" do |web|
  # web-specific configuration
end

config.vm.define "db" do |db|
  # db-specific configuration
end
  • Since I have 2 bridged interfaces, I have to name the one I want to use. Both VMs promptly acquire IPs via DHCP on my internal network:
config.vm.network "public_network", bridge: "enp3s0"
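Put together, the snippets above form one multi-machine Vagrantfile. Here it is written out via a heredoc so the result can be inspected; the box name "some-name" comes from the packaging step above, everything else is a sketch:

```shell
# Write a multi-machine Vagrantfile combining the snippets above
# ("some-name" is the box created in the packaging step):
cat <<'_EOF_' >/tmp/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "some-name"
  config.vm.synced_folder "./data", "/vagrant"
  config.vm.provision "shell", inline: <<-SHELL
    date >/tmp/f0
  SHELL
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "512"
    vb.cpus = 1
    vb.linked_clone = true
  end
  config.vm.define "web" do |web|
    web.vm.network "public_network", bridge: "enp3s0"
  end
  config.vm.define "db" do |db|
    db.vm.network "public_network", bridge: "enp3s0"
  end
end
_EOF_
```

With this in place, "vagrant up" brings up both VMs, and "vagrant ssh web" or "vagrant ssh db" connects to one of them.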


 Posted at 23:21
Jan 17 2016
Banana Pi in a Case

My Banana Pi (AKA BPI; CPU AllWinner A20 @ 1 GHz, 1 GB RAM, 32 GB SD-Card as mass storage) is not fast, but nice to run 24/7: it barely draws power and thus could run on a battery for a while. When I saw this I thought “That’s a nice idea. Let me do that with my BPI too.” I always wanted to put the 5″ LCD somewhere permanent anyway: the LCD and the BPI were loosely connected by a fragile flat cable, and it was only a question of time before it would break. Unfortunately there is no nice case for both units.

So I searched for a usable case. As flat as possible, as small as possible. With or without space for a battery. I was looking for something made of wood, alas plastic took over the world, so I bought one of those. Not perfect, but the best I could find.

2 problems I had:

  1. How to attach everything inside (BPI and LCD) and what to do about connecting cables. The BPI has connectors on all 4 sides. And the cable to the LCD is not very long.
  2. The case opens completely flat, which would make the screen unreadable and would break the cable. So I had to limit how far the top cover opens; it’s now set at about 100º.

In the end, below you can see how I connected everything.

  • The side of the SD-Card has enough space to remove the card easily.
  • The side with the USB ports and the NIC has enough space to connect normal cables, but many USB sticks are too long.
  • Composite video and audio are basically not usable.
  • Neither are HDMI and SATA.
  • Power connects via a barrel connector (alternatively micro-USB, but I don’t use it).

Future improvements:

  • A longer and/or more flexible LCD cable would be nice
  • Being able to connect cables (power, Ethernet) while the case is closed
  • Run via battery. A small 800mAh 7.2V LiPo should last for 4h
  • A custom made case for BPI, LCD, LiPo and charger circuit


Top of the Banana Pi in a case

Banana Pi in the box, open

Bottom of the Banana Pi in the box

 Posted at 20:23
Jan 11 2016
Zyx, Windows 10 and PL2303 Driver

When using Windows 10, the driver for the PL2303 inside the USB connection cable for the Zyx causes trouble: the Zyx software cannot find the COM port. Windows itself might see the device, or it cannot start it. In all cases the Zyx software cannot find your Zyx.

The tricky part is that Windows finds a default PL2303 driver, but cannot start the device with it. Installing the PL2303 driver from Tarot does not help by itself either, since it’s an older version of the driver and Windows defaults to the newer one. You thus not only have to install the Tarot USB PL2303 driver, but also select it explicitly in the device manager.

 Posted at 17:24
Dec 31 2015
Synology git

Running git on GitHub is great, but since all repositories are public, there’s a certain danger of publishing passwords, API keys or similar on it. See here.

Since I got a Synology NAS, I can install a git server package! It does not do a lot, but it’s enough to get started and have my repositories on the NAS: one central location, and one where making a backup is much easier than on random devices.

Requirements for this to work:

  • Enable ssh access on the NAS
  • Have a home directory with .ssh and .ssh/authorized_keys so you can log in via ssh and without needing to enter a password

Now to set up a git repository:

  • Install the git server Synology package
  • Log in as root on the NAS
  • cd /volumeX (X=1 in my case since I have only one volume)
    mkdir git-repos ; chown YOURACCOUNT:users git-repos
  • Log in as you on the NAS
  • cd git-repos
    mkdir dockerstuff; cd dockerstuff ; git init --bare --shared ; cd ..
  • Repeat for other repositories/directories

Now on a git client do:

  • git clone ds.lan:/volume1/git-repos/dockerstuff
  • put files in there
  • git add * ; git commit -a -m "Initial population" ; git push
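The NAS-side and client-side steps above can be rehearsed entirely on one machine. This sketch stands in a local directory for ds.lan:/volume1; all paths are illustrative:

```shell
# NAS side (simulated locally): create a shared bare repository
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init --bare --shared repos/dockerstuff

# Client side: clone, add a file, push back to the "NAS"
git clone repos/dockerstuff work
cd work
echo "FROM debian" > Dockerfile
git add Dockerfile
git -c user.name=demo -c user.email=demo@example.com commit -m "Initial population"
git push origin HEAD
```

The real setup works the same way, except that the clone URL points at the NAS over ssh, which is why the passwordless ssh login from the requirements matters.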


Jan 29 2014
Cubietruck

Got a Cubietruck as a small (literally) and low power (really) server. The main purpose is to have an off-site backup so I don’t need to keep using CrashPlan. CrashPlan is generally nice, but it has show-stopper-quality problems, and those are not being addressed at an acceptable speed.

The solution is: boxbackup at a friend’s house. In return I host his off-site backup. BaaS we’d call this nowadays: Backup-as-a-Service.


  1. A small Internet connected PC with a disk which should be available 24/7
  2. boxbackup-server running on it
  3. some 500GB disk space being available


Get a Cubieboard 3. Get a 500GB 2.5″ SATA disk. Set up forwarding rules for the Internet-facing router.


The OS is on an SD card, Debian Wheezy in my case. For instructions see here and here. Worked like a charm. If using the serial console (highly recommended), make sure this is in the uEnv.txt file:

extraargs=console=tty0 console=ttyS0,115200 hdmi.audio=EDID:0 disp.screen0_output_mode=EDID:1280x720p50 rootwait panic=10 rootfstype=ext4 rootflags=discard

Note the “console=ttyS0,115200” part: without it, there is no serial console available.

Next steps

  1. Connect 2.5″ disk to the Cubietruck with the supplied cables.
  2. Configure boxbackup-server
  3. Set up network connectivity to the remote backup (port forwarding, tunnel, etc.)
  4. Do a backup


Mar 23 2012
Synology DS212j

Finally got a small NAS. Although it was tempting to get a bigger/faster one with 4 or 5 disk slots and a fast CPU, that’s way overkill for my purposes, so in the end I went for a small DS212j plus a (for now) 3TB disk.

It’s plenty fast (75MB/s read via NFS), the GUI is awesome, the capabilities more than sufficient. It has some kinks though:

  • The OS is on the disk and not in flash memory.
  • If you have 2 different size disks, then if you create a mirrored volume, the rest of the space goes unused instead of being able to use it as a non-mirrored volume.
  • Volumes always take full disks (or what is left after the OS is copied on them)
  • To use NFS, you first need to enable the NFS service, then create shares (which are primarily Samba shares) and enable NFS sharing on each of them.
  • To enable home directories, open the user settings, click on “User Home”, enable it and pick which volume to use.
  • Disk groups names are hardcoded: Disk group 1, Disk group 2 etc.
  • Volume names are hardcoded: Volume 1, Volume 2 etc. They map to mount points called /volume1, /volume2 etc.
  • The media server does not ask for the location of files. It defines it to be on a volume you pick.

If you have only 2 disks, do yourself a favor: get 2 of identical size and use RAID-1 (or their hybrid volumes). Otherwise expect no mirroring whatsoever. With more disks (4 would be a good start), this is much less of a problem.

Mar 05 2012
nVidia Power Management

My Dell Vostro 3700 is all good and nice, but too hot. I usually don’t care about all that performance: CPU performance is nice to have, but the GPU is overkill unless I play games, which I have not done for years.

nVidia’s Linux driver has some hidden power management settings, which until today I could never enable. The nvidia-settings tool always showed high performance (Graphics clock 575MHz, Memory clock 790MHz, Processor Clock 1265MHz). The lowest performance is 135/135/270MHz. Obviously much slower, yet fast enough for pushing windows around the desktop.

Finally this did it (and here some more explanations):

PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x3; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3

This sets the performance to the lowest possible level (0x3 = lowest) for all possible power situations.
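These key=value pairs are not standalone settings; they are typically handed to the driver through the RegistryDwords option in the Device section of /etc/X11/xorg.conf. A sketch (the Identifier must match whatever your existing config already uses):

```
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    Option     "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x3; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3"
EndSection
```

After editing, restart the X server for the change to take effect; nvidia-settings should then show the low clocks.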

The screen is still snappy, and even Google Earth, which is probably the most taxing program I run in terms of graphics, is still perfectly ok to use. The result is about 6°C less (the idle machine runs at 67 instead of 73°C). That allows the CPU to run a bit faster, or more cores to run in parallel, without either the fan spinning faster or the system forcing a shutdown due to overheating.


 Posted at 22:28
Mar 05 2012
Kubuntu 12.04 LTS Beta 1

Kubuntu 12.04 LTS Beta 1 is out and curious as I was, I gave it a try. I had some good reasons:

  • The company’s remote access RDP client behaved funny with my dual monitor setup after messing with the Java runtime environment
  • My Dell Vostro 3700 gets really hot really quick and there’s a known regression in the Linux kernel

The first one caused me to boot into Windows 7 again, which is annoying at the least; the latter causes my CPU to run not at its nominal 1.6GHz, but at 1.2GHz max, without Turbo Boost. Luckily the CPU is still very fast for my purposes, and 4 cores plus hyperthreading, 6GB RAM and a GeForce GT 330M help.

So I tried the update to Kubuntu 12.04 LTS Beta 1, which is well described here and which worked equally well, with one minor problem: Dropbox. While the installer tried to reinstall or re-download the Dropbox Debian package, it just sat there…after 2h I killed it and the install process continued. The fix was as simple as a

aptitude reinstall nautilus-dropbox

and that fixed everything.


Most things look just as before with minor changes, some things are much nicer (Dolphin’s icons now move around in a nice animation when resizing its window), and both issues I had with the previous install are either gone (no issues with the VPN RDP client anymore) or much better (temperature definitely decreased during normal operation).


 Posted at 22:05