Here are a bunch of howtos, tips and advice on doing beginner-to-intermediate things with computers. Though computers never stop sucking...
  • Some reminders for http caching

    Found here:

    No-cache means “do cache”

    Caching has never been easy, but HTTP cache headers can be particularly confusing. The worst examples of this are no-cache and private. What does the below response header do?

    Cache-Control: private, no-cache

    This means “please store this response in all browser caches, but revalidate it when using it”. In fact, this makes responses more cacheable, because this applies even to responses that wouldn’t normally be cacheable by default.

    Specifically, no-cache means that your content is explicitly cacheable, but whenever a browser or CDN wants to use it, it should first send a conditional request using If-None-Match or If-Modified-Since to ask the server whether the cached copy is still up to date. Meanwhile, private means that this content is cacheable, but only in end-client browsers, not in CDNs or proxies.
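
    A revalidation round trip then looks roughly like this (the path and ETag value are invented for illustration):

```
GET /profile HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: private, no-cache
```

    If the content has changed, the server instead answers 200 with a fresh body and a new ETag.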

    If you were trying to disable caching because the response contains security- or privacy-sensitive data that shouldn’t be stored elsewhere, you’re now in big trouble. In reality, you probably wanted no-store.

    If you send a response including a Cache-Control: no-store header, nobody will ever cache the response, and it’ll come fresh from the server every time. The only edge case is a client that already holds a previously cached response: no-store alone won’t remove that. If you want to clear existing caches too, add max-age=0.
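
    For sensitive data that must never be persisted anywhere, the combination you want is therefore:

```
Cache-Control: no-store, max-age=0
```

    no-store prevents new caching, and max-age=0 expires any copy a client may already hold.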

    Twitter notably hit this issue. They used Pragma: no-cache (a legacy version of the same header) when they should have used Cache-Control: no-store, and accidentally persisted every user’s private direct messages in their browser caches. That’s not a big problem on your own computer, but if you share a computer or you use Twitter on a public computer somewhere, you’ve now left all your private messages conveniently unencrypted & readable on the hard drive. Oops.

  • SpinRite 6 on external Toshiba usb disk

    After 827 days of running time my RaspiBlitz BTC Lightning node refused to mount its external HDD (Toshiba HDTB410EK3AA Canvio Basics, USB 3.0, 1 TB). SMART errors of the weirdest kind. I remembered Gibson’s spammy advertisements during the Security Now! podcast, praising SpinRite for recovery. As there was no physical damage or interaction that would have caused this, I gave it a try.

    After I bought the license, I downloaded the exe, which posed the first problem: how to run it on Linux? I keep a Windows 7 laptop for such cases, so I executed the program and tried all the different options to create a bootable USB stick, finally succeeding by writing out the diskette image spinrite.img to hard disk, then dd-ing it onto a USB flash drive:

    dd if=/path/to/SpinRite.img conv=notrunc of=/dev/<your usb device, e.g. sda>

    After rebooting the same laptop with the external USB disk attached, SpinRite started right away, and luckily for me the drive was instantly recognized; no need for driver voodoo on the included FreeDOS distribution - that had been my biggest concern. The fact that the external disk is not a casing with some exotic USB controller, but a disk with an integrated USB port, probably helped a lot. A small downer was that SMART data was unavailable to SpinRite - I don’t have a theory about that.

    The first run failed with a program abort:

    This is ongoing.

  • Wireguard scenario workstation -> vpn gateway -> private network

    I’ve moved a rather hacky tinc mesh VPN setup to WireGuard, all set up through an Ansible playbook. The topology is rather classic:

    My workstation (a laptop with a changing network situation) connects as a ‘client’ to two WireGuard ‘servers’ acting as VPN gateways. These are publicly accessible bastion hosts that are also members of a private subnet to which they are supposed to give access. The specific nodes are cloud instances on Hetzner Cloud and Vultr.

    Hetzner recently started to provide private interfaces for their cloud instances; currently the private addresses seem to be assigned randomly when using the CLI tool, but they can also be specified via the web interface. Vultr has offered that service for longer; however, there the private IP cannot be specified and is assigned at random.

    The terms ‘client’ and ‘server’ used above are a bit anachronistic, as WireGuard makes no such distinction. The ‘servers’ merely don’t get endpoints for their peers in their interface configuration, since they never initiate connections.

    Generally, when running a Linux VPN gateway that connects two interfaces in different subnets (here wg0 is the WireGuard interface, and ens10 the interface to the cloud provider’s virtual router and a self-configured private subnet), one only needs to set /proc/sys/net/ipv4/ip_forward to 1 and /proc/sys/net/ipv6/conf/all/forwarding to 1 and be done with it. The nodes in the private subnet may additionally need some way of receiving the route back to that VPN gateway, via a routing protocol or static routes.
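
    The two /proc switches above reset on reboot; to make them persistent, drop them into sysctl configuration (the file name is my choice, any name under /etc/sysctl.d works):

```
# /etc/sysctl.d/99-wireguard-forward.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

    Apply without a reboot via sudo sysctl --system.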

    I was not able to set this up on either Hetzner or Vultr, and instead had to set up NAT on the gateway via iptables, as advised in this tutorial (by the way, a good reference on how to set up WireGuard):
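
    As a sketch, the NAT rules can be tied to the interface lifecycle with wg-quick’s PostUp/PostDown hooks. The interface names wg0/ens10 come from the setup above; the address, port, and subnet are placeholders, and keys and peers are omitted:

```ini
# /etc/wireguard/wg0.conf (fragment, sketch only)
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
# PrivateKey = ...
PostUp   = iptables -t nat -A POSTROUTING -o ens10 -j MASQUERADE; iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o ens10 -j MASQUERADE; iptables -D FORWARD -i wg0 -j ACCEPT; iptables -D FORWARD -o wg0 -j ACCEPT
```

    This way the rules appear and disappear together with the tunnel instead of lingering in the firewall.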

    My theory is that the cloud providers’ virtual routers filter this kind of traffic: I can see the packets passing through both the WireGuard interface and the private subnet interface on the VPN gateway, but not at the final node’s interface. But I could be entirely wrong.

    Updated: And here’s a quick follow-up on the WireGuard topic:

  • Install and monitor skypool's Nimiq client via ansible playbook, systemd and ruby & cron


    This is a short entry to document installation and monitoring of the skypool nimiq client. The Nimiq network is a decentralized payment network that runs in the browser and is installation-free.

    Personally, I believe the Litecoin and Ethereum projects have so far been able to generate a strong economy around themselves; still, projects like Nimiq definitely convince me with their approach to usability and simplicity for the user.


    I am using Ubuntu 16.04 as the base operating system.

    The playbook does the following things:

    1. Install the necessary dependencies: ruby-dev for Ruby 2.3, the Ruby gem package manager, and unzip to handle the release file from GitHub
    2. Create a specific user nimiq and a program directory /opt/nimiq
    3. Download and unpack the release file from github under a version-specific directory below the program directory
    4. Create skypool client configuration file according to your demands and with your wallet address
    5. Create a systemd unit file, start the skypool client as a service and enable restart on reboot
    6. Create a status checker that uses the skypool api to check the worker’s online/offline status
    7. Create a crontab entry for the root user to run the status checker every ten minutes
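
    Step 5’s unit file could look roughly like this. The binary path, name, and flags are placeholders of mine, not the actual skypool client invocation:

```ini
# /etc/systemd/system/nimiq.service - hypothetical sketch
[Unit]
Description=Skypool Nimiq client
After=network-online.target

[Service]
User=nimiq
WorkingDirectory=/opt/nimiq
ExecStart=/opt/nimiq/current/skypool-client --config /opt/nimiq/config.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    It would be enabled and started with systemctl enable --now nimiq.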



    The cron entry running every 10 minutes is a trade-off against how brittle the online/offline check delay currently feels to me on the skypool site. Presumably skypool does not have a real heartbeat check towards the worker, but assumes the worker is online when it receives results from it, and assumes it is offline when it does not (most pools in the cryptocurrency world work like that). In terms of the perfect interval between checks, your mileage may vary.
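
    The ten-minute schedule translates to a crontab line like this (the script path and log file are placeholders of mine):

```
# root's crontab (edit with: crontab -e as root)
*/10 * * * * /opt/nimiq/check_status.rb >> /var/log/nimiq_status.log 2>&1
```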


    The service currently runs under the user nimiq, i.e. a non-privileged system user. However, the systemd instance used is root’s, so only the root user can restart the nimiq service. For this reason, the cron entry is registered for the root user. If you want the nimiq user to be able to restart the nimiq service, you have to run a systemd user instance for the nimiq user. I have successfully done that for another service playbook, and I might add this information in the future if demand is voiced.


    Find below the full gist as published on GitHub. Full gist here.

  • Installing Ubuntu per minimal image to PC engines APU2

    This is the company: PCengines

    This is the device: APU2

    nullmodem setup

    using putty

    Check which COM port is in use; mine was set to ‘COM4’

    Get a USB-to-serial converter and install its drivers. Some of these converters reportedly have timing problems, but I did not encounter that.

    I once tried the lowest baud rate, 9600, and that produced some nice screen carnival, but nothing really legible.
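
    For reference, the APU2’s serial console runs at 115200 baud by default, so the PuTTY serial settings would be (the COM port is whatever your converter got assigned):

```
Serial line:  COM4
Speed (baud): 115200
Data bits:    8
Stop bits:    1
Parity:       None
Flow control: None
```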

    prepping usb stick

    Download the USB prep tool ‘TinyCore USB Installer’ and run it against your USB stick. I used an 8 GB stick; make sure it’s not the slowest.

    To try it out, you can now boot into TINYCORE: put the stick into the APU2’s USB port and boot up with the serial null modem cable connected and the PuTTY session open. A finished boot is indicated by an audible beep. This is a good way to verify the serial connection, which you should have established in parallel.

    If you want to keep the option of booting into TINYCORE, back up syslinux.cfg from the USB stick’s root directory, as it will be overwritten by the package content we are downloading next.

    Download the special Ubuntu package from PC Engines, unpack it, and move the three files into the USB stick’s root folder.

    Now plug the stick into the APU2 and boot with the serial null modem cable connected and the PuTTY session open. You will see the setup menu, similar to this screenshot:

    View Installation Setup Wizard

    The terminal setup process seems daunting at first, but it is really analogous to the graphical Ubuntu installer. I found my way around by basically following the Easy Path(tm), accepting most of the installer’s suggestions and going step by step through the menu. In some of the submenus I made educated changes, since I knew a few more details and had a good idea where I wanted to go with this system, but this might not apply to you.

    The one exception was the network configuration. The automatic network detection seems to have gotten the DHCP info, but when I dropped into the BusyBox ash shell environment (the menu option Execute a shell in the main hierarchy at the beginning of the installation process), I had to run dhclient on the interface again manually. Checking via ip addr, I could then verify the applied values and ping any public server. With exit I dropped back into the installation menu. On a later second setup run, this problem did not occur again.

    I chose no automatic updates, as I could see the cron job using quite some resources. I’d rather schedule that manually for this particular system at the moment - part of my minimum-running-services policy for this instance.

    I followed some tip regarding the bootloader installation, and it apparently solved my earlier problem of an unfinished installation. I lost the link, but it boiled down to manually entering the first partition of the setup target (a PCIe flash device in my case), i.e. /dev/sdb1 as opposed to /dev/sdb. Again, this might be different for you.

    Once that was done, and with a bit more patience, I rebooted, and eventually a login via SSH could be established. I then halted the machine, physically unplugged the USB key and the console, and replugged the power.

    After about 45 seconds ping answered, and after that SSH came back online.

  • primecoind and/or primeminer (beeeeerpool ed.) on a digital ocean droplet

    First of all, make sure you have swap on (see previous post)

    The Ubuntu droplet needed these packages: libdb++-dev libssl-dev libboost1.48-all-dev libgmp-dev

    The versioning is sometimes tricky, depending on the distribution version and other factors. I previously had success with version 1.46 of the Boost package, so you might want to try different ones. The distro default is what you get with libboost-all-dev.

    assuming root here

    Get the source from github

    git clone


    cd ~/primecoin/src
    make -f makefile.unix

    This is how I execute the miner. As you can see, I use nohup and & to daemonize it and prevent signals when closing the shell. All output goes to primeminer_stdout for possible later reference. If you want to keep all output across restarts, consider using >> in order to append to the end of the file.

    This is one line; replace the $ variables to your liking:

    nohup ./primeminer
    -genproclimit=$procs > primeminer_stdout &

    More details here

  • make a swap file on the fly

    Disclaimer: this should only be used if you can’t partition your drive yourself, or it would be a hassle to do so. I’ve used this method to make one compile process work; otherwise I don’t really need it.

    1. Check for present swap space

      If there’s any output, you might consider another solution.

      sudo swapon -s

    2. Create the actual file

      bs times count equals the resulting file size - here 1024 bytes × 1024k blocks = 1 GiB (note: the sample outputs further down were taken from a smaller, 256 MB swap file)

      sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k

    3. Alternative: on filesystems that support it, the file can be created much faster with fallocate

      sudo fallocate -l 1G /swapfile

    4. Create a linux swap area

      sudo mkswap /swapfile

      Output looks something like this:

      Setting up swapspace version 1, size = 262140 KiB
      no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb


    5. Activate the swap file

      sudo swapon /swapfile

      Now `swapon -s` should show something like this

      Filename   Type  Size    Used  Priority
      /swapfile  file  262140  0     -1

    6. Make it persistent in the /etc/fstab with the following entry

      /swapfile none swap sw 0 0

    7. Set swappiness to 0, otherwise performance will be poor; that way the swap file is just an emergency buffer

      sudo sh -c 'echo 0 > /proc/sys/vm/swappiness'

    8. make swappiness persistent

      sudo sh -c "echo 'vm.swappiness=0' >> /etc/sysctl.conf"

    9. A bit of good practice, since this is root’s business

      sudo chown root:root /swapfile
      sudo chmod 0600 /swapfile
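
    Instead of appending to /etc/sysctl.conf in step 8, a separate drop-in file keeps the change tidy (the file name is my choice, any name under /etc/sysctl.d works):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 0
```

    It is loaded on boot, or immediately with sudo sysctl --system.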