whysthatso

These are snippet-style posts about stuff I came across.

Redirect

Posted on January 22, 2023  //  ruby on rails

One way to redirect inside the Rails router based on the client’s Accept-Language header.

Previously I thought I had to do this inside the proxying webserver Nginx, which is only really possible with the Lua-enhanced fork OpenResty or with a self-compiled Nginx.

Or to go into the Rack middleware world and figure out how to do it there - it’s probably still the fastest and cleanest place to do it.

There are more options further up the stack: the routes file, and of course the application controller.

I went for the routes file and added this directive:

root to: redirect { |params, request|
  "/#{best_locale_from_request!(request)}"
}, status: 302, as: :redirected_root

The curly-brace syntax is obligatory; a do/end block does not work.

The actual work is done with the help of the accept_language gem and these two methods, split up for easier reading, I presume:

def best_locale_from_request(request)
	return I18n.default_locale unless request.headers.key?("HTTP_ACCEPT_LANGUAGE")
	string = request.headers.fetch("HTTP_ACCEPT_LANGUAGE")
	locale = AcceptLanguage.parse(string).match(*I18n.available_locales)

	# If the server cannot serve any matching language,
	# it can theoretically send back a 406 (Not Acceptable) error code.
	# But, for a better user experience, this is rarely done and the more
	# common way is to ignore the Accept-Language header in this case.

	return I18n.default_locale if locale.nil?
	locale
end
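
The routes directive above calls the bang variant, which did not make it into this post; presumably it is just a thin wrapper that also sets I18n.locale, something along these lines (my reconstruction, not copied from the gem):

def best_locale_from_request!(request)
	I18n.locale = best_locale_from_request(request)
end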

I’ve put them both into the routes file, but there might be a better place for that.

The available locales array grew a bit, in order to cover edge cases:

# config/application.rb

config.i18n.available_locales = [:en, :"en-150", :"en-001", :"en-DE", :de, :"de-AT", :"de-CH", :"de-DE", :"de-BE", :"de-IT", :"de-LI", :"de-LU", :et, :"et-EE"]

Turns out the gem always forwards the region part as well, so to make sure nobody is left out I have added these for now. This might become tricky later on, as paths are created based on the locale, and the language switcher gets a bit more complicated too. Maybe it makes sense to cut the region part off somehow, as sketched below.
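
Something like this might do for the trimming, should it become necessary (an untested sketch):

# reduce e.g. :"de-AT" to "de" before building the path
best_locale_from_request(request).to_s.split("-").first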

Resources

Accept-Language gem: https://github.com/cyril/accept_language.rb

A Rack app I did not get to work, but which apparently handles the i18n settings as well: https://github.com/blindsidenetworks/i18n-language-mapping

This was very helpful for the redirect syntax: https://www.paweldabrowski.com/articles/rails-routes-less-known-features

Moving lvm-thin volumes on Proxmox between VMs or CTs

Posted on November 5, 2021

Following this official howto

lvs shows you all volumes in their volume group (in my case ‘ssd’)

LV               VG  Attr       LSize    Pool        Data%  Meta%
data             pve twi-a-tz-- 32.12g               0.00   1.58
root             pve -wi-ao---- 16.75g
swap             pve -wi-ao---- 8.00g
guests           ssd twi-aotz-- <2.33t               74.93  45.51
vm-100-disk-0    ssd Vwi-a-tz-- 12.00g guests        72.69
vm-101-disk-0    ssd Vwi-a-tz-- 12.00g guests        85.22
vm-101-disk-1    ssd Vwi-a-tz-- 50.00g guests        99.95
vm-102-disk-0    ssd Vwi-a-tz-- 12.00g guests        97.57
vm-102-disk-1    ssd Vwi-a-tz-- 50.00g guests        64.54
vm-103-disk-0    ssd Vwi-a-tz-- 12.00g guests        74.37
vm-103-disk-1    ssd Vwi-a-tz-- 150.00g guests        52.42
vm-104-disk-0    ssd Vwi-a-tz-- 12.00g guests        90.74
vm-104-disk-1    ssd Vwi-a-tz-- 10.00g guests        95.27
vm-105-disk-0    ssd Vwi-a-tz-- 12.00g guests        55.79
vm-105-disk-1    ssd Vwi-a-tz-- 10.00g guests        32.89
vm-106-disk-0    ssd Vwi-a-tz-- 12.00g guests        77.78
vm-106-disk-1    ssd Vwi-a-tz-- 10.00g guests        99.82
vm-107-disk-0    ssd Vwi-a-tz-- 32.00g guests        0.00
vm-107-disk-1    ssd Vwi-a-tz-- 500.00g guests        95.41
vm-108-disk-0    ssd Vwi-aotz-- 8.00g guests        43.73
vm-109-disk-0    ssd Vwi-a-tz-- 12.00g guests        52.41
vm-109-disk-1    ssd Vwi-a-tz-- 50.00g guests        2.22
vm-110-disk-0    ssd Vwi-a-tz-- 12.00g guests        51.14
vm-110-disk-1    ssd Vwi-a-tz-- 50.00g guests        2.22
vm-111-disk-0    ssd Vwi-a-tz-- 12.00g guests        84.85
vm-111-disk-1    ssd Vwi-a-tz-- 100.00g guests        16.97
vm-112-disk-0    ssd Vwi-a-tz-- 8.00g guests        13.53
vm-113-disk-0    ssd Vwi-a-tz-- 8.00g guests        11.55
vm-114-disk-0    ssd Vwi-a-tz-- 16.00g guests        84.31
vm-115-disk-0    ssd Vwi-a-tz-- 16.00g guests        97.12
vm-116-disk-0    ssd Vwi-a-tz-- 8.00g guests        31.49
vm-117-cloudinit ssd Vwi-aotz-- 4.00m guests        50.00
vm-117-disk-0    ssd Vwi-aotz-- 10.00g guests        39.71
vm-117-disk-1    ssd Vwi-aotz-- 1000.00g guests        97.47

If the ID of the new CT or VM is not equal to the ID the volume was previously attached to, rename the volume, e.g.

lvrename ssd/vm-101-disk-1 ssd/vm-117-disk-2

This makes vm-101-disk-1 available as vm-117-disk-2; you have to increase the counter at the end of the name.

Then edit the config of the actual VM.

Move the line that describes the volume from the old /etc/pve/qemu-server/<vm id>.conf to the new <vm id>.conf.

The tricky bit was to run qm rescan afterwards, which fixed up the syntax and made the volume appear in the web GUI, where I could finally attach it to the new VM.
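
Condensed, the whole move looks roughly like this (using the IDs from my example; treat it as a sketch):

# rename the volume for the new VM id
lvrename ssd/vm-101-disk-1 ssd/vm-117-disk-2
# move the volume's line from /etc/pve/qemu-server/101.conf to 117.conf, then:
qm rescan    # fixes up the config and makes the volume appear in the web GUI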

WakeOnLan, Archlinux, systemd-networkd, Asus Pro WS X570-ACE

Posted on June 3, 2021  //  networking arch linux

The board has two integrated Ethernet adapters; here’s the lshw data:

sudo lshw -c network
  *-network
       description: Ethernet interface
       product: I211 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:05:00.0
       logical name: enp5s0
       version: 03
       serial: 24:4b:fe:<redacted>
       size: 1Gbit/s
       capacity: 1Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.12.8-zen1-1-zen duplex=full firmware=0. 6-1 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:61 memory:fc900000-fc91ffff ioport:e000(size=32) memory:fc920000-fc923fff
  *-network
       description: Ethernet interface
       product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
       vendor: Realtek Semiconductor Co., Ltd.
       physical id: 0.1
       bus info: pci@0000:06:00.1
       logical name: enp6s0f1
       version: 1a
       serial: 24:4b:fe:<redacted>
       size: 1Gbit/s
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=5.12.8-zen1-1-zen duplex=full firmware=rtl8168fp-3_0.0.1 11/16/19 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:24 ioport:d800(size=256) memory:fc814000-fc814fff memory:fc808000-fc80bfff

It seems that the UEFI entry to activate Wake-on-LAN for PCIe devices only affects the Intel port. I have persistently activated WOL for the Realtek port by adding a .link file at /etc/systemd/network/foobar.link:

[Match]
MACAddress=<redacted>

[Link]
WakeOnLan=magic
# the lines below are cloned from the original entry in
# /usr/lib/systemd/network/99-default.link,
# the default link file for all adapters, which this file overrides for the matched adapter
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path
MACAddressPolicy=persistent
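
To check whether it took effect, ethtool can report the WOL state (assuming ethtool is installed):

sudo ethtool enp6s0f1 | grep Wake-on
# "Wake-on: g" means wake on magic packet is active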

The Arch wiki shows a couple of alternative ways, but this seems the most straightforward to me.

Upgrade PostgreSQL from 11 upwards

Posted on April 19, 2021  //  postgresql ubuntu

On Ubuntu 18.04

Be wary of multiple installations (11, 12, 13), as pg_upgradecluster for example will always go for the highest version.

  1. Copied the configuration files for the new version
cp -R /etc/postgresql/11 /etc/postgresql/12
  2. Initialized the new version database
/usr/lib/postgresql/12/bin/initdb -D /srv/postgres/12/main
  3. Stopped the current server and killed all connections
/usr/lib/postgresql/11/bin/pg_ctl -D /srv/postgres/11/main/ -mf stop
  4. Ran a check-only upgrade with linked files
time /usr/lib/postgresql/12/bin/pg_upgrade --old-bindir /usr/lib/postgresql/11/bin/ --new-bindir /usr/lib/postgresql/12/bin/ --old-datadir /srv/postgres/11/main/ --new-datadir /srv/postgres/12/main/ --link --check
  5. Had to fix diverse configuration file problems that become obvious when running
/usr/lib/postgresql/11/bin/pg_ctl -w \
-l pg_upgrade_server.log \
-D /srv/postgres/11/main \
-o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/postgresql'" start
cat pg_upgrade_server.log

Those were mostly faulty references to configuration files, or having to explicitly state the non-standard data directory location.
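
Once the check run comes back clean, the actual upgrade should be the same command without --check, run as the postgres user:

time /usr/lib/postgresql/12/bin/pg_upgrade --old-bindir /usr/lib/postgresql/11/bin/ --new-bindir /usr/lib/postgresql/12/bin/ --old-datadir /srv/postgres/11/main/ --new-datadir /srv/postgres/12/main/ --link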

Lastly, the systemd related things:

systemctl disable postgresql@11-main
systemctl enable postgresql@12-main

Some reminders for http caching

Posted on March 27, 2021  //  networking

Found here: https://httptoolkit.tech/blog/http-wtf/

No-cache means “do cache”

Caching has never been easy, but HTTP cache headers can be particularly confusing. The worst examples of this are no-cache and private. What does the below response header do?

Cache-Control: private, no-cache

This means “please store this response in all browser caches, but revalidate it when using it”. In fact, this makes responses more cacheable, because this applies even to responses that wouldn’t normally be cacheable by default.

Specifically, no-cache means that your content is explicitly cacheable, but whenever a browser or CDN wants to use it, they should send a request using If-None-Match or If-Modified-Since to ask the server whether the cache is still up to date first. Meanwhile private means that this content is cacheable, but only in end-client browsers, not CDNs or proxies.

If you were trying to disable caching because the response contains security or privacy sensitive data that shouldn’t be stored elsewhere, you’re now in big trouble. In reality, you probably wanted no-store.

If you send a response including a Cache-Control: no-store header, nobody will ever cache the response, and it’ll come fresh from the server every time. The only edge case is if you send that when a client already has a cached response, which this won’t remove. If you want to do that and clear existing caches too, add max-age=0.
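
So for a response with sensitive data, the headers you probably want look something like this:

Cache-Control: no-store, max-age=0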

Twitter notably hit this issue. They used Pragma: no-cache (a legacy version of the same header) when they should have used Cache-Control: no-store, and accidentally persisted every user’s private direct messages in their browser caches. That’s not a big problem on your own computer, but if you share a computer or you use Twitter on a public computer somewhere, you’ve now left all your private messages conveniently unencrypted & readable on the hard drive. Oops.

SpinRite 6 on external Toshiba usb disk

Posted on February 21, 2021  //  howto hardware

After 827 days of running time, my RaspiBlitz BTC Lightning node refused to mount its external hard drive (Toshiba HDTB410EK3AA Canvio Basics, USB 3.0, 1TB): SMART errors of the weirdest kind. I remembered Gibson’s spammy advertisements during the Security Now! podcast, praising SpinRite for recovery. As there was no physical damage or incident that would have caused this, I gave it a try.

After I bought the license, I downloaded the exe, which caused the first problem: how to run it on Linux? I have a Windows 7 laptop for such cases, so I executed the program and tried all the different options to create a bootable USB, finally succeeding by writing out the diskette image spinrite.img to the hard disk and then dd-ing it onto a USB flash drive:

dd if=/path/to/SpinRite.img conv=notrunc of=/dev/<your usb device, i.e. sda>

After rebooting the same laptop with the external USB disk attached, SpinRite started right away, and luckily for me the drive was instantly recognized; no need for driver voodoo on the included FreeDOS distribution - that was my biggest concern. Probably the fact that the external disk is not a casing with some exotic USB controller, but a disk with an integrated USB port, helped a lot. A small downer was the unavailability of SMART data for SpinRite - I don’t have a theory about that.

The first run failed with a program abort. This is ongoing.

run openvpn in client mode automatically after linux boot

Posted on January 15, 2021  //  Updated on February 21, 2021  //  networking vpn

Context: a remote Raspberry Pi Model B rev 1, all set up with Raspberry Pi OS / Raspbian.

The hardware specs are nothing much, but the machine is reliable, even though apparently half the RAM chips are dead.

  1. Install openvpn from the distro’s repository
  2. Take the config file from the server you want to connect to - in my case an ovpn file generated by pivpn - and put it into the config folder /etc/openvpn/.
  3. If your VPN profile is password protected, add a text file with the cleartext passphrase and reference it in the profile like so:
    askpass /etc/openvpn/passwordfilename
    
  4. Make sure openvpn.service is started and enabled:
    systemctl enable openvpn && systemctl restart openvpn
    

ip a should show you the tunnel interface already.

NB: for the routing, make sure that your router has a static route that sends all traffic destined for the VPN subnet to the VPN server, but that really depends on your own network topology.
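
For illustration, such a static route would look roughly like this on a Linux-based router (the 10.8.0.0/24 tunnel subnet and the 192.168.1.10 server address are made up; check your pivpn config for the real subnet):

# send traffic for the VPN subnet to the machine running the OpenVPN server
ip route add 10.8.0.0/24 via 192.168.1.10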

update gnubee debian jessie to buster, to bullseye

Posted on December 28, 2019  //  Updated on November 2, 2023  //  debian gnubee

Upgrade to stretch (Debian 9) and then buster (Debian 10)

To upgrade gnubee to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Then to bullseye (Debian 11)

  1. Make sure the system is fully up to date
apt update
apt full-upgrade
apt autoremove
reboot
  2. Edit /etc/apt/sources.list
  • replace each instance of buster with bullseye
  • find the security line, replace buster/updates with bullseye-security
  • this is an example:
deb http://security.debian.org/ bullseye-security main contrib non-free
deb http://httpredir.debian.org/debian bullseye main contrib non-free
deb http://httpredir.debian.org/debian bullseye-updates main contrib non-free
  3. Again upgrade the system
apt update
apt full-upgrade
apt autoremove
reboot
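
A quick sanity check after the final reboot:

cat /etc/debian_version    # should now read 11.x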

instant domain name for ipv6 device

Posted on December 22, 2019  //  Updated on November 2, 2023  //  networking

You can use IPv6address.has-a.name as a domain name for any of your computers, containers or VMs. The required format is 1234-5678-9abc-def0-1234-5678-9abc-def0.has-a.name. This is already a valid name and points to the IPv6 address 1234:5678:9abc:def0:1234:5678:9abc:def0. Alternatively you can also use the domain has-aaaa.name, which hints more strongly at IPv6.

Both domains support the usual IPv6 abbreviation using a double dash, so you can for instance use 2a0a-e5c0--3.has-aaaa.name.
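
A quick way to check what such a name resolves to (assuming dig is available):

dig AAAA 2a0a-e5c0--3.has-aaaa.name +short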

Configure Ubuntu 18.04 with grub2 to activate serial console

Posted on December 16, 2019  //  Updated on November 2, 2023  //  ubuntu tty

Edit the file /etc/default/grub

  1. Change the GRUB terminal to console and ttyS0. This presents GRUB both on the monitor and on the serial console.
  2. Change the Linux kernel console to tty1 and ttyS0. This setting carries over to userland, and there will be two login prompts, one on tty1 and one on ttyS0.
GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
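
After editing /etc/default/grub the configuration has to be regenerated for the change to take effect, on Ubuntu typically with:

sudo update-grub
sudo reboot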

Wireguard scenario workstation -> vpn gateway -> private network

Posted on July 25, 2019  //  howto vpn wireguard

I’ve moved a rather hacky tinc mesh VPN solution to Wireguard, all set up through an Ansible playbook. The topology is rather classic:

My workstation (a laptop, with a changing network situation) connects as a ‘client’ to two Wireguard ‘servers’ acting as VPN gateways. These are publicly accessible bastion hosts which are also members of a private subnet to which they ought to give access. The specific nodes are cloud instances at Hetzner and at Vultr, one each.

Hetzner recently started to provide private interfaces for their cloud instances; currently the private addresses seem to be assigned randomly when using the CLI tool, but they can also be specified via the website interface. Vultr has offered that service for longer, but there the private IP cannot be specified and is assigned at random.

The terms ‘client’ and ‘server’ used above are a bit anachronistic, as Wireguard does not make such a distinction. The ‘servers’ merely do not get endpoints for their peers in their interface configuration, as they do not initiate connections.

Generally, when running a Linux VPN gateway that connects two interfaces in different subnets (here wg0 is the Wireguard interface, ens10 the interface to the cloud provider’s virtual router and a self-configured private subnet), one only needs to set /proc/sys/net/ipv4/ip_forward to 1 and /proc/sys/net/ipv6/conf/all/forwarding to 1 and be done with it. The nodes in the private subnet possibly need some way of receiving the route back to that VPN gateway, via some routing protocol or static routes.

I was not able to get this working on either Hetzner or Vultr, and instead had to set up NAT on the gateway via iptables, as advised in this tutorial, which is by the way a good reference on how to set up Wireguard: https://angristan.xyz/how-to-setup-vpn-server-wireguard-nat-ipv6/
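
For reference, the forwarding and NAT part on the gateway boils down to something like this (a sketch: wg0/ens10 are the interfaces from above, the 10.66.66.0/24 Wireguard subnet is made up):

# allow forwarding between wg0 and ens10
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
# masquerade traffic coming from the wireguard subnet when it leaves towards the private network
iptables -t nat -A POSTROUTING -s 10.66.66.0/24 -o ens10 -j MASQUERADE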

My theory is that the virtual routers of the cloud providers are filtering this kind of traffic, as I can see the packets running through both the Wireguard interface and the private subnet interface on the VPN gateway, but cannot see them at the final node’s interface. But I could be entirely wrong.

Update: and here’s a quick follow-up on the Wireguard topic:

https://grh.am/2018/wireguard-setup-guide-for-ios/

subtle changes in key format of key pairs generated with `ssh-keygen` on linux

Posted on July 11, 2019  //  Updated on July 25, 2019  //  cryptography cli

I just came across an unexpected SSH key subtlety you might have to consider while creating a Drone CI deployment pipeline using Drone’s Ansible plugin.

Part of the pipeline deploys code to a remote host via SSH. I generated a new key pair with ssh-keygen, which created a key in the new OpenSSH format, starting with:

-----BEGIN OPENSSH PRIVATE KEY-----

Apparently Ansible does not like this format and errored out on the “Gathering facts” step with the message “Invalid key”. Googling that was not very successful, and I could not find that particular message in the Ansible source, until I eventually found an unrelated closed issue on GitHub which pointed me towards possible problems with key formats.

Eventually I generated a new key pair like so: ssh-keygen -m PEM, with the -m option setting the key format. The key then had the starting line

-----BEGIN RSA PRIVATE KEY-----

As far as I understand, both keys are actually RSA keys; the latter is in the traditional PEM format, whereas the former uses a newer OpenSSH-specific format I was not previously aware of.

Earlier runs of ssh-keygen did produce keys in the PEM format; I am running Arch Linux with OpenSSH_8.0p1 and OpenSSL 1.1.1c (28 May 2019), so one of the rolling updates to my system probably brought along this unexpected change.
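
If you already have a key in the new format, ssh-keygen can reportedly also rewrite it in place (it asks for the passphrase; back the key up first):

ssh-keygen -p -m PEM -f ~/.ssh/id_rsa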

Hope that helps somebody.

Compile Go on MIPS/MIPS32

Posted on February 27, 2019  //  Updated on July 25, 2019  //  mips cli gnubee

I’ve been trying to compile Go programs on the gnubee, which runs on the MIPS architecture.

Found this on github:

I have successfully cross-compiled a Go program into a mips32 binary with the commands below.

GOARCH=mips32 is for ar71xx, change to GOARCH=mips32le if it is ramips.

cd
git clone https://github.com/gomini/go-mips32.git
cd go-mips32/src
export GOOS=linux
export GOARCH=mips32
sudo mkdir /opt/mipsgo
./make.bash
cd ..
sudo cp -R * /opt/mipsgo
export GOROOT=/opt/mipsgo
export PATH=/opt/mipsgo/bin:$PATH
vi helloworld.go
go build helloworld.go

Source: https://github.com/bettermanbao
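
For what it’s worth, current upstream Go ships MIPS ports out of the box, so a plain cross-compile along these lines might be enough nowadays (untested on the gnubee; mipsle and soft-float assumed for its MT7621 SoC):

GOOS=linux GOARCH=mipsle GOMIPS=softfloat go build helloworld.go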

chroot and serial console to fix ubuntu distro upgrade gone wrong

Posted on September 20, 2018  //  Updated on July 25, 2019  //  tty apu2 ubuntu

I had to fix a do-release-upgrade from 16.04 to 18.04 that broke due to a severed SSH connection with no screen session running (apparently earlier distro upgrades used screen to prevent this kind of problem).

The machine is a PCengines apu2, so no video. Also, the root file system is sitting on a miniPCI SSD.

Eventually, my laptop and this chroot cheatsheet helped: https://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install

  1. Mount the root filesystem device

     mount -t ext4 /dev/<device> /mnt/
    
  2. If there’s a different boot partition or anything else

     mount -t ext2 /dev/<device> /mnt/boot
    
  3. Mount special devices

     mount -t proc none /mnt/proc
     mount -o bind /dev /mnt/dev
     mount -o bind /sys /mnt/sys
    
  4. chroot

     chroot /mnt /bin/bash
     source /etc/profile
    
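When done, leaving the chroot and unmounting in reverse order looks roughly like this:

exit                               # leave the chroot
umount /mnt/sys /mnt/dev /mnt/proc
umount /mnt/boot                   # only if it was mounted
umount /mnt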

In order to help troubleshooting in the future, I followed this advice to get a systemd service unit for a permanent shell on the serial port (mine for some reason runs on ttyS0): http://0pointer.de/blog/projects/serial-console.html

systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service

It won’t help if systemd does not start, but otherwise it is online really early.

Install and monitor skypool's Nimiq client via ansible playbook, systemd and ruby & cron

Posted on April 27, 2018  //  howto crypto currencies

INTRODUCTION

This is a short entry to document installation and monitoring of the skypool nimiq client. The Nimiq network is a decentralized payment network that runs in the browser and is installation-free.

Personally, I believe that the Litecoin and Ethereum projects have so far been able to generate a strong economy around themselves; however, projects like Nimiq definitely convince me with their approach to usability and simplicity for the user.

CONTENT

I am assuming Ubuntu 16.04 as the base operating system.

The playbook does the following things:

  1. Install the necessary dependencies: ruby-dev for Ruby 2.3 with the RubyGems package manager, and unzip to handle the release file from GitHub
  2. Create a specific user nimiq and a program directory /opt/nimiq
  3. Download and unpack the release file from github under a version-specific directory below the program directory
  4. Create skypool client configuration file according to your demands and with your wallet address
  5. Create a systemd unit file, start the skypool client as a service and enable restart on reboot
  6. Create a status checker that uses the skypool api to check the worker’s online/offline status
  7. Create a crontab entry for the root user to run the status checker every ten minutes

REMARKS

cron

The cron entry running every 10 minutes is a tradeoff given how brittle the online/offline detection currently feels to me on the skypool site. Presumably skypool does not have a real heartbeat check towards the worker, but assumes the worker is online when it receives results from it, and assumes it is offline when it does not (most pools in the cryptocurrency world work like that). So as far as the perfect period between checks goes, your mileage may vary.

systemd

The service currently runs under the user nimiq, i.e. a non-privileged system user. However, the systemd instance used is the system one running as root, so only the root user can restart the nimiq service; for this reason the cron entry is registered for the root user as well. If you want the nimiq user to be able to restart the service, you have to run a systemd user instance for the nimiq user. I have successfully done that for another service playbook and might add this information in the future, if demand is voiced.
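
For reference, the unit the playbook drops in looks roughly like the following; the paths and the start command are placeholders, not the literal file from the playbook:

[Unit]
Description=Skypool Nimiq client
After=network-online.target

[Service]
User=nimiq
WorkingDirectory=/opt/nimiq/current
# placeholder start command, use whatever the skypool release actually ships
ExecStart=/opt/nimiq/current/run-client.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target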

GIST

Find the full gist as published on GitHub here.

Installing Ubuntu per minimal image to PC engines APU2

Posted on May 17, 2017  //  howto apu2 ubuntu

This is the company: PCengines. This is the device: APU2.

Nullmodem setup

using putty

Check which COM port is used; mine was set to ‘COM4’.

Get a USB-to-serial converter and install its drivers. Some of those converters seem to have timing problems, but I did not encounter that.

I once tried the lowest baud rate, 9600, which produced some nice screen carnival but nothing really legible.

prepping usb stick

Download the USB prep tool ‘TinyCore USB Installer’ and run it against your USB stick; I’ve used an 8 GB stick. Make sure it’s not the slowest one.

To try it out you can now boot into TinyCore: put the stick into the APU2’s USB port and boot up, with the serial null modem cable connected and the putty session open. A finished boot is indicated by an audible beep. This is a good way to check the serial connection, which you should have established in parallel.

If you want to keep the option of booting into TinyCore open, back up the syslinux.cfg from the USB’s root directory, as it will be overwritten by the package content we are downloading next.

Download the special Ubuntu package from PC Engines, unpack it and move the three files into the USB root folder (/ or :/ depending on your system).

Now plug the USB stick into the APU2 and boot, with the serial null modem cable connected and the putty session open. You will see the setup menu, similar to this screenshot:

View Installation Setup Wizard

The terminal setup process seems daunting at first, but it is essentially analogous to the graphical Ubuntu installer. I found my way around by basically following the Easy Path(tm) of most of the installer’s suggestions, going step by step through the menu. In some of the submenus I was able to make educated changes, as I knew a few more details and had a good idea of where I wanted to go with this system, but this might not apply to you.

The one exception was the network configuration. The automatic network detection seems to have got the DHCP info, but when I dropped into the BusyBox ash shell environment (via the menu option Execute a shell in the main hierarchy at the beginning of the installation process), I had to run dhclient directly on the interface again. Checking via ip addr I could then verify that the values had indeed been applied, and could ping any public server. With exit I dropped back into the installation menu. On a later second setup run this problem did not occur again.

I chose no automatic updates, as I can see the cron job using quite some resources; I’d rather schedule that manually for this particular system at the moment. Part of my minimal-running-services policy for this instance.

I followed a tip regarding the bootloader installation, and it apparently solved my earlier problem of an unfinished installation. I lost the link, but it boiled down to manually entering the first partition of the setup target (the PCIe flash device in my case), so /dev/sdb1 as opposed to /dev/sdb. Again, this might be different for you.

Once that was done, and with a bit more patience, I rebooted and eventually a login via SSH could be established. I then halted the machine, physically unplugged the USB key and the console, and replugged the power.

After about 45 seconds ping answered, and after that SSH came back online.

Quick way to forward mails via postfix

Posted on January 31, 2015  //  Updated on May 17, 2017  //  software

Source: https://www.bentasker.co.uk/documentation/linux/173-configuring-postfix-to-automatically-forward-mail-for-one-address-to-another

Assuming you’re running Postfix, first we make sure the virtual mappings file /etc/postfix/virtual is enabled in /etc/postfix/main.cf:

# Scroll down until you find virtual_alias_maps, make sure it reads something like
virtual_alias_maps = hash:/etc/postfix/virtual
# We also need to make sure the domain is enabled
virtual_alias_domains=example.com

Save and exit. Next we add the aliases to our mapping file /etc/postfix/virtual:

# Forward mail for admin@example.com to jo.bloggs@hotmail.com
admin@example.com  jo.bloggs@hotmail.com

If we want to send to two different addresses at once, we specify:

admin@example.com  jo.bloggs@hotmail.com jos.wife@hotmail.com

Finally, we need to create a hash (later versions of Postfix don’t require this)

postmap /etc/postfix/virtual
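
After changing main.cf and rebuilding the map, it doesn’t hurt to reload Postfix (on a systemd-based box):

systemctl reload postfix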

It’s the same principle as passing mail into a local user’s mailbox.

How to create a self-signed (wildcard) certificate

Posted on September 25, 2014  //  Updated on May 17, 2017  //  cli howto

This is a quick step to generate a self-signed certificate:

openssl genrsa 2048 > host.key
openssl req -new -x509 -nodes -sha1 -days 3650 -key host.key > host.cert
#[enter *.domain.com for the Common Name]
openssl x509 -noout -fingerprint -text < host.cert > host.info
cat host.cert host.key > host.pem
chmod 400 host.key host.pem
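
These commands show their age; on a current OpenSSL you would presumably want SHA-256 instead of SHA-1, i.e. the second line becomes:

openssl req -new -x509 -nodes -sha256 -days 3650 -key host.key > host.cert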

source: http://blog.celogeek.com/201209/209/how-to-create-a-self-signed-wildcard-certificate/

Seafile 3 GUI client and Fedora 20

Posted on April 30, 2014  //  Updated on May 17, 2017  //  software

Currently there is no official rpm package available for the GUI version of the Seafile 3 client. You can find extensive build instructions here:

Build and Use Seafile client from Source

I had to add the Vala package to the dependencies:

sudo yum install vala vala-compat wget gcc libevent-devel openssl-devel gtk2-devel libuuid-devel sqlite-devel jansson-devel intltool cmake qt-devel fuse-devel

Here’s a little fix-up of the script parts (the version used below is 3.0.2; adjust to your preferred release):

#!/usr/bin/env bash

echo "Building and installing seafile client"

export version=3.0.2 # change this to your preferred version
shopt -s expand_aliases # aliases are not expanded in non-interactive scripts by default
alias wget='wget --content-disposition -nc'
wget https://github.com/haiwen/libsearpc/archive/v${version}.tar.gz
wget https://github.com/haiwen/ccnet/archive/v${version}.tar.gz
wget https://github.com/haiwen/seafile/archive/v${version}.tar.gz
wget https://github.com/haiwen/seafile-client/archive/v${version}.tar.gz
tar xf libsearpc-${version}.tar.gz
tar xf ccnet-${version}.tar.gz
tar xf seafile-${version}.tar.gz
tar xf seafile-client-${version}.tar.gz

export PREFIX=/usr
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
export PATH="$PREFIX/bin:$PATH"

echo "Building and installing libsearpc"

cd libsearpc-${version}
./autogen.sh
./configure --prefix=$PREFIX
make
sudo make install

cd ..

echo "Building and installing ccnet"

cd ccnet-${version}
./autogen.sh
./configure --prefix=$PREFIX
make
sudo make install

cd ..

echo "Building and installing seafile"

cd seafile-${version}/
./autogen.sh
./configure --prefix=$PREFIX --disable-gui
make
sudo make install

cd ..

echo "Building and installing seafile-client"

cd seafile-client-${version}
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .
make
sudo make install

Run the linker cache update, just in case: sudo ldconfig

Start the client with seafile-applet

Skype and Fedora 20

Posted on December 28, 2013  //  Updated on May 17, 2017  //  software

Thanks to Negativo17’s blog I got Skype running. Here’s the step-by-step:

Run all commands as root or through sudo

  1. Add the negativo17 skype repo
wget http://negativo17.org/repos/fedora-skype.repo -O /etc/yum.repos.d/fedora-skype.repo
  2. Install skype normally via yum
yum install skype

NB: it takes care of the sound bug in Fedora 20:

On Fedora 20+, the real Skype binary is used through a wrapper that sets PULSE_LATENCY_MSEC=30 before running the real binary. As of Skype 4.2.0.11 this is required for proper operation.

NB: Always consider the trust implications of 3rd-party repo providers.

Keepass2 and Fedora 20

Posted on December 28, 2013  //  Updated on May 17, 2017  //  software

You need to install the mono environment and use the portable version of keepass: sudo yum -y install mono-core mono-winforms

I have yet to figure out how to make a convenient link to the Gnome menu structure.
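
The portable client is then started through Mono, and a small .desktop file should in principle give the Gnome menu entry; both are untested sketches with a made-up install path:

mono /opt/keepass2/KeePass.exe

# ~/.local/share/applications/keepass2.desktop
[Desktop Entry]
Type=Application
Name=KeePass2
Exec=mono /opt/keepass2/KeePass.exe
Terminal=false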

Estonian ID card and Fedora 20

Posted on December 28, 2013  //  Updated on May 17, 2017  //  estonia identification

In addition to the standard packages, which you would install like so from the standard Fedora repository:

sudo yum install qesteidutil qdigidoc mozilla-esteid

you should also install the pcscd daemon (packaged as pcsc-lite):

sudo yum install pcsc-lite

Finally, a useful tool:

sudo yum install pcsc-tools
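
pcsc-tools ships pcsc_scan, which is handy to verify that the reader and the card are actually detected:

pcsc_scan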

Source: http://symbolik.wordpress.com/2007/02/25/using-dod-cac-and-smartcard-readers-on-linux/

make a swap file on the fly

Posted on October 16, 2013  //  Updated on May 17, 2017

Disclaimer: this should only be used if you can’t partition your drive yourself, or it would be a hassle to do so. I’ve used this method to make one compile process work; otherwise I don’t really need it.

  1. Check for present swap space

    if there’s any output you might consider another solution

    sudo swapon -s

  2. Create the actual file

    bs times count equals the file size, in this case 1 GB

    sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k

  3. Alternative: the file can also be created with fallocate on most filesystems, e.g. sudo fallocate -l 1G /swapfile

  4. Create a linux swap area

    sudo mkswap /swapfile

    Output looks something like this:

    Setting up swapspace version 1, size = 262140 KiB no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb

    sudo swapon -s

  5. Activate the swap file

    sudo swapon /swapfile

    Now `swapon -s` should show something like this

    Filename   Type   Size    Used   Priority
    /swapfile  file   262140  0      -1

  6. Make it persistent in the /etc/fstab with the following entry

    /swapfile none swap sw 0 0

  7. Set swappiness to 0, otherwise performance will be poor; that way the swap file is just an emergency buffer

    echo 0 > /proc/sys/vm/swappiness

  8. make swappiness persistent

    echo 'vm.swappiness=0' >> /etc/sysctl.conf

  9. A bit of good practice, since this is root’s business

    chown root:root /swapfile
    chmod 0600 /swapfile