We are governed by the sun, not by the government. How this is so becomes particularly evident when you look at your TV viewing habits; there is a whole science behind what we choose to watch and why, Raul Rebane reported in Vikerraadio's daily commentary earlier this week. For example, we have TV “seasons” from about January 10 to Midsummer, or 22 weeks, followed by 11 weeks of summer, then 16 weeks for the fall season, and three weeks of Christmas and New Year.
-
Install digidoc client on arch linux
Assuming an AUR helper is installed (yay in this example).
-
Install packages
yay -S qdigidoc4 ccid pcsclite web-eid-firefox
-
Services
sudo systemctl start pcscd.service && sudo systemctl enable pcscd.service
Maybe there’s an alternative to web-eid-firefox, but only after installing it did the ‘PKCS#11 cannot be loaded’ error in the DigiDoc4 client disappear.
Smartcards - ArchWiki
If the card reader does not have a PIN pad, append the line enable_pinpad = false to the opensc configuration file /etc/opensc.conf. -
-
Jinja2 indentation and other layout problems
If you don’t want your indentation to get messed up, for example in a docker-compose file, add this to the very first line of the template:
#jinja2: lstrip_blocks: "true", trim_blocks: "false"
It will prevent simple or nested if/for statements from interfering with the layout.
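A minimal sketch of what that looks like in practice (the service and variable names are made up for illustration):

```yaml
#jinja2: lstrip_blocks: "true", trim_blocks: "false"
version: "3"
services:
  app:
    image: myapp:latest
    {% if expose_ports %}
    ports:
      - "8080:8080"
    {% endif %}
```

With lstrip_blocks enabled, the whitespace in front of the {% if %} tags is stripped from the output, so the rendered YAML keeps its original nesting.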
Jinja2 lstrip_blocks as a default · Issue #10725 · ansible/ansible
Dear Ansible devs, We often have long and complex templates, with lots of Jinja2 loops and conditionals. It's handy to indent them, so to make it easier to read the template. I see that "trim_block... -
Docker Hub images domain
The domain for a docker image is docker.io. When there’s no organization/user, /library is added. So the ubuntu image FQDN is
docker.io/library/ubuntu
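The naming rule can be sketched as a tiny helper (a hypothetical function, ignoring tags, digests and custom registries):

```shell
# Expand a short Docker Hub image name to its fully qualified form:
# a bare name gets the implicit "library" namespace, a user/repo
# name only needs the registry domain prepended.
image_fqdn() {
  case "$1" in
    */*) printf 'docker.io/%s\n' "$1" ;;
    *)   printf 'docker.io/library/%s\n' "$1" ;;
  esac
}

image_fqdn ubuntu            # docker.io/library/ubuntu
image_fqdn grafana/grafana   # docker.io/grafana/grafana
```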
Registry
The Docker Hub registry implementation -
Reset arch linux key ring
If again you have problems with some pubkey not being present, do this:
mv /etc/pacman.d/gnupg /etc/pacman.d/gnupg.bkp
pacman-key --init
pacman-key --populate
-
Search Github for content in particular file type
On GitHub, search content in a particular kind of file, in this case Woodpecker CI definition files:
path:/(^|\/)\.woodpecker\.yml$/ build_args
-
update fuel php to 1.9 dev
- copy composer.json from the github repo into the root dir
-
update composer by running:
curl -s https://getcomposer.org/installer | php
- chown to local user
-
run composer against new composer.json:
php composer.phar update --prefer-dist
php composer.phar install --prefer-dist
-
make sure file ownership is proper
chown -R user:group folder
- that’s it
GitHub - fuel/fuel: Fuel PHP Framework v1.x is a simple, flexible, community driven PHP 5.3+ framework, based on the best ideas of other frameworks, with a fresh start! FuelPHP is now fully PHP 8.0 compatible. -
Copy text in yakuake tmux fishshell
Took a while, but I found it… Shift + LMB click and drag over the text, then Ctrl + Shift + C to copy to the desktop environment clipboard. D’oh.
-
Loop to echo out container stats to a file
for i in {1..2880}; do
  echo "------ $(date) ------" >> docker_stats_CONTAINER_NAME.txt
  docker stats $(docker ps --format '{{.Names}}' | grep 'CONTAINER_NAME') --no-stream >> docker_stats_CONTAINER_NAME.txt
  sleep 300
done
-
Synchronizing a list of checked and unchecked items
Example showing a list of available premium_licenses, with the chosen ones checkmarked, as well as updating the chosen set with newly checked and unchecked items.
class Client::SiteController < Client::ApplicationController
  after_action :notify_admin

  def update
    @site = Site.find params[:id]
    update_site_premium_licenses
  end

  private

  def update_site_premium_licenses
    ids_before = @site.bulk_premium_license_ids
    @site.bulk_premium_license_ids = site_params[:bulk_premium_license_ids].select { |x| x.to_i > 0 }
    ids_after = @site.bulk_premium_license_ids
    @licenses_added = ids_after - ids_before
    @licenses_removed = ids_before - ids_after
    @site.save
    !@site.errors.present?
  end

  def notify_admin
    AdminNotification.with(remove: @licenses_removed, add: @licenses_added, site: @site).deliver(email_address)
  end

  def site_params
    params.require(:site).permit(bulk_premium_license_ids: [])
  end
end
The view is a collection of check-boxes and a submit button. CSS classes reference Bulma.
<%= form_with model: [:client, site] do |form| %>
  <div class="field has-check">
    <div class="field">
      <p><%= t("subscriptionsDir.licenses.explainer") %></p>
    </div>
    <div class="field">
      <div class="control">
        <%= collection_check_boxes(:site, :bulk_premium_license_ids, BulkPremiumLicense.all, :id, :title) do |b| %>
          <%= b.label(class: "b-checkbox checkbox", for: nil) do %>
            <%= b.check_box(checked: site.bulk_premium_license_ids.include?(b.object.id)) %>
            <%= tag.span class: "check is-primary" %>
            <%= tag.span b.object.title, class: "control-label" %>
          <% end %>
          <%= tag.br %>
        <% end %>
      </div>
    </div>
    <div class="field">
      <div class="control">
        <%= form.submit t("subscriptionsDir.licenses.submit"), class: "button is-primary" %>
      </div>
    </div>
  </div>
<% end %>
Notifications are being sent via the noticed gem.
The relationship is a simple site has_many premium_licenses.
-
Change Mysql Database Name
The easiest way to change a database name is to copy the old database into a new one via a dump (the destination database must already exist):
mysqldump source_db | mysql destination_db
-
Add an admin to a wordpress database
INSERT INTO `wordpressdatabase`.`wp_users` (`ID`, `user_login`, `user_pass`, `user_nicename`, `user_email`, `user_status`, `display_name`)
VALUES ('1000', 'username', MD5('password'), 'username', 'contact@example.com', '0', 'username');
INSERT INTO `wordpressdatabase`.`wp_usermeta` (`umeta_id`, `user_id`, `meta_key`, `meta_value`)
VALUES (NULL, '1000', 'wp_capabilities', 'a:1:{s:13:"administrator";b:1;}');
INSERT INTO `wordpressdatabase`.`wp_usermeta` (`umeta_id`, `user_id`, `meta_key`, `meta_value`)
VALUES (NULL, '1000', 'wp_user_level', '10');
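MySQL's MD5() produces the same lowercase hex digest as coreutils, and WordPress rehashes the MD5 value to its own phpass format on the user's first login. A quick way to double-check the hash outside of MySQL ('password' being a placeholder, obviously):

```shell
# md5sum over the raw string (no trailing newline, hence printf)
printf '%s' 'password' | md5sum | cut -d' ' -f1
# 5f4dcc3b5aa765d61d8327deb882cf99
```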
-
Scope of Plausible selfhosted api key generator
The Plausible self-hosted API key generator in the UI only generates a key with the scope
stats:read:*
but if you want to call any provisioning endpoints you need the scope
sites:provision:*
The easiest way is to generate a key, connect to the database, and change the scopes field in the
api_keys
table to the needed scope. Here’s the related GitHub discussion
-
Quickest way to prepare Windows Terminal WinRM for Ansible
Controlling Windows terminals with Ansible needs an initial configuration step on the terminal that activates WinRM, enables HTTPS transport, and creates a self-signed certificate. This way one can manage small-scale fleets that are not part of an Active Directory domain.
The most reduced procedure involves these two files:
A batch file that one can easily call with “Run as administrator…”. It calls this well-known PowerShell script and makes some of its configuration options explicit.
Here is a copy, in case the repository goes away at some point in the future (archived version Version 1.9 - 2018-09-21)
The batch file expects the script file to be in the same directory.
Batch file content:
powershell -ExecutionPolicy ByPass -File %~dp0\prep_ansible.ps1 -Verbose -CertValidityDays 3650 -ForceNewSSLCert -SkipNetworkProfileCheck
-
Call Actionmailer from Rake Task
If you call ActionMailer from a rake task, you can’t use ActiveJob, as the thread pool is killed once the rake task finishes. So everything happens in real time, which is not a problem at all, given it’s a rake task…
https://guides.rubyonrails.org/action_mailer_basics.html#calling-the-mailer
-
Redirect
One way to redirect inside the rails router based on the client’s Accept-Language header.
Previously I thought I had to do this inside the proxying webserver nginx, which is only really possible with the Lua-enhanced fork OpenResty or a self-compiled nginx.
Or to go into the Rack middleware world and figure out how to do it there - it’s probably still the fastest and cleanest to do it there.
There are more ways above that: the routes file, and of course the application controller.
I went for the routes file and added this directive:
root to: redirect { |params, request| "/#{best_locale_from_request!(request)}" }, status: 302, as: :redirected_root
The curly-brace syntax is obligatory; a do/end block does not work.
The actual work is being done with the help of the
accept_language
gem and these two methods, split up for easier reading I presume:

def best_locale_from_request(request)
  return I18n.default_locale unless request.headers.key?("HTTP_ACCEPT_LANGUAGE")

  string = request.headers.fetch("HTTP_ACCEPT_LANGUAGE")
  locale = AcceptLanguage.parse(string).match(*I18n.available_locales)

  # If the server cannot serve any matching language,
  # it can theoretically send back a 406 (Not Acceptable) error code.
  # But, for a better user experience, this is rarely done and the more
  # common way is to ignore the Accept-Language header in this case.
  return I18n.default_locale if locale.nil?

  locale
end
I’ve put them both into the routes file, but there might be a better place for that.
The available locales array grew a bit, in order to prevent edge cases:
# config/application.rb
config.i18n.available_locales = [
  :en, :"en-150", :"en-001", :"en-DE",
  :de, :"de-AT", :"de-CH", :"de-DE", :"de-BE", :"de-IT", :"de-LI", :"de-LU",
  :et, :"et-EE"
]
Turns out the gem always forwards the geography part as well, so in order to make sure nobody is left out, I’ve added this for now. This might become tricky later on, as paths are created based on that, and the language switcher might be a bit more tricky. Maybe it makes sense to cut the second part off somehow.
Resources:
Accept-Language gem: https://github.com/cyril/accept_language.rb
A rack app i did not get to work, but apparently does the i18n settings as well: https://github.com/blindsidenetworks/i18n-language-mapping
This was very helpful for the redirect syntax: https://www.paweldabrowski.com/articles/rails-routes-less-known-features
-
Moving lvm-thin volumes on proxmox between vm-s or ct-s
Following this official howto
lvs
shows you all volumes in their volume group (in my case ‘ssd’):

LV               VG  Attr       LSize    Pool   Data%  Meta%
data             pve twi-a-tz--   32.12g         0.00   1.58
root             pve -wi-ao----   16.75g
swap             pve -wi-ao----    8.00g
guests           ssd twi-aotz--   <2.33t        74.93  45.51
vm-100-disk-0    ssd Vwi-a-tz--   12.00g guests 72.69
vm-101-disk-0    ssd Vwi-a-tz--   12.00g guests 85.22
vm-101-disk-1    ssd Vwi-a-tz--   50.00g guests 99.95
vm-102-disk-0    ssd Vwi-a-tz--   12.00g guests 97.57
vm-102-disk-1    ssd Vwi-a-tz--   50.00g guests 64.54
vm-103-disk-0    ssd Vwi-a-tz--   12.00g guests 74.37
vm-103-disk-1    ssd Vwi-a-tz--  150.00g guests 52.42
vm-104-disk-0    ssd Vwi-a-tz--   12.00g guests 90.74
vm-104-disk-1    ssd Vwi-a-tz--   10.00g guests 95.27
vm-105-disk-0    ssd Vwi-a-tz--   12.00g guests 55.79
vm-105-disk-1    ssd Vwi-a-tz--   10.00g guests 32.89
vm-106-disk-0    ssd Vwi-a-tz--   12.00g guests 77.78
vm-106-disk-1    ssd Vwi-a-tz--   10.00g guests 99.82
vm-107-disk-0    ssd Vwi-a-tz--   32.00g guests  0.00
vm-107-disk-1    ssd Vwi-a-tz--  500.00g guests 95.41
vm-108-disk-0    ssd Vwi-aotz--    8.00g guests 43.73
vm-109-disk-0    ssd Vwi-a-tz--   12.00g guests 52.41
vm-109-disk-1    ssd Vwi-a-tz--   50.00g guests  2.22
vm-110-disk-0    ssd Vwi-a-tz--   12.00g guests 51.14
vm-110-disk-1    ssd Vwi-a-tz--   50.00g guests  2.22
vm-111-disk-0    ssd Vwi-a-tz--   12.00g guests 84.85
vm-111-disk-1    ssd Vwi-a-tz--  100.00g guests 16.97
vm-112-disk-0    ssd Vwi-a-tz--    8.00g guests 13.53
vm-113-disk-0    ssd Vwi-a-tz--    8.00g guests 11.55
vm-114-disk-0    ssd Vwi-a-tz--   16.00g guests 84.31
vm-115-disk-0    ssd Vwi-a-tz--   16.00g guests 97.12
vm-116-disk-0    ssd Vwi-a-tz--    8.00g guests 31.49
vm-117-cloudinit ssd Vwi-aotz--    4.00m guests 50.00
vm-117-disk-0    ssd Vwi-aotz--   10.00g guests 39.71
vm-117-disk-1    ssd Vwi-aotz-- 1000.00g guests 97.47
If the id of the new CT or VM is not equal to the id of the volume’s previous attachment, rename the volume, i.e.
lvrename ssd/vm-101-disk-1 ssd/vm-117-disk-2
This will make vm-101-disk-1 available as vm-117-disk-2; you have to increase the counter at the end of the name past the target VM’s existing disks.
Then edit the config of the actual VM: move the line that describes the volume from
/etc/pve/qemu-server/<vm id>.conf
to the new <vm id>.conf
The tricky thing was to run
qm rescan
afterwards, which fixed the syntax and made the volume appear in the web GUI, where I could finally attach it to the new VM. -
WakeOnLan, Archlinux, systemd-networkd, Asus Pro WS X570-ACE
The board has two integrated ethernet adapters, here’s the
lshw
data:

sudo lshw -c network
  *-network
       description: Ethernet interface
       product: I211 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:05:00.0
       logical name: enp5s0
       version: 03
       serial: 24:4b:fe:<redacted>
       size: 1Gbit/s
       capacity: 1Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.12.8-zen1-1-zen duplex=full firmware=0.6-1 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:61 memory:fc900000-fc91ffff ioport:e000(size=32) memory:fc920000-fc923fff
  *-network
       description: Ethernet interface
       product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
       vendor: Realtek Semiconductor Co., Ltd.
       physical id: 0.1
       bus info: pci@0000:06:00.1
       logical name: enp6s0f1
       version: 1a
       serial: 24:4b:fe:<redacted>
       size: 1Gbit/s
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=5.12.8-zen1-1-zen duplex=full firmware=rtl8168fp-3_0.0.1 11/16/19 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:24 ioport:d800(size=256) memory:fc814000-fc814fff memory:fc808000-fc80bfff
It seems that the UEFI entry to activate Wake on LAN for PCIe devices only affects the Intel port. I’ve persistently activated WOL for the Realtek port by adding a .link file, e.g. /etc/systemd/network/foobar.link:
[Match]
MACAddress=<redacted>

[Link]
WakeOnLan=magic
# below lines are cloned from original entry in
# /usr/lib/systemd/network/99-default.link
# which is the default link file for all adapters whose section is hereby overwritten
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path
MACAddressPolicy=persistent
The arch wiki shows a couple of alternative ways, but this seems to be the most straight forward for me.
-
Upgrade Postgresql from 11 upwards
On Ubuntu 18.04
Multiple installations (11, 12, 13) were present; be wary of that, as
pg_upgradecluster
for example will always go for the highest version.

Copied the configuration files for the new version:
cp -R /etc/postgresql/11 /etc/postgresql/12
initialized new version db
/usr/lib/postgresql/12/bin/initdb -D /srv/postgres/12/main
stopped the current server and killed all connections
/usr/lib/postgresql/11/bin/pg_ctl -D /srv/postgres/11/main/ -mf stop
ran checked upgrade with linked files
time /usr/lib/postgresql/12/bin/pg_upgrade --old-bindir /usr/lib/postgresql/11/bin/ --new-bindir /usr/lib/postgresql/12/bin/ --old-datadir /srv/postgres/11/main/ --new-datadir /srv/postgres/12/main/ --link --check
had to fix diverse configuration file problems that are obvious when running
"/usr/lib/postgresql/11/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/srv/postgres/11/main" -o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/postgresql'" start
cat pg_upgrade_server.log
mostly faulty references to configuration files, or having to make explicit the non-standard data dir location.
Then the systemd-related things:
systemctl disable postgresql@11-main
systemctl enable postgresql@12-main
This place was most helpful: https://blog.crunchydata.com/blog/how-to-perform-a-major-version-upgrade-using-pg_upgrade-in-postgresql
-
run openvpn in client mode automatically after linux boot
Scenario: send out a Raspberry Pi Model B rev 1, all set up with Raspberry Pi OS / Raspbian.
The hardware specs are nothing much, but the machine is reliable, even when apparently half the RAM chips are dead…
Install openvpn, then take the config file from the server you want to connect to - in my case an ovpn file generated by PiVPN - and put it into the config folder `/etc/openvpn/`. If your VPN profile is password protected, just add a simple text file with the cleartext pass and reference it in your VPN profile file like so:
askpass /etc/openvpn/passwordfilename
Make sure openvpn.service is started and enabled:
systemctl enable openvpn && systemctl restart openvpn
That should be it;
ip a
should show you the tunnel interface already.
PS: for the routing, make sure that your router has a static entry that sends all traffic for the VPN subnet to the VPN server, but that is something that really depends on your own net topology.
-
update gnubee debian jessie to buster, to bullseye
Thanks to https://feeding.cloud.geek.nz/posts/installing-debian-buster-on-gnubee2/
Upgrade to stretch (Debian 9) and then buster (Debian 10)
To upgrade to stretch, put this in /etc/apt/sources.list:
deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main
Then upgrade the packages:
apt update
apt full-upgrade
apt autoremove
reboot
To upgrade to buster, put this in /etc/apt/sources.list:
deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main
and upgrade the packages:
apt update
apt full-upgrade
apt autoremove
reboot
Then to bullseye (Debian 11)
- Make sure the system is fully up to date
apt update
apt full-upgrade
apt autoremove
reboot
- Edit /etc/apt/sources.list
- replace each instance of buster with bullseye
- find the security line, replace buster/updates with bullseye-security
- this is an example:
deb http://security.debian.org/ bullseye-security main contrib non-free
deb http://httpredir.debian.org/debian bullseye main contrib non-free
deb http://httpredir.debian.org/debian bullseye-updates main contrib non-free
- Again upgrade the system
apt update
apt full-upgrade
apt autoremove
reboot
-
instant domain name for ipv6 device
https://ungleich.ch/u/blog/has-a-name-for-every-ipv6-address/
TL;DR
You can use IPv6address.has-a.name as a domain name for any of your computers, containers or VMs. The required format is 1234-5678-9abc-def0-1234-5678-9abc-def0.has-a.name. This is already a valid name and points to the IPv6 address 1234:5678:9abc:def0:1234:5678:9abc:def0. Alternatively you can also use the domain has-aaaa.name, which implies IPv6 stronger.
Both domains support IPv6 abbreviation using dashes, you can f.i. use 2a0a-e5c0--3.has-aaaa.name.
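The mapping from address to name is just a character swap, which can be sketched in a couple of lines of shell (a hypothetical helper, not part of the service):

```shell
# Turn an IPv6 address into its has-a.name hostname by replacing
# colons with dashes; the "::" abbreviation naturally becomes "--".
ipv6_to_hostname() {
  printf '%s.has-a.name\n' "$(printf '%s' "$1" | tr ':' '-')"
}

ipv6_to_hostname 1234:5678:9abc:def0:1234:5678:9abc:def0
ipv6_to_hostname 2a0a:e5c0::3
```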
-
Configure Ubuntu 18.04 with grub2 to activate serial console
Thanks to hiroom2
1 /etc/default/grub
- Change the GRUB terminal to console and ttyS0. This will provide GRUB on both the monitor display and the serial console.
- Change the Linux kernel console to tty1 and ttyS0. This setting will be taken over to userland, and there will be two login prompts, for tty1 and ttyS0.
GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
-
subtle changes in key format of key pairs generated with `ssh-keygen` on linux
I just came across an unexpected ssh key subtlety you might want to consider while creating a Drone CI deployment pipeline using Drone’s Ansible plugin.
Part of the pipeline includes deploying code to a remote host via ssh. I generated a new key pair with ssh-keygen. This created a key in the new OpenSSH format, starting with:
-----BEGIN OPENSSH PRIVATE KEY-----
Apparently Ansible does not like this format, and on the “Gathering facts” step it erred out with the message “Invalid key”. Googling that was not very successful, and I could not find that particular message in the Ansible source, until I eventually found an unrelated closed issue on GitHub which pointed me towards possible problems with key formats.
Eventually I generated a new key pair like so:
ssh-keygen -m PEM
with the -m option setting the key format. The key then had the starting line
-----BEGIN RSA PRIVATE KEY-----
As far as I understand, both keys are actually RSA keys, the latter’s PEM format being implied, whereas the former uses a newer OpenSSH format I was not previously aware of.
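The difference is easy to reproduce (a sketch; assumes ssh-keygen is available and writes throwaway keys into a temp directory):

```shell
dir=$(mktemp -d)
# default since OpenSSH 7.8: the new OpenSSH container format
ssh-keygen -t rsa -N '' -q -f "$dir/new_format"
# -m PEM: the traditional PEM encoding
ssh-keygen -t rsa -m PEM -N '' -q -f "$dir/pem_format"
head -n1 "$dir/new_format"   # -----BEGIN OPENSSH PRIVATE KEY-----
head -n1 "$dir/pem_format"   # -----BEGIN RSA PRIVATE KEY-----
```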
Earlier runs of
ssh-keygen
did produce keys in the PEM format, and as I am running Arch Linux with OpenSSH_8.0p1, OpenSSL 1.1.1c 28 May 2019, one of the rolling updates to my system probably brought along this unexpected change.
Hope that helps somebody.
-
Compile Go on MIPS/MIPS32
I’ve been trying to compile go programs on the gnubee which runs on a mips architecture.
Found this on github:
I have successfully cross compile go program into mips32 bin with below command, you may try this also.
GOARCH=mips32 is for ar71xx, change to GOARCH=mips32le if it is ramips.

git clone https://github.com/gomini/go-mips32.git
cd go-mips32/src
export GOOS=linux
export GOARCH=mips32
sudo mkdir /opt/mipsgo
./make.bash
cd ..
sudo cp -R * /opt/mipsgo
export GOROOT=/opt/mipsgo
export PATH=/opt/mipsgo/bin:$PATH
vi helloworld.go
go build helloworld.go
thanks, bettermanbao
-