Search GitHub for content in a particular file type
On GitHub, one can search content in a particular kind of file, in this case Woodpecker CI definition files:
path:/(^|\/)\.woodpecker\.yml$/ build_args
https://www.cybertec-postgresql.com/en/error-permission-denied-schema-public/
Change in PostgreSQL 15
Besides creating the database and role and granting privileges, one now also has to grant all on schema public of the same database:
create database "example";
create user "example" with login password 'password';
grant all on database "example" to "example";
-- the grant on the schema has to happen inside that database:
\c example
grant all on schema public to "example";
when you want to change user inside an existing session, do:
set role <EXAMPLE>;
to create an extension:
\c <DATABASE>
create extension <EXTENSION>;
Justin Jackson: here are some ideas on things you can do (now) that will create opportunities for yourself in the future:
Podcast
Give a talk
Write a blog
Go to meetups
Take a course
Learn new skills
Get a remote job
Join a community
Grow your network
Promote your work
Go to a conference
Start a newsletter
Publish on YouTube
Apply for a new job
Create a side-project
Explore a new industry
Collaborate with a friend
Build something in public
Become an expert on a topic
Ship a free project on the internet
Sell a small product on the internet
First comment (from www.reddit.com):
You need to edit the proxy settings for Uptime Kuma: under Custom Locations add a new location with location / and enter your scheme/hostname/port for Uptime Kuma. Then go to the gear icon beside the location and enter
add_header 'Access-Control-Allow-Origin' *;
and then save.
Made static cms work with gitea/forgejo.
This is a bit niche, but I have one client app where that was needed.
Update FuelPHP to 1.9-dev
Copy the composer.json from the GitHub repo into the project's root directory, then update Composer by running:
curl -s https://getcomposer.org/installer | php
chown it to the local user, then run Composer against the new composer.json:
php composer.phar update --prefer-dist
php composer.phar install --prefer-dist
Finally, make sure file ownership is proper:
chown -R user:group folder
Fuel PHP Framework v1.x is a simple, flexible, community driven PHP 5.3+ framework, based on the best ideas of other frameworks, with a fresh start! FuelPHP is now fully PHP 8.0 compatible. - fuel/... (GitHub)
Took a while, but I found it… shift + LMB click and drag over the text, then ctrl + shift + c to copy to the desktop environment clipboard. D'oh.
This was helpful to get an initial impression of the resource requirements of a couple of running containers before migration to a new infrastructure environment.
# one snapshot every 5 minutes, 2880 times, i.e. roughly 10 days
for i in {1..2880}; do
echo "------ $(date) ------" >> docker_stats_CONTAINER_NAME.txt;
docker stats $(docker ps --format '{{.Names}}' | grep 'CONTAINER_NAME') --no-stream >> docker_stats_CONTAINER_NAME.txt;
sleep 300;
done
We were running Percona MySQL version 8, and for some time a bunch of deprecation warnings had been popping up during service start.
'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
'NO_ZERO_DATE', 'NO_ZERO_IN_DATE' and 'ERROR_FOR_DIVISION_BY_ZERO' sql modes should be used with strict mode. They will be merged with strict mode in a future release.
'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
Deprecated configuration parameters innodb_log_file_size and/or innodb_log_files_in_group have been used to compute innodb_redo_log_capacity=104857600. Please use innodb_redo_log_capacity instead.
These concern the following config entries:
replaced:
innodb_log_file_size = 50M
with:
innodb_redo_log_capacity = 52428800
replaced:
default-authentication-plugin=mysql_native_password
with:
authentication_policy = 'mysql_native_password'
removed:
symbolic-links=0
expanded:
sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_ALL_TABLES
to:
sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_ALL_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO
Also, you can (I think for some time now in version 8) bind the MySQL daemon to multiple interfaces, so now I'm letting it listen on localhost and on the private network address, which makes accessing the DB easier than through SSH tunneling, i.e.:
bind-address = 127.0.0.1,10.1.2.3
Example showing a list of available premium licenses, with the chosen ones checked, and updating the chosen set with newly checked and unchecked items.
class Client::SiteController < Client::ApplicationController
after_action :notify_admin
def update
@site = Site.find params[:id]
update_site_premium_licenses
end
private
def update_site_premium_licenses
ids_before = @site.bulk_premium_license_ids
@site.bulk_premium_license_ids = site_params[:bulk_premium_license_ids].select { |x| x.to_i > 0 }
ids_after = @site.bulk_premium_license_ids
@licenses_added = ids_after - ids_before
@licenses_removed = ids_before - ids_after
@site.save
!@site.errors.present?
end
def notify_admin
AdminNotification.with(remove: @licenses_removed, add: @licenses_added, site: @site).deliver(email_address)
end
def site_params
params.require(:site).permit(bulk_premium_license_ids: [])
end
end
The view is a collection of check-boxes and a submit button. CSS classes reference Bulma.
<%= form_with model: [:client, site] do |form| %>
<div class="field has-check">
<div class="field">
<p><%= t("subscriptionsDir.licenses.explainer") %></p>
</div>
<div class="field">
<div class="control">
<%= collection_check_boxes(:site, :bulk_premium_license_ids, BulkPremiumLicense.all, :id, :title) do |b| %>
<%= b.label(class: "b-checkbox checkbox", for: nil) do %>
<%= b.check_box(checked: site.bulk_premium_license_ids.include?(b.object.id)) %>
<%= tag.span class: "check is-primary" %>
<%= tag.span b.object.title, class: "control-label" %>
<% end %>
<%= tag.br %>
<% end %>
</div>
</div>
<div class="field">
<div class="control">
<%= form.submit t("subscriptionsDir.licenses.submit"), class: "button is-primary" %>
</div>
</div>
</div>
<% end %>
Notifications are being sent via the noticed gem.
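The notification class itself isn't shown here; with the noticed v1 API, a minimal counterpart could look roughly like this (class and mailer names are assumptions):
# app/notifications/admin_notification.rb — hypothetical sketch
class AdminNotification < Noticed::Base
  deliver_by :email, mailer: "AdminMailer", method: :premium_licenses_changed

  # params set via .with(...) in the controller: :site, :add, :remove
  param :site
end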
The easiest way to change a database name is to copy the old stuff into the new stuff via a dump:
mysqldump source_db | mysql destination_db
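Spelled out, assuming credentials come from ~/.my.cnf and the destination database doesn't exist yet:
mysql -e 'CREATE DATABASE destination_db'
mysqldump source_db | mysql destination_db
# and once everything is verified:
mysql -e 'DROP DATABASE source_db'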
Because it comes up so often:
INSERT INTO `wordpressdatabase`.`wp_users` (`ID`, `user_login`, `user_pass`, `user_nicename`, `user_email`, `user_status`, `display_name`) VALUES ('1000', 'username', MD5('password'), 'username', 'contact@example.com', '0', 'username');
INSERT INTO `wordpressdatabase`.`wp_usermeta` (`umeta_id`, `user_id`, `meta_key`, `meta_value`) VALUES (NULL, '1000', 'wp_capabilities', 'a:1:{s:13:"administrator";b:1;}');
INSERT INTO `wordpressdatabase`.`wp_usermeta` (`umeta_id`, `user_id`, `meta_key`, `meta_value`) VALUES (NULL, '1000', 'wp_user_level', '10');
The Plausible self-hosted API key generator in the UI only generates a key with the scope stats:read:*, but if you want to call any provisioning endpoints, you need the scope sites:provision:*. The easiest way is to generate a key, connect to the database, and change the scopes field in the api_keys table to the needed scope.
Here’s the related github discussion
today in Tallinn a woman got killed by a reversing van on a sidewalk while waiting for the bus… and it’s not even front page news.
this city is so utterly fucked up with normalizing traffic violence, it's beyond belief.
"Today a tragic traffic accident happened in Tallinn, on Punane street in Lasnamäe, in which a 76-year-old woman died." (Postimees)
Before thinking the app from top to bottom (fancy, well-designed UI and UX) or from bottom to top (a thought-out database model with all attributes and interactions), focus on figuring out whether the app can do the most important thing it should do; try that out in some place like the console, and then move on from there.
All apps are essentially ETL pipelines: we take data from somewhere (API, forms, imported files), do something with it, and push it on (API or webhook, HTML site, etc.). The transformation is the thing that is the business logic, the service, the work. That can really start living in a namespaced lib module, starting with self.-style module functions.
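A minimal sketch of that idea (all names invented):
# lib/imports/normalize.rb — the "T" of the ETL, pure module functions
module Imports
  module Normalize
    # rows: array of hashes as delivered by the "E" step (API, form, file)
    def self.call(rows)
      rows.map { |row| transform(row) }
    end

    def self.transform(row)
      { name: row["name"].to_s.strip, email: row["email"].to_s.downcase }
    end
  end
end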
Keep the maximum possibility space by waiting with a decision until the last moment when you have to take it.
I started a screencast series last week, and I can call it a series now because there's a second one. (justin.searls.co)
Controlling Windows terminals with Ansible needs an initial configuration step on the terminal that activates WinRM, enables HTTPS transport, and creates a self-signed certificate. This way one can manage small-scale fleets that are not part of an Active Directory domain.
The most minimal procedure involves these two files:
A batch file that one can easily call with "Run as administrator…". It calls this well-known PowerShell script and makes some of its configuration options explicit.
Here is a copy, in case the repository goes away at some point in the future (archived version Version 1.9 - 2018-09-21)
The batch file expects the script file to be in the same directory.
Batch file content:
powershell -ExecutionPolicy ByPass -File %~dp0\prep_ansible.ps1 -Verbose -CertValidityDays 3650 -ForceNewSSLCert -SkipNetworkProfileCheck
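Once WinRM is prepared, an inventory along these lines should be enough to reach the machines (host and credentials are placeholders):
# inventory.ini
[terminals]
terminal01.example.com

[terminals:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_user=Administrator
ansible_password=...
# because of the self-signed certificate from the prep script:
ansible_winrm_server_cert_validation=ignore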
If you call ActionMailer from a rake task, you can't use ActiveJob (at least with the in-process async adapter), as the thread pool is killed once the rake task finishes. So everything is sent in real time, which is not a problem at all, given it's a rake task.
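For example (task and mailer names invented):
# lib/tasks/reports.rake
task send_reports: :environment do
  User.find_each do |user|
    # deliver_later would enqueue into a thread pool that dies with the
    # rake process, so deliver synchronously:
    ReportMailer.with(user: user).weekly.deliver_now
  end
end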
The first question is portafilter machine or fully automatic.
I ignore the fully automatic machines, because:
There is a big selection here as well. The important criteria:
All-metal is self-explanatory: all parts, pipes, valves etc. are metal, and accordingly more durable, but also more expensive.
Whether single-circuit, heat-exchanger, or dual boiler depends on usage; for a few coffees in the morning and over the day, the first two versions are fine. Dual boilers are too expensive for that. In the end it is also a comfort question: with a single-circuit machine you first make the coffee, then have to switch to steam and wait until steam pressure has built up. That makes making coffee take a bit longer. With a heat-exchanger machine you can do both in parallel, or right after one another.
To always get the same coffee quality, a proper grinder next to the machine is important. The small blade grinders many people have in their kitchen are difficult, because the grind coarseness always comes out differently; it depends on how you hold it and for how long, so it is rather random.
We have this combo:
I bought them five years ago as a bundle at www.stoll-espresso.de, the grinder for 290, the machine for 530. On top of that came some optional accessories (milk jug, cleaning brush and powder, stainless steel tamper), and a five-year extended warranty. So with this setup you are at roughly 1000 euros, give or take.
The good thing about the machine is its serviceability. If you have a bit of skill and experience with electrics, you can simply unscrew the machine, and inside everything is repairable and replaceable, with only a few special parts that can be reordered individually. I have read about people who kept their Silvia in operation for decades.
One problem I now have to discuss with Stoll under the warranty is the metal base. Unlike the rest of the machine it is not stainless steel and rusts, and I hope to get it replaced.
If I were to buy new again, I would pick the same ones, but in the next version:
A dual boiler, just to be able to make coffee, foam, but also cocoa or hot juice faster. But is double the price justified? So it is more of a luxury.
That one has a dosing lever. Right now I weigh the grounds so that the same amount always goes into the coffee. That is important, because a few grams more or less already decide whether you get brown coffee water, or no coffee at all because there is too much grounds in the portafilter… With a dosing lever, the same amount always comes out.
If none of this fits the budget or expectations at all, I can recommend capsule machines with reusable capsules from my own experience, though no specific one. I had a Nespresso for a while. But you have to add a milk frother if you prefer any of the milk coffee variants, though I believe there are machines with a built-in frother by now. Handling the capsule refilling needs a bit of practice. In return, preparation is of course massively simplified and much faster. Single-use capsules, however, I reject on principle for waste reasons, even if their coffee tastes quite decent.
At some point I came across the Zuriga. That product is another price class up, and it hasn't been around that long, but it is so beautiful to look at.
One way to redirect inside the Rails router based on the client's Accept-Language header.
Previously I thought I had to do this inside the proxying webserver Nginx, which is only really possible with OpenResty, the Lua-enhanced Nginx fork, or a self-compiled Nginx.
Or to go into the Rack middleware world and figure out how to do it there; it's probably still the fastest and cleanest to do it there.
There are more ways above that: routes file, and of course application controller.
I went for the routes file and added this directive:
root to: redirect { |params, request|
"/#{best_locale_from_request!(request)}"
}, status: 302, as: :redirected_root
The curly-braces syntax is obligatory; a do/end block does not work.
The actual work is being done with the help of the accept_language gem and these two methods, split up for easier reading, I presume:
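The bang variant called from the routes file follows the gem's README and just assigns the result to I18n.locale, falling through to the plain method below:
def best_locale_from_request!(request)
  I18n.locale = best_locale_from_request(request)
end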
def best_locale_from_request(request)
return I18n.default_locale unless request.headers.key?("HTTP_ACCEPT_LANGUAGE")
string = request.headers.fetch("HTTP_ACCEPT_LANGUAGE")
locale = AcceptLanguage.parse(string).match(*I18n.available_locales)
# If the server cannot serve any matching language,
# it can theoretically send back a 406 (Not Acceptable) error code.
# But, for a better user experience, this is rarely done and more
# common way is to ignore the Accept-Language header in this case.
return I18n.default_locale if locale.nil?
locale
end
I’ve put them both into the routes file, but there might be a better place for that.
The available locales array grew a bit, in order to prevent edge cases:
# config/application.rb
config.i18n.available_locales = [:en, :"en-150", :"en-001", :"en-DE", :de, :"de-AT", :"de-CH", :"de-DE", :"de-BE", :"de-IT", :"de-LI", :"de-LU", :et, :"et-EE"]
Turns out the gem always forwards the geography part as well, so in order to make sure nobody is left out, I have added these for now. This might become tricky later on, as paths are created based on that, and the language switcher might be a bit more tricky. Maybe it makes sense to cut the second part off somehow.
Accept-Language gem: https://github.com/cyril/accept_language.rb
A rack app i did not get to work, but apparently does the i18n settings as well: https://github.com/blindsidenetworks/i18n-language-mapping
This was very helpful for the redirect syntax: https://www.paweldabrowski.com/articles/rails-routes-less-known-features
Following this official howto
lvs shows you all volumes in their volume group (in my case ‘ssd’)
LV VG Attr LSize Pool Data% Meta%
data pve twi-a-tz-- 32.12g 0.00 1.58
root pve -wi-ao---- 16.75g
swap pve -wi-ao---- 8.00g
guests ssd twi-aotz-- <2.33t 74.93 45.51
vm-100-disk-0 ssd Vwi-a-tz-- 12.00g guests 72.69
vm-101-disk-0 ssd Vwi-a-tz-- 12.00g guests 85.22
vm-101-disk-1 ssd Vwi-a-tz-- 50.00g guests 99.95
vm-102-disk-0 ssd Vwi-a-tz-- 12.00g guests 97.57
vm-102-disk-1 ssd Vwi-a-tz-- 50.00g guests 64.54
vm-103-disk-0 ssd Vwi-a-tz-- 12.00g guests 74.37
vm-103-disk-1 ssd Vwi-a-tz-- 150.00g guests 52.42
vm-104-disk-0 ssd Vwi-a-tz-- 12.00g guests 90.74
vm-104-disk-1 ssd Vwi-a-tz-- 10.00g guests 95.27
vm-105-disk-0 ssd Vwi-a-tz-- 12.00g guests 55.79
vm-105-disk-1 ssd Vwi-a-tz-- 10.00g guests 32.89
vm-106-disk-0 ssd Vwi-a-tz-- 12.00g guests 77.78
vm-106-disk-1 ssd Vwi-a-tz-- 10.00g guests 99.82
vm-107-disk-0 ssd Vwi-a-tz-- 32.00g guests 0.00
vm-107-disk-1 ssd Vwi-a-tz-- 500.00g guests 95.41
vm-108-disk-0 ssd Vwi-aotz-- 8.00g guests 43.73
vm-109-disk-0 ssd Vwi-a-tz-- 12.00g guests 52.41
vm-109-disk-1 ssd Vwi-a-tz-- 50.00g guests 2.22
vm-110-disk-0 ssd Vwi-a-tz-- 12.00g guests 51.14
vm-110-disk-1 ssd Vwi-a-tz-- 50.00g guests 2.22
vm-111-disk-0 ssd Vwi-a-tz-- 12.00g guests 84.85
vm-111-disk-1 ssd Vwi-a-tz-- 100.00g guests 16.97
vm-112-disk-0 ssd Vwi-a-tz-- 8.00g guests 13.53
vm-113-disk-0 ssd Vwi-a-tz-- 8.00g guests 11.55
vm-114-disk-0 ssd Vwi-a-tz-- 16.00g guests 84.31
vm-115-disk-0 ssd Vwi-a-tz-- 16.00g guests 97.12
vm-116-disk-0 ssd Vwi-a-tz-- 8.00g guests 31.49
vm-117-cloudinit ssd Vwi-aotz-- 4.00m guests 50.00
vm-117-disk-0 ssd Vwi-aotz-- 10.00g guests 39.71
vm-117-disk-1 ssd Vwi-aotz-- 1000.00g guests 97.47
If the ID of the new CT or VM is not equal to the ID of the volume's previous attachment, rename the volumes, i.e.:
lvrename ssd/vm-101-disk-1 ssd/vm-117-disk-2
This will make vm-101-disk-1 available as vm-117-disk-2; you have to increase the count at the end of the name.
Then edit the config of the actual VM: take the line that describes the volume from the old /etc/pve/qemu-server/<vm id>.conf into the new <vm id>.conf.
The tricky thing was to run qm rescan afterwards, which fixed the syntax and made the volume appear in the web GUI, where I could finally attach it to the new VM.
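The line in question looks something like this (bus/slot and size are invented; the storage name is assumed to match the VG here):
# /etc/pve/qemu-server/117.conf
scsi1: ssd:vm-117-disk-2,size=50G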
The board has two integrated ethernet adapters; here's the lshw data:
sudo lshw -c network
*-network
description: Ethernet interface
product: I211 Gigabit Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:05:00.0
logical name: enp5s0
version: 03
serial: 24:4b:fe:<redacted>
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.12.8-zen1-1-zen duplex=full firmware=0.6-1 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:61 memory:fc900000-fc91ffff ioport:e000(size=32) memory:fc920000-fc923fff
*-network
description: Ethernet interface
product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
vendor: Realtek Semiconductor Co., Ltd.
physical id: 0.1
bus info: pci@0000:06:00.1
logical name: enp6s0f1
version: 1a
serial: 24:4b:fe:<redacted>
size: 1Gbit/s
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=5.12.8-zen1-1-zen duplex=full firmware=rtl8168fp-3_0.0.1 11/16/19 ip=<redacted> latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:24 ioport:d800(size=256) memory:fc814000-fc814fff memory:fc808000-fc80bfff
It seems that the UEFI entry to activate Wake-on-LAN for PCIe devices only affects the Intel port. I have persistently activated WOL for the Realtek port by adding a .link file at /etc/systemd/network/foobar.link:
[Match]
MACAddress=<redacted>
[Link]
WakeOnLan=magic
# the lines below are cloned from the original entry in
# /usr/lib/systemd/network/99-default.link,
# the default link file for all adapters, whose section is hereby overridden
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path
MACAddressPolicy=persistent
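After a reboot (or re-plugging the link), the setting can be verified with ethtool; "g" stands for wake on magic packet (output from memory, abbreviated):
$ sudo ethtool enp6s0f1 | grep Wake-on
Supports Wake-on: pumbg
Wake-on: g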
The Arch wiki shows a couple of alternative ways, but this seemed the most straightforward to me.
On Ubuntu 18.04
Be wary of multiple installations (11, 12, 13), as pg_upgradecluster for example will always go for the highest version.
cp -R /etc/postgresql/11 /etc/postgresql/12
/usr/lib/postgresql/12/bin/initdb -D /srv/postgres/12/main
/usr/lib/postgresql/11/bin/pg_ctl -D /srv/postgres/11/main/ -mf stop
time /usr/lib/postgresql/12/bin/pg_upgrade --old-bindir /usr/lib/postgresql/11/bin/ --new-bindir /usr/lib/postgresql/12/bin/ --old-datadir /srv/postgres/11/main/ --new-datadir /srv/postgres/12/main/ --link --check
If the --check run reports problems, the old cluster can be started manually, the way pg_upgrade would, to reproduce and inspect them:
/usr/lib/postgresql/11/bin/pg_ctl -w \
-l pg_upgrade_server.log \
-D /srv/postgres/11/main \
-o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/postgresql'" start
cat pg_upgrade_server.log
The logged errors were mostly faulty references to configuration files, or having to explicitly state the non-standard data directory location.
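Once the check passes cleanly, the same pg_upgrade invocation without --check performs the actual upgrade (with --link, hard-linking the data files instead of copying):
time /usr/lib/postgresql/12/bin/pg_upgrade --old-bindir /usr/lib/postgresql/11/bin/ --new-bindir /usr/lib/postgresql/12/bin/ --old-datadir /srv/postgres/11/main/ --new-datadir /srv/postgres/12/main/ --link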
Lastly, the systemd-related things:
systemctl disable postgresql@11-main
systemctl enable postgresql@12-main
Found here: https://httptoolkit.tech/blog/http-wtf/
Caching has never been easy, but HTTP cache headers can be particularly confusing. The worst examples of this are no-cache and private. What does the below response header do?
Cache-Control: private, no-cache
This means “please store this response in all browser caches, but revalidate it when using it”. In fact, this makes responses more cacheable, because this applies even to responses that wouldn’t normally be cacheable by default.
Specifically, no-cache means that your content is explicitly cacheable, but whenever a browser or CDN wants to use it, they should send a request using If-None-Match or If-Modified-Since to ask the server whether the cache is still up to date first. Meanwhile, private means that this content is cacheable, but only in end-client browsers, not CDNs or proxies.
If you were trying to disable caching because the response contains security or privacy sensitive data that shouldn't be stored elsewhere, you're now in big trouble. In reality, you probably wanted no-store.
If you send a response including a Cache-Control: no-store header, nobody will ever cache the response, and it'll come fresh from the server every time. The only edge case is if you send that when a client already has a cached response, which this won't remove. If you want to do that and clear existing caches too, add max-age=0.
Twitter notably hit this issue. They used Pragma: no-cache (a legacy version of the same header) when they should have used Cache-Control: no-store, and accidentally persisted every user's private direct messages in their browser caches. That's not a big problem on your own computer, but if you share a computer or you use Twitter on a public computer somewhere, you've now left all your private messages conveniently unencrypted & readable on the hard drive. Oops.
After 827 days of running time, my RaspiBlitz BTC Lightning node refused to mount the external hard drive (Toshiba HDTB410EK3AA Canvio Basics, USB 3.0, 1TB): SMART errors of the weirdest kind. I remembered Gibson's spammy advertisements during the Security Now! podcast, praising SpinRite for recovery. As there was no physical damage or interaction that would have caused this, I gave it a try.
After I bought the license, I downloaded the exe, causing the first problem: how to run it on Linux? I have a Windows 7 laptop for such cases, so I executed the program and tried all the different options to create a bootable USB, finally succeeding by writing out the diskette image spinrite.img to the hard disk, then dd-ing it onto a USB flash drive:
dd if=/path/to/SpinRite.img conv=notrunc of=/dev/<your usb device, i.e. sda>
After rebooting the same laptop with the external USB disk attached, SpinRite started right away, and luckily for me the drive was instantly recognized; no need for driver voodoo in the included FreeDOS distribution - that was my biggest concern. The fact that the external disk is not a casing with some exotic USB controller, but a disk with an integrated USB port, probably helped a lot. A small downer was that SMART data was unavailable to SpinRite - I don't have a theory about that.
The first run failed with a program abort:
This is ongoing.
Context: a remote Raspberry Pi Model B rev 1, all set up with Raspberry Pi OS / Raspbian.
The hardware specs are nothing much, but the machine is reliable, even though apparently half the RAM chips are dead.
The client config lives in /etc/openvpn/, with the key passphrase read from a password file via askpass /etc/openvpn/passwordfilename.
Make sure openvpn.service is started and enabled:
systemctl enable openvpn && systemctl restart openvpn
ip a should show you the tunnel interface already.
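For reference, a minimal client config using askpass could look like this (remote and file names are placeholders):
# /etc/openvpn/client.conf
client
dev tun
proto udp
remote vpn.example.com 1194
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key /etc/openvpn/client.key
askpass /etc/openvpn/passwordfilename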
NB: for the routing, make sure that your router has a static route that sends all traffic for the VPN subnet to the VPN server; but that is something that depends really on your own network topology.
Why millennials are facing the scariest financial future of any generation since the Great Depression.
The Huffington Post
Some ideas here for social change, however, based on US analysis:
raise the minimum wage and tie it to inflation
roll back anti-union laws to give workers more leverage against companies that treat them as if they’re disposable
tilt the tax code away from the wealthy
attach benefits to work instead of jobs: For every hour you work, your boss chips in to a fund that pays out when you get sick, pregnant, old or fired. The fund follows you from job to job, and companies have to contribute to it whether you work there a day, a month or a year.
construction workers have an “hour bank” that fills up when they’re working and provides benefits even when they’re between jobs
Hollywood actors and technical staff have health and pension plans that follow them from movie to movie
in low-employment / mid-to-high human-resource areas, launch a program that simply reimburses employers for the wages they pay to eligible new hires
improve existing poverty fighting programs and handouts over basic income