How to fix `The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY`

You might have run into this error when updating packages. The error is probably because you don’t have the public key corresponding to whatever private key the package was signed with. Hopefully, you can just download the public key from keyserver.ubuntu.com and get on with your day. In my case, it was a Google SDK key 8B57C5C2836F4BEB that was missing. Just use the following command, replacing the key ID with the key you need.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B57C5C2836F4BEB
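
Note that apt-key is deprecated on newer Ubuntu releases. If you get a deprecation warning, a rough equivalent using a keyring file looks like this (the filename google-sdk.gpg is just an example):

gpg --keyserver keyserver.ubuntu.com --recv-keys 8B57C5C2836F4BEB
gpg --export 8B57C5C2836F4BEB | sudo tee /etc/apt/trusted.gpg.d/google-sdk.gpg > /dev/null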

Now, sudo apt-get update should just work. If it doesn’t, feel free to leave a comment.

Set up Ubuntu 22.04 Server 64-bit on Virtualbox 6.1

Download the Ubuntu 22.04 Server ISO from ubuntu.com.

Create a new VM. Here are the specs I use.

  • 2 GB RAM
  • 25 GB HDD, dynamically allocated

Once the VM is created, configure a few more properties.

  • In System, increase CPUs to 2, and enable PAE
  • In Storage, attach the Ubuntu ISO to the optical drive
  • In Network > Advanced, set up a port forwarding rule TCP 2222 to 22 for SSH (see the CLI sketch below)
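
If you prefer the CLI, a quick sketch with VBoxManage should create the same port forwarding rule (assuming your VM is named "ubuntu-22.04" and is powered off):

VBoxManage modifyvm "ubuntu-22.04" --natpf1 "ssh,tcp,,2222,,22"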

Start up the new VM and follow the installation prompts. Once installed and rebooted, shut down the VM and restart it in headless mode, as sketched below.
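
A minimal sketch of a headless start from the host CLI (again assuming the VM name "ubuntu-22.04"):

VBoxManage startvm "ubuntu-22.04" --type headless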

I ran into errors during installation, which I worked around by disabling the network interface for the duration of the install.

From your host CLI, copy an SSH key to the VM.

ssh-copy-id -i ~/.ssh/id_ed25519 user@localhost -p 2222

Don’t have an SSH key?

Generate one.

ssh-keygen -t ed25519

Log in to your VM.

ssh user@localhost -p 2222

Do you need to set up a web proxy?

Configure it in /etc/apt/apt.conf and /etc/environment.

sudo tee -a /etc/apt/apt.conf <<EOF
Acquire::http::Proxy "http://proxy.example.com:8080";
Acquire::https::Proxy "http://proxy.example.com:8080";
EOF
sudo cp /etc/environment /tmp/environment
sudo tee -a /etc/environment <<EOF
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
EOF
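
To confirm apt picked up the proxy (the /etc/environment variables only take effect on your next login), dump apt’s effective configuration:

apt-config dump | grep -i proxy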

Update packages.

sudo apt update && sudo apt upgrade -y

Set up automatic updates

Install unattended-upgrades. This will update your OS automatically.

sudo apt install unattended-upgrades -y
sudo systemctl enable unattended-upgrades

Back up original configuration, then enable updates, email notifications, remove unused packages, and automatic reboot.

sudo cp /etc/apt/apt.conf.d/50unattended-upgrades /tmp/50unattended-upgrades
sudo sed -i 's/\/\/"${distro_id}:${distro_codename}-updates";/"${distro_id}:${distro_codename}-updates";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Mail "";/Unattended-Upgrade::Mail "me@example.com";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::MailReport "on-change";/Unattended-Upgrade::MailReport "only-on-error";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";/Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Remove-Unused-Dependencies "true";/Unattended-Upgrade::Remove-Unused-Dependencies "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Automatic-Reboot "false";/Unattended-Upgrade::Automatic-Reboot "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Automatic-Reboot-Time "02:00";/Unattended-Upgrade::Automatic-Reboot-Time "02:00";/' /etc/apt/apt.conf.d/50unattended-upgrades

Apply the changes.

sudo systemctl restart unattended-upgrades
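
To sanity-check the configuration without waiting for the nightly run, you can do a dry run (note the binary name is singular):

sudo unattended-upgrade --dry-run --debug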

Take a VM snapshot.

Want to install Docker?

How To Add Secrets To Your Docker Containers

This post is inspired by this SO thread.

Note: You must be running Docker in swarm mode to use external secrets (created with docker secret create), and docker-compose can’t use them; the file-based secrets shown below work with plain docker-compose.

You might have heard that you should use Docker Secrets to manage sensitive data like passwords and API keys shared with containers. For example, you can set the default database admin password with the POSTGRES_PASSWORD environment variable:

# docker-compose.yml
version: "3.9"
services:
  postgresql:
    image: postgres
    environment:
      POSTGRES_PASSWORD: P@ssw0rd!

But now anybody who can read docker-compose.yml can read your password! To avoid this, it is common practice to use secrets to set sensitive environment variables at runtime.

One way to create a secret is in the secrets block of a docker-compose configuration. Then, we can add that secret to our services. By default, secrets will be mounted in the container’s /run/secrets directory. We also need to set the environment variable POSTGRES_PASSWORD_FILE to the path of our secret /run/secrets/postgres-passwd. The postgres container uses environment variables ending in _FILE to set the corresponding sensitive environment variable, e.g. POSTGRES_PASSWORD.

# docker-compose.yml
version: "3.9"
services:
  postgresql:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd
secrets:
  postgres-passwd:
    file: ./secrets/postgres-passwd

That’s it! Now your container deployment will use a POSTGRES_PASSWORD that isn’t in a docker-compose.yml for all to see. The actual secret is in ./secrets/postgres-passwd which can be protected with traditional file permissions. You can even randomly generate secrets instead of using files.

openssl rand -base64 33 | docker secret create postgres-passwd -

What about containers that don’t support secrets?

Remember what I said about the postgres container using environment variables ending in _FILE? It uses a custom docker-entrypoint.sh script to do that. Specifically, it uses the file_env function to set the specific environment variables it’s looking for. To add support for secrets to a container, create your own docker-entrypoint.sh script with that file_env function and call it to set the environment variables you want. The last line of your entrypoint script should be exec "$@" so you can run whatever was originally the entrypoint as command parameters.

#!/bin/bash
# docker-entrypoint.sh

set -e

# usage: file_env VAR [DEFAULT]
#    ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
#  "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
	local var="$1"
	local fileVar="${var}_FILE"
	local def="${2:-}"
	if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
		echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
		exit 1
	fi
	local val="$def"
	if [ "${!var:-}" ]; then
		val="${!var}"
	elif [ "${!fileVar:-}" ]; then
		val="$(< "${!fileVar}")"
	fi
	export "$var"="$val"
	unset "$fileVar"
}

file_env 'POSTGRES_PASSWORD'

exec "$@"

In your docker-compose service configuration, mount the script and set the entrypoint and command values. This example uses the env command so we can see our variables were set.

# docker-compose.yml
version: "3.9"
services:
  test:
    image: debian
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh
    entrypoint: /docker-entrypoint.sh
    command: env
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd
secrets:
  postgres-passwd:
    file: ./secrets/postgres-passwd

Now deploy your service and see if it works!

docker-compose up
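
If you created the secret with docker secret create instead of pointing at a file, you’ll need to deploy to a swarm (and mark the secret external: true in the compose file); a rough sketch, where the stack name secrets-demo is just an example:

docker swarm init
docker stack deploy --compose-file docker-compose.yml secrets-demo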

How To Set Up Minikube Behind A VPN And Proxy On A Mac

I’ve been fighting with my local Minikube environment behind a VPN for months now. I resigned myself to using a remote server that doesn’t use the VPN, but decided to revisit the local environment setup recently and finally figured it out!

What I Think The Problem Was

I’ve always used the Virtualbox driver for Minikube. It works fine behind a proxy by just setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables. I think what messes up a lot of people is they forget to add all the Minikube-related IPs to their NO_PROXY variables. A cool thing about Kubernetes is it respects CIDR notation in your NO_PROXY. I even wrote a script to set up all the proxy variables.
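
The linked script isn’t reproduced here, but a minimal sketch of the idea looks something like this (the CIDR ranges are assumptions covering the default Virtualbox and HyperKit subnets plus the Kubernetes service network; adjust for your environment):

# proxy.sh: source it, e.g. `. proxy.sh http://proxy.example.com:8080`
export HTTP_PROXY="$1" HTTPS_PROXY="$1" http_proxy="$1" https_proxy="$1"
export NO_PROXY="127.0.0.1,localhost,192.168.99.0/24,192.168.64.0/24,10.96.0.0/12"
export no_proxy="$NO_PROXY"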

But for some reason, Virtualbox doesn’t like my VPN…

$ minikube delete
🔥  Deleting "minikube" in virtualbox ...
💀  Removed all traces of the "minikube" cluster.
$ . proxy.sh http://proxy.example.com:8080
$ minikube start --driver virtualbox
😄  minikube v1.23.2 on Darwin 11.6.1
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ HTTP_PROXY=http://proxy.example.com:8080
    ▪ HTTPS_PROXY=http://proxy.example.com:8080
    ▪ NO_PROXY=127.0.0.1,localhost,10.0.0.8/16,172.16.0.0/12,192.168.0.0/16,.example.com,.internal
    ▪ http_proxy=http://proxy.example.com:8080
    ▪ https_proxy=http://proxy.example.com:8080
    ▪ no_proxy=127.0.0.1,localhost,10.0.0.8/16,172.16.0.0/12,192.168.0.0/16,.example.com,.internal
❌  minikube is unable to connect to the VM: dial tcp 192.168.99.225:22: i/o timeout

        This is likely due to one of two reasons:

        - VPN or firewall interference
        - virtualbox network configuration issue

        Suggested workarounds:

        - Disable your local VPN or firewall software
        - Configure your local VPN or firewall to allow access to 192.168.99.225
        - Restart or reinstall virtualbox
        - Use an alternative --vm-driver
        - Use --force to override this connectivity check


❌  Exiting due to GUEST_PROVISION: Failed to validate network: dial tcp 192.168.99.225:22: i/o timeout

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

After reading the Minikube VPN documentation, I suspected my VPN client was configured to not allow local traffic, but when I tried to configure it, I couldn’t find any option to do so. So after reading the Pulse Secure VPN client documentation, I also suspected my IT department doesn’t allow users to configure the client. I tried the VMware driver with similar results. (The VMware driver seems to work now, and it works well, so maybe use that if you have VMware Fusion.) I broke up with Docker Desktop the first time because it didn’t respect NO_PROXY, and then again after they changed their license, and after all that, I just don’t want to use the Docker driver. So finally I checked out the HyperKit driver and that’s what solved all my problems. All I know about HyperKit is that it’s a Mac hypervisor developed by the Moby project, and it works with my VPN, so let’s get things set up…

Install Minikube and HyperKit

Install Homebrew if you haven’t already. Once you have Homebrew set up, install Minikube and HyperKit.

brew install minikube hyperkit

Create a Minikube Instance

Make sure you set up your proxy variables if required. Use my script if you want. The following command sets up a Minikube instance with a default configuration using HyperKit, but there are a ton of options you can change. See minikube start --help for those.

minikube start --driver hyperkit

There was an error about not being able to connect to k8s.gcr.io, but I just ignored it as it didn’t seem to affect the instance creation. Once the VM was created, I set up my Kubernetes/Docker clients. I wrote a script for that so you can just run . minikube.sh --driver hyperkit to set everything up. When I tried to pull an image, I got another error:

Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxy.example.com on 172.16.129.1:53: read udp 172.16.129.14:44065->172.16.129.1:53: read: connection refused

It took me a while to resolve this, but when a co-worker posted a DNS config he used to get Docker running behind the VPN, I was able to adapt his method to configure Minikube’s DNS to use my host’s DNS server. Maybe Minikube has an option that does this automagically?

minikube ssh sudo resolvectl dns eth0 192.168.0.53
minikube ssh sudo resolvectl domain eth0 example.com
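
If you’re not sure what your host’s DNS server is, on macOS this should list it:

scutil --dns | grep nameserver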

And then I was rocking!

    $ docker image pull timescale/timescaledb-postgis:2.3.0-pg13
    2.3.0-pg13: Pulling from timescale/timescaledb-postgis
    540db60ca938: Pull complete 
    a3cb73039552: Pull complete 
    39855706e49a: Pull complete 
    19d88c3ceadb: Downloading [===============================>                   ]  37.52MB/59.45MB
    9ef572e3c9bb: Download complete 
    261ea2d28080: Download complete 
    1716633ec467: Download complete 
    051e02f33f5f: Download complete 
    79ceec13a19e: Download complete 
    f43ce1b5bcdc: Download complete 
    9e276ca2472d: Download complete 
    3e6c8f42f360: Download complete 
    c9bfa14dc9b6: Download complete 
    101a245191b9: Downloading [===================>                               ]  16.91MB/44.11MB

Hope that works out for you with your VPN, but if not, let me know in the comments.

UPDATE: Things Are Still Broken

I thought my troubles were over, but it seems the VMware and HyperKit drivers want me to use `minikube mount` to mount my host files in the VM so bind mounts will work. The Virtualbox driver seems to do that automagically. I ran into this issue which required specifying the network IP address of the VM.

minikube mount /Users/me:/Users/me --ip 172.16.129.1

Then I ran into an error when running pytest:

OSError: [Errno 526] Unknown error 526

I followed some advice in this issue and increased msize to no avail.

minikube mount /Users/me:/Users/me --ip 172.16.129.1 --msize 524288

So now I’m back to just using a server without a VPN and Virtualbox. :*(

How to install Steam on ChromeOS

Newer Chromebooks are actually pretty good at playing games, and thanks to GPU acceleration and Crostini, a Debian Linux container custom-built for ChromeOS, we can easily run Linux applications on ChromeOS without any special hacks.

You’ll need to enable the Linux environment in ChromeOS Settings. You can change your Linux disk size; 20GB is good enough for me to have a couple of games and still have room for dev tools. You probably won’t be able to make the disk much bigger and still have room for your ChromeOS environment, though.

Once it’s set up, you’ll have to set up a username in the Linux terminal.

Now that you have the Linux environment set up, download steam.deb, the Steam installer for Linux. Once downloaded, open the Files app, right-click on the .deb file, select Open with…, then Install with Linux.
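
Alternatively, if you’ve moved steam.deb into your Linux files folder, you can likely install it from the terminal instead:

sudo apt install ./steam.deb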

It might take a few minutes to install, and then it should appear in your programs under the Linux apps group.

Troubleshooting

When I started Steam the first time, I ran into some issues with missing libraries, but was able to resolve them by installing the necessary packages from the Linux terminal.

sudo dpkg --add-architecture i386
sudo apt update
sudo apt-get install libgl1-mesa-glx:i386

Well, that was pretty easy. Hope it works out for you. If not, I’d be interested to hear about your issues in the comments.

How to set up VNC on Ubuntu on Raspberry Pi

Credit: This is more or less this DigitalOcean post with my own notes thrown in.

I don’t like using VNC on my Pi because of the hefty resource requirements for a desktop environment, but I wanted to get a G Photos token for rclone so I could sync off a bunch of 360 videos eating up all my G Drive space, and I needed a graphical web browser to do that.

1. Install XFCE

This is only required if you don’t already have a desktop environment. You can also use another desktop environment, but XFCE is pretty lightweight.

sudo apt update
sudo apt install xfce4 xfce4-goodies

When you’re prompted to do so, pick the gdm3 or lightdm display manager. I picked lightdm because it uses fewer resources. Here’s an article comparing the display managers.

Side note:

During installation, I got some errors.

Errors were encountered while processing: avahi-daemon libnss-mdns:arm64 avahi-utils

I found this article, but it didn’t help me. So I uninstalled and reinstalled everything, thinking I could select gdm3 and maybe not encounter the same issues, but it didn’t prompt me, and since it worked anyway, I didn’t bother figuring it out. If you know why those errors occurred, please leave a comment.

2. Install and configure TightVNC Server

You could use another VNC server, but TightVNC works for me. Also, there’s a PortableApps version of the client which I can install on a USB stick and use on any Windows machine. Anyways, install the server.

sudo apt update
sudo apt install tightvncserver

Run the server for the first time. This will initialize its configuration.

vncserver

You should be prompted to enter a password and get some output like:

You will require a password to access your desktops.

Password: 
Verify:
Would you like to enter a view-only password (y/n)? n
xauth:  file /home/ubuntu/.Xauthority does not exist

New 'X' desktop is localhost:1

Creating default startup script /home/ubuntu/.vnc/xstartup
Starting applications specified in /home/ubuntu/.vnc/xstartup
Log file is /home/ubuntu/.vnc/localhost:1.log

Notice that it created a startup script ~/.vnc/xstartup. You’ll need to edit that file to get XFCE to work, but shut down the server first.

vncserver -kill :1

Back up the original ~/.vnc/xstartup.

mv ~/.vnc/xstartup ~/.vnc/xstartup.bak

Create a new ~/.vnc/xstartup with the following contents:

#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &

Make it executable.

chmod +x ~/.vnc/xstartup

The VNC server should work now, but VNC traffic is unencrypted. If you care about people being able to snoop on what you’re typing (like passwords!) and whatever else you’re doing on your VNC connection, you should set up an SSH tunnel to encrypt your traffic.

3. Run the server securely

Run the VNC server such that it only accepts local connections.

vncserver -localhost

4. Securely connect to the server

Set up an SSH tunnel from your local machine to the remote server. This is how you’ll make the VNC connection “locally”.

On your local machine (the VNC client), run the following command, replacing <remote user> with your username on the server and <remote host> with the server’s hostname or IP address. You can also change 59001 to a different port if you want, but make sure the port isn’t already being used by another service and that it’s above 1024.

ssh -L 59001:localhost:5901 -C -N -l <remote user> <remote host>

On your VNC client, connect to localhost:59001 (or whatever port you used) and you should see your Pi desktop.

That’s it, but if it doesn’t work for you, I’d be interested to hear about it in the comments.

How to edit text in a string with sed

Say you’ve got a bunch of files to organize from something like mypic20190327.jpg to 2019/03/mypic20190327.jpg. sed is a nifty tool to help you with that. You’ll need to use regex to match each part, then reference each matched part in the result, and you can use extended regex to save yourself some escape characters. Since this approach touches on several features of sed, I wanted to write it down.

for filename in $(ls); do dir=$(sed -r 's/^[^[:digit:]]*([[:digit:]]{4})([[:digit:]]{2}).*/\1\/\2/' <<< $filename); mkdir -p $dir; mv $filename $dir; done

First, we iterate all the files in the directory.

for filename in $(ls)

Then, we calculate the directory (e.g., 2019/03) from the filename by extracting the first 4 digits ([[:digit:]]{4}) into variable 1 and the next two digits ([[:digit:]]{2}) into variable 2. Note the use of parentheses to indicate variable storage.

Note the use of the -r flag in the command which indicates the use of extended regex so we don’t have to escape our parentheses and curly braces. If you’re not using GNU sed, you might use -E or something else instead.

sed -r

Note the leading [^[:digit:]]* that skips the non-digit prefix (mypic) and the .* at the end, so the match consumes the entire filename and the substitution leaves only the year and month parts.

^[^[:digit:]]*([[:digit:]]{4})([[:digit:]]{2}).*

We refer to our stored variables using \1 and \2 in our result string to make a directory path year/month.

\1\/\2

Note the here-string syntax used to indicate sed should use the literal string as input instead of interpreting it as the name of the file from which to read and match data.

<<< $filename

Now that we have our directory name, we make it with mkdir using the -p flag to make any intermediary directories. And finally, we do the move.
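
If you want to sanity-check the regex before moving anything, run just the sed part on a sample name:

$ sed -r 's/^[^[:digit:]]*([[:digit:]]{4})([[:digit:]]{2}).*/\1\/\2/' <<< mypic20190327.jpg
2019/03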

How to completely dim brightness in XFCE

At least for me, the GUI brightness controls only allow the brightness to be dimmed so far.

$ cat /sys/class/backlight/intel_backlight/brightness 
618

By editing that file, you can bring it down further.

echo "1" | sudo tee /sys/class/backlight/intel_backlight/brightness

For me, the value 0 works fine as well, but it doesn’t appear to be any different than 1, and I get this weird feeling that some XFCE update is going to decide to black out the screen on 0 someday, so I just play it safe with 1.

Signing your git commits

Install GPG.

sudo apt-get update && sudo apt-get install gnupg -y

Generate a GPG key.

gpg --full-generate-key

The default prompts are probably sufficient, but going for more security (like a 4096-bit key) probably won’t hurt. After you get through creating the key, it’ll automatically be imported into your trusted key list.

Next, get your GPG key fingerprint.

gpg --fingerprint YOUR_GPG_KEY_NAME

Next, go into your Git project directory and configure it with your name, email, and GPG signing key.

git config user.name YOUR_NAME
git config user.email YOUR_EMAIL
git config user.signingkey YOUR_GPG_KEY_FINGERPRINT

You can sign your commits by adding the -S flag.

git commit -S
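
You can check that the signature took with:

git log --show-signature -1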

Or you can configure your project to sign your commits by default:

git config commit.gpgsign true

You can also add the --global flag to your git config commands to set the configuration globally instead of for just one project.

git config --global commit.gpgsign true


My $99 development Chromebook

This is inspired by Kenneth White’s $169 development Chromebook. Mine is mostly just some updates for 2019.

Last Black Friday, Wal-Mart had a Samsung Chromebook 3 for $99. It’s powered by a dual-core Intel Celeron with 4GB RAM, 16GB SSD, Bluetooth, 802.11n, SD card slot, two USB ports, webcam, and 10+ hours of battery runtime. I thought it would only be good for doing browser-based activities in CoderDojo, but it’s grown on me as a tablet replacement. Now, I’m hoping to even replace my notebook.

Following White’s advice, I made a Google account just for the device, but ultimately decided to go back to using my personal account because I wanted access to my Google Play purchases. But like White said, I can always Powerwash and use the device account when I travel to potentially hostile environments.

I changed my authentication to be through a hardware key (and Google Authenticator as a backup) which turns out to be very easy to use.

I’ve more or less gone with White’s guide in my build. In addition to his goals, I added a few:

  • Docker: I’m looking into this, but it looks like I might need a 32-bit x86 version to run under Termux.
  • ADB: I’m not even close to getting this working yet.

Some things to note:

  • It takes some getting used to Termux’s long-press on the touchpad to copy/paste. There are no CTRL+C/V options AFAIK.
  • If you store data on the SD card, remember that it’s not encrypted by default.