Getting Up And Running With Jupyter Notebooks In Docker

Jupyter notebooks are handy for data science and other use cases that rely on iterative programming. If you’re already comfortable running Docker, here’s an easy one-liner to get up and running.

docker run --rm -p 10000:8888 -v ${PWD}:/home/jovyan/work jupyter/datascience-notebook:2023-03-09

The first port number (10000) is the host port; change it to expose the notebook on a different port. The volume (-v) flag can be changed from the present directory (${PWD}) to a directory containing the data you’d like to work with. To connect to the notebook, you’ll need to get the connection string from the container’s output. It’ll look like this:

[I 2023-03-13 17:23:37.887 ServerApp] Jupyter Server 2.4.0 is running at:
[I 2023-03-13 17:23:37.887 ServerApp] http://bc7b3ff32303:8888/lab?token=6fd0237c4df26bc1e1af4a7c386c28c8573498850701f200
[I 2023-03-13 17:23:37.887 ServerApp]     http://127.0.0.1:8888/lab?token=6fd0237c4df26bc1e1af4a7c386c28c8573498850701f200

If you’re doing this from a Chromebook, you’ll have to get the Crostini container’s real IP address from the output of ip addr show dev eth0. Note that the port in the logged URL is the container’s internal 8888; from the browser, use 10000 (or whatever host port you exposed). So the URL to paste into your browser will be something like this:

http://100.115.92.199:10000/lab?token=6fd0237c4df26bc1e1af4a7c386c28c8573498850701f200
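On a Chromebook, you can assemble that URL in one go. A quick sketch (grab the token from the log output above and fill in the placeholder yourself):

IP_ADDR=$(ip -4 addr show dev eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo "http://${IP_ADDR}:10000/lab?token=<your-token>"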

Easy peasy!

Bridge OS Kernel Panics On Macbook Pro

I’ve been getting “Your computer was restarted because of a problem” errors like this lately:

panic(cpu 0 caller 0xfffffff01e07db08): macOS watchdog detected
Debugger message: panic
Memory ID: 0x6
OS release type: User
OS version: 20P3045
macOS version: 22D68
Kernel version: Darwin Kernel Version 22.3.0: Thu Jan  5 20:17:53 PST 2023; root:xnu-8792.81.2~1/RELEASE_ARM64_T8010
KernelCache UUID: 66D3D9EAC3796DA60EBA5A1BCB588E81
Kernel UUID: A133ECF3-E2EF-3DEA-8A4C-D21463BB0B0D
Boot session UUID: 2D1ABF32-DA0B-4CF2-B82F-BF811BD8C2B3
iBoot version: iBoot-8419.80.7
secure boot?: YES
roots installed: 0
x86 EFI Boot State: 0x16
x86 System State: 0x0
x86 Power State: 0x0
x86 Shutdown Cause: 0x5
x86 Previous Power Transitions: 0x405060400
PCIeUp link state: 0x89271614
macOS kernel slide: 0x18600000
Paniclog version: 14
Kernel slide:      0x0000000017d2c000
Kernel text base:  0xfffffff01ed30000
mach_absolute_time: 0x7ce0fb8cd1
Epoch Time:        sec       usec
  Boot    : 0x640a4370 0x0003d4f9
  Sleep   : 0x640a7276 0x000d9fc2
  Wake    : 0x640ab429 0x000d7b2c
  Calendar: 0x640add0e 0x00007068

Zone info:
  Zone map: 0xffffffdc07cc0000 - 0xffffffe207cc0000
  . VM    : 0xffffffdc07cc0000 - 0xffffffdcee324000
  . RO    : 0xffffffdcee324000 - 0xffffffdd3aff0000
  . GEN0  : 0xffffffdd3aff0000 - 0xffffffde21654000
  . GEN1  : 0xffffffde21654000 - 0xffffffdf07cb8000
  . GEN2  : 0xffffffdf07cb8000 - 0xffffffdfee320000
  . GEN3  : 0xffffffdfee320000 - 0xffffffe0d4988000
  . DATA  : 0xffffffe0d4988000 - 0xffffffe207cc0000
  Metadata: 0xffffffdc00234000 - 0xffffffdc01a34000
  Bitmaps : 0xffffffdc01a34000 - 0xffffffdc01c68000

TPIDRx_ELy = {1: 0xffffffdf076d5088  0: 0x0000000000000000  0ro: 0x0000000000000000 }
CORE 0 is the one that panicked. Check the full backtrace for details.
CORE 1: PC=0xfffffff01ef4e848, LR=0xfffffff01ef4e848, FP=0xffffffeac9fdbf00
Compressor Info: 0% of compressed pages limit (OK) and 0% of segments limit (OK) with 0 swapfiles and OK swap space
Panicked task 0xffffffde216bd638: 0 pages, 223 threads: pid 0: kernel_task
Panicked thread: 0xffffffdf076d5088, backtrace: 0xffffffec820176c0, tid: 378
		  lr: 0xfffffff01ef1d44c  fp: 0xffffffec82017700
		  lr: 0xfffffff01ef1d25c  fp: 0xffffffec82017770
		  lr: 0xfffffff01f053220  fp: 0xffffffec820177e0
		  lr: 0xfffffff01f0521b8  fp: 0xffffffec820178a0
		  lr: 0xfffffff01eedd5fc  fp: 0xffffffec820178b0
		  lr: 0xfffffff01ef1ccd0  fp: 0xffffffec82017c60
		  lr: 0xfffffff01f5c6c1c  fp: 0xffffffec82017c80
		  lr: 0xfffffff01e07db08  fp: 0xffffffec82017cb0
		  lr: 0xfffffff01e065604  fp: 0xffffffec82017d10
		  lr: 0xfffffff01e06eac8  fp: 0xffffffec82017d60
		  lr: 0xfffffff01e067b58  fp: 0xffffffec82017e00
		  lr: 0xfffffff01e064b94  fp: 0xffffffec82017e70
		  lr: 0xfffffff01de81b40  fp: 0xffffffec82017ea0
		  lr: 0xfffffff01f51205c  fp: 0xffffffec82017ee0
		  lr: 0xfffffff01f5118b4  fp: 0xffffffec82017f20
		  lr: 0xfffffff01eee86c0  fp: 0x0000000000000000

That only gives you part of the story. The next step was to look at the Log Reports in Console for reports like panic-full-2023-03-10-073243.0003.ips.

{"roots_installed":0,"caused_by":"macos","macos_version":"Mac OS X 13.2.1 (22D68)","os_version":"Bridge OS 7.2 (20P3045)","macos_system_state":"running","incident_id":"2D1ABF32-DA0B-4CF2-B82F-BF811BD8C2B3","bridgeos_roots_installed":0,"bug_type":"210","timestamp":"2023-03-10 07:32:43.00 +0000"}
{
  "build" : "Bridge OS 7.2 (20P3045)",
  "product" : "iBridge2,3",
  "socId" : "0x00008012",
  "kernel" : "Darwin Kernel Version 22.3.0: Thu Jan  5 20:17:53 PST 2023; root:xnu-8792.81.2~1\/RELEASE_ARM64_T8010",
  "incident" : "2D1ABF32-DA0B-4CF2-B82F-BF811BD8C2B3",
  "crashReporterKey" : "c0dec0dec0dec0dec0dec0dec0dec0dec0de0001",
  "date" : "2023-03-10 07:32:43.90 +0000",
  "panicString" : <REMOVED FOR BREVITY>,
  "panicFlags" : "0x902",
  "bug_type" : "210",
  "otherString" : "\n** Stackshot Succeeded ** Bytes Traced 37852 (Uncompressed 115568) **\n",
  "macOSPanicFlags" : "0x0",
  "macOSPanicString" : "BAD MAGIC! (flag set in iBoot panic header), no macOS panic log available",
  "memoryStatus": <REMOVED FOR BREVITY>,
  "binaryImages": <REMOVED FOR BREVITY>,
  "processById": <REMOVED FOR BREVITY>

Note that it’s Bridge OS, the OS on the Mac’s coprocessor that drives the Touch Bar, that’s panicking, which points to a Touch Bar issue. I found this SO post suggesting a cron job that restarts the Touch Bar processes every 3 minutes.

*/3 * * * * /usr/bin/pkill "Touch Bar agent" >/dev/null 2>&1; /usr/bin/killall "ControlStrip" >/dev/null 2>&1

Edit your crontab with crontab -e then copy and paste that in. It’s not the cleanest solution, but if it gets the job done, I don’t really care.
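If you’d rather not open an editor, you can append the job non-interactively. A sketch (check crontab -l first so you don’t clobber anything):

( crontab -l 2>/dev/null; echo '*/3 * * * * /usr/bin/pkill "Touch Bar agent" >/dev/null 2>&1; /usr/bin/killall "ControlStrip" >/dev/null 2>&1' ) | crontab -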

I’m testing it now to see if it works…

Organizing A Photo Library With Rclone And Exiftool

I recently found out Google Photos makes it difficult to get metadata out of your photos. Also, they were using up all my cloud storage. So I decided to self-host my photo library. To get the original photos with metadata intact, I had to export them using Google Takeout. I had about 1 TB of Google Photos data, which I had to download in 50 GB chunks. If you’re downloading to a flash drive, there’s a good chance it’s formatted as FAT, which means you’ll need to download them in 4 GB chunks. 4 GB * 250 archives is quite a bit, and Google makes you download each one manually. So yeah, don’t use Google Photos as a photo backup solution. It’s free and the search tools are pretty good, though.

Once I downloaded all the archives, I deleted any duplicates using rclone. I chose to dedupe by hash and automatically delete every file except the first result. The first result may not have the name I want, but I’m going to use exiftool to rename everything later anyway. I redirected output to a log file in case I needed to analyze things later.

rclone dedupe --by-hash --dedupe-mode first --no-console --log-file ../rclone.log .

Next, I organized everything into directories by date using exiftool. If a picture had metadata containing the date it was taken, I used that; if not, I used the time the file was modified. (exiftool applies the last assignment that has a value, which is why the tags below are ordered from least to most preferred.) I redirected output to a log file so I could address things like duplicate file names later.

exiftool -r -d %Y/%m/%d "-directory<filemodifydate" "-directory<createdate" "-directory<datetimeoriginal" -overwrite_original . 1> ../exiftool.log 2>&1
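Afterwards, you can skim the log for anything exiftool couldn’t move, like colliding file names:

grep -iE 'error|warning' ../exiftool.log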

I’m trying to think of some other identifying information to add to the directory names, like the location, activity, or event. Location is pretty easy to get in a parseable format.

exiftool -gpslatitude -gpslongitude -n -json . | jq

Then I just gotta reverse geocode it.
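One way to do that, assuming OpenStreetMap’s Nominatim service (mind its usage policy and rate limits), is a quick curl:

curl -s "https://nominatim.openstreetmap.org/reverse?lat=37.4220&lon=-122.0841&format=json" | jq -r '.display_name'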

I’d also like to do some deduplication using an image similarity detector. Facebook open-sourced a tool that looks like it might work.

How to develop websites with Astro + Docker + ChromeOS

I was reading StackShare’s Top Developer Tools 2022 and noticed Astro was the #1 tool, so I figured I should check it out. I use an Acer Spin 713 Chromebook as my daily driver, and it works pretty well for local dev, but it does have a few quirks, namely that the dev environment runs in its own container (Crostini), so you have to keep track of things like the container IP.

# Get the IP address of the Crostini container
IP_ADDR=$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')

Next, start up a Node container. (If you don’t have Docker installed, follow this blog post.)

# Run the container
docker run -it --rm --entrypoint bash -p 3000:3000 node

Once you’re in the container, initialize a new Astro app.

npm create astro@latest
# say yes to the prompts

Run the app. The -- --host 0.0.0.0 part is important: it makes the dev server listen on all interfaces so it’s reachable from outside the container.

npm run dev -- --host 0.0.0.0

Now jump into your browser, plug in http://${IP_ADDR}:3000, and you should be good to go.
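If nothing loads, a quick sanity check from the Crostini shell tells you whether the dev server is reachable at all:

curl -I "http://${IP_ADDR}:3000"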

So that’ll get you going. Now get to web devving!

Troubleshooting `error creating vxlan interface: operation not supported` on Docker on Ubuntu on Raspberry Pi

I recently did a big update on my home server and noticed my containerized services weren’t starting up. First, I looked into the Portainer service:

docker stack ps portainer --no-trunc

The task errors included error creating vxlan interface: operation not supported. After searching around the internet, I found this advice to install the package that ships the missing kernel modules.

sudo apt-get update && sudo apt-get install linux-modules-extra-raspi

After that, I got a scary warning about how the new kernel wouldn’t be loaded automatically so I should reboot.

sudo reboot now
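After rebooting, you can confirm the vxlan module is actually available before digging further:

sudo modprobe vxlan && lsmod | grep vxlan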

Things still weren’t working after I rebooted, but restarting Docker seemed to kick Portainer into motion.

sudo systemctl restart docker

Re-enabling Change Filament On Ender 3 Pro

I was playing around with a new Ender 3 Pro at the maker space and noticed the Change Filament option was gone. This option was useful because it would eject filament instead of having to pull it out by hand when you wanted to change it. Our other Ender 3 Pro has a mini USB plug, but this one has a micro USB plug.

Left: micro USB = old 8-bit board, Right: mini USB = new 32-bit board

That got me researching the new 32-bit boards for this printer and how they disabled the Change Filament feature. I learned about the Marlin firmware, Marlin Auto Build, and its custom configurations. It’s a bit complicated, but this YouTube video helped a lot by walking through the process. I used the bugfix-2.1.x branch, copied the contents of the Configurations repo’s config/examples/Creality/Ender-3 Pro/CrealityV427 configuration to the Marlin directory, built the firmware, copied it to an SD card, and installed it, but the Change Filament option still wasn’t there! I found this post explaining that ADVANCED_PAUSE_FEATURE and NOZZLE_PARK_FEATURE both have to be enabled. The first was already enabled, but I had to uncomment the second in Configuration.h.
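For reference, the two defines live in different files in the Marlin source (based on the Marlin 2.x layout; exact locations vary by version):

// Configuration.h
#define NOZZLE_PARK_FEATURE

// Configuration_adv.h
#define ADVANCED_PAUSE_FEATURE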

I rebuilt the firmware and Change Filament is still not showing up. Guess I’ll have to keep looking into this…

How to fix `The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY`

You might have run into this error when updating packages. The error is probably because you don’t have the public key corresponding to whatever private key the package was signed with. Hopefully, you can just download the public key from keyserver.ubuntu.com and get on with your day. In my case, it was a Google SDK key 8B57C5C2836F4BEB that was missing. Just use the following command, replacing the key ID with the key you need.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B57C5C2836F4BEB

Now, sudo apt-get update should just work. If it doesn’t, feel free to leave a comment.
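Note that apt-key is deprecated on newer releases. A rough equivalent there (a sketch; the keyring filename is arbitrary) is to fetch the key with gpg and export it into apt’s trusted keyring directory:

gpg --keyserver keyserver.ubuntu.com --recv-keys 8B57C5C2836F4BEB
gpg --export 8B57C5C2836F4BEB | sudo tee /etc/apt/trusted.gpg.d/google-sdk.gpg >/dev/null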

Set up Ubuntu 22.04 Server 64-bit on Virtualbox 6.1

Download the Ubuntu ISO here.

Create a new VM. Here are the specs I use.

  • 2 GB RAM
  • 25 GB HDD, dynamically allocated

Once the VM is created, configure a few more properties.

  • In System, increase CPUs to 2, and enable PAE
  • In Storage, attach the Ubuntu ISO to the optical drive
  • In Network > Advanced, set up a port forwarding rule TCP 2222 to 22 for SSH

Start up the new VM and follow the installation prompts. I ran into errors during installation, which I worked around by disabling the network interface for the duration of the install. Once installed and rebooted, pause the VM and restart it in headless mode.

From your host CLI, copy an SSH key to the VM.

ssh-copy-id -i ~/.ssh/id_ed25519 -p 2222 user@localhost

Don’t have an SSH key?

Generate one.

ssh-keygen -t ed25519

Log in to your VM.

ssh user@localhost -p 2222

Do you need to set up a web proxy?

Configure it in /etc/apt/apt.conf and /etc/environment.

sudo tee -a /etc/apt/apt.conf <<EOF
Acquire::http::Proxy "http://proxy.example.com:8080";
Acquire::https::Proxy "http://proxy.example.com:8080";
EOF
sudo cp /etc/environment /tmp/environment
sudo tee -a /etc/environment <<EOF
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
EOF

Update packages.

sudo apt update && sudo apt upgrade -y

Set up automatic updates

Install unattended-upgrades. This will update your OS automatically.

sudo apt install unattended-upgrades -y
sudo systemctl enable unattended-upgrades

Back up the original configuration, then enable updates, email notifications, removal of unused packages, and automatic reboots.

sudo cp /etc/apt/apt.conf.d/50unattended-upgrades /tmp/50unattended-upgrades
sudo sed -i 's/\/\/"${distro_id}:${distro_codename}-updates";/"${distro_id}:${distro_codename}-updates";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Mail "";/Unattended-Upgrade::Mail "me@example.com";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::MailReport "on-change";/Unattended-Upgrade::MailReport "only-on-error";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";/Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Remove-Unused-Dependencies "true";/Unattended-Upgrade::Remove-Unused-Dependencies "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Automatic-Reboot "false";/Unattended-Upgrade::Automatic-Reboot "true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/\/\/Unattended-Upgrade::Automatic-Reboot-Time "02:00";/Unattended-Upgrade::Automatic-Reboot-Time "02:00";/' /etc/apt/apt.conf.d/50unattended-upgrades

Apply the changes.

sudo systemctl restart unattended-upgrades
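To confirm the configuration took, you can do a dry run (note the binary is unattended-upgrade, singular):

sudo unattended-upgrade --dry-run --debug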

Take a VM snapshot.

Want to install Docker?

How To Add Secrets To Your Docker Containers

This post is inspired by this SO thread.

Note: docker secret create and external secrets require Docker to be running in swarm mode; plain docker-compose can only use file-based secrets like the ones below.

You might have heard that you should use Docker Secrets to manage sensitive data like passwords and API keys shared with containers. For example, you can set the default database admin password with the POSTGRES_PASSWORD environment variable:

# docker-compose.yml
version: "3.9"
services:
  postgresql:
    image: postgres
    environment:
      POSTGRES_PASSWORD: P@ssw0rd!

But now anybody who can read docker-compose.yml can read your password! To avoid this, it is common practice to use secrets to set sensitive environment variables at runtime.

One way to create a secret is in the secrets block of a docker-compose configuration. Then, we can add that secret to our services. By default, secrets will be mounted in the container’s /run/secrets directory. We also need to set the environment variable POSTGRES_PASSWORD_FILE to the path of our secret /run/secrets/postgres-passwd. The postgres container uses environment variables ending in _FILE to set the corresponding sensitive environment variable, e.g. POSTGRES_PASSWORD.

# docker-compose.yml
version: "3.9"
services:
  postgresql:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd
secrets:
  postgres-passwd:
    file: ./secrets/postgres-passwd

That’s it! Now your container deployment will use a POSTGRES_PASSWORD that isn’t in a docker-compose.yml for all to see. The actual secret is in ./secrets/postgres-passwd which can be protected with traditional file permissions. You can even randomly generate secrets instead of using files.

openssl rand -base64 33 | docker secret create postgres-passwd -
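To consume a secret created that way, mark it external in your compose file so Docker looks it up in swarm instead of reading a local file. A sketch:

# docker-compose.yml
version: "3.9"
services:
  postgresql:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd
secrets:
  postgres-passwd:
    external: true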

What about containers that don’t support secrets?

Remember what I said about the postgres container using environment variables ending in _FILE? It uses a custom docker-entrypoint.sh script to do that. Specifically, it uses the file_env function to set the specific environment variables it’s looking for. To add support for secrets to a container, create your own docker-entrypoint.sh script with that file_env function and call it to set the environment variables you want. The last line of your entrypoint script should be exec "$@" so you can run whatever was originally the entrypoint as command parameters.

#!/bin/bash
# docker-entrypoint.sh

set -e

# usage: file_env VAR [DEFAULT]
#    ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
#  "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
	local var="$1"
	local fileVar="${var}_FILE"
	local def="${2:-}"
	if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
		echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
		exit 1
	fi
	local val="$def"
	if [ "${!var:-}" ]; then
		val="${!var}"
	elif [ "${!fileVar:-}" ]; then
		val="$(< "${!fileVar}")"
	fi
	export "$var"="$val"
	unset "$fileVar"
}

file_env 'POSTGRES_PASSWORD'

exec "$@"

In your docker-compose service configuration, mount the script and set the entrypoint and command values. Make sure the script is executable (chmod +x docker-entrypoint.sh), since the bind mount keeps the host file’s permissions. This example uses the env command so we can see our variables were set.

# docker-compose.yml
version: "3.9"
services:
  test:
    image: debian
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh
    entrypoint: /docker-entrypoint.sh
    command: env
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-passwd
    secrets:
      - postgres-passwd
secrets:
  postgres-passwd:
    file: ./secrets/postgres-passwd

Now deploy your service and see if it works!

docker-compose up
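If you created the secret with docker secret create, remember it only exists in swarm, so deploy it as a stack instead (the stack name test is arbitrary):

docker stack deploy -c docker-compose.yml test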

How To Set Up Minikube Behind A VPN And Proxy On A Mac

I’ve been fighting with my local Minikube environment behind a VPN for months now. I resigned myself to using a remote server that doesn’t use the VPN, but decided to revisit the local environment setup recently and finally figured it out!

What I Think The Problem Was

I’ve always used the Virtualbox driver for Minikube. It works fine behind a proxy by just setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables. I think what messes up a lot of people is they forget to add all the Minikube-related IPs to their NO_PROXY variables. A cool thing about Kubernetes is it respects CIDR notation in your NO_PROXY. I even wrote a script to set up all the proxy variables.
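For reference, here’s roughly what the script sets (a sketch; the NO_PROXY values below match the output further down and will differ for your network):

# proxy.sh (usage: . proxy.sh http://proxy.example.com:8080)
export HTTP_PROXY="$1" HTTPS_PROXY="$1"
export NO_PROXY="127.0.0.1,localhost,10.0.0.8/16,172.16.0.0/12,192.168.0.0/16,.example.com,.internal"
export http_proxy="$HTTP_PROXY" https_proxy="$HTTPS_PROXY" no_proxy="$NO_PROXY"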

But for some reason, Virtualbox doesn’t like my VPN…

$ minikube delete
🔥  Deleting "minikube" in virtualbox ...
💀  Removed all traces of the "minikube" cluster.
$ . proxy.sh http://proxy.example.com:8080
$ minikube start --driver virtualbox
😄  minikube v1.23.2 on Darwin 11.6.1
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ HTTP_PROXY=http://proxy.example.com:8080
    ▪ HTTPS_PROXY=http://proxy.example.com:8080
    ▪ NO_PROXY=127.0.0.1,localhost,10.0.0.8/16,172.16.0.0/12,192.168.0.0/16,.example.com,.internal
    ▪ http_proxy=http://proxy.example.com:8080
    ▪ https_proxy=http://proxy.example.com:8080
    ▪ no_proxy=127.0.0.1,localhost,10.0.0.8/16,172.16.0.0/12,192.168.0.0/16,.example.com,.internal
❌  minikube is unable to connect to the VM: dial tcp 192.168.99.225:22: i/o timeout

        This is likely due to one of two reasons:

        - VPN or firewall interference
        - virtualbox network configuration issue

        Suggested workarounds:

        - Disable your local VPN or firewall software
        - Configure your local VPN or firewall to allow access to 192.168.99.225
        - Restart or reinstall virtualbox
        - Use an alternative --vm-driver
        - Use --force to override this connectivity check


❌  Exiting due to GUEST_PROVISION: Failed to validate network: dial tcp 192.168.99.225:22: i/o timeout

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

After reading the Minikube VPN documentation, I suspected my VPN client was configured to not allow local traffic, but when I tried to configure it, I couldn’t find any option to do so. So after reading the Pulse Secure VPN client documentation, I also suspected my IT department doesn’t allow users to configure the client. I tried the VMware driver with similar results. (The VMware driver seems to work now, and it works well, so maybe use that if you have VMware Fusion.) I broke up with Docker Desktop the first time because it didn’t respect NO_PROXY, and then again after they changed their license, and after all that, I just don’t want to use the Docker driver. So finally I checked out the HyperKit driver and that’s what solved all my problems. All I know about HyperKit is that it’s a Mac hypervisor developed by the Moby project, and it works with my VPN, so let’s get things set up…

Install Minikube and HyperKit

Install Homebrew if you haven’t already. Once you have Homebrew set up, install Minikube and HyperKit.

brew install minikube hyperkit

Create a Minikube Instance

Make sure you set up your proxy variables if required. Use my script if you want. The following command sets up a Minikube instance with a default configuration using HyperKit, but there’s a ton of options you can change. See minikube --help for those.

minikube start --driver hyperkit

There was an error about not being able to connect to k8s.gcr.io, but I just ignored it as it didn’t seem to affect the instance creation. Once the VM was created, I set up my Kubernetes/Docker clients. I wrote a script for that so you can just run . minikube.sh --driver hyperkit to set everything up. When I tried to pull an image, I got another error:

Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxy.example.com on 172.16.129.1:53: read udp 172.16.129.14:44065->172.16.129.1:53: read: connection refused

It took me a while to resolve this, but when a co-worker posted a DNS config he used to get Docker running behind the VPN, I was able to adapt his method to configure Minikube’s DNS to use my host’s DNS server. Maybe Minikube has an option that does this automagically?

minikube ssh sudo resolvectl dns eth0 192.168.0.53
minikube ssh sudo resolvectl domain eth0 example.com

And then I was rocking!

    $ docker image pull timescale/timescaledb-postgis:2.3.0-pg13
    2.3.0-pg13: Pulling from timescale/timescaledb-postgis
    540db60ca938: Pull complete 
    a3cb73039552: Pull complete 
    39855706e49a: Pull complete 
    19d88c3ceadb: Downloading [===============================>                   ]  37.52MB/59.45MB
    9ef572e3c9bb: Download complete 
    261ea2d28080: Download complete 
    1716633ec467: Download complete 
    051e02f33f5f: Download complete 
    79ceec13a19e: Download complete 
    f43ce1b5bcdc: Download complete 
    9e276ca2472d: Download complete 
    3e6c8f42f360: Download complete 
    c9bfa14dc9b6: Download complete 
    101a245191b9: Downloading [===================>                               ]  16.91MB/44.11MB

Hope that works out for you with your VPN, but if not, let me know in the comments.

UPDATE: Things Are Still Broken

I thought my troubles were over, but it seems the VMware and HyperKit drivers want me to use `minikube mount` to mount my host files in the VM so bind mounts will work. The Virtualbox driver seems to do that automagically. I ran into this issue, which required specifying the network IP address of the VM.

minikube mount /Users/me:/Users/me --ip 172.16.129.1

Then I ran into an error when running pytest:

OSError: [Errno 526] Unknown error 526

I followed some advice in this issue and increased msize to no avail.

minikube mount /Users/me:/Users/me --ip 172.16.129.1 --msize 524288

So now I’m back to just using a server without a VPN and Virtualbox. :*(