---
title: "Learning Go: Day Eight"
date: 2024-05-08T08:00:00.0Z
excerpt: Getting the project deployed via Gitea actions
---
So that I can do the whole build-in-public thing properly, I always want my code to automatically deploy. I've got Gitea Actions on my Gitea server, so I can use those to build, deploy, and start a Go binary.
## Building and copying a binary
This is the simplest part. I had a decent template from my Eleventy action that I was able to take and turn into the following workflow:
```yaml
name: Build and copy to prod

on:
  push:

jobs:
  build-and-copy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: 1.22
      - name: Build binary
        run: go build -o dist/oopsie
      - name: Install SSH Key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.SSH_KEY }}
          known_hosts: ${{ secrets.SSH_KNOWN_HOSTS }}
      - name: Copy to prod
        run: scp -rp dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:oopsie/
```
This installs Go on the action runner, builds the binary, and then uses SCP to copy the compiled binary onto my server. To facilitate this, I created a user on my VPS, created a new SSH key, and added the public key to the `.ssh/authorized_keys` file. Then I added the private key to the Gitea Actions secrets, along with the `known_hosts` entry for the server, the name of the user I created, and the hostname for my VPS.
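For completeness, the one-off server-side setup looked roughly like this. It's a sketch: the `deploy` username and `<vps-hostname>` are placeholders, not my actual values.

```bash
# On the VPS: create a dedicated deploy user (name is a placeholder)
sudo useradd --create-home --shell /bin/bash deploy

# Generate a keypair for CI to use (no passphrase, so the action can load it)
ssh-keygen -t ed25519 -f deploy_key -N "" -C "gitea-actions"

# Authorise the new public key for the deploy user
sudo mkdir -p /home/deploy/.ssh
cat deploy_key.pub | sudo tee -a /home/deploy/.ssh/authorized_keys
sudo chown -R deploy:deploy /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys

# Capture a known_hosts entry for the SSH_KNOWN_HOSTS secret
ssh-keyscan <vps-hostname>
```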
## Running the software
So now I need to run my compiled software. I can create a systemd service to do this, and then run it as the user I've created. First of all I create a new file, `/etc/systemd/user/oopsie.service`:
```ini
[Unit]
Description=Daemon for the Oopsie service

[Service]
Type=simple
#User=
#Group=
ExecStart=/home/<user>/oopsie/oopsie
Restart=on-failure
StandardOutput=file:%h/log_file

[Install]
WantedBy=default.target
```
Then, as the user I've created, I run:

```bash
systemctl --user daemon-reload
systemctl --user start oopsie.service
```
And can confirm my service is running locally:

```bash
curl -X POST http://localhost:8000
> This was a POST request!
```
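Because the unit file has an `[Install]` section, it can also be enabled so the user manager brings it up automatically, and since `StandardOutput=file:%h/log_file` points at the user's home directory, the output is easy to check. A couple of extra commands, run as the same user:

```bash
# Enable the unit so it starts with the user's systemd instance
systemctl --user enable oopsie.service

# StandardOutput=file:%h/log_file expands %h to the user's home,
# so the service's output ends up here:
tail -n 20 ~/log_file
```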
## Nginx proxy
Next up I need to use Nginx's `proxy_pass` directive to direct any requests to https://oopsie.lewisdale.dev to my running service. Again, this was mostly lifted from an existing template I already had:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name oopsie.lewisdale.dev;

    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name oopsie.lewisdale.dev;

    # Include certificate params
    include snippets/certs/lewisdale.dev;
    ssl_certificate /etc/letsencrypt/live/lewisdale.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/lewisdale.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Woo, now I can actually access my service over the internet!
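A quick sanity check from outside, which should return the same response as the local `curl` did:

```bash
curl -X POST https://oopsie.lewisdale.dev
> This was a POST request!
```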
## Restarting the service
Finally, I can use the `-o` argument with the `RemoteCommand` SSH config option to execute a command on the server, and use that to run `systemctl restart`:
```yaml
      - name: Restart the service
        run: ssh ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }} -o RemoteCommand="systemctl --user restart oopsie.service"
```
## Nope
Ah, that's not quite correct. My first build failed:
```
scp: oopsie//oopsie: Text file busy
```
I can't overwrite a binary while it's being executed. Instead, I have to stop the service, copy the file, and then start the service again:
```yaml
      - name: Stop the service
        run: ssh ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }} -o RemoteCommand="systemctl --user stop oopsie.service"
      - name: Copy to prod
        run: scp -rp dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:oopsie/
      - name: Restart the service
        run: ssh ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }} -o RemoteCommand="systemctl --user start oopsie.service"
```
And that works! It deploys successfully. Ironically, there's a minor bit of downtime while it does, but for now that's really not an issue. You can see the project in progress on its deployed home or on the Git repo.
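If that downtime ever starts to matter, one approach I could try (just a sketch, not what the workflow above does) is to copy the binary under a temporary name and rename it over the old one. A rename swaps the directory entry rather than writing into the running executable, so it avoids the `Text file busy` error, and only the restart itself interrupts service:

```bash
# Sketch: upload under a temporary name, then rename into place.
# <user> and <host> stand in for the real SSH user and VPS hostname.
scp -p dist/oopsie <user>@<host>:oopsie/oopsie.new
ssh <user>@<host> 'mv oopsie/oopsie.new oopsie/oopsie && systemctl --user restart oopsie.service'
```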
## Still no!
I went back to check on my deployed service before this post was scheduled to go out, and noticed I was getting a `502 Bad Gateway` error. My initial thought was that I had two services using port `8000`, but logging in with SSH and running `curl http://localhost:8000` returned nothing. Then I noticed that when I tried to run `systemctl --user status oopsie.service`, I got an error: `Failed to connect to bus: No such file or directory`. So, I turned to Google, and found this Superuser answer.
The TL;DR is that the `systemd` user instance for the user I'm running Oopsie as gets terminated when their session ends, which is why `systemctl --user` can't find the bus. I could run the process as root, but I'm not going to, because this user is one I specifically use for SSH via CI, so I don't want it to have anything close to sudo access. Instead, I can run `sudo loginctl enable-linger <oopsie user>`, which means the user's `systemd` instance should remain even after their session has been terminated. Hopefully.
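For reference, that fix plus a quick way to confirm it took effect (username placeholder as before):

```bash
# Enable lingering so the user's systemd instance survives logout
sudo loginctl enable-linger <oopsie user>

# Verify: this should print Linger=yes
loginctl show-user <oopsie user> --property=Linger
```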
At the time of writing it's been about 10 minutes and the process is still live, so I'm hopeful this is the case. If not, one of my next posts will be about using Docker to run it instead.