5th Oct 2019
Earlier this year, I decided to remake my website using Gatsby, a static site framework built on React and GraphQL, with a great ecosystem of plugins and plenty of room for extension.
My previous site had been written in Jekyll and hosted on GitHub Pages – which meant there were a few things I needed in place to maintain parity with that platform: hosting, HTTPS and a deploy pipeline.
For hosting, I already had a Linode VPS running Caddy Web Server, which provides automatic HTTPS by default through Let's Encrypt. That only leaves the deploy pipeline to implement, which is what I'll go into here!
Sidenote:
Several fully-managed deployment services are available which can handle all of the functionality covered here and more – often for free.
The goal here was to create, tailor and implement my own solution end-to-end using my chosen tools and VPS. If you care more about the end result than the process, I'd suggest you check Netlify out!
CircleCI builds your project using workflows, which are made up of jobs.
For example, a job might run your tests, lint your source code or deploy. The workflow and the jobs that run within it are defined in CircleCI's config file, and CircleCI spins up a worker in an isolated environment for each job.
CircleCI's config is the configuration file you'll need to create in order to let CircleCI know what to do when building your project. It's just a YAML file – .circleci/config.yml – and YAML supports anchors and aliases, so you can define values once and reference them throughout the config.
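For example, a common pattern is to pull shared job settings into an anchor and merge them into each job. This is just an illustrative sketch – the defaults block, working directory and image tag are placeholders, not necessarily what's in my config:

version: 2

# Define a reusable block once...
defaults: &defaults
  working_directory: ~/repo        # illustrative path
  docker:
    - image: circleci/node:10      # illustrative tag – pick whatever suits your project

jobs:
  install-dependencies:
    <<: *defaults                  # ...and merge it into each job that needs it
    steps:
      - checkout
  build:
    <<: *defaults
    steps:
      - checkout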
To help with debugging, I created a quick Ruby script which will output the compiled configuration if you drop it into your project root:
require 'rubygems'
require 'yaml'
require 'pp'

# Parse the config – YAML anchors and aliases are expanded as part of parsing
config = YAML::load(File.read('./.circleci/config.yml'))

# Pretty-print the resulting structure to see the "compiled" configuration
pp(config)
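If you save it as, say, print_config.rb (the filename is arbitrary) in the project root, running ruby print_config.rb will print the parsed config with any anchors and aliases expanded.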
Add your SSH key to CircleCI, then reference it in the config in your project root using the fingerprint the CircleCI dashboard gives you after uploading – and make sure the corresponding public key has been added to your server.
You can also add any environment variables in the project settings; they'll be available to all workers in your pipeline.
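In the config itself, the key is then referenced by its fingerprint – something along these lines (the value below is just a placeholder):

- add_ssh_keys:
    fingerprints:
      - "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"   # placeholder – copy the real fingerprint from the dashboard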
Tip: Each time a new job runs in Circle, the first step is an automatic Spin up Environment step – which outputs a bunch of information about that job's worker and what's available at that point.
Circle provides some general purpose images for common use cases. I used the circleci/node image for the application build image. I would check the latest versions before starting to see what's available!
For the worker image I created a variant of alpine containing the packages I'd need:
- sudo for root permissions
- ca-certificates for workspace persistence (read more)
- rsync to sync our built application across to the server
- openssh to connect to our remote server
Some CircleCI features require certain packages in order to run if you'd like to use an image as the primary image (run directly by the CI worker) – such as the ca-certificates package above. Find a list of these here.
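Once the image is published somewhere CircleCI can pull from, it's referenced as a job's primary image via the docker key – the name here is just a placeholder for my variant:

docker:
  - image: my-docker-user/alpine-deploy:latest   # placeholder – custom alpine variant with sudo, ca-certificates, rsync and openssh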
I also use koalaman's shellcheck image (koalaman/shellcheck-alpine:latest) to check my deploy.sh and config.sh scripts before running.
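That check is its own small job – roughly like this, where the job name and script paths are assumptions on my part:

lint-scripts:
  docker:
    - image: koalaman/shellcheck-alpine:latest
  steps:
    - checkout
    - run: shellcheck deploy.sh config.sh   # adjust paths to wherever your scripts live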
Here's a gist of my config.yml file. In the following sections, we'll go through an explanation of each job, one-by-one.
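Before digging into each job, here's roughly how the workflow ties them together. This is a trimmed sketch – the test and deploy job names, ordering and branch filter are my assumptions; the gist has the real thing:

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - install-dependencies
      - build:
          requires:
            - install-dependencies
      - test:
          requires:
            - install-dependencies
      - deploy:
          requires:
            - build
            - test
          filters:
            branches:
              only: master   # only deploy from the main branch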
The install and build process is split across two jobs: install-dependencies and build.
The install-dependencies job manages our node_modules dependencies. It relies on Circle's cache mechanism, which can save build artefacts against the project to be restored later.
The cache is keyed as node_modules-{{ checksum "package.json" }}.
By naming our cache with a checksum of package.json, we can ask Circle to restore the cache whenever it sees the same cache key again – and the key stays static so long as package.json doesn't change.
All of this means our packages are only ever built once and just reused on subsequent deploys, saving time in the long run!
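In config terms, the caching looks something like this (the image tag and npm install command are assumptions – the cache key is the one above):

install-dependencies:
  docker:
    - image: circleci/node:10
  steps:
    - checkout
    - restore_cache:
        keys:
          - node_modules-{{ checksum "package.json" }}   # hit whenever package.json is unchanged
    - run: npm install
    - save_cache:
        key: node_modules-{{ checksum "package.json" }}
        paths:
          - node_modules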
The build job begins by restoring the package cache and then runs a single command – npm run build in my case for Gatsby.
It then uses Circle's persist_to_workspace step to save the built files for use later in the workflow – if jobs need to share files, using a workspace is the recommended way to do so.
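The tail end of the build job's steps then looks something like this, assuming Gatsby's default public output directory:

- run: npm run build
- persist_to_workspace:
    root: .
    paths:
      - public   # Gatsby's build output, attached later by the deploy job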
The testing job checks out the code using the Circle Node image, restores the dependency cache and runs the Jest tests.
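As a sketch – the job name and test command are assumptions, npm test just needs to invoke Jest:

test:
  docker:
    - image: circleci/node:10
  steps:
    - checkout
    - restore_cache:
        keys:
          - node_modules-{{ checksum "package.json" }}
    - run: npm test   # runs jest via the package.json test script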
To deploy to the VPS, Circle recommends using rsync in their guide.
Forestry also did a great write-up which helped inform my solution.
The process is as follows:
add_ssh_keys is a Circle-provided step which adds the SSH key to the worker container; the only rule is that the key provided must have an empty passphrase. A fingerprint is used to refer to the key, which is added in the CircleCI settings under Settings > Permissions > SSH Permissions.
run – ssh-keyscan $REMOTE_URL >> ~/.ssh/known_hosts
This step scans the remote machine and adds its host keys to known_hosts – without it, the worker would need manual input to trust the remote machine when connecting.
rsync -va --delete . $REMOTE_USER@$REMOTE_URL:scripts
rsync -va --delete . $REMOTE_USER@$REMOTE_URL:$REMOTE_DIR
Both of these commands copy my deployment script and public directory across to the remote machine using rsync. The -a flag is archive mode, which preselects some commonly used rsync options, and the -v flag adds verbosity – finally, we use --delete to remove any files at the destination that aren't present in the source (the Circle worker's working_directory).
ssh -o StrictHostKeyChecking=no $REMOTE_USER@$REMOTE_URL 'cd scripts && . config.sh && echo "$PASSWORD" | sudo -S sh deploy.sh'
Lastly, we connect to the box, generate config and then deploy. I'd like to improve this part in the future to remove the sudo and $PASSWORD requirements, but this works reliably for now!
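Pulling those steps together, the deploy job ends up looking roughly like this – the image name and key fingerprint are placeholders, and the gist has the full version:

deploy:
  docker:
    - image: my-docker-user/alpine-deploy:latest   # placeholder – the custom alpine variant from earlier
  steps:
    - checkout
    - attach_workspace:
        at: .                                      # brings back the public directory persisted by build
    - add_ssh_keys:
        fingerprints:
          - "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
    - run: ssh-keyscan $REMOTE_URL >> ~/.ssh/known_hosts
    - run: rsync -va --delete . $REMOTE_USER@$REMOTE_URL:scripts
    - run: rsync -va --delete . $REMOTE_USER@$REMOTE_URL:$REMOTE_DIR
    - run: ssh -o StrictHostKeyChecking=no $REMOTE_USER@$REMOTE_URL 'cd scripts && . config.sh && echo "$PASSWORD" | sudo -S sh deploy.sh'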
The create_config.sh script uses the available Circle environment variables to generate another script – config.sh – to be run on the target machine. The resulting script is used to initialise environment variables for the deploy script on the target.
The deploy.sh script is used primarily to move files from the rsync directory across to my webserver's directory using the correct permissions. Finally, it restarts the webserver on the host machine for good measure.
Following the initial implementation, I'd like to move to a containerised approach to deployment – which would involve connecting to a server with Docker installed and pulling an image from my private Docker repository.
Thanks for reading – Let me know if this helped you out on Twitter!