How We Used CI/CD to Transform Our Development Process
Continuing our series exploring how we leverage technologies and tools to continuously refine our service offerings, Evoluted's Technical Team Lead John Noel has penned this post on how our Web Development teams use CI/CD to great effect.
When I joined Evoluted’s Development team in early 2019, the nascent continuous integration (CI) setup - charmingly referred to internally as The Pipeline - was one of the first things I really dedicated myself to learning.
CI had been in use at my last job, running on the venerable Jenkins to wrangle capricious and arcane processes inherited through multiple (some now long forgotten) layers of the business. It was red/green in the sense that if it went red, you could expect a polite but firm message in short order. The Evoluted setup, on the other hand, took a developer’s git-branched work and spun up an internally accessible environment for it. Basically, you got a full instance of your project for every branch.
Coming from other agencies, where the testing process was a variant of “works on my machine” and a production deployment meant (S)FTP and hastily run shell commands, this felt like a step change in how a team could develop and test solutions. CI and its partner, continuous delivery (CD), are something I have since fully embraced, and while that functional foundation has remained at Evoluted, we do an awful lot more with it now. But it’s important to understand that it wasn’t always that way.
What is CI/CD?
"No longer could you be a gremlin coding in your own little corner"
Reductively, CI/CD is an automated way of “doing stuff when something happens”, which in almost all cases means “when you do a git push, run one or more scripts”. There is obviously a lot of dressing around that - e.g. circuit breakers when your code doesn’t pass muster - but the core idea is to make your development experience better by automating the fickle parts.
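To give a flavour of what that looks like in practice, here is a deliberately minimal, illustrative GitLab pipeline definition - not our actual configuration, and it assumes a PHP project with phpcs and PHPUnit installed via Composer:

```yaml
# .gitlab-ci.yml - GitLab runs each job's script whenever someone does a git push
checks:
  script:
    - composer install --no-progress   # fetch the project's dependencies
    - vendor/bin/phpcs                 # coding standards check
    - vendor/bin/phpunit               # unit tests
```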
The problems we were trying to solve at the time were quality assurance and standardisation; specifically, improving our in-house testing and standardising our deployment procedure. It’s all well and good to say that your code worked on your machine, but that doesn’t mean it can be meaningfully tested by other developers, or that it can be deployed to our hosting infrastructure without issue.
Standardising how your code travels from your machine to an environment that closely mirrors the target hosting environment removes a whole class of problems - unavailable modules, runtime versioning, database provisioning, cache warming and so on - because all of those steps now have to be scripted.
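To make that concrete, an environment-preparation job might script those steps along these lines - the Symfony-style console commands are purely illustrative, not a prescription:

```yaml
prepare-environment:
  stage: deploy
  image: composer:2    # PHP and Composer in one image; in practice the PHP version is pinned per project
  script:
    # dependencies are locked by composer.lock, so no "unavailable module" surprises
    - composer install --no-dev --prefer-dist --no-progress
    # database provisioning / schema changes
    - bin/console doctrine:migrations:migrate --no-interaction
    # cache warming so the first request doesn't pay the cost
    - bin/console cache:warmup
```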
We use on-premises, self-hosted GitLab, which has always had CI/CD as first-class functionality; at the time, the free offerings from companies like CircleCI or TravisCI weren’t available and GitHub Actions hadn’t been released. Setting up Jenkins would have been possible, but the appeal of having the functionality built in was immense.
The same was true of merge requests (pull requests in GitHub parlance), which we started using to vet the code developers were producing - no longer could you be a gremlin coding in your own little corner; now your code was checked for bugs and shared with the wider team, making knowledge silos more difficult to sustain.
The connective tissue between CI/CD and the merge requests was Deployer. We had used task runners before, with Capistrano and Rocketeer, but as Deployer is written in PHP - our in-house language of choice - it didn’t take long to craft a handful of custom “recipes” to handle our suite of varied projects and push code onto our staging server, complete with a branch-specific URL.
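A Deployer recipe is just PHP, which is part of the appeal. The sketch below is a heavily simplified illustration of the branch-per-environment idea rather than one of our real recipes - the host, paths and repository are invented, and the branch handling leans on GitLab’s built-in CI variables:

```php
<?php
// deploy.php - a simplified, illustrative Deployer recipe (not one of our real ones)
namespace Deployer;

require 'recipe/common.php';

set('application', 'example-project');
set('repository', 'git@gitlab.example.com:projects/example-project.git');

// Check out whatever branch the pipeline was triggered for...
set('branch', getenv('CI_COMMIT_REF_NAME') ?: 'main');
// ...and give each branch its own directory (and therefore its own URL) on staging
set('branch_slug', getenv('CI_COMMIT_REF_SLUG') ?: 'main');
set('deploy_path', '/var/www/{{application}}/{{branch_slug}}');

host('staging.example.com');

// Project-specific build steps hook into the common deploy flow
task('deploy:build', function () {
    run('cd {{release_path}} && composer install --no-dev --prefer-dist');
});
after('deploy:update_code', 'deploy:build');
after('deploy:failed', 'deploy:unlock');
```

From the pipeline’s point of view this boils down to a job that runs Deployer, after which every branch has its own instance on the staging box.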
This means that when a merge request is opened, there is already an environment spun up and waiting to be tested. From there it’s a short hop to deploying to a UAT server once the merge request has passed, ready for client approval. At the time we still used Deployer for live deployments, but manually triggered from a developer’s machine; not really in the spirit of CI/CD, but tantalisingly close to it.
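GitLab’s environments feature is what ties this together: the deploy job registers the branch-specific URL against the merge request, and a later, manually triggered job can push the approved work on to UAT. A rough sketch, with invented domains and job names:

```yaml
deploy-review:
  stage: deploy
  script:
    - vendor/bin/dep deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.staging.example.com   # branch-specific URL, one click from the MR
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'

deploy-uat:
  stage: deploy
  script:
    - vendor/bin/dep deploy uat
  environment:
    name: uat
    url: https://uat.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual          # pushed to UAT once the work has been approved
```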
It was a difficult transition, both for me, joining shortly after The Pipeline began to be widely used, and for the developers who now had to change their entrenched workflows. But it worked. Our bug rate in live systems dropped, clients and project managers could now test and give feedback on upcoming work before it made its way to live, and our developers learned and collaborated in new ways. It wasn’t perfect - some days it felt like the stars had to align just for The Pipeline to work, and sometimes, regardless of the number of eyes on something, a bug still snuck through.
That was then though. Now? Now CI/CD is an integral part of our development process.
Our coding standards, which before were only encouraged, are now demanded: your environment is never created and your MR can’t be opened if you don’t follow them - and a standard is only worth something when it can be enforced. Test suites are run too, and again, nothing progresses until those tests pass; and they are more than just unit tests - our integration tests can spin up infrastructure to make sure our code works in situ and continues to do so.
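In pipeline terms that enforcement is nothing exotic: the standards and test jobs sit in early stages, and nothing downstream runs until they pass. A rough, illustrative sketch - the tool choices and service versions are examples, not a prescription:

```yaml
stages:
  - standards
  - test
  - deploy

default:
  before_script:
    - composer install --no-progress   # every job gets its dependencies first

coding-standards:
  stage: standards
  script:
    - vendor/bin/phpcs                 # nothing downstream runs until the standards pass

unit-tests:
  stage: test
  script:
    - vendor/bin/phpunit --testsuite unit

integration-tests:
  stage: test
  services:
    - mysql:8.0                        # spin up real infrastructure for in-situ tests
    - redis:7
  variables:
    MYSQL_DATABASE: app_test
    MYSQL_ROOT_PASSWORD: secret
  script:
    - vendor/bin/phpunit --testsuite integration
```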
Many production deployments now run through The Pipeline as well, giving us the added bonuses of better security - fewer people need production credentials - and of new starters being able to get their first task live without laborious tooling setup. Post-deployment tasks are also becoming common, with performance tracking (i.e. which commit made the homepage slow) and accessibility audits using Lighthouse being rolled into our standard project packages.
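A production deployment and its post-deployment checks can then live in the same pipeline definition. Again this is an illustrative sketch - the manual trigger, domains and Lighthouse invocation are examples rather than our exact setup:

```yaml
stages: [deploy, audit]

deploy-production:
  stage: deploy
  script:
    - vendor/bin/dep deploy production
  environment:
    name: production
    url: https://www.example.com
  rules:
    # only from the default branch, and only when a human presses the button
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual

lighthouse-audit:
  stage: audit
  needs: [deploy-production]
  image: node:20                       # in practice an image with headless Chrome is needed
  script:
    - npm install -g lighthouse
    - lighthouse https://www.example.com --chrome-flags="--headless" --output json --output-path lighthouse.json
  artifacts:
    paths:
      - lighthouse.json                # keep the report so we can see which commit made things slow
```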
This shift has also opened up a wealth of internal tooling to not only support but enhance The Pipeline’s functionality. Whereas initially our staging and UAT servers had cobbled-together nginx configurations, there is now a virtual host REST API to set up the correct configuration and versions, allowing us to upgrade or even wholesale move environments with ease.
Database provisioning has its own API too, meaning sites can define the type and version of the database they need, along with any seeding or anonymisation that needs to take place, while remaining secure through credential rotation. With all these moving parts, visibility became a priority, so our project-specific Slack channels now keep track of everything going on, all in one place.
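To give a flavour of how those internal services slot into a pipeline - and to be clear, the endpoints, field names and tokens below are entirely invented for illustration, not our real APIs - a provisioning job might look something like this:

```yaml
provision-environment:
  script:
    # hypothetical internal virtual-host API: create an nginx vhost for this branch
    - >
      curl --fail -X POST "https://vhost-api.internal.example.com/vhosts"
      -H "Authorization: Bearer $VHOST_API_TOKEN"
      --data "name=$CI_COMMIT_REF_SLUG.staging.example.com&php=8.2"
    # hypothetical database provisioning API: request a seeded, anonymised MySQL database
    - >
      curl --fail -X POST "https://db-api.internal.example.com/databases"
      -H "Authorization: Bearer $DB_API_TOKEN"
      --data "engine=mysql&version=8.0&seed=anonymised&project=$CI_PROJECT_PATH_SLUG"
```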
And we’re not done yet. While The Pipeline has been a boon to almost every aspect of development, we know it can be better, more robust, and easier to set up. As we hire more developers across all skill levels, we consistently get feedback that CI/CD can be difficult to parse and frustrating to set up, especially given the diversity of projects we regularly work on. When it works, there is a Fantasia-like feel of your chores magically being done for you. When it doesn’t, you can feel like taking an axe to the whole thing.
For us at least, CI/CD has been transformative and lets us produce better and more robust solutions for our clients.
To learn more about the services delivered by our Development team, see our "Build" pages.