Showing posts with label ci/cd. Show all posts

Sunday, March 29, 2020

The road to pipeline-as-code on Azure DevOps

I've been setting up and maintaining continuous integration builds for a long time. Back when I started, pipeline as code was the only option.

Then newer, "better" options arrived, mostly configured via a GUI.

Now the industry has come full circle and realized that pipelines are automation and automation is code. Ergo, pipelines should be code.

Dwindle was born before that realization had swept its way through the majority of the industry - and, most importantly, before the option of a pipeline as code was even a little mature in the Azure DevOps offering.

Now, however, it seems the idea is pretty widely accepted. Again, more importantly, it's accepted - even evangelized - by Azure DevOps. The trip from "that's a good idea" to near feature-parity with the GUI-based pipelines has been dizzyingly fast.

So I had an opportunity to convert my UI-configured pipeline into a code- (YAML-)configured pipeline.

That effort was not without its obstacles. The "get the YAML for this job" button doesn't really work perfectly. Service connections are persnickety. Although: they were kind of difficult to deal with in the UI-configured world, so maybe that's not fair to attribute to switching to YAML.

Most significantly, though, the unification of builds and releases into single pipelines represents a nontrivial (but good) paradigm shift in how Azure DevOps expects developers to shepherd their code out to production.

Previously, I had to introduce some obnoxious kludges into my system that I have replaced with simple, first-class features of the modern pipeline-definition system.

For instance, I used to enter "red alert" work items into a work-tracking system whenever validating a blue environment failed. These red alert work items would prevent any promotions of any environment until they were cleared, which happened automatically at the end of the successful validation of a replacement promotion-candidate. This meant, among other things, that my pipeline was coupled to my work tracking system.

As a result, validations happened in a haphazard way. One promotion could, theoretically, occur even if validation along another stream of deployment/promotion failed later.

Likewise, the way I had to marshal build artifacts was a pain. I used to have to download them in one build only to re-upload them so that the next build could access them from its triggering pipeline. That's a lot of wasted time.

Stages, dependencies, and pipeline artifacts changed all that. Pipeline artifacts allow me to upload an artifact one time and download it wherever I need it. Stages and dependencies allow me to ensure all of the following:

  • All tests - unit, server, and client - happen before any deployments to any blue environments.
  • All deployments to blue happen before any environment-validation steps.
  • All validation happens before any promotion.
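That ordering can be expressed directly with `dependsOn` between stages and with pipeline-artifact `publish`/`download` steps. Here's a minimal sketch in Azure Pipelines YAML - the stage names, script names, and artifact name are illustrative placeholders, not the actual Dwindle pipeline:

```yaml
stages:
  - stage: Test
    jobs:
      - job: UnitTests
        steps:
          - script: ./run-tests.sh            # placeholder for the real test steps
          - publish: $(Build.ArtifactStagingDirectory)/drop
            artifact: game-binaries            # upload once as a pipeline artifact

  - stage: DeployBlue
    dependsOn: Test                            # no blue deployment until tests pass
    jobs:
      - job: Deploy
        steps:
          - download: current                  # fetch the artifact published above
            artifact: game-binaries
          - script: ./deploy-blue.sh

  - stage: Validate
    dependsOn: DeployBlue                      # no validation until blue deployments finish
    jobs:
      - job: Smoke
        steps:
          - script: ./validate-blue.sh

  - stage: Promote
    dependsOn: Validate                        # no promotion until validation succeeds
    jobs:
      - job: Release
        steps:
          - script: ./promote-green.sh
```

The artifact is published once, in the earliest stage that produces it, and downloaded wherever it's needed - no more re-uploading between builds.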

The Environments feature makes it easy to track and see at a glance which deployments and promotions have occurred. Environments also give you a place to introduce manual checks. For instance, because I'm resource-constrained, I only want deployments to blue environments to happen when I approve. Likewise, I only want to promote to any green environments after I've given the okay.
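Hooking a stage to an environment is just a matter of using a deployment job. A sketch, with a hypothetical environment name - note that the manual-approval check itself is configured on the environment in the Azure DevOps UI, not in the YAML:

```yaml
- stage: DeployBlue
  jobs:
    - deployment: DeployBackend
      environment: dwindle-blue    # approvals and checks attach to this environment
      strategy:
        runOnce:
          deploy:
            steps:
              - script: ./deploy-blue.sh   # placeholder deployment step
```

When a run reaches this job, Azure DevOps pauses it until every check on `dwindle-blue` (such as a required approver) has passed.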

The transition has been largely positive. For a single-person operation like mine, it's made life easier since I completed it.

As I said, though, it came with challenges.

I will be exploring those challenges and their corresponding rewards in near-future blog entries.

Monday, March 23, 2020

My pipeline brings all the builds to the prod

The title is a reference to a song I only know exists because of this clip from Family Guy.

Dwindle is a one-person operation right now. Eventually, I might pull my wife into it, but she doesn't have any time to dedicate to anything new.

We have a two-year-old, a four-year-old, various other business interests, a house that seems to require constant repairs, and, until the recent panic, a reasonably busy consulting practice.

So time is at a premium.

For now, it's just me and I don't get a full forty hours a week to work on Dwindle. Before I got sick, I was lucky to have twelve. Obviously, last week, my time available to work was right around zero hours.

Still, in those twelve hours per week, I managed to build and maintain an automated pipeline that carries Dwindle all the way from check-in to automated deployment and, ultimately, promotion to production environments.

The pipeline covers everything...

  • Building and testing binaries for the core logic of the game.
  • Building and testing the backend API.
  • Building the clients.
  • Acceptance/integration testing the clients.
  • Deploying to a "blue" or "staging" environment.
  • Validating the blue/staging deployments.
  • Promoting from blue to "green" or "production" environments.
  • Cleaning up old deployments where applicable.

It manages parallel deployments in different environments:

  • Azure Functions & Storage for the backend.
  • Google Play for the Android app.
  • Kongregate for the browser version.

It keeps everything in sync, ensuring each of the following:

  • No blue deployments occur until all tests for every component have passed.
  • No deployment validations occur until all blue deployments are completed.
  • No release to production begins until all promotion candidates have been validated.

This is no mean feat for a Unity application. It's more work than, say, a web application or even a normal Windows or mobile app. Unity makes every traditional software development task harder - probably because they are solving a different problem than the traditional app-development problem.

Even an automated build of a Unity game is a lot harder than automated builds of Xamarin or native apps would be. Acceptance testing your game is hard, too. Everything that isn't making a pretty shape on the screen is much more difficult than it would be with a more traditional tech stack.
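To give a flavor of what that looks like: a Unity build on a CI agent has to run the editor headlessly from the command line. A rough sketch as an Azure Pipelines step - `$(UnityPath)` and the `BuildScript.PerformBuild` method are assumptions (you have to write that static build method in your own project), and the exact arguments vary by Unity version:

```yaml
steps:
  - script: >
      "$(UnityPath)/Unity"
      -batchmode -nographics -quit
      -projectPath "$(Build.SourcesDirectory)"
      -buildTarget Android
      -executeMethod BuildScript.PerformBuild
      -logFile "$(Build.ArtifactStagingDirectory)/unity-build.log"
    displayName: Unity batch-mode build
```

That's before you deal with licensing the editor on the agent, caching the Library folder so builds don't take forever, and getting useful failure output out of the log file - none of which a typical .NET build has to think about.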

I did it anyway. I did it when it was hard and seemed like it would only get harder. I did it when it looked like I had an endless churn of tasks and felt like solving the next problem would just beget two more.

Even when a little voice in the back of my head began to whisper "Hey... maybe it's not so bad to do that part manually," I did it. I pushed past the little voice because I knew it was lying to me.

If I can do it, alone and beset on all sides by two very-high-energy youngsters, you can do it while you're sitting at your desk in your large, reasonably-well-funded corporate office (or home office).

...but that's not very important, is it?

We shouldn't do things just because we can, right?

I need a legitimate reason and I have one. It's not just that I can do it. It's that I absolutely need a completely automated pipeline.

I couldn't possibly afford to build Dwindle into something successful if I was spending all my time manually testing and manually promoting builds. I'm not saying I will make Dwindle a financial success, but my chances would be nil if I was just wasting all my time on those things. Most of my time would go to validating a handful of changes. I wouldn't have any time left over to hypothesize, innovate, or develop new features.

The marginal cost of investing in proper automation is negative. While that may sound impossible when talking about, say, manufacturing, it's one of the most basic principles of software development: investing in things related to quality lowers costs.

So I built a fully automated pipeline with a mix of integration and unit tests for a very simple reason: You spend less to have a fully automated pipeline than you do without one.

...and if I can spend less to have one alone, you certainly can do it with a team.