Then came newer, "better" options that were mostly configured via a GUI.
Now the industry has come full circle and realized that pipelines are automation and automation is code. Ergo, pipelines should be code.
Dwindle was born before that realization had swept its way through the majority of the industry. Most importantly, before the option of a pipeline as code was even a little mature in the Azure DevOps offering.
Now, however, it seems the idea is pretty widely accepted. Again, more importantly, it's accepted - even evangelized - by Azure DevOps. The trip from "that's a good idea" to near feature-parity with the GUI-based pipelines has been dizzyingly fast.
So I had an opportunity to convert my UI-configured pipeline into a code- (YAML-)configured pipeline.
That effort was not without its obstacles. The "get the YAML for this job" button doesn't really work perfectly. Service connections are persnickety. Although: they were kind of difficult to deal with in the UI-configured world, so maybe that's not fair to attribute to switching to YAML.
Most significantly, though, the unification of builds and releases into single pipelines represents a nontrivial (but good) paradigm shift in how Azure DevOps expects developers to shepherd their code out to production.
Previously, I had to introduce some obnoxious kludges into my system that I have replaced with simple, first-class features of the modern pipeline-definition system.
For instance, I used to enter "red alert" work items into a work-tracking system whenever validating a blue environment failed. These red alert work items would prevent any promotions of any environment until they were cleared, which happened automatically at the end of the successful validation of a replacement promotion-candidate. This meant, among other things, that my pipeline was coupled to my work tracking system.
As a result, validations happened in a haphazard way. One promotion could, theoretically, occur even if validation along another stream of deployment/promotion failed later.
Likewise, the way I had to marshal build artifacts was a pain. I used to have to download them in one build only to re-upload them so that the next build could access them from its triggering pipeline. That's a lot of wasted time.
Stages, dependencies, and pipeline artifacts changed all that. Pipeline artifacts allow me to upload an artifact one time and download it wherever I need it. Stages and dependencies allow me to ensure all of the following:
- All tests - unit, server, and client - happen before any deployments to any blue environments.
- All deployments to blue happen before any environment-validation steps.
- All validation happens before any promotion.
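That ordering can be sketched in Azure Pipelines YAML with `stages`, `dependsOn`, and the `publish`/`download` shorthand for pipeline artifacts. The stage names, scripts, and artifact name below are illustrative placeholders, not my actual pipeline:

```yaml
# Hypothetical sketch of the ordering above; names are illustrative.
stages:
- stage: Test
  jobs:
  - job: AllTests
    steps:
    - script: ./run-all-tests.sh          # stand-in for unit, server, and client tests
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: app-build                 # upload the artifact exactly once

- stage: DeployBlue
  dependsOn: Test                         # no blue deployment until all tests pass
  jobs:
  - deployment: DeployBlue
    environment: blue                     # illustrative environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current             # reuse the pipeline artifact; no re-upload
            artifact: app-build
          - script: ./deploy.sh blue

- stage: Validate
  dependsOn: DeployBlue                   # validation only after every blue deploy
  jobs:
  - job: ValidateBlue
    steps:
    - script: ./validate.sh blue

- stage: PromoteGreen
  dependsOn: Validate                     # promotion only after validation succeeds
  jobs:
  - deployment: Promote
    environment: green
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./promote.sh
```

With dependencies expressed this way, a failed validation stage simply blocks the promotion stage downstream - no red-alert work items required.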
The Environments feature makes it easy to track and see at a glance which deployments and promotions have occurred. It also gives you a place to introduce manual checks. For instance, because I'm resource-constrained, I only want deployments to blue environments to happen when I approve. Likewise, I only want to promote to any green environments after I've given the okay.
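The YAML side of this is minimal: a deployment job just names the environment it targets, and the approval itself is configured on that environment in the Azure DevOps UI (under "Approvals and checks"), not in the YAML. A minimal sketch, with a hypothetical environment name:

```yaml
# A deployment job pauses here until the environment's configured
# approval check (set up in the UI, not in YAML) is satisfied.
- stage: DeployBlue
  jobs:
  - deployment: DeployBlueWeb
    environment: blue                 # illustrative; approvals attach to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh blue
```

Because the check lives on the environment rather than in the pipeline definition, every pipeline that targets it inherits the same approval gate.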
The transition has been largely positive. As a single-person operation, it's made my life easier since I completed it.
As I said, though, it came with challenges.
I will be exploring those challenges and their corresponding rewards in near-future blog entries.