Friday, April 3, 2020

Here's a little time-saver for managing NuGet packages in your solution

This entry assumes you use Microsoft Visual Studio. If you don't use that, please bookmark this entry so you can revisit it when you've come to your senses.

I tend to my NuGet dependencies a lot, and my preferred view for doing so is the "Manage NuGet Packages for Solution" window. This can be accessed from the context menu (below), the Tools menu (further below), or the search bar (third below).

The first option is pretty quick, except that you have to scroll up in your Solution Explorer, so it's kind of disruptive. You also have to fish through a meaty context menu, but the command sits near the top in a predictable spot. So, with the power of habit, this one can be made reasonably cheap.

The tools menu is about as useful as any menu bar in a large application like Visual Studio (Visual Studio desperately needs a ribbon, like Office has). I never use it.

I've been trying to train myself to type Ctrl+Q and then find the option but, for some reason, it just won't take. Even with my trick for retraining muscle memory, I just couldn't make myself do it the new way.

The reason I couldn't make myself adopt the new way is that the new way and the old way both suck. The right way to do this is to have a button I can push without any preceding navigation steps. Like a custom toolbar.

I'm assuming you know how to make a custom toolbar in Visual Studio. Actually, I'm just assuming you are smart enough to figure it out because it's super easy.

What wasn't easy was finding the damn command, because it has a nonsense name in the toolbar editor dialog. That's not really the fault of the dialog - a lot of commands show up correctly. It's something about the way the NuGet commands are configured.

Here is the name: cmdidAddPackagesForSolution




It shows up correctly in the toolbar, just not in that editor dialog.

If you ever need to find a command that is improperly named, the trick is to pretend to edit the menu first.

Go into the Customize dialog for the toolbar you are trying to edit and then switch to editing the menu group where the command is already located.




When you've done that, you'll be able to see what the command's name is.





Thursday, April 2, 2020

Recent improvements to my NuGet packages

I've improved my NuGet packages for sharing feature files and constraining test-generation.

As of 2020.4.2.5, the following changes have been made:

  • Feature files are imported to a hidden folder under your base intermediate output path (usually obj\). This helps make it clearer that the imported feature files are build artifacts, not maintainable source code.
  • Control tags for other test libraries can be stripped out of the test-generation process. This will help prevent miscategorization.
  • Shared scenario templates/outlines, which were being broken by the incredibly offensive importance of this class, now work properly.

Fun.

Sunday, March 29, 2020

The road to pipeline-as-code on Azure DevOps

I've been setting up and maintaining continuous integration builds for a long time. Back when I started, pipeline as code was the only option.

Then newer, "better" options that were mostly configured via a GUI were offered.

Now the industry has come full circle and realized that pipelines are automation and automation is code. Ergo, pipelines should be code.

Dwindle was born before that realization had swept its way through the majority of the industry and, most importantly, before the pipeline-as-code option was even a little mature in the Azure DevOps offering.

Now, however, it seems the idea is pretty widely accepted. Again, more importantly, it's accepted - even evangelized - by Azure DevOps. The trip from "that's a good idea" to near feature-parity with the GUI-based pipelines has been dizzyingly fast.

So I had an opportunity to convert my UI-configured pipeline into a code-configured (YAML) pipeline.

That effort was not without its obstacles. The "get the YAML for this job" button doesn't really work perfectly. Service connections are persnickety. Although: they were kind of difficult to deal with in the UI-configured world, so maybe that's not fair to attribute to switching to YAML.

Most significantly, though, the unification of builds and releases into single pipelines represents a nontrivial (but good) paradigm shift in how Azure DevOps expects developers to shepherd their code out to production.

Previously, I had to introduce some obnoxious kludges into my system that I have replaced with simple, first-class features of the modern pipeline-definition system.

For instance, I used to enter "red alert" work items into a work-tracking system whenever validating a blue environment failed. These red alert work items would prevent any promotions of any environment until they were cleared, which happened automatically at the end of the successful validation of a replacement promotion-candidate. This meant, among other things, that my pipeline was coupled to my work tracking system.

As a result, validations were enforced in a haphazard way: one promotion could, theoretically, slip through even though validation along another stream of deployment/promotion was about to fail.

Likewise, the way I had to marshal build artifacts was a pain. I used to have to download them in one build only to re-upload them so that the next build could access them from its triggering pipeline. That's a lot of wasted time.

Stages, dependencies, and pipeline artifacts changed all that. Pipeline artifacts allow me to upload an artifact one time and download it wherever I need it. Stages and dependencies allow me to ensure all of the following (there's a rough YAML sketch of this just after the list):

  • All tests - unit, server, and client - happen before any deployments to any blue environments.
  • All deployments to blue happen before any environment-validation steps.
  • All validation happens before any promotion.
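
To make that concrete, here's a rough sketch of how those guarantees can be expressed in Azure Pipelines YAML. The stage, job, and artifact names are made up for illustration; the real pipeline is more involved. The important parts are the dependsOn lines, which enforce the ordering, and the publish/download steps, which move a single pipeline artifact between stages instead of re-uploading it.

stages:
  - stage: Test
    jobs:
      - job: BuildAndTest
        steps:
          - script: dotnet test
            displayName: Run unit, server, and client tests
          - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)/site
            displayName: Produce the deployable output
          # Publish once; every later stage downloads this same pipeline artifact.
          - publish: $(Build.ArtifactStagingDirectory)/site
            artifact: site

  - stage: DeployBlue
    dependsOn: Test            # no blue deployment until every test passes
    jobs:
      - job: Deploy
        steps:
          - download: current  # reuse the artifact published by the Test stage
            artifact: site
          - script: echo "deploy the site artifact to blue here"

  - stage: ValidateBlue
    dependsOn: DeployBlue      # validation only runs against a fully deployed blue
    jobs:
      - job: Validate
        steps:
          - script: echo "run environment validation here"

  - stage: Promote
    dependsOn: ValidateBlue    # nothing is promoted until all validation passes
    jobs:
      - job: Swap
        steps:
          - script: echo "promote blue to green here"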

The Environments feature makes it easy to track and see at a glance which deployments and promotions have occurred. Environments also give you a place to introduce manual checks. For instance, because I'm resource-constrained, I only want deployments to blue environments to happen when I approve. Likewise, I only want to promote to any green environments after I've given the okay.
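
A deployment job that targets one of those environments looks roughly like the sketch below (the stage, environment, and artifact names are hypothetical). Note that the approval itself never appears in the YAML; it's a check attached to the environment in the Azure DevOps portal, and the stage simply waits for it before running.

- stage: PromoteGreen
  dependsOn: ValidateBlue
  jobs:
    - deployment: Promote
      # Illustrative environment name. Approvals and other checks are
      # configured on the environment itself, not here in the YAML.
      environment: green-production
      strategy:
        runOnce:
          deploy:
            steps:
              - download: current   # grab the pipeline artifact published earlier
                artifact: site
              - script: echo "promote to green here"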

The transition has been largely positive. As a single-person operation, it's made my life easier since I completed it.

As I said, though, it came with challenges.

I will be exploring those challenges and their corresponding rewards in near-future blog entries.

Here are two new NuGet packages for SpecFlow you might enjoy

Hey all. I made my first two packages on nuget.org today. I must say, it's a lot easier than it was the last time I looked into it. No .nuspec file is required. Uploading at the end of an Azure DevOps pipeline is a snap. The NuGet part of the problem is officially painless.
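
For a sense of just how painless: the upload boils down to something like the sketch below, where "nuget-org" is a hypothetical service connection pointing at nuget.org.

steps:
  - script: dotnet pack -c Release -o $(Build.ArtifactStagingDirectory)
    displayName: Pack

  - task: NuGetCommand@2
    displayName: Push to nuget.org
    inputs:
      command: push
      packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
      nuGetFeedType: external
      publishFeedCredentials: 'nuget-org'   # hypothetical service connection name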

If you want to cut to the chase, the packages are here:

  1. HexagonSoftware.SpecFlowPlugins.ImportSharedFeaturesDuringGeneration
  2. HexagonSoftware.SpecFlowPlugins.GenerationFilter

I think it makes sense to explain what they do, what they are for, and how to use them, though.

Import Shared Features During Generation


The former is not really a plugin for SpecFlow so much as it is an extension of the .csproj MSBuild ecosystem. It allows you to designate a reference to a set of feature files. Each such reference points to an external folder (presumably full of feature files) and maps it to a folder inside the project.

This is accomplished by editing the .csproj file and adding a fairly-standard-looking reference to it. Here's an example:

<ItemGroup>
  <SpecFlowFeaturesReference
    Include="..\HexagonSoftware.SpecFlowPlugins.ImportSharedFeaturesDuringGeneration.Import"
    ImportedPath="SharedFeatures" />
</ItemGroup>

That will cause all the feature files under ..\HexagonSoftware.SpecFlowPlugins.ImportSharedFeaturesDuringGeneration.Import (relative to the project root, of course) to be copied to SharedFeatures (again, project-relative) prior to test-generation.

The internal folder (in this case, SharedFeatures) is completely controlled by the reference. Everything in it is destroyed and rebuilt every build. For my own sanity, I add those folders to my .tfignore (or .gitignore, if I'm feeling masochistic).

Unfortunately, at this time, the best way I was able to get it to work was by making a folder under the project root. In the future, I'd like to do better and have the files actually be a part of the project while generation occurs. This has a little to do with how difficult it was to access the internals of the SpecFlow generation task from a plugin and a lot to do with how difficult it is to actually get a reference to the task assembly.

I'll probably crack it eventually.

I'm sure there are many cases I haven't considered but, of course, the ones I have considered are tested.

Generation Filter


This plugin allows you to control which parts of a feature file are actually generated into tests. You do this by specifying sets of tags that include or exclude scenarios or features from generation.

The tag selection is controlled by a JSON file at the root of your project. The JSON file must be named "specflow-test-filter.json".

Its format looks something like this:

{
  "included-tags": [ "@In" ],
  "excluded-tags": [ "@Stripped" ]
}

As it should be, exclude tags always trump include tags.

Why Both?


These two plugins work together nicely. The first one allows me to reuse feature files. The second allows me to generate a subset of the scenarios within a project. As a result, I can create the SpecFlow equivalent of a "materialized view" within my test suite. Each test assembly can be a subset of all the tests I use for Dwindle.

Before, I relied on environment variables to select tests and set the test mode. Now, the place where a feature file is instantiated sets all the context required.

This worked perfectly in my automated gates. Maybe it was even convenient. At the very least it teetered on the edge of convenience but it was a real pain in the ass for my local development environment.

For one thing, I had to fiddle with environment variables if I wanted to switch between unit, client, or API tests. I was able to kludge my way past that, though: I have three different test runners installed in my Visual Studio instance and each one is configured differently.

Another problem - probably a more important one - is that it made it hard for me to push a button and run all the tests. As I migrate over to using these plugins, I'll be able to run all my tests in a single batch.

Before, the only tests that were convenient to run locally were the unit tests. Those were so fast that I could run them any time I wanted. Everything else was a batch of tests ranging from 30 seconds (just long enough to be infuriating) to ten minutes.

When I'm done migrating to this structure, I'll have a choice. I can right-click and run whatever test I want in whatever context I desire. I can select a ten-minute batch and go get some coffee. I can set up a run that takes an hour and go for a swim or walk.

I'll probably circle back on this with an experience report when the migration is done. My automated gates will all have to change (a little). I'm guessing, in the course of the migration, I'm going to need to add a few more features and refine the workflows a little, too.

Maybe it won't help you the way it (already) helps me, but I figured I should build this as NuGet packages for the next person with the same problem.