Wednesday, March 25, 2020
Yesterday, I wrote about budgetary forces impacting technical decision-making.
Everyone reading that sentence is probably thinking "Well... duh!"
However, I don't mean it in the way a lot of people do.
My argument was that big, padded budgets create a sense of being able to afford bad decisions, allowing technical debt to pile up. I also posited that small, tight budgets might help people see that they need to make good decisions if they want to stay afloat.
Of course, at the end of the blog entry, I noted that this was just one piece of the puzzle. It's not like every big budget results in bad practices. Likewise, not all of the small budgets in the world result in better decision-making.
It should also be noted that this is all anecdotal and based on my own experiences. Really, that should have been noted in yesterday's entry.
Oh well.
As I considered my position, I realized there was a major crack in the argument: Large organizations can't afford to make bad decisions. The large, complicated budgets only make them think they can. As a result, they often defer sustainability-related investments until the technical debt is so bad that they are in crisis mode. Yet, the margin is almost always called, the crisis almost always comes, and the result is almost always bad.
So it is definitely not about being able to afford an unsustainable decision. Actually, just that phrase is inherently self-contradictory. It reduces to either "afford an unaffordable decision" or "sustain unsustainable decisions", both of which are nonsense statements.
Budget size is probably not what's at work, here. It's just a clue pointing to the real driver: understanding.
Consider this: A small organization on a shoestring budget that makes good decisions about sustainable practices only does so because its membership understands they can't really afford to make bad decisions. If proper software design wasn't perceived as something that was going to pay off, an organization wouldn't do it.
For every one of those small teams with healthy technical practices, there are probably dozens we never hear of because they are crushed under the weight of their own poor choices.
Why did they make those poor choices? Do people intentionally undermine their ability to succeed knowing full well that's what they're doing? Not usually. Again, those small organizations that winked out of existence too fast for anyone to notice were undone by their lack of understanding.
Now let's look at the big organizations. They go months or years before the crisis hits them, and then sunk costs often make them drag on for a lot longer after that. Are there really people sitting around in the leadership of those companies, greedily rubbing their hands together and muttering "This project is going to be toast in three to five years! Heh, heh, heh"?
Well. Maybe that happens every once in a while.
Most of the time, though, it's probably just that the decision-makers simply don't understand the long-term ramifications of the decisions they're making. They don't understand that what they're doing is going to create a debt that cannot possibly be paid once no more forbearances can be obtained.
Furthermore, you will occasionally find very large firms that really do all the things they are supposed to do - keep the test suites comprehensive and meaningful, regularly reconcile design with requirements, properly manage handoffs, continuously improve their processes, et cetera. From what I can tell, it seems like this is often a result of a handful of higher-up leaders who have a deep understanding of what works in software development.
So all four of the main cases seem, to me, to be dependent on understanding that there is a problem. Large budgets just muddy the waters for decision-makers who don't already have a deep enough understanding of how software development works.
To fix the problem, leaders need to know that pressuring developers to defer writing tests or refactoring to a proper design (among other things) will be the seeds of their undoing.
Why don't they know that, already?
I think that question brings us to a real candidate for a root cause.
So many organizations - especially large organizations - claim to be "data-driven". They need numbers to make their decisions. Not only do they need numbers, but they need numbers fast. It seems like a lot of leaders want to see the results of making a change in weeks or months.
Therein lies the problem.
For large organizations, the consequences of poor technical practices take months or years to produce intolerable conditions. Why should damage that took years to manifest be reversible in a matter of weeks or months? It's not possible.
So long as the way we measure progress, success, failure, and improvement in software development is tied to such incredibly short windows in time, those metrics will always undermine our long-term success. Those metrics will always create the impression that the less-expensive decision is actually the more expensive one and that you can squeeze a little more work out of a software development team by demolishing the very foundations of its developers' productivity.
Not all data are created equal. Data-driven is a fine way to live so long as the data doing the driving are meaningful.
We need better metrics in software development or, at the very least, we need to abolish the use of the counterproductive ones.
Tuesday, March 24, 2020
Why is it hard to make sustainable practices stick?
I've been thinking about this problem for a while.
It's hard to make sustainable software development practices stick. The larger an organization is, the harder it seems to be.
Why?
There are many possible answers and probably all of them have at least a little validity.
- It's way easier to learn a computer language than it is to learn how to properly attend to design.
- Refactoring (true refactoring) is hard to learn, too.
- TDD requires significant effort to figure out how to apply meaningfully.
- BDD requires organizational changes to make its biggest impacts.
- Larger organizations have politics.
- Larger organizations have communications and handoffs.
- Larger organizations often have deadlines that aren't linked with capacity or value.
All of those are at least somewhat true, some of the time, but none of them smells very much like a root cause. Maybe there isn't a single cause and that's why it's hard to pin one down. In fact, that's probably true, but I still feel like the above list (and its ilk) is a list of symptoms of some other problem, not a list of first-class problems.
Yesterday's blog entry helped me make a little progress in the form of a new hypothesis we can at least invalidate.
As I said, I automate everything because I know I can't afford to do otherwise. In fact, that's the reason why I apply modern software design principles. It's why I refactor. It's why I have a test-driven mindset and why I write tests first whenever I can. I'm on a shoestring budget and I know I can't spare anything on wasted effort, so I work as efficiently as I can.
What if the reason why larger organizations ostensibly tend to struggle with code quality, design patterns, refactoring, TDD, agile workflow, and lean product definition is as simple as the inverse statement? I know I don't have the budget to work inefficiently, so I work efficiently. Larger organizations have the budget to work inefficiently, so they don't work efficiently?
It sounds crazy. At least, it would have to my younger self.
"People are working in an unsustainable way just because they think they can afford it? What?!?" That's what young Max would have said. Today Max really only has a resigned shrug to offer in dissent.
So, because Old Max put up such feeble resistance, let's explore the idea a little more.
A small organization is usually comprised of* a tight-knit group of individuals. While they may not all be experts in every area of work, the impact of any person's decisions can be plainly seen in day-to-day activities. This means that the costs of bad decisions are not abstracted debts to be paid someday. They are real problems being experienced at the moment.
Pair that with the tight budget that smaller companies usually have, and you get a recipe for action: the plainness of the problem helps people know what should be done and the necessities of a small budget provide the impetus to do it.
Contrast that with a large organization.
In a large organization, consequences are often far removed from actions. If you create a problem, it may be weeks, months, or even years before you have to pay the cost. That's if it doesn't become someone else's problem altogether, first. Fixing a systemic problem such as, say, not being test-driven gets imagined as a high-minded "nice to have" that probably won't really work because you'd need everyone to be on board, which will never happen because everyone else feels the same way.
At the same time, pockets are often quite deep in large organizations. While a show may be made of pinching pennies in the form of (for instance) discount soap in the bathrooms, large organizations tend to spend a lot on wasted effort. They are able to do this because they have some array of already-successful and (typically) highly profitable products they can exploit to fund new efforts. Furthermore, in addition to being very large, corporate budgets seem like they are usually very complex. Large costs can sometimes seem smaller than they really are because they are spread across many different line items.
Pair those two together and you get a fatalistic ennui that makes everything seem academic, combined with a budgeting apparatus that consistently says "we've got bigger fish to fry".
I'm pretty sure this is one piece of the puzzle but I think there's more to it. For instance, there are many small organizations with shoestring budgets that still make bad decisions about sustainability. There are also counterexamples in the form of large companies that tend to make decisions that are good, or at least better than those of their competitors.
However, this writing is now quite long. So I'm going to end it, here, and discuss another factor in tomorrow's entry.
*: That is one of the correct usages of "to comprise". Look it up.
Monday, March 23, 2020
My pipeline brings all the builds to the prod
The title is a reference to a song I only know exists because of this clip from Family Guy.
Dwindle is a one-person operation, right now. Eventually, I might pull my wife into it but she doesn't have any time to dedicate to anything new at the moment.
We have a two-year-old, a four-year-old, various other business interests, a house that seems to require constant repairs, and, until the recent panic, a reasonably busy consulting practice.
So time is at a premium.
For now, it's just me and I don't get a full forty hours a week to work on Dwindle. Before I got sick, I was lucky to have twelve. Obviously, last week, my time available to work was right around zero hours.
Still, in those twelve hours per week, I managed to build and maintain an automated pipeline that carries Dwindle all the way from check-in to automated deployment and, ultimately, promotion to production environments.
The pipeline covers everything...
- Building and testing binaries for the core logic of the game.
- Building and testing the backend API.
- Building the clients.
- Acceptance/integration testing the clients.
- Deploying to a "blue" or "staging" environment.
- Validating the blue/staging deployments.
- Promoting from blue to "green" or "production" environments.
- Cleaning up old deployments where applicable.
It manages parallel deployments in different environments:
- Azure Functions & Storage for the backend.
- Google Play for the Android app.
- Kongregate for the browser version.
It keeps everything in sync, ensuring each of the following:
- No blue deployments occur until all tests for every component have passed.
- No deployment validations occur until all blue deployments are completed.
- No release to production begins until all promotion candidates have been validated.
This is no mean feat for a Unity application. It's more work than, say, a web application or even a normal Windows or mobile app. Unity makes every traditional software development task harder - probably because Unity is solving a different problem than traditional app development.
Even an automated build of a Unity game is a lot harder than automated builds of Xamarin or native-coupled apps would be. Acceptance testing your game is hard, too. Everything that isn't making a pretty shape on the screen is much more difficult than it would be with a more traditional tech stack.
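For a rough sense of what an automated Unity build even involves: the pipeline has to launch the editor in batch mode and call a build method defined in an editor script. A minimal sketch of that kind of script follows - the class name, scene list, and output path are placeholders rather than Dwindle's actual configuration.

using UnityEditor;
using UnityEditor.Build.Reporting;

// Invoked by the CI agent with something like:
//   Unity -batchmode -quit -projectPath . -executeMethod CiBuild.BuildAndroid
public static class CiBuild
{
    public static void BuildAndroid()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" },    // placeholder scene
            locationPathName = "Builds/Android/Game.apk",     // placeholder output path
            target = BuildTarget.Android,
            options = BuildOptions.None
        };

        BuildReport report = BuildPipeline.BuildPlayer(options);

        // A non-zero exit code is what lets the pipeline treat a failed build as a failed job.
        if (report.summary.result != BuildResult.Succeeded)
        {
            EditorApplication.Exit(1);
        }
    }
}

Everything downstream - acceptance tests, blue deployments, promotions - hangs off the artifacts a step like that produces.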
I did it anyway. I did it when it was hard and seemed like it would only get harder. I did it when it looked like I had an endless churn of tasks and felt like solving the next problem would just beget two more.
Even when a little voice in the back of my head began to whisper "Hey... maybe it's not so bad to do that part manually," I did it. I pushed past the little voice because I knew it was lying to me.
If I can do it, alone and beset on all sides by two very-high-energy youngsters, you can do it while you're sitting at your desk in your large, reasonably-well-funded corporate office (or home office).
...but that's not very important, is it?
We shouldn't do things just because we can, right?
I need a legitimate reason and I have one. It's not just that I can do it. It's that I absolutely need a completely automated pipeline.
I couldn't possibly afford to build Dwindle into something successful if I was spending all my time manually testing and manually promoting builds. I'm not saying I will make Dwindle a financial success, but my chances would be nil if I was just wasting all my time on those things. Most of my time would go to validating a handful of changes. I wouldn't have any time left over to hypothesize, innovate, or develop new features.
The marginal cost of investing in proper automation is negative. While this may be impossible when talking about manufacturing, it's one of the most basic principles of software development: Investing in things related to quality lowers costs.
So I built a fully automated pipeline with a mix of integration and unit tests for a very simple reason: You spend less to have a fully automated pipeline than you do without one.
...and if I can spend less to have one alone, you certainly can do it with a team.
Friday, March 20, 2020
Parsing SVG documents into useful layout specifications
In a previous post, I discussed using SVG documents to specify layouts in a Unity app. Among the many things I deferred was how to actually parse the SVG into a set of layout constraints. More recently, I started delving into that subject and laid out a list of things to cover:
- How to make the SVG easy to interpret visually.
- How to convert the SVG into a set of rules.
- How to select the right rules given a query string.
- Generating an SVG that explains what went wrong when a test fails.
Both #2 and #3 are pretty simple and they are strongly related, so I'll handle them in this text.
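As a rough illustration of #2: converting the SVG into rules mostly amounts to walking the polyline elements and turning their points into rectangles. Here is a minimal sketch of that idea, assuming every rule is a polyline and inventing a Rule class for the output; it is not necessarily how Dwindle actually does it.

using System;
using System.Linq;
using System.Xml.Linq;

// Sketch only: the Rule class and method names are hypothetical.
public class Rule
{
    public string Id;
    public float X, Y, Width, Height;
}

public static class SvgRuleParser
{
    static readonly XNamespace Svg = "http://www.w3.org/2000/svg";

    public static Rule[] Parse(string path)
    {
        var doc = XDocument.Load(path);

        return doc.Descendants(Svg + "polyline")
            .Select(polyline =>
            {
                // points="80,192 80,1000 1860,1000 ..." - only the first three
                // vertices matter; the extra ones exist for the visual markers.
                var vertices = ((string)polyline.Attribute("points"))
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .Take(3)
                    .Select(pair => pair.Split(','))
                    .Select(xy => new { X = float.Parse(xy[0]), Y = float.Parse(xy[1]) })
                    .ToList();

                return new Rule
                {
                    Id = (string)polyline.Attribute("id"),
                    X = vertices.Min(v => v.X),
                    Y = vertices.Min(v => v.Y),
                    Width = vertices.Max(v => v.X) - vertices.Min(v => v.X),
                    Height = vertices.Max(v => v.Y) - vertices.Min(v => v.Y)
                };
            })
            .ToArray();
    }
}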
Thursday, March 19, 2020
Unit testing versus integration testing: I'm starting to reconsider my position
For a long time, I have rejected integration tests as overly costly and of very little benefit. Friends of mine have a similar argument about unit testing. They say that it does very little and slows you down.
I'm still not sold on the "unit testing slows us down" position. The negative side effects people mention (like impeding refactoring) line up more with testing implementation details than with unit testing, itself.
However, in the modern era, I'm starting to come around on integration testing.
First, I'll lay out my reasoning for why unit testing is better:
- Properly specifying how each individual behavior works allows you to build a foundation of working behaviors.
- Defining behaviors relative to other behaviors (rather than as aggregations of other behaviors) stems proliferation of redundancy in your test code.
- How two or more behaviors are combined in a certain case is, itself, another behavior; see #1 and #2.
I still believe all of that. If I had to choose between only unit tests or only integration tests, I would choose only unit tests.
In addition to providing better feedback on whether or not your code (the code you control) works, they also help shape your code so that defects are harder to write in the first place. By contrast, when something you wrote breaks, an integration test is unlikely to catch it, as it's very hard to create exhaustive coverage with integration tests. Furthermore, even if an integration test does fail, it's not very helpful, diagnostically speaking.
Yet, I don't have to choose. I can have both. As we've seen in previous posts and will continue to see, I can have one scenario be bound as both.
So what is it that integration tests tell us above and beyond unit tests? My unit testing discipline makes sure that I almost never break something of mine without getting quick feedback to that effect. What do integration tests add?
The spark of realization was a recent discovery as to why a feature I wrote wasn't working but the evidence has been mounting for a while, now. It took a lot of data to help me see the answer even though to you it may prove shockingly simple and maybe even obvious.
Over the last year or so, a theme has been emerging...
- A 3rd party layout tool has surprising behavior and makes my layouts go all wacky. So I have to redo all my layouts to avoid triggering its bugs.
- "Ahead of time" code fails to get generated and makes it so I can't save certain settings. I have to write code that exercises certain classes and members explicitly from a very specific perspective in order to get Unity actually compile the classes I'm using.
- A Google plugin breaks deep linking - both a 3rd-party utility and Unity's built-in solution. I have to rewrite one of their Android activities and supplant the one they ship to make deep linking work.
- A "backend as a service" claims that its throttling is at a certain level but it turns out that sometimes it's a little lower. I have to change how frequently I ping to something lower than what they advise in their documentation.
- A testing library is highly coupled to a particular version of .NET and seems to break every time I update.
- Et cetera. The list goes on...and on...and on.
Unit tests are good at catching my errors but what about all the other errors?
When you unit test and do it correctly, you isolate behaviors from one another so that each can be tested independently of the others. This only works because you are doing it on both sides of a boundary and thus can guarantee that the promises made by a contract will be kept by its implementation.
That breaks down when you aren't in control of both sides of the contract. We live in unstable times, it seems: you simply can't count on 3rd-party solutions to keep their promises.
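To make "both sides of a boundary" concrete, here is a tiny, hypothetical example - the interface, the fake, and the class under test are all invented for illustration. A fake only works as a stand-in because the real implementation is also mine and is held to the same contract:

// A contract I own, so its promises are enforceable by my own tests.
public interface IScoreStore
{
    int GetBestScore(string playerId);
}

// A fake for unit tests; trustworthy only because the real store is mine too.
public class FakeScoreStore : IScoreStore
{
    private readonly int best;
    public FakeScoreStore(int best) { this.best = best; }
    public int GetBestScore(string playerId) { return best; }
}

// The behavior under test can now be specified in isolation from any real storage.
public class HighScoreBanner
{
    private readonly IScoreStore store;
    public HighScoreBanner(IScoreStore store) { this.store = store; }

    public bool ShouldCelebrate(string playerId, int newScore)
    {
        return newScore > store.GetBestScore(playerId);
    }
}

When the thing behind that interface is a vendor's plugin or service instead, I can still write the fake, but nothing forces the real thing to behave like it.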
This realization led me to a deeper one. It's about ownership. If you own a product, your customer only cares that it works. They don't care about why it doesn't work.
Telling them "a 3rd-party library doesn't function as promised and, as a result, deep links won't work for Android users. I'm working on it," sounds worse than just saying "Deep links don't work for Android users, I'm working on it." What they (or, at least, I) hear is "Deep links don't work for Android users. Wah, wah, wah! It's not my fault! I don't care about your inconvenience. I only care about my inconvenience. Feel sorry for me."
Even though you don't own 3rd-party code, to your customers, you may as well. You own the solution/product/game in which it is used and you own any failures it generates.
That extends well beyond whether or not the 3rd-party component/service works. It includes whether or not a component's behavior is represented correctly. It includes whether or not you used it appropriately. It even includes the behavior of a component or service changing in a surprising way.
So I finally realize the point of integration testing. It forces you to deal with the instability and fragility of the modern software development ecosystem. It's not about testing your code - that's what unit tests are for - it's about verifying your assumptions pertaining to other people's code and getting an early warning when those assumptions are violated.
Integration testing - whether it's how multiple microservices integrate or how all your components are assembled into an app - is essential. Just make sure you are using it to ask the right questions so you can actually get helpful answers:
"Is this thing I don't control properly functioning as a part of a solution I offer?"
Wednesday, March 18, 2020
The pains of being flexible
I ran a build with some pretty simple refactors as the only changes. I expected it to pass. The only reason the build even ran was that I checked in my changes.
Yet the build didn't pass. It failed with a bizarre error. SpecFlow was failing to generate code-behind files.
Some research made it clear that this was really a function of SpecFlow not working very well with .NET 3.1.200. The surprising thing about that was that I didn't remember switching to 3.1.200.
The fault actually was mine. At least, it was in a circuitous way. It was an artifact of a decision I made while defining my pipeline:
- task: UseDotNet@2
  displayName: 'Use .NET 3.1'
  inputs:
    version: 3.1.x
    installationPath: '$(Agent.ToolsDirectory)\dotnet\3.1'
I intentionally chose to allow updates automatically with that little ".x" and it burned me.
Sure enough, a tiny change "fixed" my broken pipeline:
- task: UseDotNet@2
  displayName: 'Use .NET 3.1'
  inputs:
    version: 3.1.102
    installationPath: '$(Agent.ToolsDirectory)\dotnet\3.1'
I'll definitely leave it that way until I have a reason to change it.
What I don't know, at this time, is if I'll go back to allowing the maintenance version to float, when I finally do make a change.
Is it better to have the stability of a known version or to get the fixes in the latest version without having to do anything?
Right now, my inclination is toward faster updates, still. For an indie game-development shop with a disciplined developer who jumps on problems right away, it's probably better to get the updates in exchange for the occasional quickly-fixed disruption.
That said, if I get burned like this a few more times, I might change my mind.
Tuesday, March 17, 2020
Making an SVG shape specification easy to interpret visually.
In my most recent post, I deferred describing how I parsed an SVG document to another post.
There are multiple subtopics:
- How to make the SVG easy to interpret visually.
- How to convert the SVG into a set of rules.
- How to select the right rules given a query string.
- Generating an SVG that explains what went wrong when a test fails.
I will attempt to address them all in separate posts. Like the rest of the world (at the time of this writing), I'm recovering from illness. So I'll do an easy one, now, and the rest will have to wait.
First up, how to make the SVG easy for a person to understand.
This part is all about SVG, itself. I started out with a single document that looked pretty raw - just some white rectangles on a black background. Over time I accumulated more documents and evolved them into something more palatable.
Those changes mostly involved the use of stylesheets and definitions but also, after much experimentation, I discovered that polyline was the most effective tool for creating the shapes I wanted. I'll explain why in a bit.
First, let's look at a single polyline element:
<polyline id=".inner" points="80,192 80,1000 1860,1000 1860,192 80,192 80,193" />
That's a rectangle with its first two vertices repeated. For the test, I only need the first three points - the rest of the parallelogram is inferred.
However, to create the visual effect of a bounding box with little arrowheads pointing inward at the corners, I needed the extra points. At least, I couldn't figure out how to do it without the extra points.
I could only get the orientation of markers on inner vertices to be correct. Everything else pretty much just looked like a random direction had been chosen. As a result, I needed 4 inner vertices, which means I needed six of them, total (start, inner x 4, end).
The other structures I needed were some defined shapes to use as vertex-markers.
<defs>
<marker id="marker-outer" viewBox="0 0 10 10" refX="5" refY="10"
markerWidth="5" markerHeight="5"
orient="auto">
<path d="M 5 10 L 2 0 L 8 0 z" class="label" />
</marker>
<marker id="marker-inner" viewBox="0 0 10 10" refX="5" refY="0"
markerWidth="5" markerHeight="5"
orient="auto">
<path d="M 5 0 L 2 10 L 8 10 z" class="label" />
</marker>
</defs>
Once I have a rule-definition (my polyline, in this case) and the definition of the marker, I can use a stylesheet to marry the two and create the visual effect.
<style>
*[id='inner'],
*[id$='.inner'],
*[id='outer'],
*[id$='.outer']
{
stroke-width: 5;
stroke: white;
stroke-dasharray: 30 5;
fill:none;
}
*[id='inner'],
*[id$='.inner']
{
marker-mid: url(#marker-inner);
}
*[id='outer'],
*[id$='.outer']
{
marker-mid: url(#marker-outer);
}
/* SNIPPED: Stuff used at runtime for other purposes */
</style>
Finally, to create a frame of reference, I converted a background image to base 64 and embedded it in the SVG document as the first element.
All of those steps create an effect like this:
Thankfully, most of those steps don't need to be repeated.
Sadly, it seems that SVG pushes you in the direction of redundancy. You can externalize your stylesheet but not every renderer will respect it. I couldn't find a way to reliably reuse the markers, either. The background image could be externalized but then I'd be relying on hosting for the specification to render properly.
There's a bunch of copy and paste but it's not on the critical path for the test. It just affects how the test looks to developers. So I tolerate it.
I could write a little generator that runs just before build time but then I wouldn't be able to easily preview my work.
C'est la vie.
At least, this way, I can quickly interpret my specification just by opening it in a browser or editor.