Consistent code infrastructure

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

— Martin Fowler

I’m currently working on a project in which I’m trying to deconstruct a database. In doing so, I’ve come across a number of things about it that, as far as databases go, appall me. Who the hell creates a relational database with no defined primary or foreign key constraints??? And this thing is in a production environment, no less! While that’s a big part of my frustration, that’s another rant for another time. For this article, I want to focus on something else.

A big part of my task — and my frustration — is trying to figure out what the data columns are and how they’re being used in the application. I did come across a table that lists which primary keys are supposed to be defined (I have no idea why whoever built this thing didn’t actually create them), but, as I already mentioned, there are no foreign key relationships defined. So a lot of my time is being spent trying to figure out how these tables relate to each other.

This is where my frustration — and the purpose of this article — kicks in. Whoever built this structure used names like “DataCounter,” “CrossReferenceCounter,” and so on, to define their “primary keys.” (I put that in quotes because, like I said, the keys aren’t actually there. And who uses the word “counter” to name them?) What I’m finding is that the corresponding foreign key column isn’t named exactly the same. For example, while the entity table uses “DataCounter” for its “primary key,” other tables reference it using “DataIDCounter.”
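To make that concrete, here’s a minimal sketch, using hypothetical table names (not the actual schema I’m working with), of what explicitly declared, consistently named keys might look like:

    -- Hypothetical tables for illustration only; the column names borrow
    -- the DataCounter style from the schema described above.
    CREATE TABLE EntityData (
        DataCounter INT NOT NULL,
        CONSTRAINT PK_EntityData PRIMARY KEY (DataCounter)
    );

    CREATE TABLE CrossReference (
        CrossReferenceCounter INT NOT NULL,
        DataCounter           INT NOT NULL, -- same name as the column it references
        CONSTRAINT PK_CrossReference PRIMARY KEY (CrossReferenceCounter),
        CONSTRAINT FK_CrossReference_EntityData
            FOREIGN KEY (DataCounter) REFERENCES EntityData (DataCounter)
    );

When the keys are declared and the referencing column carries the same name on both sides, the relationship documents itself, and the database will even enforce it for you.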

This might not seem like a big deal, but when you’re trying to map a large number of data tables and columns, you start questioning whether the relationships you’ve worked out are even correct. And I came across several other key pairs whose naming conventions are even worse.

Some of you might be saying, “that’s not a big deal. What’s your problem?” Well, try writing an ad hoc query in which you type what you think is the column name, only to find it’s something completely different. You end up wasting your time going back to the schema to look it up and figure out what it actually is.
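To give a hypothetical example (again using made-up table names, not the real schema) of the kind of guesswork this turns into, you write the join you’d expect:

    SELECT e.*
    FROM EntityData e
    JOIN CrossReference x
        ON x.DataCounter = e.DataCounter; -- fails: this table actually calls the column DataIDCounter

only for the query to error out because the referencing table spells the column “DataIDCounter.” Now you’re back to digging through the schema instead of getting your answer.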

I remember a previous job in which I was looking at a piece of JavaScript code that contained two nested loops. You would think that the loop counters would be named something that made sense, right? Wrong. Whoever programmed it named the variables “dog” and “cat.”

Explain to me how that is helpful to somebody trying to troubleshoot or edit the code.

In my previous life as a developer, I would write what I referred to as “open-ended code” — that is, I wrote it with the idea that it would likely be rewritten somewhere down the line. I wanted to make it easy for me (or someone else) to go back and edit or change the code, if necessary. I like to think that other developers have this same mindset, but, all too often, I come across examples like this that tell me otherwise.

If you’re a developer — whether it’s for an application, website, database, network, or whatever — keep your naming conventions and infrastructure consistent and meaningful. You will save another developer or support analyst a great deal of grief, frustration, and time.

Playing in the sandbox is important for documentation

While working on a user guide, I realized that I had administrative rights to the application I was trying to document. That was all well and good, except that I was trying to write a non-admin user guide, and I needed to know how someone who didn’t have admin rights saw the application. Fortunately, one of my co-workers sent me an application URL and a testing user login I could use that simulated exactly what I needed.

That brings me to today’s article. Many application development environments make use of a sandbox or some other development environment where a developer can play to their heart’s content — that is, a place where someone testing or developing an application can experiment without having to worry about breaking anything important. That same kind of testing environment is just as important for a technical writer.

Much of my work as a technical writer involves putting myself in an end user’s shoes. I’ll often go through an interface and document the steps a user might take, what a user might see on the screen, and the effects of certain buttons and links. One of my most frequent questions when I’m documenting an application is, “what happens when I click this?” After I do, hopefully my next response isn’t “oh crap!”

This is why a tech writer needs access to a testing environment. Like application developers, who need a safe environment in which to test, a person documenting a system needs to be able to work through it knowing that (s)he won’t adversely affect the application by playing with it.

I wrote previously that a tech writer can help an application developer, and vice versa. Indeed, the tech writer can function as an in-house QA analyst. In order to write good documentation, the writer needs a realistic environment in which (s)he experiences what an end user might see. Having a sandbox in which a writer can “play” provides exactly that. As an added bonus, not only does this allow the tech writer to produce better documentation, it allows that person to provide feedback about the application, which ultimately results in a better application.

Testing something? What’s the test plan?

Imagine if you will that you’ve been asked to test a product. The product could be anything — software, a car, a kitchen appliance, a piece of sports equipment, whatever. For the purposes of this article, we’ll say you’re working at some company, and you’ve been asked to test a piece of software.

You’re told to go into an application, and you’re given this instruction.

“Okay, test it and see if it works.”

That’s it.

How would you feel? Lost? Confused? Frustrated? Abandoned? All of the above? Something else?

Well, I, myself, have been put into this situation more times than I care to admit. It’s one of the most frustrating job situations I’ve ever been thrust into.

What, exactly, constitutes “see if it works”? I could simply start the application, see if it starts, and say, “okay, it works.” I suspect that that’s not what the people who make the request are looking for. Yet time and again, I get a request from a developer or a designer to test something, and that’s the only instruction I’m given.

And it’s frustrating like you wouldn’t believe.

What’s even more frustrating is when (not if) the application turns out to have some kind of problem, and the people who asked you to test come back with, “you said you tested this! Why is this broken?”

Want to know why there’s so much friction between developers and QA personnel? This is a likely reason. It also definitely falls under my list of documentation pet peeves.

The fact is, if you develop a product, and you need to test it for functionality, you need to specify what it is you’re looking to test. You need to define — and spell out — what separates a “working” product from a “defective” one. Just because a car starts doesn’t mean it’s working. You still need to put it in gear, drive it, steer it, and make sure that it can stop.

If you are creating a product, you need to describe what criteria it must meet to pass testing. If you’re asking someone in quality control to test your product, provide the tester with guidelines as to what should be checked to ensure the product is functional. Better yet, provide him or her with a checklist to determine whether your product can be released. If you discover additional items that need to be tested, update the checklist.
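To make that concrete, a hypothetical checklist for the software scenario above might include items like these: the application launches without errors; a user can log in with valid credentials and is rejected with invalid ones; each core workflow can be completed from start to finish; and any failure produces a meaningful message rather than a cryptic error code. The specifics will differ from product to product, but the tester should never have to guess what “working” means.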

(If you want to know more about checklists, I highly recommend The Checklist Manifesto by Atul Gawande. It’s actually a surprisingly excellent read.)

So any time you release a product for testing, tell the testers what constitutes a “good” product. Don’t just send it off with vague instructions to “make sure it works” and expect it to be tested. More often than not, that will result in a failed product — and defeat the entire purpose of testing it in the first place.

The symbiotic relationship between documentation and application development

One of my current projects involves documenting processes for an application that is still under development. As such, much of what I write may change, depending on how the processes change during the course of development.

At one point, I tested one of the processes so I could determine its functionality and document it. In doing so, I got back an error message I wasn’t expecting, one with no user-friendly information other than a cryptic error code. I contacted one of the developers working on the application and told him what I had found, giving him the error codes I encountered and the steps I took to produce them. He told me, “you’re coming across bugs that we didn’t even know we had.”

It occurred to me that I was doing more than just documenting the application. I was also acting as a beta tester.

One aspect of writing technical documentation is learning about what you’re documenting. In order to write about a process, you need to understand how it works. If you’re documenting an application, the best thing you can do is run the application in a safe environment (such as development or a sandbox), learn how it works, and use it to document steps and capture screens. In doing so, you’ll come across application bugs and may even come up with ideas to make the application better.

I’ve long argued that documentation is critical: it records important information and serves as a reference. Until now, however, it hadn’t occurred to me that the documentation development process could have a symbiotic relationship with application development. To me, this adds further fuel to the argument that documentation is critical and required.