First drafts are ugly

“The secret to life is editing. Write that down. Okay, now cross it out.”

William Safire, 1990 Syracuse University commencement speech

“No thinking – that comes later. You must write your first draft with your heart. You rewrite with your head. The first key to writing is… to write, not to think!”

William Forrester (Sean Connery), Finding Forrester

“Just do it.”

Nike

I will confess that this article is a reminder to myself as much as anything else.

Raise your hand if you’re a writer, and whatever it is you’re writing has to be perfect the first time around. Yeah, me too.

How many times have you tried writing something, only to hit a wall (a.k.a. writer’s block) because you don’t quite know how to put something in writing? Or how often have you written a first draft, only to take a second look at it and say, “what a piece of s**t!”

(And speaking as someone with application development experience, this happens with writing code, too. Don’t think that this is limited to just documentation. This is yet another example of how technical writing and application development are related.)

Someone (I don’t know whom) once said, “one of the stupidest phrases ever coined is, ‘get it right the first time.’ It’s almost never done right the first time!” In all likelihood, you need to go through several iterations — review, editing, rewriting, etc. — before a draft is ready for public consumption. It’s called a “draft” for a reason.

The fact is, nobody has to see what you write the first time around. If you’re trying to get started on a document, just write what’s on your mind, and worry about making it look nice later.

For every action, you need a reaction

I came across yet another example of bad interface design this morning.

After I logged into my computer, a pop-up window appeared. It was my Docker application, telling me that an update was available and ready to install. Okay, I said to myself, and clicked the button to proceed.

Except… nothing happened.

I clicked it again. And again. And again. I mashed the mouse button. Nothing. I decided there was a problem with the interface, went on with my work, and forgot all about it. At one point, I saw a Docker window appear, saying updates were being applied. Okay. Again, I went back to what I was doing.

I didn’t think anything of it — until I looked at my taskbar several minutes later. All along the taskbar were several new — and identical — icons that I hadn’t seen before, roughly one for each time I had hit the mouse button. When I clicked each icon, I was greeted with a window that said “installation failed.” Well, almost all of them. The second-to-last one I clicked said, “installation succeeded.”

Yet another example of horrible design rears its ugly head.

As an application end user, if I click something where it says “click here,” I expect — and demand — that it does something. It doesn’t matter what it is. Granted, I would prefer that it performs the action that I expect when I click it, but even if all it does is change the mouse pointer, display an “in progress” icon, or display an error message, I expect some kind of response that indicates that my action did something.

An action that results in nothing is a huge pet peeve of mine and, in my opinion, terrible design. A click that does nothing tells me that the application is doing exactly that: nothing. An action with no response is not only annoying, it can be dangerous. What if, hypothetically, clicking a button resulted in lost data, but there was no indication that anything had happened?

A reaction is a form of feedback. If I click a button, a reaction, even if it’s just an in-progress icon, tells me that the application is doing something. If I click a button and nothing happens, I assume the application is doing nothing. An action without a reaction frustrates the end user and can have dangerous side effects if the application quietly performs an action the user doesn’t expect.
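To make the point concrete, here is a minimal sketch in TypeScript of what “every action gets a reaction” looks like in a browser UI. This is not Docker’s actual code; the element IDs and the applyUpdate() call are hypothetical placeholders.

```typescript
// Give the user immediate feedback on click, even before the real work finishes.
// "applyUpdate" and the element IDs are hypothetical placeholders.
async function applyUpdate(): Promise<void> {
  // Stand-in for the real update call.
  return new Promise((resolve) => setTimeout(resolve, 2000));
}

const button = document.getElementById("update-button") as HTMLButtonElement;
const status = document.getElementById("update-status") as HTMLElement;

button.addEventListener("click", async () => {
  button.disabled = true;                       // prevent repeated clicks
  status.textContent = "Installing update…";    // immediate "in progress" feedback
  try {
    await applyUpdate();
    status.textContent = "Installation succeeded.";
  } catch (err) {
    status.textContent = "Installation failed. Please try again.";
    button.disabled = false;                    // let the user retry
  }
});
```

Even if the update itself takes minutes, the disabled button and the status text tell the user that the click registered, which is all the feedback I was asking for.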

If this is how you design your UX, then you need to rethink your design. When it comes to interfaces, every action must have a reaction.

Playing in the sandbox is important for documentation

While working on a user guide, I realized that I had administrative rights to the application I was trying to document. That was all well and good, except that I was trying to write a non-admin user guide, and I needed to know how someone who didn’t have admin rights saw the application. Fortunately, one of my co-workers sent me an application URL and a testing user login I could use that simulated exactly what I needed.

That brings me to today’s article. Many application development environments make use of a sandbox or some other development environment where a developer can play to their heart’s content, that is, some place where someone trying to test or develop an application can experiment without having to worry about breaking anything important. That same kind of testing environment is just as important for a technical writer.

Much of my work as a technical writer involves putting myself in an end user’s shoes. I’ll often go through an interface and document the steps a user might use, what a user might see on the screen, and the effects of certain buttons and links. One of my most frequent questions when I work on documenting an application is, “what happens when I click this?” After I do so, hopefully my next response isn’t “oh crap!”

This is why a tech writer needs access to a testing environment. Like an application developer who needs to test within a safe environment, a person documenting the system needs to be able to explore it, knowing that (s)he won’t adversely affect the live application while playing with it.

I wrote previously that a tech writer can help an application developer, and vice versa. Indeed, the tech writer can function as an in-house QA analyst. In order to write good documentation, the writer needs a realistic environment in which (s)he experiences what an end user might see. A sandbox in which a writer can “play” provides exactly that. As an added bonus, not only does this allow the tech writer to produce better documentation, it allows that person to provide feedback about the application, which ultimately results in a better product.

Testing something? What’s the test plan?

Imagine if you will that you’ve been asked to test a product. The product could be anything — software, a car, a kitchen appliance, a piece of sports equipment, whatever. For the purposes of this article, we’ll say you’re working at some company, and you’ve been asked to test a piece of software.

You’re told to go into an application, and you’re given this instruction.

“Okay, test it and see if it works.”

That’s it.

How would you feel? Lost? Confused? Frustrated? Abandoned? All of the above? Something else?

Well, I, myself, have been put into this situation more times than I care to admit. It’s one of the most frustrating job situations I’ve ever been thrust into.

What, exactly, constitutes “see if it works”? I could simply start the application, see if it starts, and say, “okay, it works.” I suspect that that’s not what the people who make the request are looking for. Yet time and again, I get a request from a developer or a designer to test something, and that’s the only instruction I’m given.

And it’s frustrating like you wouldn’t believe.

What’s even more frustrating is when (not if) the application comes back with some kind of problem, and the people who asked you to test come back with, “you said you tested this! Why is this broken?”

Want to know why there’s so much friction between developers and QA personnel? This is a likely reason. This is something that definitely falls under my list of documentation pet peeves.

The fact is, if you develop a product, and you need to test it for functionality, you need to specify what it is you’re looking to test. You need to define — and spell out — what constitutes a “working product” from one that’s “defective.” Just because a car starts doesn’t mean it’s working. You still need to put it in gear, drive it, steer it, and make sure that it can stop.

If you are creating a product, you need to describe the criteria it must meet to pass testing. If you’re asking someone in quality control to test your product, provide the tester with guidelines as to what should be checked to ensure the product is functional. Better yet, provide him or her with a checklist to determine whether or not your product can be released. If you discover additional items that need to be tested, update the checklist.

(If you want to know more about checklists, I highly recommend The Checklist Manifesto by Atul Gawande. It’s actually a surprisingly excellent read.)
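As a rough illustration, a test checklist doesn’t have to be elaborate. The feature, steps, and expected results below are invented for the example, but the point is that each item pairs an action with what “working” actually means:

```typescript
// A minimal sketch of a test checklist, using a hypothetical "login" feature.
// Each item spells out what the tester should do and what constitutes a pass.
interface ChecklistItem {
  step: string;      // what the tester should do
  expected: string;  // what "working" means for that step
}

const loginChecklist: ChecklistItem[] = [
  { step: "Launch the application",          expected: "Login screen appears within a few seconds" },
  { step: "Log in with valid credentials",   expected: "User lands on the dashboard" },
  { step: "Log in with an invalid password", expected: "A clear error message is shown; no crash" },
  { step: "Log out",                         expected: "Session ends and the login screen returns" },
];

// A tester (or an automated harness) can walk the list and record pass/fail.
for (const item of loginChecklist) {
  console.log(`[ ] ${item.step} -> expect: ${item.expected}`);
}
```

Whether the list lives in a spreadsheet, a wiki page, or a test script matters far less than the fact that it exists and that the tester has it.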

So any time you release a product for testing, tell the testers what constitutes a “good” product. Don’t just send it off with vague instructions to “make sure it works” and expect it to be tested properly. More often than not, that will result in a failed product and defeat the entire purpose of testing it in the first place.

Don’t forget to edit your system messages

One of my current work projects is an administrative guide for our application. After a recent status meeting, one of the developers sent me a list of validation error messages that might appear during data imports. I was asked to make sure the validation messages were included in the documentation.

While going through the validation messages, I noticed that they were filled with grammatical, capitalization, and spelling errors. I asked the developer if he wanted me to edit the messages, to which he responded, “yes, please!”

People rarely think about checking output messages for correctness during application development; it’s an easily overlooked part of an application. For what it’s worth, I didn’t even think about it until I was asked about these validations. Nevertheless, reviewing and editing output messages is probably a pretty good idea.

For one thing, and I’ve stated this before, good writing reflects upon your organization. Well-written documentation signals that a company cares enough about its product and its reputation to make the effort to produce quality materials. Well-written system messages indicate that you care enough to address even the little things.

Well-written error messages can also improve application usage and UX. A good output message can direct an end user to use the application properly or make any needed adjustments. Messages that are confusing, misleading, badly written, or ambiguous could potentially result in application misuse, corrupted data, accidental security breaches, and user frustration.
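For example, compare an unedited validation message with an edited one. Both messages below are invented for illustration, not taken from the application I was documenting:

```typescript
// Hypothetical import-validation messages, before and after editing.
// The field name, row number, and limit are made up for illustration.

// Before: terse, inconsistently capitalized, and unhelpful.
const before = "ERROR!! cust_name to long";

// After: states what went wrong, where, and what the user should do next.
function customerNameTooLong(row: number, maxLength: number): string {
  return `Row ${row}: the customer name exceeds the ${maxLength}-character limit. ` +
         `Shorten the name and re-run the import.`;
}

console.log(before);
console.log(customerNameTooLong(42, 50));
```

The second message costs the developer a few extra words, and it saves the end user a support call.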

Ensure that your application development review and testing also include a review of your system messages. It may be a small thing, but it could potentially address a number of issues. As someone once said, it’s the little things that count.

Can Agile and documentation projects co-exist?

When I spoke at the New England SQL user group meeting yesterday, an interesting question came up. An audience member spoke about Agile development, and he mentioned that, because of the nature of Agile, documentation projects were doomed to fail.

At this point, I should mention a few things.

  1. I currently work in an environment that uses Agile development methodology.
  2. Just because I work in said environment doesn’t mean I know anything about Agile.
  3. Even as my workgroup’s technical writer, I am considered a highly valued member of the workgroup.
  4. Somehow, we make it work.

In regard to #4 above, the gentleman had a simple question: “How?”

This question came up during a point in my presentation in which I argued that documentation projects should be treated like software — in that, in an ideal environment (stop snickering!), a formal document should be subject to planning, development, and testing. This is a point that I’ve alluded to before.

The person asking the question went as far as to say, “if you can make the case as to how Agile can make technical writing projects work, then you should make a presentation out of it — and I’ll even help you sell it to Agile.” He even gave me his business card. Indeed, one of my big reasons for writing this article is as a reminder to myself to revisit this subject — after I’ve had a chance to do some homework about Agile development. It’s something I’ve been meaning to do; I even went as far as to begin a draft article about Agile development.

So, to the gentleman who brought up this (great) question, you were heard. And I will make sure that I come back to this at some point — once I’ve had a chance to do my homework on Agile.