Testing something? What’s the test plan?

Imagine if you will that you’ve been asked to test a product. The product could be anything — software, a car, a kitchen appliance, a piece of sports equipment, whatever. For the purposes of this article, we’ll say you’re working at some company, and you’ve been asked to test a piece of software.

You’re told to go into an application, and you’re given this instruction.

“Okay, test it and see if it works.”

That’s it.

How would you feel? Vague? Confused? Frustrated? Abandoned? All of the above? Something else?

Well, I, myself, have been put into this situation more times than I care to admit. It’s one of the most frustrating job situations I’ve ever been thrust into.

What, exactly, constitutes “see if it works”? I could simply launch the application, confirm that it starts, and say, “okay, it works.” I suspect that’s not what the people making the request are looking for. Yet time and again, I get a request from a developer or a designer to test something, and that’s the only instruction I’m given.

And it’s frustrating like you wouldn’t believe.

What’s even more frustrating is when (not if) the application comes back with some kind of problem, and the people who asked you to test come back with, “you said you tested this! Why is this broken?”

Want to know why there’s so much friction between developers and QA personnel? This is a likely reason. This is something that definitely falls under my list of documentation pet peeves.

The fact is, if you develop a product, and you need to test it for functionality, you need to specify what it is you’re looking to test. You need to define — and spell out — what constitutes a “working product” from one that’s “defective.” Just because a car starts doesn’t mean it’s working. You still need to put it in gear, drive it, steer it, and make sure that it can stop.

If you are creating a product, you need to describe what parameters are required for it to pass testing. If you’re asking someone in quality control to test your product, provide the tester with guidelines as to what should be checked to ensure the product is functional. Better, provide him or her with a checklist to determine whether or not your product can be released. If you discover additional items that need to be tested, then update the checklist.
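To make this concrete, here’s a minimal sketch of what such a checklist might look like in code. The application, its checks, and all the names here are hypothetical; the point is that each pass/fail criterion is explicit and named, rather than a vague “see if it works.”

```python
# A release checklist encoded as explicit, named checks.
# "DemoApp" and its methods are invented stand-ins for a real application.

def release_checklist(app):
    """Run each named check; return a dict of check name -> pass/fail."""
    checks = {
        "starts": lambda: app.start() is True,
        "accepts login": lambda: app.login("tester", "secret") is True,
        "saves a record": lambda: app.save({"id": 1}) is True,
        "shuts down cleanly": lambda: app.stop() is True,
    }
    results = {}
    for name, check in checks.items():
        try:
            results[name] = check()
        except Exception:
            # A crash during a check is a failure, not a mystery.
            results[name] = False
    return results

class DemoApp:
    """A stand-in application that happens to pass every check."""
    def start(self): return True
    def login(self, user, pw): return True
    def save(self, record): return True
    def stop(self): return True

results = release_checklist(DemoApp())
print(all(results.values()))  # True only if every check passes
```

When a new requirement surfaces, you add a line to the `checks` dictionary, just as you’d add an item to the paper checklist.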

(If you want to know more about checklists, I highly recommend The Checklist Manifesto by Atul Gawande. It’s actually a surprisingly excellent read.)

So any time you release a product for testing, tell the testers what constitutes a “good” product. Don’t just send it off with vague instructions to “make sure it works” and expect it to be tested. More often than not, that will result in a failed product — and defeat the entire purpose of testing it in the first place.


Monthly CASSUG meeting — May 2019

Greetings, data enthusiasts!

This is a reminder that our May CASSUG meeting will take place on Monday, May 13, 5:30 pm, in the Datto (formerly Autotask) cafeteria!

Our guest speaker is Mike Jones! His talk is entitled: “Using Pure ActiveCluster for SQL High Availability.”

For more information, and to RSVP, go to our Meetup link at http://meetu.ps/e/GBP2c/7fcp0/f

Thanks to our sponsors, Datto/Autotask, Capital Tech Search, and CommerceHub for making this event possible!

Security: Close isn’t good enough!

I am reblogging an article written by my friend, Greg Moore. Hopefully, we all have our data locked down, but I felt that what he wrote was important enough that it was worth passing along.

greenmountainsoftware

I was going to write about something else and just happened to see a tweet from Grant Fritchey that prompted a change in topics.

I’ve written in the past about good and bad password and security policies. And yes, often bad security can be worse than no security, but generally no security is the worst option of all.

Grant’s comment reminded me of two incidents I’ve been involved with over the years that didn’t end well for others.

In the first case, during the first dot-com bubble, I was asked to partake in the due diligence of a company we were looking to acquire. I expected to spend a lot of time on the project, but literally spent about 30 minutes before I sent an email saying it wasn’t worth going further.

Like all dot-com companies, they had a website. That is after all, sort of a requirement to…


The symbiotic relationship between documentation and application development

One of my current projects involves documenting processes for an application that are still under development. As such, much of what I write may change, depending on how processes are changed during the course of development.

At one point, I tested one of the processes so I could determine its functionality and document it. The process returned an error message I wasn’t expecting, with no user-friendly information other than a cryptic error code. I contacted one of the developers working on the application and told him what I found, including the error codes I encountered and the steps I took to produce them. He told me, “you’re coming across bugs that we didn’t even know we had.”

It occurred to me that I was doing more than just documenting the application. I was also acting as a beta tester.

One aspect of writing technical documentation is learning about what you’re writing. In order to write about a process, you need to understand how it works. If you’re documenting an application, the best thing you can do is run the application in a safe environment (such as development or a sandbox), learn how it works, and use it to document steps and capture screens. In doing so, you may come across application bugs and even come up with ideas to make the application better.

I’ve long argued for the criticality of documentation. It records important information and serves as a reference. Until this point, however, it didn’t occur to me that the document development process could have a symbiotic relationship with application development. To me, this adds further fuel to the argument that documentation is critical and required.

Don’t forget to edit your system messages

One of my current work projects is an administrative guide for our application. After a recent status meeting, one of the developers sent me a list of validation error messages that might appear during data imports. I was asked to make sure the validation messages were included in the documentation.

While going through the validation messages, I noticed that they were filled with grammatical, capitalization, and spelling errors. I asked the developer if he wanted me to edit the messages, to which he responded, “yes, please!”

People don’t think about checking output messages for correctness during application development; it’s an often-overlooked part of an application. For what it’s worth, I, myself, didn’t even think about it until I was asked about these validations. Nevertheless, reviewing and editing output messages is probably a pretty good idea.

For one thing, and I’ve stated this before, good writing reflects upon your organization. Well-written documentation indicates that a company cares enough about its product and its reputation to make the effort to produce quality documentation. Well-written system messages indicate that you care enough to address even the little things.

Well-written error messages can also ensure better application usage and UX. A good output message can direct an end user to use the application properly or make any needed adjustments. Messages that are confusing, misleading, badly written, or ambiguous can result in application misuse, corrupted data, accidental security breaches, and user frustration.
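As an illustration, here’s a hypothetical before-and-after for an import validation message. The field name, error code, and wording are all invented; the point is the difference between a cryptic code and an actionable message.

```python
# A hypothetical import validator: a vague message versus a clear one.
# Field names, the error code, and limits here are invented for illustration.

def validate_row_vague(row):
    """Returns a cryptic code that tells the user nothing about the fix."""
    if "@" not in row.get("email", ""):
        return "ERR-1047"

def validate_row_clear(row):
    """Returns a message that names the row, the value, and the remedy."""
    email = row.get("email", "")
    if "@" not in email:
        return (f"Row {row.get('line', '?')}: the email address "
                f"'{email}' is missing an '@'. Please correct it and re-import.")

bad_row = {"line": 12, "email": "jane.example.com"}
print(validate_row_vague(bad_row))   # ERR-1047
print(validate_row_clear(bad_row))
```

The second message costs a few more characters to write, but it spares the end user a support call.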

Ensure that your application development review and testing also include a review of your system messages. It may be a small thing, but it can address a number of issues. As someone once said, it’s the little things that count.

Treat documents like software

This is a followup to my earlier article about Agile and documentation projects.

I’m currently working on a document in which I’m outlining a group of processes within our application. My manager sent me a JIRA ticket to track the project. The ticket includes a few sub-tickets to track some of the specifics of the project.

It then occurred to me how we’re able to make documentation projects work in an Agile environment. It’s something I’ve been espousing for a while; in fact, I mention it in my documentation presentation, including when I presented it a couple of weeks ago. (Ed. note: I wish I’d thought of this when the gentleman asked me this question at that user group meeting!) As it turns out, the answer is pretty simple.

The answer: treat documents like software.

Too often, documentation is treated like a second-class citizen. It gets absolutely no respect. That lack of respect is why I travel to SQL Saturdays preaching the gospel of documentation. Documentation is an important piece when it comes to technology planning; yet it is often treated as an afterthought.

In my current working environment, what makes document projects work in Agile is that they are treated with the same level of importance and respect as application updates. In a sense, document updates are application updates. They are important for end-users and developers alike.

In my daily Scrum meetings, I discuss progress made on my projects — almost entirely documentation projects. These projects are discussed on the same level as application and database updates. Tickets are created and tracked for these projects — just like applications.

I’ve had the luxury of having worked in both professional software development and technical writing environments. I can tell you from firsthand experience that the development lifecycle between the two environments is no different. So why do technical managers keep insisting on treating documentation differently? We are able to make documentation work because it is treated on the same level of importance and respect as application updates. Granted, it might be handled with a lower priority level (that’s okay), but the way documentation is handled is no different from the application.

If you want your documentation to be successful, consider it at the same level as you would your software development. It is a critical part of technology development, and it needs to be considered with the same level of respect.

Dashboard design = UX/UI

This is another article that was borne from my experience at SQL Saturday #814.

When I went to my room to get ready for my first presentation of the day, I walked in on the tail end of Kevin Feasel’s presentation about dashboard visualization techniques.  I caught about the last ten minutes of his session.

And from those ten minutes, I regret not having sat through his session.

Kevin’s presentation focused on dashboard layout and design.  In the short time in which I saw his last few slides, he showed off his impressions of a badly-designed dashboard, and talked about what not to do.  In other words, he was talking about UX/UI — a subject near and dear to my heart.  It reminded me of the article I wrote about poor design a while back.

I wish I had read through the presentation schedule more carefully.  That was definitely a presentation I would have liked to see.  In my defense, during that time slot, I was sitting in the speaker’s room getting ready to do my own presentation.  But I would have gladly spent that time sitting through a presentation that interested me — and this one definitely qualified.

I’ve attended a number of SQL Saturdays, and I’ve crossed paths with Kevin a few times.  If we’re both attending a SQL Saturday, and Kevin is doing this presentation again, I’ll make sure that I’m there.  I don’t want to miss it a second time.