
December 7th, 2008

Testing and launching a web app: What every startup needs to know

in: Create, Startups

Several of the companies I’ve worked with in the last year have gone through a software launch. While I usually focus on the business side of startups, and this post is more like something from Bitcurrent or Watchingwebsites, it’s pertinent to any web startup that needs to test and launch a successful product.

There are ten distinct stages of defining, testing, and launching a web application. Each stage has some tools you can use, involves different people, and focuses on different kinds of data collection.

Ten stages of release visibility and testing

If you go through these stages in the wrong order, you’ll waste time and money. Do them in the right order—using some of the tools we’ve found here to help you along the way—and you’ll be much more likely to launch the right product at the right time and make it easy for your customers to access you.

1. Concept

In the concept phase, it’s important not to be constrained by what’s possible. Avoid technology; instead, focus on needs, how you’ll make money, and how you’ll get adoption. In fact, John Stokes of Montrealstartup told me about a Washington, DC-based startup incubator that insists its participants write no code for the first month of their three-month term.

2. Workflow

This is where you stitch together the concept. I’ve mentioned Productplanner, and I’ll do it again here. Ideally, you want a big, blank wall with lots of drawings of screens. I’ve even done this with push-pins and colored yarn to represent links.

While this might seem awfully old-fashioned, there’s something organic and accessible about a wall full of screens to represent navigation. You can put post-its of ideas on the various pages, and put new designs atop old ones so people can quickly leaf through prior versions.

3. Wireframes

Once you know the concept and workflow, it’s time to refine the wireframes a bit. Tools like Balsamiq (thanks to Austin for the pointer) or Axure make this easier, but you can use PowerPoint in a pinch.

You can use your wireframes to do “paper prototyping” where you ask people who aren’t familiar with the app to “use” it, moving their finger as if it were a mouse.

The result of all this work is a set of requirements documents that describe the product. Then developers go off and code furiously.

4. QA

Once you have code — either an individual component or the whole application — it’s time to do QA. You should have a list of all the things each page is supposed to do, things like “when you click the login button it takes you to the home page.” The initial QA testing plan is where you check each of these things. It’s a test to see whether the code does what the requirements documents said it would.

Some people rely on spreadsheets for this stuff, but if your app is of any size, you probably need to integrate it with a bug tracking system like Fogbugz, Trac, Jira, or something similar. Ultimately, you’ll write scripts to run these tests automatically and that will become your regression testing system.
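A tiny sketch of what such an automated test plan can look like (the checks, pages, and names here are all invented for illustration; a real suite would drive a browser or issue live HTTP requests rather than use canned responses):

```python
# Minimal regression-check runner: each entry pairs a requirement from the
# QA test plan with a function that verifies it. All data is simulated.

def check_login_redirects():
    # A real check would log in through a browser; here we simulate the
    # app's response to illustrate the shape of the test.
    response = {"status": 302, "location": "/home"}
    return response["status"] == 302 and response["location"] == "/home"

def check_signup_form_present():
    page_html = "<form id='signup'>...</form>"  # stand-in for a fetched page
    return "id='signup'" in page_html

TEST_PLAN = [
    ("clicking login takes you to the home page", check_login_redirects),
    ("signup form appears on the landing page", check_signup_form_present),
]

def run_plan(plan):
    """Run every check; return lists of passed and failed descriptions."""
    passed, failed = [], []
    for description, check in plan:
        (passed if check() else failed).append(description)
    return passed, failed

if __name__ == "__main__":
    ok, bad = run_plan(TEST_PLAN)
    print(f"{len(ok)} passed, {len(bad)} failed")
```

Once the checks live in code like this, re-running the whole plan after every change is free, which is exactly what turns a QA checklist into a regression suite.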

Don’t forget to run browser plug-ins like Firebug to see what’s loading slowly and what’s missing. Two great services for checking page performance are Webpagetest and Website optimization.

5. Unusability testing

Even if you know the app works, you still don’t know whether it’s usable. Unusability testing looks for dumb things — places where everyone gets stuck. While you tried to eliminate these back in the Workflow phase, the reality is that you won’t find all the problems until you actually watch people using it.

The goal here is to validate the assumptions of the requirements document. Usually, you want to do an unusability test, go fix what you found, then do another one. So don’t bring in five people all at once to do testing; iterate. Test users are precious.

Set the test user up at a machine, and project a copy of their screen on a wall for all to see. If you like, you can use screen recording software like Camtasia. Encourage the test subject to talk about what they’re doing. And — most importantly — no coaching. It will be incredibly frustrating to watch someone try to use the app, oblivious to the big red button saying “click me” in the middle of the screen. Bite your tongue. Watch them suffer. It’ll make the development team that much more eager to fix the problem and try again.

Also be sure to vary the browser, monitor, OS, and if possible connection speed. You may find certain resolutions make buttons invisible, or that when the connection is slow users will click something repeatedly.
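One low-tech way to make sure no combination gets skipped is to enumerate the test matrix up front; a quick sketch (the specific browsers, systems, and settings are just examples):

```python
# Enumerate every browser/OS/resolution/connection combination so the
# test schedule covers them all. Values are illustrative, not a checklist.
from itertools import product

browsers = ["Firefox 3", "IE 7", "Safari 3"]
systems = ["Windows XP", "OS X"]
resolutions = [(1024, 768), (1280, 1024)]
connections = ["broadband", "dial-up"]

matrix = list(product(browsers, systems, resolutions, connections))
print(f"{len(matrix)} configurations to cover")  # 3 * 2 * 2 * 2 = 24
```

Even a handful of values per axis multiplies out quickly, which is a good argument for deciding deliberately which cells of the matrix matter most.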

6. Usability testing

While unusability was about finding dumb mistakes, usability testing is about making sure your target market can use your app or site properly. You’ll need to get users that represent your target demographic in. This means the same age, gender, and online experience, ideally from similar industries. If you’re building a site for truck drivers, get truck drivers to test it.

It’s harder to find targeted testers like this, which is why we did unusability testing first — we don’t want to waste our targeted testers on dumb mistakes we could find ourselves.

7. Situational testing

Once targeted testers can use the app properly under the comparatively ideal conditions of your office, go and watch them using it in their place of work.

This means it’s time for a field trip. If they’re truckers who will access the application from a pay terminal in a truckstop, go watch them doing it there. You’ll learn about other constraints such as noise, lighting, privacy, distractions, and time limits that weren’t obvious.

8. Alpha

Once you’ve completed situational testing and done the best you can, it’s time to roll out your stuff to alpha testers. These are people who expect problems, but want to try it anyway. At this point, instrumentation is essential. Let me be as blunt as possible on this point: It’s stupid to roll out software without analytics. You simply can’t know what worked and what didn’t.

Google Analytics is the de facto standard here. Install it, and use it to figure out what people are using and what they’re not. This tool can also show you where people are clicking, but I’m partial to the heat charts and A/B testing capabilities of Crazyegg for this stuff. You’ll augment your analytics with other tools as you get closer to release.

Alpha testing is about getting data in the aggregate, rather than from individuals, and using this data to improve the app. In the alpha phase, you probably know many of the users and can solicit feedback from them directly. Remember to train them to take a screenshot whenever they have a problem, and to send it to you as part of their report; this will help to identify client-side problems and to reproduce issues.

9. Beta

Beta is a broader release of alpha code. With alpha, you knew there were issues. With beta, you think it’s ready for release, but want to be sure. Because a beta will go to a larger audience, you probably want to include more feedback tools in the form of services like Kampyle or iPerceptions, or forms you embed yourself from someone like Wufoo, Surveymonkey or Google Docs’ Forms.

If you want to replay some user sessions with a relatively lightweight service, check out Clicktale. Other products like Tealeaf do this on a more industrial scale, as well as fixing other blind spots in your monitoring.

You need to worry about scale and performance, too. Of course, I’m partial to Coradiant when it comes to user experience monitoring, but there are lots of other good products to keep an eye on web performance. You’ll need a synthetic testing tool like those from Gomez, Keynote, Alertsite, Webmetrics, Pingdom, and others.

10. Launch

Finally, you’re releasing the product. At this point, your focus should be on intentional misuse — someone trying to break the application or hack their way in — and on error reporting. You’ll be using performance management tools (to guarantee uptime and responsiveness) and analytics tools (to optimize conversions). For smaller companies, something like Clicky is a good complement to Google Analytics, as it provides more drill-down to individual users. But if you’re looking to do more complex things, you’ll be after Omniture, Webtrends, Coremetrics, or similar tools.
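Conversion optimization ultimately comes down to comparing rates across variants; a toy comparison with invented numbers (any real decision would also want a significance test, which is omitted here):

```python
# Compare conversion rates for two page variants, the way an A/B report
# would. The visitor and signup figures are made up for illustration.
variants = {
    "A": {"visitors": 1000, "signups": 50},
    "B": {"visitors": 1000, "signups": 65},
}

def conversion_rate(v):
    return v["signups"] / v["visitors"]

best = max(variants, key=lambda name: conversion_rate(variants[name]))
for name, v in variants.items():
    print(f"variant {name}: {conversion_rate(v):.1%}")
print("winner:", best)
```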

Now ignore some of what I just said

All of these stages need to happen, and in an ideal world they’d all happen before launch.

Sometimes, though, business priorities will require that you launch before you’re done. That’s fine; just be sure to worry about usability, unusability, and situational use even after launch.

I recently overheard a VC say “if you’re not embarrassed by your application when you launch, you waited too long to launch it.” While that’s not true for every kind of application, it’s certainly a good way to get feedback fast and to create a sense of urgency. And for rapid prototyping, you may combine some of these steps.

So take the phases with a pinch of salt; they’re not hard-and-fast steps prior to a release, but they all need to be considered. Following them will ensure a better final product that customers adopt more, use more, and are ultimately more likely to pay for.

