This method, popularly known as quick-and-dirty usability testing, is one of my favourite ways to get rapid feedback on designs without overthinking user recruitment, stage setup, and so on.
💡 Use this method when:
You want to check if users can use the feature as intended
You want to do a quality check on the designs (one iteration or in parts)
🔍 Check for three primary things:
Can they find what we intended them to on the screen/flow?
Are they interacting with what we wanted them to?
Did they understand what was happening?
Anything else you find during these tests is just a bonus!
🤔 Why use this method?
It’s fast. No user recruitment. No camera or software setup. Simply walk up to a few people who match your ideal customer and ask them to use the flow.
It’s better than designing by yourself, or debating for hours over whether something looks or feels intuitive.
In just 2–3 conversations, you’ll discover what’s working and what isn’t. Do a maximum of 2–3 iterations, with 2–3 participants per iteration.
1. Check simple flows
Say you were Medium and changed how “Claps” work. You may have added a prompt (a tooltip, a banner with text, etc.) to communicate that change on the story detail page.
You’d quickly find a few people, ask them to open a prototype, and check for the three things:
Were they able to find that prompt where the changes are mentioned?
Did they interact with it? (e.g. click “Learn more”, dismiss it, or use the “See it in action” button)
Did they really understand it?
If check 1 fails, make the design stand out more or position it in context.
If check 2 fails, make the buttons more explicit.
If check 3 fails, simplify the copy, add a GIF, etc.
These fast iterations improve your chances of design success within hours.
2. Scriptwriting & storyboarding
The way ratings work at UC might look simple to customers, but our service professionals find it challenging to get their heads around. Ratings are precious to them, and they determine how successful a professional becomes at UC.
Hence, it becomes crucial for us to explain how ratings work in an easy-to-consume medium (probably an explainer, infographic-style video). But how do we know we’re using the right metaphors to explain ratings? Are infographics too daunting and hard to follow? If we use a human narrator, will they be able to follow along with audio-aided visualisation alone?
Producing one concept end-to-end and showing them the final video for feedback takes days. Naturally, it is a high-cost feedback loop, and taking this route will probably take multiple weeks to get right.
That’s where quick-and-dirty-usability testing kicks in.
Instead, one can storyboard on flashcards (using the Crazy 8s method) and generate many good ideas in a matter of minutes. Then we take those flashcards, lay them out in sequence in front of a few partners, and ask them what they understand. We take notes and iterate.
This method gives us actionable, honest feedback in a single day versus weeks of production work. Once we’ve iterated a few times, we move on to producing the actual video.
3. Complex screens
Picking a time of service while booking a haircut appointment on the UC app might seem simple, but it carries real cognitive load. One needs to estimate how long the service will take and then pick a date and time when they would be free.
A prompt we give on that screen is the time required for the service (e.g. “It’d take 20 minutes for 1 haircut”). But this prompt only works if users read it, understand it, and book accordingly.
When we design this screen, we go out and test for the same three things:
Did the user find the text easily?
Did the user pause for a few seconds to read it? Or did they find it but choose to ignore it?
Did the user understand what it meant and pick a slot accordingly? Or did they pick one randomly?