Agile development, or any iterative development process, thrives when it incorporates customer feedback. And while some teams choose to have a person on the team who "represents" the customer, that person usually isn't an actual customer.
At some point, you've realized the feedback from real customers beats feedback from simulated customers. At least, I hope you have.
So: how do you integrate usability testing into your sprint planning?
There are a couple of ways, depending on how your sprints are structured and how long they are. The single most important factor is going to be: the time it takes to recruit appropriate test participants.
Recruiting for research is the biggest variable
Recruiting test participants is a topic in and of itself, but suffice it to say that it is less a single task for one person to take on than an entire vocation. Unless your target user base is Agile software developers, you are unlikely to have anyone on your team who would be an appropriate participant. And customers - especially for enterprise software - can be notoriously difficult to find, schedule, incentivize and coordinate.
And when I say “customer”, I mean “the person who is going to pay money to use your product or service.” I do not mean “the customer representative on the Agile team.” If you substitute someone who “represents the customer” for the person who “actually pays money”, you miss out on the biggest benefit of usability testing: risk mitigation. By putting off interaction with real customers until after your product is built, you’re waiting until development has ended to find out whether you’ve actually built the right thing, one that solves the right problem.
A few pointers:
- Get creative in where you recruit from. Look first to far-flung parts of your own organization (those who have the least to do directly with software development). Then look to your own customers. Then look outside that circle of influence to the general public.
- Consider testing using remote, automated platforms like UserTesting.com, YouEye, Optimal, etc. If you want maximum flexibility to fit usability testing into your sprint schedule, you need to be flexible about when, where and how you test.
- Recruitment of participants is an ongoing task. Start at the beginning of your project and recruit people into a panel of participants you can draw from continuously.
Integrate where it will cause the least disruption first
With those items in mind, here’s how I like to integrate continuous usability testing (or more broadly, customer research) into a series of Sprints.
- Focus first on testing low-fidelity concepts, wireframes and prototypes one sprint ahead of the development team.
- Make a primary goal of being able to provide the development team with customer feedback before they start building functionality. Some Agile methodologies tend to look unfavorably on this, and prefer to present working code to customers and nothing else. Frankly, I think that’s bonkers: customers don’t interact with code, they interact with a UI, which doesn't need to be functional in order to generate good feedback.
- A "UI" doesn't mean "a finished visual design that is branded and launch-ready." It can be a drawing, a sketch, or frankly even a list of words or phrases that you'd like customers to react to.
- Once you’ve got a good handle on the cadence of providing testing feedback on low-fidelity artifacts to feed the development team, begin to layer on a second round of testing on finished functionality. Because pieces of functionality may not be fully integrated into an entire system at first - but may still be ready for a customer to give feedback on - you’re likely going to need to move away from remote, automated qualitative testing and into a testing methodology that requires live, one-on-one interaction with customers. Remember how hard it is to recruit actual customers? It just got 3x harder, because now you need to work around their schedules.
- Usability testing of finished code can occur on the last day of a sprint, or possibly on the first day of the subsequent sprint’s planning. Expect it to take a full day or two to conduct the testing and review the findings before they can be shared with the full sprint team; it’s GREAT to have the entire team observe the testing in progress (from “the other side of the glass”), but the team should initially receive feedback from a trained researcher who has experience analyzing usability testing results. (Team members will eventually learn which kinds of feedback to prioritize over others, but like programming, this is a skill that can take years to master.)
Note that this overall cadence can scale up or down fairly well: I have integrated it into 4-week long sprints and into 5-day long sprints. Technically, if you have everything lined up well in advance (recruiting, your testing script and plan, a place to conduct your research, etc.), you could integrate customer testing into a 1- or 2-day long sprint (think “hackathon” or “design jam”). The feedback will be quite a bit more shallow, but if you’re iterating fast you may still be able to make use of it.