Notes From SauceCon: Experience-Centric Testing

Did you get a chance to attend SauceCon last week? The event brings testing experts, software developers and DevOps leaders together; this year’s conference was built around the theme “Reimagine Test.”

Threading through the workshops and conversations was the intersection of testing and the user experience. If the ultimate goal of testing is to deliver an exceptional user experience, it makes sense that engineers need to think beyond inputs and functions; yet that isn't part of every team's testing practice.

“We wanted to emphasize the need for expanding the ways we think around how we test,” said Matt Wyman, chief customer officer of Sauce Labs. “Quite often as an engineer, especially a developer who’s not a QA person but is asked to do some form of testing, there’s a tendency to work just at the unit level as opposed to going all the way out and seeing what the consumer experience is like.”

In a keynote fireside chat, Wyman talked with Kelsey Hightower, principal engineer at Google, about his work on Kubernetes and open source software and about the evolution of testing. Here are a few highlights.

Testing with Confidence: Kubernetes in the Early Days

Hightower recalled his days as an open source contributor working on early-stage Kubernetes. There were no integration tests and no documentation on what each piece did, he pointed out, which made it difficult to build a test harness that would allow him to test with confidence, especially with Docker, Amazon, Red Hat, Microsoft and other companies working on it at the same time. The focus then was on getting the API correct, but by working within constraints, Hightower found a path to creativity.

“A lot of people laughed at us in the early days,” he recalled. “We had this 100 node limit. We also had an SLA where all interactions between components had to be less than a second. So if I called one command, it might have to bounce around the entire system. We wanted that to come back in a second; no one would want to use a system that was too slow. So that 100 node limit was an artificial limit, a way for us to say, ‘We’re going to test at 100. We know we have work to do to get to 1000. But if we can have a really great experience at 100 and keep that to a subsecond, that will give us the confidence to know we can make the change.’”
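
To make that kind of gate concrete, here is a minimal sketch in Go test style. The commandRunner interface is a hypothetical placeholder, not the real Kubernetes e2e machinery; the point is simply that a single user-facing command fails the test if it blows the one-second budget at the artificial 100-node scale.

```go
package e2e

import (
	"testing"
	"time"
)

// commandRunner abstracts whatever harness actually drives the test cluster;
// the real Kubernetes e2e machinery is far more involved than this.
type commandRunner interface {
	RunCommand(args ...string) (string, error)
}

// checkSubsecond fails the test if a single user-facing command takes longer
// than the one-second budget, the kind of gate the team held at 100 nodes.
func checkSubsecond(t *testing.T, c commandRunner, args ...string) {
	t.Helper()
	start := time.Now()
	if _, err := c.RunCommand(args...); err != nil {
		t.Fatalf("command %v failed: %v", args, err)
	}
	if elapsed := time.Since(start); elapsed > time.Second {
		t.Fatalf("command %v took %v, want < 1s", args, elapsed)
	}
}
```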

Another important stepping stone in Kubernetes testing was thinking about failure cases. The emphasis, he said, was on “ensuring Kubernetes could go down without affecting the fate of a website or app. You could do upgrades without taking down the system, or have catastrophic bugs, and the app kept running. That gave people confidence that you could use it.”
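
A failure-case check in that spirit might look something like the sketch below, again in Go. The clusterControl interface and the component name are placeholders for whatever tooling a team actually uses to take a control-plane piece offline; the only claim being tested is that the workload keeps answering while it is down.

```go
package e2e

import (
	"net/http"
	"testing"
)

// clusterControl is a stand-in for tooling that can stop and restart a
// control-plane component; nothing here maps to a real Kubernetes API.
type clusterControl interface {
	StopComponent(name string) error
	StartComponent(name string) error
}

// checkAppSurvivesOutage stops a component, verifies that the workload at
// appURL still answers, then brings the component back.
func checkAppSurvivesOutage(t *testing.T, cc clusterControl, component, appURL string) {
	t.Helper()
	if err := cc.StopComponent(component); err != nil {
		t.Fatalf("could not stop %s: %v", component, err)
	}
	defer cc.StartComponent(component)

	resp, err := http.Get(appURL)
	if err != nil {
		t.Fatalf("app stopped serving while %s was down: %v", component, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("app returned %d while %s was down", resp.StatusCode, component)
	}
}
```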

Chaos, Collaboration and Communication

Hightower also addressed the value of chaos testing. While rigid testing might be standard in quality assurance, rigidity doesn’t serve Kubernetes well: with so many components interacting, a small change anywhere can break the system in ways unit-level checks won’t reveal.

“We realized in the Kubernetes world we have to be less rigid at the unit test level and move into more realistic environments, like running the whole system with real workloads and seeing if Kubernetes eventually does the right thing,” he said. “Then you get things that work the way the customer wants them to work. Does the API make sense? Did the command line tool make sense? Do the error messages make sense? And it’s not just the UI—you have to understand interactions, especially with someone’s laptop, phone and wearable all interacting together.”

Hightower addressed the collaborative nature of testing with a personal story. “Early on, I was a dev who took pride in being the hero. If it’s broken, ask me; I’ll fix it,” he said. “Then I had a moment of maturity. To go to the next level, I had to make other people really good. You know it works and that’s great, but how do the rest of us know it works? Or when they’ve broken it? When you create a set of tests, you’re communicating to everyone else. It’s your best opportunity to communicate once and have people go forward knowing how to test the system and what a quality test looks like.”

Instrumentation and Visibility

After the keynote, we spoke to Wyman, who shared a cautionary tale about where so many engineers go wrong with testing. “I was signing up on a website recently and it asked for my email and my username, so I put my email in for both,” he said. “Without any errors, it failed. I kept clicking the button and nothing would happen. Because the username and the email couldn’t be the same, but it wasn’t telling me that. I’m sure that when it was tested, all the pieces were validated, but no one got all the way to the outside and looked at the overall flow. So the registration rate for that page is going to be lower than it should have been.”

Wyman offered this as an example of why developers need instrumentation to spot hidden errors, the places where users bump into failures rather than complete breakages. “It’s easy to see a complete break when code fails and the whole thing falls over,” he said. “But the experience error is hard to find; harder to discover. Developers need to look at the user analytics and usage analytics, not just system analytics.”
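
As a rough illustration of that point, the sketch below instruments a registration handler so the user-facing outcome of every attempt is recorded, not just server-side errors. The EventTracker interface is a stand-in for whatever product-analytics client a team actually uses; a silent validation failure like the one Wyman hit would show up as a gap between “attempted” and “succeeded” in the usage data.

```go
package registration

import "net/http"

// EventTracker stands in for whatever product-analytics client a team
// actually uses; it is a placeholder, not a real library API.
type EventTracker interface {
	Track(event string, props map[string]string)
}

// handleRegister records the user-facing outcome of every attempt, not just
// server-side errors, so a silent validation failure shows up as a gap
// between "attempted" and "succeeded" in the usage analytics.
func handleRegister(w http.ResponseWriter, r *http.Request, analytics EventTracker) {
	email := r.FormValue("email")
	username := r.FormValue("username")

	analytics.Track("registration_attempted", nil)

	if username == email {
		// Tell the user why the request was rejected instead of failing
		// silently, and record the reason so the drop-off is visible.
		analytics.Track("registration_rejected", map[string]string{"reason": "username_equals_email"})
		http.Error(w, "Username and email address cannot be the same.", http.StatusBadRequest)
		return
	}

	// ... account creation elided ...
	analytics.Track("registration_succeeded", nil)
	w.WriteHeader(http.StatusCreated)
}
```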

Thinking Differently About Testing

Input, functionality, reliability and security are still the linchpins of any testing practice, but they can no longer be the entire story. The ultimate test has to answer one question before it crosses the finish line: “Do I want to use this product?” Engineers who can encompass all of these elements in their test environment will create both technical and experience excellence—and be able to test with confidence.