User Researchers Are Under Siege – AI Can Help

User research leaders at many top US companies feel they are under siege. For many, recent layoffs have disproportionately impacted their ranks. For others, the dramatic increase in the need for user research has resulted in huge backlogs of work. Scaling UX research with less (or, at minimum, the same) investment is a common need.

Fortunately, advances in generative AI can make scaling UX research possible — but AI alone is not the answer. Instead, UX research must include both new technologies and human oversight to deliver on that promise.

With the increase in demand for human-centric experiences and the desire to test earlier and more often, executives are also questioning the value of their investment in user research. Executives are often frustrated by the lack of connection between their large investments in UX and the impact of that research on the business. Some camps perceive researchers as too “academic” and more concerned with creating perfect, unbiased research methods than with generating results that yield better products and experiences.

In many companies, UX leaders recognize that their role is twofold:

  1. Link research to business impact to evangelize the results they deliver.
  2. Provide “proof” of the reliability of their results – winning the confidence of executives who have been highly skeptical of small-N, qualitative user testing in the past.

So, the gauntlet has been thrown. User research leadership must do more with less: link their teams’ work to pivotal business impact and evangelize the value of that research to the larger organization.

How did we get here?

Digital-first strategies have created a vast demand for human-centric, user-focused experiences. As a result, user research has become increasingly important to product and design teams looking to de-risk new products and improve the user experience of those products. Many teams are adopting user research earlier (a “shift left” strategy) to ensure that products are de-risked early in development, when the cost of changes is minimal compared to later in the cycle or, worse, after launch.

Traditional user research conducted during the design process has been heavy on effort and light on rigor, with unmoderated, video-based testing as the most common method. These tests are time-consuming and rely on small-N test populations, which can lead to unreliable results. Reviewing and synthesizing each session takes roughly 4-5 hours of researcher time, so even a small test of 6-8 panelists who each complete a 30-minute test can take 20+ hours of researcher effort for that anecdotal result. Unsurprisingly, most user research leaders I speak with point to large backlogs of research insights and unfulfilled research requests.
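To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python. The per-session review time is an illustrative assumption consistent with the figures above, not a measured benchmark:

```python
# Rough back-of-envelope estimate of researcher effort for a traditional
# unmoderated, small-N usability study (illustrative assumptions only).
panelists = range(6, 9)          # the 6-8 participants cited above
session_hours = 0.5              # each panelist completes a 30-minute test
review_hours_per_session = 4.0   # assumed time to watch, annotate, and synthesize one recording

for n in panelists:
    total = n * (session_hours + review_hours_per_session)
    print(f"{n} panelists -> ~{total:.0f} researcher hours")

# 6 panelists -> ~27 researcher hours
# 7 panelists -> ~32 researcher hours
# 8 panelists -> ~36 researcher hours
# All well above the 20+ hour figure cited above, before reporting and share-outs.
```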

Couple this significant increase in research effort with the fact that unmoderated user tests (and evaluative tests in general) use less of researchers’ core talents (generative and formative research skills), and you can see the mismatch. Researchers are backlogged on projects that require less skill and are not getting the time to do the critical formative and generative research that helps teams better understand their prospective customers’ needs. Add to this the reduction in staff for some teams and hiring freezes for others, and leaders are left with a recipe for burnout.

Research executives are leading the change by embedding user researchers in product and design teams, democratizing evaluative research the right way, and bringing on new tools to support insight sharing. However, these methods don’t change the effort required to do research; they only increase how many people can do it. Companies that have embraced democratization have more user research “team members” (drawn from design and product teams) but no increase in efficiency.

What is the fix?

There are many ways that AI, automation, and generative AI can help researchers be more efficient, catch up on their backlogs, and have more time to evangelize their findings. AI is especially valuable in evaluative work, where tests rely less on researcher expertise and are more ‘standard’ in nature. Panel management, qualitative summarization, and results reporting are areas where AI can significantly decrease researcher time and effort.

WEVO, my company, has been employing AI for over five years, helping large research organizations at companies like Mastercard reduce effort, increase research throughput, and do so reliably.

WEVO 3.0, just released, takes this to a new level: a one-stop solution that delivers reliable usability and attitudinal testing in the same test, combining behavioral metrics (time, completion rate, ease, and click paths) with attitudinal metrics (participant mindset, engagement, and sentiment). And with WEVO 3.0, creating and analyzing studies is nearly effortless.

Unlike traditional tools, WEVO makes it easy to test early and often. Researchers, digital marketers, and designers can easily create tests by selecting the target audience, outlining key test objectives, and uploading or linking to the experience to be tested.

Because the WEVO test is standard, researchers can confidently allow nonresearchers to run tests without concern about question bias or poor test design. And WEVO’s team of human experts reviews and synthesizes results, keeping nonresearcher bias out of the analysis and findings. WEVO results are easily shared with all team members so that everyone can benefit from the insights generated by every test.

Humans Not Optional

Not all AI is created equal, and AI alone is not the answer. Researchers need to look for tools that combine humans and AI, that include context and insight from prior research, and that can provide proof of the insights.

AI and automation can help speed insight generation by summarizing and synthesizing qualitative results, providing quantitative results (and, in WEVO’s case, benchmarking them against other results in the same product category or industry), and handling all the headaches of targeting the right audience (and eliminating poor participants).

Plus, in the case of WEVO, results are delivered with easy-to-understand charts, visuals, and human-reviewed findings.

Advances in AI will radically change how user research is done and ensure that reliable user research is conducted at every stage of product or concept development, without requiring more resources (and, in many cases, with fewer).

Summary

Researchers need to scale their research capabilities and improve the link between their research and their company’s goals (and bottom line). While the need for rigor and quality remains, their traditional tools and methods cannot meet current needs.

Researchers have a massive opportunity in front of them, but they will need to be open to new tools and methods that will allow them to scale and provide more reliable results, ultimately allowing them to link their work to the business impact (e.g., better products and experiences) they enable. See the next-generation UX research platform powered by generative AI »

Janet Muto is the co-founder and Head of Research of WEVO, the human-augmented AI user research company. She’s passionate about the continuous joys of discovering keen customer insights, finding new podcasts, hiking new trails, and baking.
