Drawing of 2 llamas. One is wearing a purple leotard and looks confused. The other one is chewing on a stem of grass and is content.

Llamas, leotards, and user research

ksenia cheinman
14 min read · May 27, 2022

About my lenses

I have worked in the user experience field in government for almost 10 years. Every day, I learn how much I don’t know. That said, I’ve picked up varying levels of expertise in areas spanning content design, content strategy, information architecture, user research, data analytics, co-design, UX design, accessibility, and service design.

My most recent professional undertaking placed me deep in the context of user research and research operations. While my role on paper was that of a Manager, due to limited capacity and the emergent nature of the team, I often played the role of content contributor as well.

Over the past 1.5 years in this role, I was involved in at least 11 research projects, which I observed and thought deeply about.

The following reflections are an attempt to highlight some of the barriers to doing good research in government, in the hopes of improving both the process and the outcomes.

Research needs time; llamas need grass

Even though you may be very captivated by llamas in leotards and research in agile attire, these disguises impact the dignity of their subjects and their broader surroundings.

Thank you to Sam Spencer for the humorous imagery comparing agile research to llamas in leotards, and for elaborating on this incongruity in his article UX Research and Agile: Chasing the Train:

Don’t put a llama in a leotard and expect it to dance.

I tend to agree. While there are certainly effective strategies and methods to make research work in an Agile environment, more often than not it can go terribly wrong, especially within the complexity of a government context:

Don’t squish user research into an Agile box and expect it to be inclusive, meaningful for the users and impactful for the organization (= rigorous), as well as sustainable for your team.

And if we don’t expect our research to be inclusive, meaningful, impactful and sustainable, then why are we doing it?

Fast research is possible, but at what cost?

In preparation for the launch of a new learning platform, my team of 3 (including myself) had 3.5 weeks to complete a research project. The objective was to identify how users navigate within the platform and whether they encountered any challenges.

We completed the project on time, with lots of valuable insights.

From a user experience perspective, the project was a success. We identified a number of important issues which were either resolved or minimized by the time the platform launched to roughly 300,000 public service employees.

From a team-wellbeing perspective, it was difficult and unsustainable.

This is what our schedule looked like (a quick tally of the phases follows the list):

  • 1 week planning. It’s no surprise this phase is often the bottleneck! It included:
      • timeline planning for the entire project and getting the team clear on expectations
      • defining the problem (really defining it)
      • planning out research questions, which takes tweaking, as sometimes they don’t work out
      • setting up the testing environment to imitate realistic scenarios
      • determining who to recruit and how, plus the actual recruitment, the logistics of scheduling, managing consent forms, answering questions, and planning around multiple schedules
      • preparing all support documents in both official languages, such as scripts, SUS questionnaires and note-taking templates
      • setting up analysis and collection tools like Optimal Workshop Reframer
      • setting up digital versions of the consent forms and SUS questionnaire
      • setting up project management tools like Lists to keep all tasks in one place
      • scheduling recurring meetings for the team to connect
  • 1.5 weeks testing (running the actual moderated sessions, taking notes, pulling out important observations, debriefing)
  • 0.5 weeks analysis (as a team, coding observations in Optimal Workshop based on sentiment, task success and identified issues, then finding patterns and themes through affinity mapping in Miro)
Screenshots capturing the analysis stage, with coded observations on the left and affinity mapping on the right.
  • 0.5 weeks synthesis & reporting (sharing findings with design and product teams, shaping the themes and packaging them into insights for senior leadership)
Screenshots showing the different ways in which tasks and thematic insights need to be synthesized to be presented to senior leaders.
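For those who like to check the arithmetic, here is a minimal sketch (purely illustrative, assuming a standard 5-day work week) of how these phases add up, and where the “18 days” I mention later comes from:

```python
# The compressed schedule above, in weeks. Summing the phases confirms the
# 3.5-week total, or roughly 18 working days at 5 days per week.

phases = {
    "planning": 1.0,
    "testing": 1.5,
    "analysis": 0.5,
    "synthesis & reporting": 0.5,
}

total_weeks = sum(phases.values())
working_days = total_weeks * 5  # assuming a 5-day work week

print(total_weeks)   # 3.5
print(working_days)  # 17.5 -- effectively the 18 days we had
```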

The truth is, it is complicated. But I think asking ourselves “at what cost” is a good way to continuously consider what success looks like or should look like.

In my opinion, the cost is too high and this is why I am sharing my reflections and other strategies below.

5 reasons why you shouldn’t sprint through user research

There are 5 main reasons why government should not expect researchers to sprint through research:

  1. Agile research does not mean doing research faster
  2. Dependencies and constraints are beyond any researcher’s control
  3. ResearchOps is often immature in the public sector
  4. Evaluative research often ends up being explorative
  5. Research takes a lot of mental energy

1. Agile research does not mean doing research faster

I’ve come across a few 1-week user research sprint guides, which tend to simply compress research processes into short timelines.

Is it doable? Evidently, in certain contexts. But I’ll be honest: from experience, it is not very sustainable to do multiple user interviews in one day. And if you only spend a few hours analyzing results as a group (keeping in mind that it can take up to 30 minutes to agree on any one disputed point), I will let you figure out what quality you will end up with.

Experts tend to agree that compression is not the right approach:

When teams adopt Agile, the first attempts often involve doing the same things you’ve always done, just faster. This never works out well. You can’t do eight weeks of research in two weeks. Don’t even try. Instead, you need to behave in a new way. You need to re-conceptualize the work. — Josh Seiden

Moreover, there are different types of research projects, and they require different time commitments, a topic that is often skipped by research guides and toolkits.

Different people call these groupings different names (dscout research types on the left of the slash and Dovetail research types on the right), but the distinction remains:

  1. discovery projects (~60 days) / strategic or generative research (6–8 weeks)
  2. iterative projects (~35 days) / concept research (4–6 weeks)
  3. evaluative projects (~28 days) / assumption-testing research (2–4 weeks)
  4. post-release feedback projects (~28 days) / maintenance or monitoring research (? weeks)

Below is an example from Dovetail:

An inverted pyramid diagram showing a reduction in scope and time commitments from top to bottom: strategic research, concept research, evaluative research and maintenance research.
Image source: https://dovetailapp.com/blog/user-research-agile/

All of this needs to be accounted for when deciding on the amount of time a research project should take.

Maybe it is also time to deconstruct the word “agile” and express what it is the organization is actually striving for, which often ends up being “speed” (Dr. Andrew Abela says it so well in his Tweet below).

Twitter thread about not using ‘opaque’ terms that mean everything and nothing.

You also need to consider the context within which the research is carried out.

More on that below.

2. Dependencies and constraints are beyond any researcher’s control

In the government context, dependencies and constraints are very often completely outside any researcher’s control and can quickly push any research activity weeks out of the planned timelines.

Delayed assets for research

For example, in a recent research project my team had to use the production environment of the learning management system that was getting ready for launch. This environment needed to be modified to better imitate real conditions before we could test in it. To do this, the research team had to coordinate with the numerous groups responsible for different parts of the platform, including developers, for whom this request for support was completely unplanned and thus, understandably, not a priority. This disconnect caused considerable delays in the researchers’ ability to prepare the testing environment, yet the timelines and expectations for testing did not change.

In the user research (UXR) community, delayed “assets for research” are recognized as the 3rd most common bottleneck affecting project timelines.

Multilingual research

In the Canadian federal government context, we also have to do user research in both official languages (English and French). Recruiting participants in both languages, and matching user researchers with the right language skills to the right sessions, is an added challenge that takes extra time.

3. ResearchOps is often immature in the public sector

Without robust research operations (ResearchOps — a set of tested tools, templates and processes that can be reused and applied effectively), teams are testing and developing these processes and tools as they do the actual research. This puts a lot of pressure on the team and creates a stressful and chaotic cadence.

In a recent study about what makes for an adequate research project timeline, Ben Wiedmaier and Kendra Knight discovered that the activities that take place prior to the actual research are the most likely cause of project delays:

Recruitment, site securing, operations were the biggest source of project delays (36.3%). Scope creep (19.6%) was the next most common.

Oh yes, scope creep — that’s covered in #4. But for now, let’s focus on the logistics.

In many organizations, recruitment is taken care of by individuals in ResearchOps roles or even entire teams:

They [Google Ventures] realised that sprinting works best when you are not spending your key days trying to deal with a mountain of research ops such as booking in research participants or deciding whether to run in-person or remote sessions. And if the ops are slowing you down, ask for help from the rest of the team. — TestingTime

While the last bit sounds like great advice, when was the last time you had ‘the rest of the team’ sitting around with nothing to do and expertly equipped to help with just what you needed? This is simply not realistic for most of us. It means researchers take on the burden of this important but time-consuming work themselves, which can significantly delay projects and exhaust the researchers. Moreover, when researchers focus on recruiting the right participants with an inclusive lens, both the pressure and the delays can increase exponentially.

Most importantly, your organization’s ResearchOps, and consequently your ability to pivot and quickly jump into a new research project, will not improve if you do not set aside sufficient time for the wrap-up/close-out stage of research, giving the team space to reflect, learn and make changes for the future.

A retrospective board for the team’s feedback, with ideas divided across 3 themes: rose (what worked well), bud (opportunities), thorn (what did not work well).
An example of a retrospective exercise following a recent research project, with action items to implement.

Basically, you need more time. To save time in the long run, you need robust ResearchOps.

To have robust ResearchOps, you need to spend time grooming and developing it after every research project, as part of every research project.

4. Evaluative research often ends up being explorative

The public sector deals with a lot of complex problems, and those problems are often poorly defined (not enough is known about them to begin with). This causes significant and invisible scope creep:

What starts out as an evaluative research request can quickly balloon into an explorative one (aka discovery/generative research), which takes at least twice as long.

Basically, when a research activity yields large amounts of previously unknown information that has dramatic consequences for the product/service, the research changes shape, scope and impact. This, of course, affects the level of effort and time needed to make sense of the findings. But it is most often not accounted for.

This experience is perfectly illustrated by the recent project described above, which took 3.5 weeks, but in hindsight should have taken at least 6 weeks.

In the 18 days that we had, we spent at least 150 hours determining whether users were able to navigate within the new learning management system. De-scoping anything to fit the timelines would have made it difficult to answer that question. The problem was broad enough that limiting the number of tasks (7) was not an option, and limiting the number of moderated test participants (6, plus 1 pilot) would not have given us sufficient data. So we ended up with lots of data illustrating unusual behaviors that we had to analyze and interpret. In numbers: 141 observations and 51 tags, synthesized into 9 insights.

Having completed this project, we realized as a team that this timeline did not work at all. It was unsustainable, especially if your research team is working on other initiatives or projects along the way (as many do in the public sector).

Upon much reflection following this research, and conversations with other researchers (thanks to Joanne Li from the BC Government), here is what a timeline for a similar project should look like (a rough arithmetic sketch follows the breakdown):

  • 2 weeks for planning, to actually be able to fit in the myriad of activities listed in the schedule earlier in this article.

Things to keep in mind: This is also the time when, by further unpacking the problem, researchers may determine that the scope of the research has changed and that it needs more time.

  • 2 weeks for conducting research, so testing sessions can be spread out to at least every other day, allowing for maneuvering when things don’t go as planned, such as tech not working or participants not showing up.

Things to keep in mind: Testing in both official languages takes extra time, as it is not as easy to recruit Francophone participants. Holding tests daily is exhausting for the team, as each testing day can be an intense 3–4 hours of listening, thinking and analyzing: 30 minutes to 1 hour of prep, a 1-hour interview, a 1-hour team debrief and 1 hour of additional analysis.

  • 2–3 weeks for analysis, share-outs* and wrap-up.

Things to keep in mind: Analysis is iterative; tags and associations build up over time. So even after the initial analysis and tagging of sessions on the day of the debrief, researchers may have to go back to the same observations later to re-tag them.

Analyzing as a group takes longer than working alone and requires more time for negotiation and consensus. It takes about 9 hours of group analysis of 7 interviews to be able to distill themes. This work needs to be spread out across at least 1 week. The second week can be spent presenting findings to the broader team to get feedback, refining recommendations and doing basic share-outs* of issues.

An additional week is needed for finishing up project documentation and for the post-project debriefing that helps to gradually build up the ResearchOps.

*If a share-out needs to be shaped for an executive audience in the form of a compelling presentation (a deck), add at least 1 extra week to this timeline. There is a significant amount of work needed to go from establishing themes and patterns (analysis) to shaping them into digestible, meaningful and impactful insights for an executive audience.
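To make the cumulative arithmetic concrete, here is a minimal sketch of this revised timeline (my own illustration; the estimate_weeks helper and the complexity_factors parameter are hypothetical names, and the week counts come from the estimates above and the suggestions later in this article):

```python
# A back-of-the-napkin tally of the revised timeline above, using the
# week estimates from this post, expressed as (min, max) ranges.
# "complexity_factors" is shorthand for the public-sector factors
# discussed in this article (bilingual testing, no templates to reuse, etc.).

BASE_WEEKS = {
    "planning": (2, 2),
    "conducting research": (2, 2),
    "analysis, share-outs & wrap-up": (2, 3),
}

def estimate_weeks(executive_deck: bool = False, complexity_factors: int = 0):
    """Return a rough (min, max) duration in weeks for an evaluative project."""
    lo = sum(weeks[0] for weeks in BASE_WEEKS.values())
    hi = sum(weeks[1] for weeks in BASE_WEEKS.values())
    if executive_deck:  # +1 week to frame and shape findings into a deck
        lo, hi = lo + 1, hi + 1
    lo += complexity_factors  # +1 week per extra organizational constraint
    hi += complexity_factors
    return lo, hi

# The project described above, with bilingual testing and a deck for
# senior leadership, lands at 8–9 weeks rather than the 3.5 it was given:
print(estimate_weeks(executive_deck=True, complexity_factors=1))  # (8, 9)
```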

It may also be a better approach to share issues and insights as they emerge throughout the research and analysis phases, rather than in one big presentation at the end. But this requires an organizational culture change in how leadership expects to learn about research findings.

5. Research takes a lot of mental energy

Finally, but perhaps most importantly, research is a human task.

In a conference session, “The Siren Call of Self-Neglect”, Vivianne Castillo talks about how researchers are often at risk of feeling drained, sad or grieved after research. Research as a job often requires talking to people, empathizing with their challenges and learning about things that researchers have no control over (such as systemic problems, unfair treatment, etc.). This can cause compassion fatigue, among other, more serious mental health impacts.

As a result, Vivianne talks about research as a discipline being closer to a therapist’s occupation than to a tech role. She also talks about an ethical imperative of self-care: because researchers work so closely with other people, self-care is a requirement for fulfilling their professional responsibilities.

Empathy impacts aside, doing multiple research sessions a day and debriefing requires a lot of sustained attention, intense concentration and analysis. So having time to space out research sessions is one simple way to reduce the mental load of research during the testing phase.

You should also keep in mind that scheduling research projects back to back, or overlapping them, has an impact of its own.

In the UXR space, there are different explorations of continuous research methods. One such concept is rolling research, which is done on lightweight problems, across common themes, and uses repeatable methods to reduce the burden on researchers. This framework, however, has many limitations: it is not suited to complex problems or junior teams; it has little portability beyond the immediate timeframe, since there is little documentation; and, most importantly, it is not sustainable:

Very demanding of time and resources, sometimes exhausting, especially at a very frequent cadence like weekly. — Mary Nolan in Rolling Research

What to do instead of compressing research

Instead of squishing user research into your design sprints:

  1. Better define the problem at the start to avoid an evaluative project becoming an explorative one
  2. Give research projects the time they need based on the type of research and add an extra week for public sector complexity factors (such as having to do user research in two official languages or not having prior templates to reuse)
  3. Recognize the constraints and limitations your organization has and factor these into the timelines (this means adding extra weeks to #2)
  4. As a leader, if you can’t compromise on the timeline, be prepared to make changes in consultation with your research team:
  • provide extra resources
  • pick a different method
  • reduce the scope
  • reduce the rigour

5. Change your perception (or advocate for this change with those above you) of what qualifies as a research deliverable; move away from polished decks toward brief notes/rough insights at regular intervals, or other quick shareables:

So, stop thinking in terms of research studies and research phases, and instead think of research as a continuous part of your team’s operating rhythm. Share your work. Deliver value each week. — Josh Seiden

You can also invite stakeholders to experience the insights by directly involving other teams/executives in research sessions as observers or notetakers. But remember, you also need extra time for this (I am starting to sound like a broken record), as you need to provide observers with resources that will help them understand and make the most of the sessions.

My Assistant Director recently suggested that we try this approach for communicating research progress weekly, which I think is perfectly in line with the above strategy and which I am looking forward to testing as a team:

  • What did we learn this week?
  • What changed from last week?
  • What do we want to learn next week?

6. If you need a deck for executives, add 1 extra week for framing, shaping and presenting findings

7. Add an extra week for wrap-up, which includes documentation, artefact refinement and managing loose ends that often get forgotten. This part of the research process generally gets very little time, because new projects take over and fatigue settles in.

We need more time built in for clean-up, reflection, share-outs and documentation, unless we want our research to remain a shadow of its intent, a llama in a leotard:

the spectre of insights, recommendations, learnings, and innovation “left on the cutting room floor,” swept aside for — ostensibly — the next “critical” project and its “scrappy” timeline.

What to do if you need research fast?

If your organization needs research findings fast and can’t really commit to the above, consider a different approach to research.

Minimize original research, whether evaluative or explorative, and focus instead on growing an organizational research repository with data that the organization already has and is constantly collecting from different sources (call centres, feedback forms, web analytics, evaluations, platform usage data, Reddit threads, social media replies).

Building, organizing, and curating this repository will still take upfront time and continuous maintenance, but once put in place, relevant insights can be pulled out quickly to help support decision-making on-demand.
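As a loose illustration of what “pulled out quickly” could look like, here is a minimal, hypothetical sketch (the Insight structure, the find_insights helper and the example entries are all my own invention, not a reference to any specific tool):

```python
from dataclasses import dataclass, field

# A hypothetical sketch of a research repository: observations from sources
# the organization already collects, tagged so they can be retrieved on
# demand. All names and entries below are illustrative only.

@dataclass
class Insight:
    summary: str
    source: str                          # e.g., "call centre", "web analytics"
    tags: set[str] = field(default_factory=set)

repository: list[Insight] = [
    Insight("Users struggle to find the course catalogue from the home page",
            source="feedback form", tags={"navigation", "learning platform"}),
    Insight("Search abandonment is higher on French-language pages",
            source="web analytics", tags={"search", "official languages"}),
]

def find_insights(repo: list[Insight], *tags: str) -> list[Insight]:
    """Return insights matching any of the given tags."""
    wanted = set(tags)
    return [insight for insight in repo if insight.tags & wanted]

# On-demand retrieval to support a decision about navigation:
for insight in find_insights(repository, "navigation"):
    print(f"[{insight.source}] {insight.summary}")
```

The point is not the code but the shape: once observations from existing channels are tagged consistently, answering a new question can start with a query instead of a new study.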

This minimal approach will also create space and time for the original research that needs to happen, by prioritizing what really is unknown and worth knowing, without needing to research everything.

Next steps

Finding the right cadence for research in government is challenging.

I am struggling and learning every day and would love to hear from anyone who is feeling the same or succeeding.

Until then, I will keep trying to find effective strategies and protect my team’s capacity, so they can do meaningful research work!


ksenia cheinman

:: digital content specialist — passionate about open learning + inclusion + collaboration + systems + stewardship + learning design + reflective practice ::