Open Research: Moving from Disposable to Reusable Research in Government
The core idea behind open government data is a simple one: public data should be a shared resource. It is valuable not only to the government departments that collect it, but also to citizens, entrepreneurs, researchers, and other parts of the public sector.
Across the Government of Canada, research activities are a big part of decision-making. Yet research is often treated as a project-based output, rather than a way to build knowledge about one’s industry and community that contributes to the broader, ever-evolving mission of the organization and government at large.
From a researcher’s perspective, say I want to find out about the learning needs of Communications specialists, or insights on the usability of forms. Certainly some government organizations have these insights. But they gathered them with a specific goal in mind, for a specific project. The findings were either shared only with a small group of internal stakeholders, or with no one (yes, we know it sadly happens), or documented in a way that will never be understood out of context. This is a tragedy. We are wasting time and resources.
Internally, what this looks like is that many pieces of insight across the organization remain decentralized and uncoordinated. Different pockets of the same organization might be working on separate types of research that will never be compared, combined, triangulated, or understood together:
- Data analysts will analyze surveys, tool usage, industry data, etc.
- User experience teams will do usability testing and user research
- Service designers will blueprint and create user journeys
- Call centres and support desks will have their own feedback loops
- Communications teams will get insights from web analytics
- HR will do surveys about the future of learning
- Public Opinion Research groups will analyze data relevant for their own reports
This is all just inside one single organization! What makes matters worse is that even within one of these pockets, there are likely no clear standards and practices for documenting research, communicating insights, and extracting what is important for longer-term use and reuse.
The greatest tragedy, however, is what our current disposable practices might be doing to the quality of our research.
If research is never used beyond its one-time application, then there is no incentive to document it, question methods, or apply rigour, which in turn undermines its accuracy and trustworthiness.
I’ve been thinking about this for a number of years now: initially, when developing a change management strategy for the adoption of GCDocs; then as a practitioner of content design, wanting to find insights from other research to inform my decision-making; and now as the manager of a research team, thinking about what ResearchOps might look like in government.
The Government of Canada does not have a way to share reusable nuggets of research with others in a meaningful, user-centred way. In fact, I would like to see this reusability reach beyond the federal context and include provincial partners and even non-profits.
Imagine being able to go to a website, enter some keywords, and find little bits of insight that someone else in the public sector has gathered on a specific topic. You could also dig deeper and find out:
- what was the research project
- when it happened
- what was the goal
- what were the limitations
- who to contact if you have questions….
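The record described by the list above is, in effect, a small data structure. As a purely hypothetical sketch (the field names, catalogue entry, and search logic are my own assumptions, not an existing Government of Canada tool), it could look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """One reusable research insight, with the context needed to trust it."""
    summary: str       # the finding itself
    project: str       # what the research project was
    date: str          # when it happened
    goal: str          # what the goal was
    limitations: str   # what the limitations were
    contact: str       # who to contact with questions
    keywords: list = field(default_factory=list)

def search(insights, query):
    """Return insights whose summary or keywords mention any query term."""
    terms = {t.lower() for t in query.split()}
    hits = []
    for ins in insights:
        haystack = {w.lower() for w in ins.summary.split()}
        haystack |= {k.lower() for k in ins.keywords}
        if terms & haystack:
            hits.append(ins)
    return hits

# A hypothetical catalogue entry, purely for illustration.
catalogue = [
    Insight(
        summary="Users abandon long web forms at the address step",
        project="Benefits application usability study",
        date="2020-11",
        goal="Improve form completion rates",
        limitations="Five participants, English sessions only",
        contact="ux-team@example.gc.ca",
        keywords=["forms", "usability"],
    ),
]

results = search(catalogue, "forms")
```

Plain keyword matching is the simplest possible approach; a real cross-government repository would need richer metadata, controlled vocabularies, and access controls, but the core idea is just this: findings stored with their context, and findable by topic.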
This is in line with what some leaders in the industry have been working on under different names:
- Modular research insights @ Polaris
- Atomic research
- Feedback Explorer tool in UK Gov used for content improvements
In March 2021, I decided to connect with other Government of Canada researchers who were part of the Canadian Digital Service research community on Slack to see if anyone else had been thinking about the reusability of research and was interested in exploring it further. I got a few interested responses (wave to Andrea F Hill and Amy Y), and shortly after, Andrea shared a very timely discussion related to this topic by Tanya Snook — spydergrrl, a UX Manager in the Government of Canada.
In Swimming in the UX data lake: Using machine learning to give new life to government UX research data, Tanya makes a compelling case for why sharing a range of data from user research to service feedback across government departments could be extremely valuable:
Service Canada, Immigration, and Public Services and Procurement all have front end client service interactions that are multi-regional. Client research and feedback on service interactions is scalable and can be applied to blueprint service workflows and develop new service designs. When I was hired in my current job, I had to blueprint my program by myself. If I’d been able to refer to data about client facing interactions from other programs, it would have helped me evaluate our blueprint and feed into the new service model I designed. Instead, I had to do it alone from scratch as a UX team of one.
The above example clearly shows how sharing insights across government would:
- Save money by reducing the need to duplicate existing research, since it would now be accessible and discoverable
- Allow others to move with confidence in developing new services, by learning from others
- Improve overall best practices by having a standardized process and a place to contribute and build on the work of others.
While I have my reservations about the use of AI and machine learning on these data sets (for reasons outside the scope of this post), I am excited to see another voice advocating for the sharing of data and highlighting practical ways of getting there.
In my experience, however, before any data is shared across organizations, we need to get our internal data processes in order. We need to:
- Deepen our understanding of privacy implications and ethics around sharing and reusing data
- Reflect on what should be considered a meaningful life-cycle of government research data and how long it should be preserved for
- Develop a data management plan, prior to beginning research
- Create consent forms that reflect how the data will and might be used in the future
- Critically evaluate the tools we use to create, capture, and document research, and consider whether we can replace them with open-source options
- Understand the impact of proprietary file formats on long-term preservation
- Understand copyright, licensing and how all of it ties into Open Government
- Develop more robust documentation processes, ReadMe files, and standardized organization of research and data
- Consider how we can share research insights along with the research data itself
Basically, we first need to improve the data sharing process within a single organization.
And this is no small effort, as Tanya aptly described:
This year we worked on cataloguing our data, documenting insights, creating a metadata model to tag the content for reuse, and loading tagged content into a single repository.
It reminded me how much we need to make information management a ‘hot topic’ in government.
It made me reflect more deeply on who does research, what value is assigned to it, where it is stored, how it is stored, who is assigned to managing it, how it is used and reused.
Throughout the module, I encountered a number of questions that made me realize how much room we have for improvement:
How do you currently organize your files? Consider the file hierarchy on your computer.
How have you recently used the naming and organization system for your files? What works and what doesn’t work? What do you like/don’t like about your system?
Who are you designing your organization system for? You? Collaborators? Think of all the people that will need to access the files. Is this system adequate for their needs?
As I went through the module, I asked myself:
How might we build a research infrastructure that meaningfully supports immediate user experience as well as future organizational goals?
To complete this program, I have to submit a capstone project. For this project, I chose to focus on the organization of research materials as my output and as a starting point of my journey to open up government research.
I decided to begin by transforming our research documentation to be clear and consistent, so that someone completely new to the team or organization could understand what kinds of research we’ve done and what the research entails at a glance.
To do this, I planned to:
1. Review our current file organization
2. Create a file naming convention to organize all types of research documents
3. Create instructions for ReadMe files (work in progress)
4. Update an old research project to reflect the new file naming conventions and include a ReadMe file
5. Document this process in a blog post
6. Develop an OER explaining what an open research workflow should look like and consider (work in progress)
While this list was a bit ambitious, I got numbers 1, 2, 4, and 5 done!
Here is what I developed as a result:
- Folder structure and file naming conventions for user research (under CC BY license, always a living draft :))
My thinking here was to organize file names around the stages of the research process. If this is consistent from one project to another, anyone can quickly find the information they need (especially if a ReadMe file is included) — like a Director looking for the final recommendations from research — and also learn about the process followed by different areas of the organization. It can even be used as a performance support or in-the-workflow learning tool for junior researchers who are onboarding to the team.
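To make the stage-based idea concrete (the stage names, numeric prefixes, and name pattern below are my own assumptions, not the convention from the linked document), such a scheme could be sketched like this:

```python
from datetime import date

# Hypothetical numeric prefixes: they keep files and folders sorted
# in the order of the research process in any file manager.
STAGES = {
    "planning": "01",
    "recruitment": "02",
    "data-collection": "03",
    "analysis": "04",
    "reporting": "05",
}

def research_filename(stage: str, description: str, when: date, ext: str) -> str:
    """Build a consistent name like '04-analysis_affinity-map_2021-03-15.xlsx'."""
    slug = description.lower().replace(" ", "-")
    return f"{STAGES[stage]}-{stage}_{slug}_{when.isoformat()}.{ext}"

name = research_filename("analysis", "Affinity map", date(2021, 3, 15), "xlsx")
```

Because every name starts with a stage prefix, a plain alphabetical sort reproduces the order of the research process, and someone scanning for final recommendations can jump straight to the reporting files.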
I went back to the research my team did on GCshare, the upcoming platform for open educational resources in the public service, and organized all the files in it based on the developed structure.
This took about 35 minutes (the caveat is that the folder was already well-organized). If you are curious, here is what it looks like now:
Hopefully, even without reviewing the Folder structure and file naming conventions for user research document, by just looking at this structure, you can tell a few things about this research project at a glance:
- What the project is
- What the different stages of research are
- To which stages the specific documents belong
- How much work went into each stage
Looking at this also makes me wonder if this might be an effective way to spot any gaps in research processes, say if there are very few documents in the Analysis or Planning stages. It reminded me of the concept of Building blocks, shared by Ben Holliday. Something to explore in the future.
For now, here is my list of potential items that need to be noted down in a user research ReadMe file:
- People and roles of those who led the research
- Number of participants
- Language of the research
- Any acronyms used in the documentation that need to be explained
- The purpose of the research
- The type of data collected
- When the data was collected
- Where it was collected
- How it was collected
- Any gaps in the data and what they mean
- The limitations of the research
- The software used
- How the research should be attributed, and its licensing
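As one possible way to turn this list into a working document (the section headings below are my own suggestion, not a finalized standard), a user research ReadMe template might look like:

```markdown
# ReadMe: [Project name]

## People
- Research leads and their roles:
- Contact for follow-up questions:

## Participants
- Number of participants:
- Language(s) of the research:

## Purpose and scope
- Purpose of the research:
- Limitations of the research:

## Data
- Type of data collected:
- When, where, and how it was collected:
- Known gaps in the data and what they mean:

## Tools and terms
- Software used:
- Acronyms used in the documentation:

## Reuse
- Attribution and licensing:
```

Keeping the template short and answer-by-answer makes it more likely to be filled in at the end of a project, which is exactly when the context is still fresh.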
Once I make some progress on these documents, I will be sure to update this article and share them too.