Building connected learning
As part of designing the learning architecture for all the offerings at Canada’s Digital Academy (I will talk more about the bigger picture in a later article), I’ve been incrementally working on discipline maps for each of the learning streams that focus on the development of digital skills in the public service.
The current 9 disciplines (which tend to change and evolve) include:
- Artificial intelligence and machine learning
- Change management
- Enabling teams
- Transformational leadership
- Digital foundations
The main purpose of these maps is to serve as the taxonomic backbone (think faceted or parametric search filters that help you find and discover content) of the Digital Open Learning platform, which was being developed at the same time.
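At its simplest, faceted discovery over such a taxonomy amounts to intersecting tag sets: every learning item carries terms from the discipline maps, and a filter keeps only the items that match every selected facet. A minimal sketch of that idea (the item titles, tags, and function names here are hypothetical, not from the actual platform):

```typescript
// Each learning item is tagged with terms drawn from the discipline maps.
interface LearningItem {
  title: string;
  tags: string[];
}

// Keep only items that match every selected facet (AND semantics).
function filterByFacets(items: LearningItem[], facets: string[]): LearningItem[] {
  return items.filter((item) => facets.every((f) => item.tags.includes(f)));
}

// Hypothetical catalog entries for illustration.
const catalog: LearningItem[] = [
  { title: "Intro to usability testing", tags: ["Design", "Usability testing"] },
  { title: "Machine learning basics", tags: ["AI and ML"] },
];

const hits = filterByFacets(catalog, ["Design"]);
```

A real platform would index these facets rather than scan linearly, but the consistency of the underlying terms is what makes either approach work.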
Making sense of the different disciplines
What started as an early draft topic map outlining the streams and practices we wanted to deliver to learners has changed quite a bit since its inception in late 2018.
My preliminary approach to this design challenge was to create a canvas where all the disciplines would mingle together.
I had a starting point that was already determined for me in that for each discipline I had to identify:
I also had a rather large starter list of hundreds of initial items (full of conceptual challenges and many different-words-used-to-describe-the-same-thing) collected from government employees via a request to contribute ideas about what each discipline included.
You can see the messy design process sprawling on this RealtimeBoard, which I use as a digital collaboration tool to build the initial structures with stakeholders and subject matter experts.
What this allowed me to do was to critically look at potential overlaps and complex entanglements between all the topics.
All in all, this single canvas of mind maps was an important starting point and an equally important departure point.
Each consultation with stakeholders or subject matter experts lasted a couple of hours and involved complex discussions in trying to define disciplinary concepts (“when you say this, you mean…?”, “what is the difference between x and y?”, “is x a practice or a skill?”, “where does x fit into the domain?”, etc.). A single session would often define at most about ⅕ of a discipline’s practices and skills, without even getting to learning points. It became clear that this way of working was unsustainable given the timelines I had to meet.
Realizing the immense amount of work and messiness that goes into building disciplinary maps (my library and information studies training tells me I should have known better), I had to change my initial approach, in which I was pulled in all directions, trying to build scaffolding for all disciplines at once while holding conversations with multiple subject matter experts.
Design challenges identified along the way
The main challenge in building a reusable system of concepts, a taxonomy, that could be used to:
- discover learning content
- connect related content
- ‘push’ curated content to learners
- help the public sector employees learn about the interconnectedness of different disciplines
was that the same concepts across these disciplines needed to be represented consistently, with the same terminology and at the same level (if something was a Practice in one discipline, it could not suddenly become a Learning point in another — or at least, this was the only way I could conceptualize it logically at the time).
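This consistency requirement is mechanically checkable: if each discipline map is treated as a set of (term, level) pairs, a short script can flag any term that appears at different levels in different disciplines. A minimal sketch, with hypothetical discipline and term names (not taken from the actual maps):

```typescript
// The levels of the hierarchy used in the learning architecture.
type Level = "Practice" | "Skill" | "Learning point";

// Each discipline map assigns every term a level.
type DisciplineMap = Record<string, Level>;

// Return terms that appear at different levels across disciplines —
// exactly the inconsistency the shared taxonomy must avoid.
function findLevelConflicts(
  disciplines: Record<string, DisciplineMap>
): string[] {
  const seen = new Map<string, Level>();
  const conflicts = new Set<string>();
  for (const map of Object.values(disciplines)) {
    for (const [term, level] of Object.entries(map)) {
      const prior = seen.get(term);
      if (prior !== undefined && prior !== level) conflicts.add(term);
      seen.set(term, level);
    }
  }
  return [...conflicts];
}

// Hypothetical example: "design research" is a Practice in one map
// but a Skill in another.
const conflicts = findLevelConflicts({
  design: { "design research": "Practice", interviewing: "Skill" },
  data: { "design research": "Skill", interviewing: "Skill" },
});
```

Running such a check after each consultation would catch drift between maps early, instead of during a painful reconciliation pass at the end.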
The other challenge was how to work with different stakeholders without necessarily going into an in-depth explanation of how the development of each topic needed to consider other topics, and of how decisions about labeling, or about whether something is a Practice or a Skill, needed to be tempered by those alignments.
- Should I simply hold consultations, collect feedback, and facilitate discussions about each discipline in a vacuum, without ‘burdening’ my participants with extra explanations about the purpose of use and its complexities, and then apply the necessary ‘editorial’ processing to the collected data on my own?
- Should I explain the purpose and use of the data collected and help the stakeholders work through the challenges with me?
Both routes have their own challenges and I am still trying to figure out what the best approach is.
On top of that, I needed to find and consult multiple subject matter experts on each topic to get a discussion going and to identify areas of agreement and disagreement. Contributing to the complexity was the fact that not every subject matter expert was a good candidate for this exercise (something one may not be able to determine before the consultation) as they needed to:
- understand the discipline
- be able to look at it from alternative perspectives
- answer clarifying questions
- help articulate the different parts of each discipline, including those they may not be familiar with (since an expert in one area of a topic may not be an expert in another area)
A good mitigation strategy would be to hold a short introductory meeting (phone or video call, since my work is virtual) prior to inviting someone to a session (instead of simply relying on referrals from others or picking candidates from a list of people who volunteered as ‘interested participants’) to talk about their area of expertise and ask questions that would help determine their suitability for this task.
Changing to an incremental design approach
After feeling overwhelmed by the constant negotiations between topics and having to repeatedly alternate between zooming in and out of different maps at different levels of completeness, only to find myself having to redo and change things every time one discipline evolved, it was time to tackle a single topic all the way and see where it took me.
I started with the “design” topic, as it was most familiar to me and helped me mitigate a steep learning curve.
Even though I have worked in the field for about 5 years now, defining an entire discipline was not an easy task and required:
- multiple conversations with different subject matter experts
- discovery and consultation of different design course syllabi
- consideration of the domain’s use within the government context
- discovery and consultation of resources on different sub-topics to ensure comprehensiveness and accurate terminology
- locating different terms for the same concepts and ‘normalizing’ them by picking a single one and using it consistently
While it would take too long to go into details about all the conceptual challenges that I’ve encountered, I will provide a few as examples.
One of the key obstacles was determining what was a “Practice” vs a “Skill” within the discipline.
Based on a workshop I facilitated with the team when I just joined in December 2018, we defined these two entities as follows:
- Practices are a collection of skills used to create outputs in a particular area of expertise (stream)
- Skills include knowledge-based skills such as ways of thinking (mindsets), principles and “what something is” as well as practical skills that enable you “to do things” and apply knowledge
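These two definitions translate naturally into a nested data model, with Learning points (and, sometimes, sub-learning points) hanging off Skills, and Skills hanging off Practices. A minimal sketch in TypeScript — the field names and the sample entry are my own illustrations, not the Academy’s actual implementation:

```typescript
// A Learning point may itself contain sub-learning points.
interface LearningPoint {
  label: string;
  subPoints?: LearningPoint[];
}

// A Skill covers both knowledge ("what something is") and
// practical application ("to do things").
interface Skill {
  label: string;
  kind: "knowledge" | "practical";
  learningPoints: LearningPoint[];
}

// A Practice is a collection of skills used to create outputs
// in a particular area of expertise (stream).
interface Practice {
  label: string;
  skills: Skill[];
}

// Hypothetical entry illustrating the shape:
const usabilityTesting: Practice = {
  label: "Usability testing",
  skills: [
    {
      label: "Moderating a test session",
      kind: "practical",
      learningPoints: [{ label: "Writing a test script" }],
    },
  ],
};
```

Having the levels as distinct types makes the classification decisions explicit: an entry has to live somewhere, which is exactly what makes the “Practice vs Skill” question so hard in practice.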
Despite the available definitions, it was very tough to analyze each important component of a discipline and classify it.
For example, from my perspective, “service design”, “circular design”, “inclusive design” and “design thinking” would all be Skills.
- Service design and circular design are lenses or mindsets with which to approach a problem.
- Design thinking is an approach to problem definition and problem-solving, while inclusive design is a consideration for all design stages; both would fit under each design Practice as each Practice includes its own level of problem definition and solution-finding and would require an inclusivity check point
The challenge is that once these important concepts become Skills, their level of visibility and importance fades by comparison to Practices. “Service design”, “circular design”, “inclusive design” and “design thinking” also don’t seem to be of the same type or at the same level — they can’t all be Skills. For instance, both “inclusive design” and “design thinking” would fall under “service design” and “circular design”, becoming their children.
There was also a point of contention for me where I felt that based on my experience of usability practices in government projects, “service design” in itself should not be taught as a standalone Practice — I saw it more as “service design thinking”; it should be a lens and a consideration embedded within what I saw as the basic design methods/Practices:
- Design research
- Content strategy
- Information architecture
- Content design
- Visual design
- Interaction design
- Usability testing
By separating it out and putting it at the same level as these Practices, we are distinguishing it as a different thing that does not necessarily need to integrate. In other words, a learner could choose to learn “service design” as a separate Practice, not realizing that good design decisions come from using a set of common design methods/Practices which all should integrate a ‘service design’ lens and apply it when necessary and in a manner that makes sense.
Despite my personal inclination, I was compelled to bring these 4 challenging concepts to the level of Practice because I saw different perspectives on this topic and also because I wanted to offer more depth to these topics (given their importance in the government context) via Skills and Learning points, which would only be possible if they were at the highest level of the hierarchy.
I also grappled with including “visual design” as a Practice rather than a Skill under “interaction design”, especially in the government context where a lot of the design decisions are predetermined and someone would unlikely be a ‘visual designer’ only.
At a more basic level of challenges, there were also considerations of how to frame a Skill in a meaningful way, for example choosing between the Skill labels “Writing for the web” (which can be interpreted as only considering the medium and not including the user perspective) vs “Writing for users”; a point that emerged in a conversation about content design with Lisa Fast.
The list goes on…
Many changes and tweaks later, the final draft of the “design” topic map includes almost 500 entries and took about 20 hours to complete (with at least a couple of hours to define relevant Skills and Learning points for each Practice) and does not include the various conversations I had on the topics with a number of subject matter experts, which would bring it closer to 25 hours. It is worth reinforcing that this amount of effort is required for a topic of great familiarity.
For topics that I am far less familiar with (such as Development) and that require a significant level of research and conversations with subject matter experts, I would anticipate it taking nearly twice as long, so about 50h (that’s about 1.5 weeks of solid work).
Preparing a prototype for feedback
The next step was to seek feedback from the broader government and global digital community, as well as those who have expressed specific interest in being involved in the design of the Academy’s curriculum. We would want to know what’s missing or does not make sense. We would then need to determine how deep and how broad we can and should go, so that it is both useful and feasible to implement.
While I had a basic hierarchical prototype of the learning architecture for the “design” topic (a structure made up of Practices, Skills and Learning points and, sometimes, Sub-learning points), it was not very captivating and we would not get much uptake or feedback, which would defeat the point of a consultation.
I reached out to our technological disruptor and magician Sinan Baltacioglu, asking him if he could help me come up with a simple but visually appealing and interactive representation of the map, something similar to the http://arborjs.org/ project I came across in the past.
I would like to describe to you the conversations that ensued, but I don’t think I can do it adequately, so I will leave it to Sinan to tell, if he has time one of these days. But if I had to summarize it in a few words and concepts that I could understand/retain, it involved JSON objects, SVG (scalable vector graphics) and a React tree graph, among other elements.
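Most tree-graph components consume a recursive “node with children” JSON shape, so a plausible reading of that conversation is converting the hierarchical map into nested objects like the ones below. This is a sketch of the general pattern only, not Sinan’s actual code; the node names beyond the article’s own examples are hypothetical:

```typescript
// The recursive node shape typically consumed by tree-graph libraries.
interface TreeNode {
  name: string;
  children?: TreeNode[];
}

// A tiny fragment of the "design" topic as a nested JSON object.
const designTree: TreeNode = {
  name: "Design",
  children: [
    {
      name: "Usability testing", // Practice
      children: [
        { name: "Moderating a test session" }, // Skill (hypothetical)
      ],
    },
    { name: "Information architecture" }, // Practice
  ],
};

// Count all nodes in the tree — a quick sanity check against the
// number of entries in the source document.
function countNodes(node: TreeNode): number {
  return 1 + (node.children ?? []).reduce((n, c) => n + countNodes(c), 0);
}
```

Once the architecture is in this shape, rendering it as an SVG tree in React is mostly a matter of handing the object to the graph component.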
After less than a month of back and forth (given the many other deliverables both of us were working on), we had a working prototype of a graph that represents the “design” topic learning architecture.
If you are interested in the technical aspects, you can find the code for the Digital Academy learning architecture tree on GitHub.
It is not perfect, the graph library used has some limitations (plus we had time and resource limitations), among them:
- Truncation at Skill level (3rd level) that makes the longer skills unreadable unless clicked on, at which point they can be seen in full
- Crammed spacing between the Skills makes readability difficult
- Need to create unique terms for each branch in a tree; so if a concept like “gathering data” appears in multiple branches of the graph tree, each instance would need to be visibly augmented by a unique identifier like “gathering data (information architecture)” to distinguish it from other instances and prevent rendering problems; this conceptually contradicts the idea of reusing the same term and cross-referencing it
- Accessibility gaps (while we fully value the importance of being inclusive, the current version is not yet accessible)
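The unique-name workaround described in the third limitation can at least be automated: walk the tree once to count labels, then append the parent’s label as a qualifier wherever a label occurs more than once. A sketch of that idea, with an illustrative node shape and sample data (not the actual prototype code):

```typescript
interface TreeNode {
  name: string;
  children?: TreeNode[];
}

// Rename duplicate labels as "label (parent)" so every node in the
// tree is unique — the workaround the graph library required.
function disambiguate(root: TreeNode): TreeNode {
  const counts = new Map<string, number>();
  const tally = (n: TreeNode) => {
    counts.set(n.name, (counts.get(n.name) ?? 0) + 1);
    n.children?.forEach(tally);
  };
  tally(root);

  const rename = (n: TreeNode, parent?: string): TreeNode => ({
    name:
      (counts.get(n.name) ?? 0) > 1 && parent
        ? `${n.name} (${parent})`
        : n.name,
    children: n.children?.map((c) => rename(c, n.name)),
  });
  return rename(root);
}

const result = disambiguate({
  name: "Design",
  children: [
    { name: "Information architecture", children: [{ name: "gathering data" }] },
    { name: "Design research", children: [{ name: "gathering data" }] },
  ],
});
```

The conceptual tension remains, though: the taxonomy wants one reusable term, while the rendering layer forces per-branch variants of it.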
Ideally, I would also like for someone to be able to see and compare 2 or more fully-expanded branches at the same time or to be able to click on a Practice, Skill or a Learning point and be able to see all the other instances where the same ‘attribute’ is applicable, whether in the same discipline or other disciplines the Academy offers.
Nonetheless, this is a minimal viable product that I am very excited about, as it offers an alternative view for someone who is visual or would like to experience different layers of a topic at a glance. It is a tool to build on and improve.
At the moment both the Google doc and the graph are available in English only, as we are using this first map to test out our overall approach and the depth of mapping that would be useful. Once we iterate on this version, we will have a bilingual (English and French) prototype.
A small update (April 2019). I tried Kumu to map out disciplinary concepts to see if it facilitated viewing the ‘system’ in its entirety. I only entered a subset of information as a test, since the re-arrangement of data took quite a bit of time. It is an interesting way to visualize knowledge and might be worth further exploration, when time permits.
My final reflection on the process so far is that it is messy and challenging. It is not a neat and orderly display of ideas and concepts… it is not even that pretty… and it might never be, but it’s a start and it’s something where there was nothing at all — so we are making progress and hopefully building a community of like-minded people in the process.
Alors, bon appétit!
And if you have feedback on the above documents that you would like to share, please reach out!
Notes on what the learning architecture is and isn’t
Following some feedback, I realized that I should add a couple of points to explain what the current structure is supposed to reflect and what it does not reflect/include.
- Relationships between concepts are hierarchical, but within those hierarchies, items listed are not necessarily sequential or prioritized.
- The structure is not exhaustive. It intends to cover as much as possible, but there are definitely gaps, and some areas are covered in greater detail than others, based on my own level of knowledge about a specific topic.