Human-Kind. Be Both.
In the previous article, I covered the first five takeaways from the ATD TechKnowledge conference. This article continues to explore the intersection of technology and humans from the human angle, covering my L&D conference reflections. As my shirt said while I delivered the Artificial Intelligence (AI) use case session:
Human-Kind. Be Both.
So many technology-driven projects fail because of the human factor. I’ve worked with teams of the brightest minds in tech who still assumed that the solution they were building would drive adoption by itself, because it’s so “intuitive” and “logical” that “if they build it, they will come.” Humans are complex and often complicated. If we always did what was right, what was good for us, and what was logical, we could get rid of prisons and 80% of HR policies.
L&D Conference Reflections: Takeaways, Round Two
1. Accessibility: Who Cares About Others?
Speaking of the right thing to do… accessibility was also top of mind in panel discussions, sessions, and hallway conversations. The fundamentals of accessibility are not technical specifications, such as including alt text for every image, but a frame of mind: we humans should care about other humans.
Every decision we make can have an impact on others. Simply thinking about this in our design, our conversations, our comments, and our reviews is itself the first step towards access for all. One for all and all for one.
Acting on these principles is often hindered by “theoretical” guidance without practical checklists and tools. I’ve seen many people approach this challenge with the “we can’t be perfect now, so let’s wait until we are” mindset. We need progress over perfection. That is why it was exciting to see Design for All Learners: Create Accessible and Inclusive Learning Experiences, a book written by learning professionals in the field under the guidance of editor Sarah Mercier, capture best practices for making that progress [1].
2. Waiting For GodoTech?
Speaking of progress over perfection… if there were a never-ending TV drama about L&D’s escape from an imaginary isolated island, it would be called “Waiting for GodoTech.” That is, waiting for the next shiny technology that magically solves all problems and smoothly sails everyone to safety. From the invention of Learning Management Systems through mobile learning, gamification, microlearning, Augmented Reality, Virtual Reality, and now AI, we’ve been dazzled by the EdTech vendor wonderworld at every conference for the last two decades.
I have countless examples of leadership falling in love with some large-scale, expensive tech investment and spending all resources on implementing something without the desired impact. Waiting for the perfect tech is doing things backwards: it’s picking a solution first and then doing our best to match it to a problem.
Start with the problem or opportunity. You may not need any new technology. You may just need humans to understand how the problem could be solved and what the trade-offs are.
Don’t wait for GodoTech. You’ll find yourself transitioning from one island to another. Understand the problem, the humans involved, and iterate. Small wins on anything help build credibility and support.
3. Collaboration Tech Does Not Collaborate; Humans Do
Another big topic, and another subject for these L&D conference reflections, is connecting humans. Many organizations have realized that sending individuals to courses about behavior change does not work when it comes to changing actual human behavior. We work in teams, but we tend to think of development as an individual activity that magically translates into team performance. With AI, the pace of change is exhausting. You can’t be the master of it all. In building your network, with both humans and agents (coming soon) or even digital twins, you’ll need to collaborate.
That is where the challenge starts. Many professionals reported that building a community is much more complicated than anticipated. Again, investing in a technology with collaboration features is not the place to start. Collaboration features do not collaborate. Humans do. You can’t mandate collaboration or mandate a community.
You design the right conditions, and they will come. So, start with where employees communicate today, and where and how they collaborate today. Find the challenges they face in those efforts. Solve them for people, and scale for them. Build from where the work is already happening, rather than adding another platform to the pile.
Where to start? Behavioral science and motivational theories. Again, humans are complex, and motivation may not work intuitively the way you think it should. Understanding self-determination theory, BJ Fogg’s MAP (motivation, ability, prompt), COM-B, and other science-based approaches can help you correct fundamental mistakes. Then, focus on progress over perfection: Iterate. Iterate. Iterate.
4. Diverse Thoughts, Better Outcome
The first AI assistant I built for myself (and then opened up to anyone internally) was something called “Holey Poke.” I used it to poke holes in my idea, approach, belief, plan, or solution. I trained it to act as a devil’s advocate, a wise guru, a researcher, a “tough love” friend, an innovator, an analyst, and more, all in one.
In the instructions, I also made the assistant pause, look at its final response, and apply the same principles to poke holes in its own recommendations. This approach provides the initial diverse thinking. Then you can take it to humans and do the same. Diverse thinking tends to lead to more effective results, especially if you have done the same thing the same way over and over again. Remember: what brought you here may not take you there.
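If you want to experiment with the same pattern, here is a minimal sketch of a “poke holes in my idea” assistant. It assumes the OpenAI Python SDK; the model name, the function name poke_holes, and the prompt wording are illustrative assumptions, not the actual Holey Poke configuration.

```python
# Minimal sketch of a multi-persona critique assistant (illustrative only).
from openai import OpenAI

SYSTEM_PROMPT = """You are a critique panel rolled into one assistant:
a devil's advocate, a wise guru, a researcher, a "tough love" friend,
an innovator, and an analyst. The user will share an idea, approach,
belief, plan, or solution. Poke holes in it from each perspective.
Before you finish, review your own response and apply the same
principles to poke holes in your own recommendations."""

def poke_holes(idea: str, model: str = "gpt-4o") -> str:
    """Send the idea to a chat model along with the critique-panel instructions."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,  # assumed model name; swap in whatever you have access to
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(poke_holes("Replace all instructor-led training with microlearning videos."))
```

The key design choice is in the system prompt: the personas create the diverse perspectives, and the final instruction forces the assistant to critique its own critique before responding.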
5. Living Next Door To Alice: Alice? Who The [Beep] Is Alice?
On a final, personal note, I also did karaoke. The “empty orchestra” experience, aka singing along to a song with no lead singer, is a social activity. Humans vs. humans. If you’ve ever had the chance to hear me sing, you know why I did not choose that profession. By the way, I did a song that is not HR-appropriate, so I won’t detail the lyrics, but you can find it on your own by searching for Gompie’s “Living Next Door to Alice.”
The related learning point I want to make, and the final one in these L&D conference reflections, comes from my childhood. In elementary school, we had music classes. The teacher took them most seriously. Let’s say a “friend of mine,” Alice, was in the class. The process of how Alice was assessed goes like this:
a) The whole class sings the song together
No problem for Alice.
b) Individually, everyone gives a grade to themselves
Alice makes a strategic move (our grade system was 1 – 5, as in F – A in the US system). Steps a and b are repeated to give a chance for self-improvement.
c) If you give a 1 (F) or a 2 (D) to yourself, you have to sing individually in front of the class
This is just in case you undervalue your performance: Alice considers it but rejects the idea.
d) If you give a 4 or 5 (B or A), you have to sing individually
This is to prove your worth: Alice doesn’t even consider this.
e) However, if you give yourself a 3 (C), you don’t have to embarrass yourself
In this case you just take your grade home as is: Alice? Who the [beep] is Alice?
Well, guess how my friend Alice did in music 🙂
References:
[1] Mercier, Sarah, ed. 2025. Design for All Learners: Create Accessible and Inclusive Learning Experiences. Association for Talent Development.