The Learning Experience Ops Show is a series of real conversations with the people building and running the systems that make learning work—across higher education, K–12, healthcare, clean energy, corporate L&D, and beyond.
Each episode explores how learning teams are adapting to massive change: what’s working, what’s breaking, and what’s next. Guests share their strategies, tools, and stories from the front lines of Learning Experience Operations (LX Ops)—the evolving discipline where design, technology, and organizational systems meet.
At its core, the show is about one big idea: learning gets better when it’s built on a clear, repeatable process that’s ready for whatever comes next.
Episodes

Wednesday Jan 28, 2026
Summary
In this conversation, Jason Gorman and Marianna Ganapini delve into the ethical implications of AI, exploring the need for stillness in understanding these issues. They discuss the importance of applied theory in AI governance, the challenges AI poses in education, and the concept of trust in AI systems. Marianna makes the case for a rights-based approach to responsible AI, highlighting the importance of context in learning and the evolving role of humans in an AI-driven world.
Takeaways
- The need for stillness to think through ethical implications of AI.
- AI's rapid pace makes it difficult to consider ethical issues.
- Responsible AI aims to create safe and fair machines.
- Students' reliance on AI tools may hinder the development of their reading skills.
- Trust in AI involves both reliability and understanding intentions.
- Evaluation measures for AI outputs are becoming more established.
- Human roles in learning must be redefined in an AI context.
- A rights-based approach to AI is essential for ethical governance.
- The influence of AI on education requires careful consideration.
- Context is crucial for effective learning experiences.
Watch the full episode:
Philosopher in the Loop: Marianna Ganapini on Redesigning AI with Reflection, Not Just Speed