AI is often imagined as something distant — a chess master stripped of emotion, an invisible mind orchestrating decisions, or a faceless machine optimizing the world. Yet, when we turn to the individuals creating and guiding this technology — researchers, entrepreneurs, ethicists, and educators — a more layered narrative appears.
Artificial intelligence is not merely code. It is also culture, consequence, and conscience. And the trajectory of AI is increasingly defined by those who recognize both its potential and its risks.
The Humans & AI Show (AI Frontier Network) captures this tension through conversations with leaders who approach AI not with hype, but with grounded reflections on how it should be built, why it matters, and whom it must serve.
Across education, automation, trust systems, and workplace transformation, these five conversations offer insight into the human blueprint shaping AI today — and the values that may determine its course tomorrow.
1. Andy Kurtzig — Pairing AI with Human Oversight
“You wouldn’t let an AI diagnose your child without a doctor involved, so why let it make decisions about your customers or your future without oversight?”
Andy Kurtzig, CEO of Pearl, frames AI as a tool to enhance human expertise, not replace it. His perspective highlights the need for systems with embedded checkpoints, fallback mechanisms, and radical accessibility.
For Kurtzig, the future lies in partnership: AI may scale intelligence, but human judgment provides the necessary context and compassion. Designing AI means designing systems that can explain themselves, and making them usable by people beyond technical specialists; a minimal sketch of the checkpoint-and-fallback pattern follows the takeaways below.
- Lesson: Trustworthy AI is a design requirement, not an afterthought.
- Relevance: Critical as AI expands across healthcare, law, and customer support.
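To make the idea concrete, here is a minimal sketch of the checkpoint-and-fallback routing Kurtzig describes. The `Suggestion` type, the confidence floor, and the list of high-stakes domains are hypothetical illustrations, not Pearl's actual system:

```python
# Hypothetical human-in-the-loop gate: an AI suggestion is only acted on
# automatically when confidence is high and the domain is low-stakes;
# everything else falls back to a human reviewer.

from dataclasses import dataclass

@dataclass
class Suggestion:
    answer: str
    confidence: float  # model-reported confidence in [0, 1]
    domain: str        # e.g. "billing", "medical", "legal"

HIGH_STAKES = {"medical", "legal"}  # domains that always require a human
CONFIDENCE_FLOOR = 0.9              # below this, escalate regardless of domain

def route(suggestion: Suggestion) -> str:
    """Decide whether a suggestion ships automatically or goes to review."""
    if suggestion.domain in HIGH_STAKES:
        return "human_review"       # embedded checkpoint: never auto-send
    if suggestion.confidence < CONFIDENCE_FLOOR:
        return "human_review"       # fallback when the model is unsure
    return "auto_send"

if __name__ == "__main__":
    print(route(Suggestion("Refund issued.", 0.97, "billing")))    # auto_send
    print(route(Suggestion("Take 200mg daily.", 0.99, "medical"))) # human_review
```

The design choice worth noticing is that escalation is the default: the automation has to earn the right to act alone, rather than human review having to be switched on.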
2. Phil Tomlinson — Human-Centered AI by Design
“The human experience isn’t a data point. It’s the whole point.”
Phil Tomlinson, SVP at TaskUs, underscores that responsibility must be embedded in AI culture, not bolted on later. He advocates for “human-centered AI” — technology that is transparent, interpretable, and emotionally safe.
Designing for this requires multidisciplinary teams: ethicists, educators, and mental health experts alongside engineers. His concern is for those impacted most by AI decisions — from gig workers to enterprise clients — who often lack representation in design processes.
- Lesson: The human element is not a variable. It is the interface.
- Relevance: Foundational for AI applications in customer service, HR, and content moderation.
3. Doug Stephen — AI in Education and Emotional Intelligence
Doug Stephen, an education executive, emphasizes empathy as a core design principle in AI for learning. His interest extends beyond efficiency gains, such as faster grading or tailored content, to whether AI can help learners build emotional intelligence, collaboration, and resilience.
AI can track engagement, motivation, and stress, providing teachers with tools that amplify care rather than replace it. In this framing, AI in education is not only about academic outcomes but about supporting human growth.
- Lesson: The strongest AI in education is the one that fosters human development.
- Relevance: As AI proliferates in classrooms, this perspective outlines how to safeguard humanity in digital learning.
4. Adnan Masood — Ethics, Mentorship, and Conscious Creation
Adnan Masood draws on his roles as machine learning architect, mentor, and ethicist to reflect on AI's transformative power and its potential for misuse.
His focus: mentoring the next generation of AI builders. Technical proficiency alone is not enough; developers must inherit values of responsibility, fairness, and inclusivity. Education and community engagement, he argues, are as critical as scalability.
“We don’t need more coders. We need more conscious creators.”
- Lesson: The future of AI depends on the values we transmit to its builders.
- Relevance: Essential for educators, policymakers, and developers shaping curricula and organizational culture.
5. Fabian Veit — Democratizing AI-Driven Automation
Automation, if poorly designed, can quietly erode meaning and displace purpose. Fabian Veit envisions an alternative: automation that empowers people, frees up their time, and unlocks creativity.
By focusing on inclusion and accessibility, Veit develops tools that make automation broadly available — not only to major corporations but also to small businesses, NGOs, educators, and hybrid workplaces.
- Lesson: Automation must enhance dignity, not just productivity.
- Relevance: Central to understanding AI’s impact on labor and the future of work.
Shared Insight: Responsible AI as Ongoing Practice
Across these voices, a unifying message stands out: responsible AI is not a finished product, but a continual practice.
It requires beginning with humans, not just data.
It requires thinking beyond immediate prototypes toward long-term impacts.
It requires teaching, listening, and adapting in real time.
AI is not inevitable — it is intentional. Its trajectory depends on whether we commit to building with conscience, not just ambition.