My guest for Episode #327 of the My Favorite Mistake podcast is Dr. Maya Ackerman, AI pioneer, researcher, and CEO of WaveAI. She’s also an associate professor of Computer Science and Engineering at Santa Clara University and the author of the new book Creative Machines: AI, Art, and Us.
EPISODE PAGE WITH VIDEO, TRANSCRIPT, AND MORE
In this episode, Maya shares her favorite mistake — one that changed how she builds technology and thinks about creativity. Early in her journey as an entrepreneur, her team at WaveAI created an ambitious product called “ALYSIA,” designed to assist with every step of music creation. But in trying to help too much, they inadvertently took freedom away from users. That experience inspired her concept of “humble AI” — systems that step back, listen, and support human creativity rather than take over.
Maya describes how that lesson led to their breakthrough success with Lyric Studio, an AI songwriting tool that empowers millions of artists by helping them create while staying true to their own voices. She also shares insights from her research on human-centered design, the philosophy behind generative models, and why we should build AI that’s more collaborative than competitive.
Together, we discuss why mistakes — whether made by people or machines — can spark innovation, and how being more forgiving toward imperfection can help both leaders and creators thrive.
“If AI is meant to be human-centric, it must be humble. Its job is to elevate people, not replace them.”
— Maya Ackerman
“Who decided machines have to be perfect? It’s a ridiculous expectation — and a limiting one.”
— Maya Ackerman
Questions and Topics:
- What was your favorite mistake — and what did you learn from it?
- What went wrong with your second product, “ALYSIA,” and how did that shape your later success?
- How did you discover the concept of “humble creative machines”?
- What makes Lyric Studio different from general AI tools like ChatGPT?
- How do you design AI that supports — rather than replaces — human creativity?
- What’s the real difference between AI and a traditional algorithm?
- How do you think about ethical concerns, like AI imitating living artists?
- What do you mean by human-centered AI — and how can we build it?
- Why do AI systems “hallucinate,” and can those mistakes actually be useful?
- How can embracing mistakes — human or machine — lead to more creativity and innovation?
- What are your thoughts on AI’s future — should we be hopeful or concerned?