• What ChatGPT understands: Large language models and the neuroscience of meaning | Laura Gwilliams

  • Apr 17 2025
  • Length: 43 mins
  • Podcast

  • Summary

  • If you spend any time chatting with a modern AI chatbot, you've probably been amazed at just how human it sounds, how much it feels like you're talking to a real person. Much ink has been spilled explaining how these systems are not actually conversing, not actually understanding — they're statistical algorithms trained to predict the next likely word.

    But today on the show, let's flip our perspective on this. What if instead of thinking about how these algorithms are not like the human brain, we talked about how similar they are? What if we could use these large language models to help us understand how our own brains process language to extract meaning?

    There's no one better positioned to take us through this than returning guest Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science Institute, and a member of the Department of Psychology here at Stanford.

    Learn more:

    Gwilliams' Laboratory of Speech Neuroscience

    Fireside chat on AI and Neuroscience at Wu Tsai Neuro's 2024 Symposium (video)

    The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)

    How we understand each other (From Our Neurons to Yours, 2023)

    Q&A: On the frontiers of speech science (Wu Tsai Neuro, 2023)

    Computational Architecture of Speech Comprehension in the Human Brain (Annual Review of Linguistics, 2025)

    Hierarchical dynamic coding coordinates speech comprehension in the human brain (PMC Preprint, 2025)

    Behind the Scenes segment:

    By re-creating neural pathway in dish, Sergiu Pasca's research may speed pain treatment (Stanford Medicine, 2025)

    Bridging nature and nurture: The brain's flexible foundation from birth (Wu Tsai Neuro, 2025)


    Get in touch

    We want to hear from your neurons! Email us at neuronspodcast@stanford.edu if you'd be willing to help out with some listener research, and we'll be in touch with some follow-up questions.

    Episode Credits

    This episode was produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute.

    Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience.

    Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.
