Do AI Chatbots Have Real Understanding? Watch the Experts' Fiery Debate!

The large language models (LLMs) that power today's chatbots have become so astoundingly capable that AI researchers are hard-pressed to assess those capabilities: no sooner does a new test appear than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and computation that simulates true understanding?
To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I was the moderator of the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.
Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and over the past decade she has emerged as one of the fiercest critics of today's leading AI companies and their approach to AI. She is also known as a coauthor of the seminal 2021 paper "On the Dangers of Stochastic Parrots," which laid out the possible risks of LLMs (and led Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the "no" position.
Taking the "yes" position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was a vice president of AI. During his time at Microsoft, he coauthored the influential preprint "Sparks of Artificial General Intelligence," which described his early experiments with OpenAI's GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that convinced him the model had reached a new level of comprehension.
Without further ado, we bring you the matchup that I call "Parrots vs. Sparks."