Friday, November 10, 2017 at 12:15pm
In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas AI systems (at least current ones) arguably do not possess such understanding. That is, the internal representations learned by (or programmed into) AI systems do not capture the rich “meanings” that humans bring to bear in perception, language, and reasoning.
In this talk I will survey some prominent modern efforts in AI in which machines learn to “understand” and reason about images and natural language, as well as my own work on enabling programs to perform high-level perception and analogy-making. I will use this survey to frame a discussion of some difficult but essential questions about AI research and its applications.
Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She is the author or editor of five books and over 80 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie originated the Santa Fe Institute’s Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems. She is currently writing a book about the current state of artificial intelligence and the prospects for human-level AI.