Why are humans still smarter than machines?
Friday, October 22, 2021

Description

Lecture by Jay McClelland (Stanford University), given as part of the Colloquium of the Département d'Études Cognitives at ENS-PSL.

In 1986, Dave Rumelhart, Geoff Hinton, and I began the first chapter of Parallel Distributed Processing, a two-volume work proposing neural network-based models of human cognition, with the question "Why are people smarter than machines?" At the time, people were far better than existing machine systems in many ways. Since then, machines have come a long way, and many of their successes rely on the kinds of mechanisms we promoted in the PDP volumes a third of a century ago. Neural network-based artificial systems now dominate humans at games like Chess and Go, and they have achieved breakthroughs in vision, language, and many other domains. Yet it seems clear that these systems have not yet captured important aspects of human intelligence. I will compare human and artificial neural networks and point out some of the ways in which humans still exceed our current machine approaches, focusing on a current project that illustrates some of the key differences between human and contemporary machine intelligence. Do these shortcomings and differences mean we need a radically different approach? In the last part of the talk, I will share my thoughts on this question and give suggestions for next steps toward addressing the limitations of our current machine systems.

References:
 Nam, A. J., & McClelland, J. L. (2021). What underlies rapid learning and systematic generalization in humans. Draft dated June 23, 2021. [PDF]
 McClelland, J. L. (in press). Could the AI of our dreams ever become reality? In Vernallis, C., Rogers, H., Leal, J., & Kara, S. (Eds.), Cybermedia: Science, Sound and Vision. New York, NY: Bloomsbury Academic. [PDF]
 McClelland, J. L., Hill, F., Rudolph, M., Baldridge, J., & Schuetze, H. (2020). Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models. Proceedings of the National Academy of Sciences, 117(42), 25966-25974. DOI: 10.1073/pnas.1910416117. [PDF]

Author(s)
James L. (Jay) McClelland
Stanford University
Neuroscientist


Biography:

Jay McClelland is a professor in the Department of Psychology and director of the Center for Mind, Brain, Computation and Technology at Stanford. His research covers a broad range of topics in cognitive science and cognitive neuroscience, including perception and perceptual decision making, learning and memory, language and reading, semantic and mathematical cognition, and cognitive development.

Research in his laboratory centers on efforts to develop explicit computational models based on these ideas; to test, refine, and extend the principles embodied in the models; and then to apply the models to substantive research questions through behavioral experiments, computer simulations, and mathematical analyses.


Last updated: 03/12/2021