#83 Dr. ANDREW LAMPINEN (Deepmind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]

Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ar.player.fm/legal.

First in our unplugged series, live from #NeurIPS2022.

We discuss natural language understanding, symbol meaning and grounding, and Chomsky with Dr. Andrew Lampinen from DeepMind.

We recorded a LOT of material at NeurIPS; keep an eye out for the uploads.

YT version: https://youtu.be/46A-BcBbMnA

References

[Paul Cisek] Beyond the computer metaphor: Behaviour as interaction

https://philpapers.org/rec/CISBTC

Linguistic Competence (Chomsky reference)

https://en.wikipedia.org/wiki/Linguistic_competence

[Andrew Lampinen] Can language models handle recursively nested grammatical structures? A case study on comparing models and humans

https://arxiv.org/abs/2210.15303

[Fodor et al] Connectionism and Cognitive Architecture: A Critical Analysis

https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf

[Melanie Mitchell et al] The Debate Over Understanding in AI's Large Language Models

https://arxiv.org/abs/2210.13966

[Gary Marcus] GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about

https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

[Bender et al] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

https://dl.acm.org/doi/10.1145/3442188.3445922

[Adam Santoro, Andrew Lampinen et al] Symbolic Behaviour in Artificial Intelligence

https://arxiv.org/abs/2102.03406

[Ishita Dasgupta, Lampinen et al] Language models show human-like content effects on reasoning

https://arxiv.org/abs/2207.07051

[Yao et al] ReAct: Synergizing Reasoning and Acting in Language Models

https://arxiv.org/pdf/2210.03629.pdf

https://ai.googleblog.com/2022/11/react-synergizing-reasoning-and-acting.html

[Fabian Paischer] HELM - History Compression via Language Models in Reinforcement Learning

https://ml-jku.github.io/blog/2022/helm/

https://arxiv.org/abs/2205.12258

[Laura Ruis] Large language models are not zero-shot communicators

https://arxiv.org/pdf/2210.14986.pdf

[Kumar] Using natural language and program abstractions to instill human inductive biases in machines

https://arxiv.org/pdf/2205.11558.pdf

Juho Kim

https://juhokim.com/
