Content provided by Jay Shah. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jay Shah or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
Instruction Tuning, Prompt Engineering and Self Improving Large Language Models | Dr. Swaroop Mishra
Swaroop is a research scientist at Google DeepMind, working on improving Gemini. His research expertise includes instruction tuning and prompt engineering techniques for improving reasoning and generalization in large language models (LLMs) and for tackling biases induced during training. Before joining DeepMind, Swaroop graduated from Arizona State University, where his research focused on developing methods that allow models to learn new tasks from instructions. He has also interned at Microsoft, Allen AI, and Google, and his research on instruction tuning has been influential in recent developments of LLMs.

Time stamps of the conversation:
00:00:50 Introduction
00:01:40 Entry point in AI
00:03:08 Motivation behind instruction tuning in LLMs
00:08:40 Generalizing to unseen tasks
00:14:05 Prompt engineering vs. instruction tuning
00:18:42 Does prompt engineering induce bias?
00:21:25 Future of prompt engineering
00:27:48 Quality checks on instruction-tuning datasets
00:34:27 Future applications of LLMs
00:42:20 Trip planning using LLMs
00:47:30 Scaling AI models vs. making them efficient
00:52:05 Reasoning abilities of LLMs in mathematics
00:57:16 LLM-based approaches vs. traditional AI
01:00:46 Benefits of doing research internships in industry
01:06:15 Should I work on LLM-related research?
01:09:45 Narrowing down your research interests
01:13:05 Skills needed to be a researcher in industry
01:22:38 On the publish-or-perish culture in AI research

More about Swaroop: https://swarooprm.github.io/
His research works: https://scholar.google.com/citations?user=-7LK2SwAAAAJ&hl=en
Twitter: https://x.com/Swarooprm7

About the host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)

Stay tuned for upcoming webinars!

***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
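The conversation contrasts prompt engineering (steering a frozen model with instructions and examples at inference time) with instruction tuning (fine-tuning on instruction-response pairs so the model follows unseen instructions later). As a rough, purely illustrative sketch, not taken from the episode and with invented task names and example data, the difference is mainly in where the instructions live:

```python
# Illustrative sketch only: the same sentiment task framed two ways.

# 1) Prompt engineering: the instruction and a few examples are packed into the
#    inference-time prompt; the model's weights stay frozen.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day."   Sentiment: Positive
Review: "It broke after a week."       Sentiment: Negative
Review: "Setup was painless."          Sentiment:"""

# 2) Instruction tuning: the instructions become training data. Each record pairs
#    a natural-language instruction (plus optional input) with the desired output,
#    and the model is fine-tuned on many such tasks so it can generalize to
#    instructions it has never seen.
instruction_tuning_examples = [
    {
        "instruction": "Classify the sentiment of the review as Positive or Negative.",
        "input": "The battery lasts all day.",
        "output": "Positive",
    },
    {
        "instruction": "Summarize the sentence in five words or fewer.",
        "input": "The delivery arrived two days earlier than promised.",
        "output": "Delivery arrived earlier than promised.",
    },
]

if __name__ == "__main__":
    print(few_shot_prompt)
    for ex in instruction_tuning_examples:
        print(ex["instruction"], "->", ex["output"])
```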
92 episodes