Player FM - Internet Radio Done Right
Checked 6h ago
Added two years ago
Content provided by Igor Melnyk. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Igor Melnyk or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
Podcasts worth a listen
Sponsored
All About Change


Tiffany Yu — Smashing Stereotypes and Building a Disability-Inclusive World 30:23
Tiffany Yu is the CEO & Founder of Diversability, an award-winning social enterprise to elevate disability pride; the Founder of the Awesome Foundation Disability Chapter, a monthly micro-grant that has awarded $92.5k to 93 disability projects in 11 countries; and the author of The Anti-Ableist Manifesto: Smashing Stereotypes, Forging Change, and Building a Disability-Inclusive World. As a person with visible and invisible disabilities stemming from a car crash, Tiffany has built a career on disability solidarity. Now that she has found success, she works to expand a network of people with disabilities and their allies to decrease stigmas around disability and create opportunities for disabled people in America. Episode Chapters 0:00 Intro 1:26 When do we choose to share our disability stories? 4:12 Jay’s disability story 8:35 Visible and invisible disabilities 13:10 What does an ally to the disability community look like? 16:34 NoBodyIsDisposable and 14(c) 21:26 How does Tiffany’s investment banking background shape her advocacy? 27:47 Goodbye and outro For video episodes, watch on www.youtube.com/@therudermanfamilyfoundation Stay in touch: X: @JayRuderman | @RudermanFdn LinkedIn: Jay Ruderman | Ruderman Family Foundation Instagram: All About Change Podcast | Ruderman Family Foundation To learn more about the podcast, visit https://allaboutchangepodcast.com/…
Arxiv Papers
Manage series 3524393
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers
2393 episodes
All episodes
[QA] The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation 7:35
This paper reviews Large Language Models' memorization, exploring its causes, detection methods, implications, and mitigation strategies, while highlighting challenges in balancing memorization minimization with model utility. https://arxiv.org/abs//2507.05578 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation 23:36
This paper reviews Large Language Models' memorization, exploring its causes, detection methods, implications, and mitigation strategies, while highlighting challenges in balancing memorization minimization with model utility. https://arxiv.org/abs//2507.05578 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…
This paper introduces a novel differential mechanism for Mamba architecture, enhancing retrieval capabilities and performance while addressing attention overallocation issues found in sequence models like Transformers and RNNs. https://arxiv.org/abs//2507.06204 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Cascade: Token-Sharded Private LLM Inference 7:04
The paper presents Cascade, a multi-party inference protocol that enhances performance and scalability while maintaining privacy for large language models, outperforming existing secure schemes. https://arxiv.org/abs//2507.05228 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Cascade: Token-Sharded Private LLM Inference 35:03
The paper presents Cascade, a multi-party inference protocol that enhances performance and scalability while maintaining privacy for large language models, outperforming existing secure schemes. https://arxiv.org/abs//2507.05228 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Real-TabPFN: Improving Tabular Foundation Models via Continued Pre-training With Real-World Data 7:28
Real-TabPFN enhances tabular data performance by continued pre-training on curated real-world datasets, outperforming models trained on broader datasets, achieving significant gains on 29 OpenML AutoML Benchmark datasets. https://arxiv.org/abs//2507.03971 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Real-TabPFN: Improving Tabular Foundation Models via Continued Pre-training With Real-World Data 10:15
Real-TabPFN enhances tabular data performance by continued pre-training on curated real-world datasets, outperforming models trained on broader datasets, achieving significant gains on 29 OpenML AutoML Benchmark datasets. https://arxiv.org/abs//2507.03971 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Strategic Intelligence in Large Language Models: Evidence from Evolutionary Game Theory 7:21
This study explores Large Language Models' strategic intelligence in competitive settings, revealing their reasoning abilities and distinct strategies in evolutionary Iterated Prisoner's Dilemma tournaments against traditional strategies. https://arxiv.org/abs//2507.02618 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Strategic Intelligence in Large Language Models: Evidence from Evolutionary Game Theory 34:06
This study explores Large Language Models' strategic intelligence in competitive settings, revealing their reasoning abilities and distinct strategies in evolutionary Iterated Prisoner's Dilemma tournaments against traditional strategies. https://arxiv.org/abs//2507.02618 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Fast and Simplex: 2-Simplicial Attention in Triton 7:28
This paper explores the 2-simplicial Transformer, which enhances token efficiency over standard Transformers, improving performance on mathematics, coding, reasoning, and logic tasks within fixed token budgets. https://arxiv.org/abs//2507.02754 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Fast and Simplex: 2-Simplicial Attention in Triton 17:55
This paper explores the 2-simplicial Transformer, which enhances token efficiency over standard Transformers, improving performance on mathematics, coding, reasoning, and logic tasks within fixed token budgets. https://arxiv.org/abs//2507.02754 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning 7:21
https://arxiv.org/abs//2507.00432 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers

Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning 15:33
https://arxiv.org/abs//2507.00432 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers

[QA] DABstep: Data Agent Benchmark for Multi-step Reasoning 7:54
DABstep is a benchmark for evaluating AI agents on multi-step data analysis tasks, featuring 450 real-world challenges that test data processing and contextual reasoning capabilities. https://arxiv.org/abs//2506.23719 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Where to find Grokking in LLM Pretraining? Monitor Memorization-to-Generalization without Test 7:49
This study explores grokking in large language models during pretraining, revealing how training pathways evolve from random to structured, enhancing generalization despite converged loss. https://arxiv.org/abs//2506.21551 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Where to find Grokking in LLM Pretraining? Monitor Memorization-to-Generalization without Test 18:35
This study explores grokking in large language models during pretraining, revealing how training pathways evolve from random to structured, enhancing generalization despite converged loss. https://arxiv.org/abs//2506.21551 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] MMSearch-R1: Incentivizing LMMs to Search 8:11
MMSearch-R1 is a reinforcement learning framework for large multimodal models, enabling efficient, on-demand multi-turn search in real-world environments, outperforming existing methods while reducing search calls by over 30%. https://arxiv.org/abs//2506.20670 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

MMSearch-R1: Incentivizing LMMs to Search 18:50
MMSearch-R1 is a reinforcement learning framework for large multimodal models, enabling efficient, on-demand multi-turn search in real-world environments, outperforming existing methods while reducing search calls by over 30%. https://arxiv.org/abs//2506.20670 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Thought Anchors: Which LLM Reasoning Steps Matter? 7:51
The paper explores sentence-level analysis of reasoning in large language models, presenting three methods to identify influential "thought anchors" that shape multi-step reasoning processes. An open-source tool is provided. https://arxiv.org/abs//2506.19143 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Thought Anchors: Which LLM Reasoning Steps Matter? 15:41
The paper explores sentence-level analysis of reasoning in large language models, presenting three methods to identify influential "thought anchors" that shape multi-step reasoning processes. An open-source tool is provided. https://arxiv.org/abs//2506.19143 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Scaling Speculative Decoding with LOOKAHEAD REASONING 8:06
LOOKAHEAD REASONING enhances token-level speculative decoding by introducing step-level parallelism, improving speedup from 1.4x to 2.1x while maintaining answer quality across various benchmarks. https://arxiv.org/abs//2506.19830 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Scaling Speculative Decoding with LOOKAHEAD REASONING 22:49
LOOKAHEAD REASONING enhances token-level speculative decoding by introducing step-level parallelism, improving speedup from 1.4x to 2.1x while maintaining answer quality across various benchmarks. https://arxiv.org/abs//2506.19830 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations 7:55
This paper introduces Tar, a multimodal framework integrating visual understanding and generation through a shared semantic representation, enhancing efficiency and performance in cross-modal tasks. https://arxiv.org/abs//2506.18898 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations 16:59
This paper introduces Tar, a multimodal framework integrating visual understanding and generation through a shared semantic representation, enhancing efficiency and performance in cross-modal tasks. https://arxiv.org/abs//2506.18898 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Watermarking Autoregressive Image Generation 7:39
https://arxiv.org/abs//2506.16349 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers

Watermarking Autoregressive Image Generation 27:33
https://arxiv.org/abs//2506.16349 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers

[QA] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights 6:43
DnD introduces a prompt-conditioned parameter generator for LLMs, enabling rapid task-specific customization without separate training, achieving significant performance gains and lower overhead compared to traditional methods. https://arxiv.org/abs//2506.16406 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights 11:26
DnD introduces a prompt-conditioned parameter generator for LLMs, enabling rapid task-specific customization without separate training, achieving significant performance gains and lower overhead compared to traditional methods. https://arxiv.org/abs//2506.16406 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…

[QA] Flat Channels to Infinity in Neural Loss Landscapes 7:16
The paper characterizes special channels in neural network loss landscapes where slow loss decrease occurs, leading to gated linear units, enhancing understanding of gradient dynamics and optimization methods. https://arxiv.org/abs//2506.14951 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers…
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.