Player FM - Internet Radio Done Right
29 subscribers
Checked 3d ago
Added two years ago
Content provided by Arize AI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Arize AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
Deep Papers
Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
53 episodes
All episodes
Watermarking for LLMs and Image Models (42:56)
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model's internals. Learn more about the A Watermark for Large Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
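The scheme discussed in the episode biases generation toward a pseudorandom "green list" of tokens and then statistically tests for that bias. Below is a minimal sketch in that spirit; the function names, hash key, and gamma/delta defaults are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def greenlist_bias(logits, prev_token, vocab_size, gamma=0.25, delta=2.0, key=15485863):
    """Soft watermarking step: hash the previous token to pick a 'green list'
    covering a gamma fraction of the vocabulary, then boost its logits by delta."""
    g = torch.Generator().manual_seed(key * prev_token)        # seed from local context
    green = torch.randperm(vocab_size, generator=g)[: int(gamma * vocab_size)]
    biased = logits.clone()
    biased[green] += delta                                     # favor green tokens during sampling
    return biased, green

def detect_z_score(tokens, vocab_size, gamma=0.25, key=15485863):
    """Detection: count how many generated tokens land in their green list and
    compare against the gamma baseline expected for unwatermarked text."""
    hits = 0
    for prev, cur in zip(tokens[:-1], tokens[1:]):
        g = torch.Generator().manual_seed(key * prev)
        green = torch.randperm(vocab_size, generator=g)[: int(gamma * vocab_size)]
        hits += int(cur in set(green.tolist()))
    n = len(tokens) - 1
    return (hits - gamma * n) / (gamma * (1 - gamma) * n) ** 0.5  # binomial z-score
```

A large z-score suggests the text was generated with the watermark, without ever needing the model's weights at detection time.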
Self-Adapting Language Models: Paper Authors Discuss Implications (31:26)
The authors of the new paper Self-Adapting Language Models (SEAL) shared a behind-the-scenes look at their work, motivations, results, and future directions. The paper introduces a novel method for enabling large language models (LLMs) to adapt their own weights using self-generated data and training directives, or "self-edits." Learn more about the Self-Adapting Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
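To make the "self-edit" idea concrete, here is a sketch of an adapt-then-evaluate loop. All of the callables are assumptions standing in for the paper's generation, finetuning, and evaluation machinery (the actual method trains the edit generator with reinforcement learning); only the control flow is illustrated.

```python
from typing import Any, Callable

def seal_style_loop(
    model: Any,
    propose_edit: Callable[[Any, str], str],   # model + context -> self-edit text (assumed)
    apply_edit: Callable[[Any, str], Any],     # finetune on the self-edit, return updated model (assumed)
    score: Callable[[Any], float],             # held-out evaluation (assumed)
    context: str,
    rounds: int = 3,
) -> Any:
    """Sketch: the model writes its own training data/directives, a lightweight
    weight update is applied, and the edit is kept only if evaluation improves."""
    best, best_score = model, score(model)
    for _ in range(rounds):
        edit = propose_edit(best, context)      # self-generated data + training directive
        candidate = apply_edit(best, edit)      # adapt the weights on that data
        s = score(candidate)
        if s > best_score:                      # keep only beneficial self-edits
            best, best_score = candidate, s
    return best
```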
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning (30:35)
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today's evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic. Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder. Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking." Read the paper: The Illusion of Thinking. Read the response: The Illusion of the Illusion of Thinking. Explore more AI research and sign up for future readings. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Accurate KV Cache Quantization with Outlier Tokens Tracing (25:11)
We discuss Accurate KV Cache Quantization with Outlier Tokens Tracing, a deep dive into improving the efficiency of LLM inference. The authors enhance KV cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance. Read the paper, access the slides, or read the blog. Join us for Arize Observe. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
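The intuition is that a handful of high-magnitude tokens dominate the quantization scale and degrade everyone else's precision. A toy sketch of that idea (not the paper's exact algorithm: outlier scoring, bit width, and per-token scaling here are illustrative choices):

```python
import torch

def quantize_kv_with_outliers(kv: torch.Tensor, n_outliers: int = 2, n_bits: int = 4):
    """Per-token quantize a KV cache slice of shape [seq_len, dim], but keep the
    few tokens with the largest dynamic range ('outlier tokens') in full precision."""
    qmax = 2 ** (n_bits - 1) - 1
    token_range = kv.abs().amax(dim=-1)                    # score each token by its range
    outlier_idx = token_range.topk(n_outliers).indices     # largest ranges are outliers

    scales = token_range.clamp(min=1e-8) / qmax            # per-token quantization scale
    quantized = torch.clamp((kv / scales[:, None]).round(), -qmax - 1, qmax)

    dequant = quantized * scales[:, None]
    dequant[outlier_idx] = kv[outlier_idx]                  # outliers bypass quantization
    return dequant, outlier_idx

# Usage: plant one outlier token and inspect the reconstruction error.
kv = torch.randn(128, 64)
kv[5] *= 40
approx, outliers = quantize_kv_with_outliers(kv)
print(outliers, (kv - approx).abs().mean().item())
```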
Scalable Chain of Thoughts via Elastic Reasoning (28:54)
In this week's episode, we talk about Elastic Reasoning, a novel framework designed to enhance the efficiency and scalability of large reasoning models by explicitly separating the reasoning process into two distinct phases: thinking and solution. This separation allows for independent allocation of computational budgets, addressing challenges related to uncontrolled output lengths in real-world deployments with strict resource constraints. Our discussion explores how Elastic Reasoning contributes to more concise and efficient reasoning, even in unconstrained settings, and its implications for deploying LRMs in resource-limited environments. Read the paper, join us live, or read the blog. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
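A rough sketch of the separated budgeting idea follows. The `generate` callable, tag format, and default budgets are assumptions for illustration, not the paper's implementation; the point is that the solution phase always gets its full budget even if thinking is cut short.

```python
from typing import Callable, Optional

def elastic_generate(
    generate: Callable[[str, int, Optional[str]], str],  # (prompt, max_tokens, stop) -> text; assumed interface
    prompt: str,
    think_budget: int = 512,
    solution_budget: int = 256,
) -> str:
    """Spend at most `think_budget` tokens inside <think>...</think>, then force
    the model into the solution phase with its own independent budget."""
    # Phase 1: thinking, truncated at its own budget (or stopped early at </think>).
    thinking = generate(prompt + "<think>", think_budget, "</think>")
    # Phase 2: solution, given its full budget regardless of how thinking ended.
    solution = generate(prompt + "<think>" + thinking + "</think>", solution_budget, None)
    return solution
```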
Sleep-time Compute: Beyond Inference Scaling at Test-time (30:24)
What if your LLM could think ahead, preparing answers before questions are even asked? In this week's paper read, we dive into a groundbreaking new paper from researchers at Letta, introducing sleep-time compute: a novel technique that lets models do their heavy lifting offline, well before the user query arrives. By predicting likely questions and precomputing key reasoning steps, sleep-time compute dramatically reduces test-time latency and cost without sacrificing performance. We explore new benchmarks (Stateful GSM-Symbolic, Stateful AIME, and the multi-query extension of GSM) that show up to 5x lower compute at inference, 2.5x lower cost per query, and up to 18% higher accuracy when scaled. You'll also see how this method applies to realistic agent use cases and what makes it most effective. If you care about LLM efficiency, scalability, or cutting-edge research, this one is worth a listen. Explore more AI research, or sign up to hear the next session live. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
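The mechanics reduce to "anticipate, precompute, reuse." A minimal sketch under assumed helpers (`llm` is a hypothetical text-in/text-out callable; prompts and cache format are illustrative):

```python
from typing import Callable, Dict

def sleep_time_prepare(llm: Callable[[str], str], context: str, n_questions: int = 5) -> Dict[str, str]:
    """Offline phase: anticipate likely questions about a context and cache the
    expensive step-by-step reasoning for each while the system is idle."""
    cache: Dict[str, str] = {}
    questions = llm(f"List {n_questions} likely questions about:\n{context}").splitlines()
    for q in filter(None, (q.strip() for q in questions)):
        cache[q] = llm(f"Context:\n{context}\n\nThink step by step and answer: {q}")
    return cache

def answer(llm: Callable[[str], str], context: str, cache: Dict[str, str], query: str) -> str:
    """Test-time phase: condition on the precomputed notes instead of reasoning from scratch."""
    notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in cache.items())
    return llm(f"Context:\n{context}\n\nPrecomputed notes:\n{notes}\n\nAnswer concisely: {query}")
```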
LibreEval: The Largest Open Source Benchmark for RAG Hallucination Detection (27:19)
For this week's paper read, we dive into our own research. We wanted to create a replicable, evolving dataset that can keep pace with model training so that you always know you're testing with data your model has never seen before. We also saw the prohibitively high cost of running LLM evals at scale, and have used our data to fine-tune a series of SLMs that perform just as well as their base LLM counterparts, but at 1/10 the cost. So, over the past few weeks, the Arize team generated the largest public dataset of hallucinations, as well as a series of fine-tuned evaluation models. We talk about what we built, the process we took, and the bottom-line results. You can read the recap of LibreEval here. Dive into the research, or sign up to join us next time. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
AI Benchmark Deep Dive: Gemini 2.5 and Humanity's Last Exam (26:11)
This week we talk about modern AI benchmarks, taking a close look at Google's recent Gemini 2.5 release and its performance on key evaluations, notably Humanity's Last Exam (HLE). In the session we covered Gemini 2.5's architecture, its advancements in reasoning and multimodality, and its impressive context window. We also talked about how benchmarks like HLE and ARC-AGI-2 help us understand the current state and future direction of AI. Join us for the next live recording, or check out the latest AI research. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Model Context Protocol (MCP)
We cover Anthropic's groundbreaking Model Context Protocol (MCP). Though it was released in November 2024, we've been seeing a lot of hype around it lately, and thought it was well worth digging into. Learn how this open standard is revolutionizing AI by enabling seamless integration between LLMs and external data sources, fundamentally transforming them into capable, context-aware agents. We explore the key benefits of MCP, including enhanced context retention across interactions, improved interoperability for agentic workflows, and the development of more capable AI agents that can execute complex tasks in real-world environments. Read our analysis of MCP on the blog, or dive into the latest AI research. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
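As a flavor of what "exposing an external data source to an LLM" looks like in practice, here is a tiny server sketch assuming the official `mcp` Python SDK and its FastMCP helper; the server name, tool, and fake data store are illustrative, not part of the protocol itself.

```python
# Minimal MCP server sketch (assumes: pip install mcp). It exposes one tool that
# an MCP-capable client (e.g. a desktop LLM app) could discover and call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("metrics-demo")

@mcp.tool()
def latest_metric(name: str) -> str:
    """Return a (fake) latest value for a named metric, standing in for a real
    external data source the LLM could not otherwise see."""
    fake_store = {"daily_active_users": "12,402", "p95_latency_ms": "183"}
    return fake_store.get(name, "unknown metric")

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```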
AI Roundup: DeepSeek's Big Moves, Claude 3.7, and the Latest Breakthroughs (30:23)
This week, we're mixing things up a little bit. Instead of diving deep into a single research paper, we cover the biggest AI developments from the past few weeks. We break down key announcements, including DeepSeek's big launch week, with a look at FlashMLA (DeepSeek's efficient MLA decoding kernel for faster inference) and DeepEP (their open-source communication library for expert-parallel MoE models), plus Claude 3.7 and Claude Code: what's new with Anthropic's latest model, and what Claude Code brings to the AI coding assistant space. Stay ahead of the curve with this fast-paced recap of the most important AI updates. We'll be back next time with our regularly scheduled programming. Dive into the latest AI research. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
How DeepSeek is Pushing the Boundaries of AI Development (29:54)
This week, we dive into DeepSeek. SallyAnn DeLucia, Product Manager at Arize, and Nick Luzio, a Solutions Engineer, break down key insights on a model that has been dominating headlines for its significant breakthrough in inference speed over other models. What's next for AI (and open source)? From training strategies to real-world performance, here's what you need to know. Read our analysis of DeepSeek, or dive into the latest AI research. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Multiagent Finetuning: A Conversation with Researcher Yilun Du (30:03)
We talk to Google DeepMind Senior Research Scientist (and incoming Assistant Professor at Harvard) Yilun Du about his latest paper, "Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains." This paper introduces a multiagent finetuning framework that enhances the performance and diversity of language models by employing a society of agents with distinct roles, improving feedback mechanisms and overall output quality. The method enables autonomous self-improvement through iterative finetuning, achieving significant performance gains across various reasoning tasks. It's versatile, applicable to both open-source and proprietary LLMs, and can integrate with human-feedback-based methods like RLHF or DPO, paving the way for future advancements in language model development. Read an overview on the blog, watch the full discussion, or join us live for future paper readings. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Training Large Language Models to Reason in Continuous Latent Space (24:58)
LLMs have typically been restricted to reasoning in the "language space," where chain-of-thought (CoT) is used to solve complex reasoning problems. But a new paper argues that language space may not always be best for reasoning. In this paper read, we cover an exciting new technique from a team at Meta called Chain of Continuous Thought, also known as "Coconut." The paper, "Training Large Language Models to Reason in a Continuous Latent Space," explores the potential of allowing LLMs to reason in an unrestricted latent space instead of being constrained by natural language tokens. Read a full breakdown of Coconut on our blog, or join us live for the next paper reading. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
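The core mechanism is easy to show in code: instead of sampling a token and re-embedding it, the model's last hidden state is fed straight back as the next input embedding for a few latent "thought" steps before normal decoding resumes. A simplified sketch on a small Hugging Face causal LM (this is an illustration of the idea, not the authors' training setup, which also learns when to enter and exit latent mode):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: If I have 3 apples and buy 2 more, how many apples?\nA:"
ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)

with torch.no_grad():
    # Latent "thought" steps: append the last hidden state as the next embedding.
    for _ in range(4):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]   # continuous thought vector
        embeds = torch.cat([embeds, last_hidden], dim=1)

    # Resume ordinary token decoding from the latent-augmented prefix.
    out = model(inputs_embeds=embeds)
    next_token = out.logits[:, -1, :].argmax(dim=-1)
    print(tok.decode(next_token))
```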
LLMs as Judges: A Comprehensive Survey on LLM-Based Evaluation Methods (28:57)
We discuss a major survey of work and research on LLM-as-Judge from the last few years. "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" systematically examines the LLMs-as-Judge framework across five dimensions: functionality, methodology, applications, meta-evaluation, and limitations. This survey gives us a bird's-eye view of the framework's advantages, limitations, and methods for evaluating its effectiveness. Read a breakdown on our blog: https://arize.com/blog/llm-as-judge-survey-paper/ Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Merge, Ensemble, and Cooperate! A Survey on Collaborative LLM Strategies (28:47)
LLMs have revolutionized natural language processing, showcasing remarkable versatility and capabilities. But individual LLMs often exhibit distinct strengths and weaknesses, influenced by differences in their training corpora. This diversity poses a challenge: how can we maximize the efficiency and utility of LLMs? A new paper, "Merge, Ensemble, and Cooperate: A Survey on Collaborative Strategies in the Era of Large Language Models," highlights collaborative strategies to address this challenge. In this week's episode, we summarize key insights from this paper and discuss practical implications of LLM collaboration strategies across three main approaches: merging, ensemble, and cooperation. We also review some new open source models we're excited about. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
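Of the three approaches, merging is the simplest to show in code: ensembling and cooperation combine outputs or exchange messages at inference time, whereas merging combines weights directly. A sketch of uniform weight interpolation between two checkpoints that share an architecture (the helper name and toy networks are illustrative, not from the survey):

```python
import torch

def average_merge(state_dict_a: dict, state_dict_b: dict, alpha: float = 0.5) -> dict:
    """Interpolate two compatible state dicts parameter by parameter."""
    assert state_dict_a.keys() == state_dict_b.keys(), "models must share an architecture"
    return {k: alpha * state_dict_a[k] + (1 - alpha) * state_dict_b[k] for k in state_dict_a}

# Usage with two toy checkpoints of the same tiny network:
net_a = torch.nn.Linear(4, 2)
net_b = torch.nn.Linear(4, 2)
merged = torch.nn.Linear(4, 2)
merged.load_state_dict(average_merge(net_a.state_dict(), net_b.state_dict()))
```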
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.