Player FM - Internet Radio Done Right
207 subscribers
Checked 2d ago
Added five years ago
Content provided by Alexandre Andorra. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Alexandre Andorra or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
Learning Bayesian Statistics
Mark all (un)played
Manage series 2635823
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow. When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections, or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners! My name is Alex Andorra, by the way, and I live in Estonia. By day, I'm a data scientist and modeler at the PyMC Labs consultancy (https://www.pymc-labs.io/). By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC (https://docs.pymc.io/) and ArviZ (https://arviz-devs.github.io/arviz/). I also love election forecasting (https://www.pollsposition.com/) and, most importantly, Nutella. But I don't like talking about it – I prefer eating it. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon (https://www.patreon.com/learnbayesstats)!
160 episodes
All episodes
BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk (17:53)
Today’s clip is from episode 136 of the podcast, with Haavard Rue & Janet van Niekerk. Alex, Haavard and Janet explore the world of Bayesian inference with INLA, a fast and deterministic method that revolutionizes how we handle large datasets and complex models. Discover the power of INLA, and why it can make your models go much faster! Get the full conversation here. Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk (1:17:37)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: INLA is a fast, deterministic method for Bayesian inference. INLA is particularly useful for large datasets and complex models. The R-INLA package is widely used for implementing the INLA methodology. INLA has been applied in various fields, including epidemiology and air quality control. Computational challenges in INLA are minimal compared to MCMC methods. The Smart Gradient method enhances the efficiency of INLA. INLA can handle various likelihoods, not just Gaussian. SPDEs (stochastic partial differential equations) allow for more efficient computations in spatial modeling. The new INLA methodology scales better for large datasets, especially in medical imaging. Priors in Bayesian models can significantly impact the results and should be chosen carefully. Penalized complexity priors (PC priors) help prevent overfitting in models. Understanding the underlying mathematics of priors is crucial for effective modeling. The integration of GPUs in computational methods is a key future direction for INLA. The development of new sparse solvers is essential for handling larger models efficiently.
Chapters: 06:06 Understanding INLA: A Comparison with MCMC 08:46 Applications of INLA in Real-World Scenarios 11:58 Latent Gaussian Models and Their Importance 15:12 Impactful Applications of INLA in Health and Environment 18:09 Computational Challenges and Solutions in INLA 21:06 Stochastic Partial Differential Equations in Spatial Modeling 23:55 Future Directions and Innovations in INLA 39:51 Exploring Stochastic Differential Equations 43:02 Advancements in INLA Methodology 50:40 Getting Started with INLA 56:25 Understanding Priors in Bayesian Models Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show: R-INLA webpage: https://www.r-inla.org/ R-INLA discussion group: https://groups.google.com/g/r-inla-discussion-group Haavard’s page: https://cemse.kaust.edu.sa/profiles/haavard-rue Haavard on Google Scholar: https://scholar.google.co.uk/citations?user=VJOn_ZkAAAAJ&hl=en Janet’s page: https://cemse.kaust.edu.sa/profiles/janet-van-niekerk Janet on LinkedIn: https://www.linkedin.com/in/janet-van-niekerk-b1803b8a/ Janet on Google Scholar: https://scholar.google.com/citations?user=rZOmGkAAAAAJ&hl=en Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (original and classic formulation) : https://users.wpi.edu/~balnan/INLAjrssB2009.pdf A new avenue for Bayesian inference with INLA (modern formulation of INLA and the current default in the R-INLA package): https://www.sciencedirect.com/science/article/pii/S0167947323000038 Smart Gradient - An adaptive technique for improving gradient estimation: https://www.aimsciences.org/article/doi/10.3934/fods.2021037 SPDE-INLA book and other resources: https://rss.onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2011.00777.x https://becarioprecario.bitbucket.io/spde-gitbook/ Inlabru: https://inlabru-org.github.io/inlabru/ Penalizing complexity priors: Penalizing Model Component Complexity: A Principled, Practical Approach to Constructing Priors: https://doi.org/10.1214/16-STS576 AR processes https://doi.org/10.1111/jtsa.12242 Gaussian fields https://doi.org/10.1080/01621459.2017.1415907 Skew-normal model https://doi.org/10.57805/revstat.v19i1.328 Weibull model https://doi.org/10.1016/j.spl.2021.109098 Splines https://arxiv.org/abs/1511.05748 Many more available in the R-INLA library: inla.pc Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
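The takeaways above describe INLA as a fast, deterministic alternative to MCMC. As a rough illustration of the core idea behind such deterministic approximations (not the actual R-INLA algorithm, which nests Laplace approximations over a latent Gaussian field), here is a minimal Laplace approximation of a one-parameter posterior; the toy model and all names are my own, purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: Bernoulli observations; we infer the success probability
# on the logit scale, with a Normal(0, 1.5^2) prior.
rng = np.random.default_rng(42)
y = rng.binomial(1, 0.7, size=50)

def neg_log_post(theta):
    p = 1.0 / (1.0 + np.exp(-theta))
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    log_prior = -0.5 * (theta / 1.5) ** 2
    return -(log_lik + log_prior)

# 1. Find the posterior mode (MAP estimate) by optimization.
res = minimize(neg_log_post, x0=np.array([0.0]))
mode = float(res.x[0])

# 2. The curvature at the mode gives the variance of the Gaussian
#    approximation (finite-difference second derivative here).
h = 1e-4
second_deriv = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
                + neg_log_post(mode - h)) / h**2
sd = float(np.sqrt(1.0 / second_deriv))

print(f"Laplace approximation: logit-p ~ Normal({mode:.3f}, {sd:.3f})")
```

A single optimization plus one curvature evaluation replaces thousands of MCMC draws, which is why this family of methods scales so well; INLA itself iterates the same idea over latent fields and hyperparameters with sparse linear algebra.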
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja (21:14)
Get 10% off Hugo's "Building LLM Applications for Data Scientists and Software Engineers" online course! Today’s clip is from episode 135 of the podcast, with Teemu Säilynoja. Alex and Teemu discuss the importance of simulation-based calibration (SBC). They explore the practical implementation of SBC in probabilistic programming languages, the challenges faced in developing SBC methods, and the significance of both prior and posterior SBC in ensuring model reliability. The discussion emphasizes the need for careful model implementation and inference algorithms to achieve accurate calibration. Get the full conversation here. Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja (1:12:13)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Teemu focuses on calibration assessments and predictive checking in Bayesian workflows. Simulation-based calibration (SBC) checks model implementation. SBC involves drawing realizations from the prior and generating prior predictive data. Visual predictive checking is crucial for assessing model predictions. Prior predictive checks should be done before looking at data. Posterior SBC focuses on the area of parameter space most relevant to the data. Challenges in SBC include inference time. Visualizations complement numerical metrics in Bayesian modeling. Amortized Bayesian inference benefits from SBC for quick posterior checks. The calibration of Bayesian models is more intuitive than that of Frequentist models. Choosing the right visualization depends on data characteristics. Using multiple visualization methods can reveal different insights. Visualizations should be viewed as models of the data. Goodness-of-fit tests can enhance visualization accuracy. Uncertainty visualization is crucial but often overlooked. Chapters: 09:53 Understanding Simulation-Based Calibration (SBC) 15:03 Practical Applications of SBC in Bayesian Modeling 22:19 Challenges in Developing Posterior SBC 29:41 The Role of SBC in Amortized Bayesian Inference 33:47 The Importance of Visual Predictive Checking 36:50 Predictive Checking and Model Fitting 38:08 The Importance of Visual Checks 40:54 Choosing Visualization Types 49:06 Visualizations as Models 55:02 Uncertainty Visualization in Bayesian Modeling 01:00:05 Future Trends in Probabilistic Modeling Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Links from the show: Teemu's website: https://teemusailynoja.github.io/ Teemu on LinkedIn: https://www.linkedin.com/in/teemu-sailynoja/ Teemu on GitHub: https://github.com/TeemuSailynoja Bayesian Workflow group: https://users.aalto.fi/~ave/group.html LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt LBS #73 A Guide to Plotting Inferences & Uncertainties of Bayesian Models, with Jessica Hullman: https://learnbayesstats.com/episode/73-guide-plotting-inferences-uncertainties-bayesian-models-jessica-hullman LBS #66 Uncertainty Visualization & Usable Stats, with Matthew Kay: https://learnbayesstats.com/episode/66-uncertainty-visualization-usable-stats-matthew-kay LBS #35 The Past, Present & Future of BRMS, with Paul Bürkner: https://learnbayesstats.com/episode/35-past-present-future-brms-paul-burkner LBS #29 Model Assessment, Non-Parametric Models, And Much More, with Aki Vehtari: https://learnbayesstats.com/episode/model-assessment-non-parametric-models-aki-vehtari Posterior SBC – Simulation-Based Calibration Checking Conditional on Data: https://arxiv.org/abs/2502.03279 Recommendations for visual predictive checks in Bayesian workflow: https://teemusailynoja.github.io/visual-predictive-checks/ Simuk, SBC for PyMC: https://simuk.readthedocs.io/en/latest/ SBC, tools for model validation in R: https://hyunjimoon.github.io/SBC/index.html New ArviZ, Prior and Posterior predictive checks: https://arviz-devs.github.io/EABM/Chapters/Prior_posterior_predictive_checks.html Bayesplot, plotting for Bayesian models in R: https://mc-stan.org/bayesplot/ Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
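As a sketch of the simulation-based calibration procedure discussed in this episode (draw from the prior, simulate data, rank the true parameter among posterior draws), here is a minimal self-calibration loop for a conjugate Gaussian model where the posterior is available in closed form; the model and all names are illustrative, not code from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate model: theta ~ N(0, 1), y_i | theta ~ N(theta, 1), i = 1..n.
n_obs, n_post, n_sims = 10, 99, 500
ranks = []

for _ in range(n_sims):
    theta = rng.normal(0.0, 1.0)                # 1. draw from the prior
    y = rng.normal(theta, 1.0, size=n_obs)      # 2. simulate data from it
    # 3. exact posterior: N(sum(y) / (n + 1), 1 / (n + 1))
    post_mean = y.sum() / (n_obs + 1)
    post_sd = np.sqrt(1.0 / (n_obs + 1))
    draws = rng.normal(post_mean, post_sd, size=n_post)
    # 4. rank of the true theta among the posterior draws
    ranks.append(int(np.sum(draws < theta)))

# If model + inference are correct, ranks are uniform on {0, ..., n_post}.
ranks = np.array(ranks)
print("rank mean:", ranks.mean(), "expected ~", n_post / 2)
```

In practice the closed-form posterior is replaced by draws from the sampler under test, and one inspects a histogram or ECDF of the ranks for deviations from uniformity; packages such as SBC (R) and simuk (PyMC) automate this workflow.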
Live Show Announcement | Come Meet Me in London! (3:04)
ICYMI, I'll be in London next week for a live episode of the Learning Bayesian Statistics podcast 🍾 Come say hi on June 24 at Imperial College London! We'll be talking about uncertainty quantification: not just in theory, but in the messy, practical reality of building models that are supposed to work in the real world. 🎟️ Get your tickets! Some of the questions we’ll unpack: 🔍 Why is it so hard to model uncertainty reliably? ⚠️ How do overconfident models break things in production? 🧠 What tools and frameworks help today? 🔄 What do we need to rethink if we want robust ML over the next decade? Joining me on stage: the brilliant Mélodie Monod, Yingzhen Li and François-Xavier Briol -- researchers doing cutting-edge work on these questions, across Bayesian methods, statistical learning, and real-world ML deployment. A huge thank you to Oliver Ratmann for setting this up! 📍 Imperial-X, White City Campus (Room LRT 608) 🗓️ June 24, 11:30–13:00 🎙️ Doors open at 11:30; we start at noon sharp. Come say hi, ask hard questions, and be part of the recording. 🎟️ Get your tickets! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli. Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | Exploring Dynamic Regression Models, with David Kohns (14:34)
Today’s clip is from episode 134 of the podcast, with David Kohns. Alex and David discuss the future of probabilistic programming, focusing on advancements in time series modeling, model selection, and the integration of AI in prior elicitation. The discussion highlights the importance of setting appropriate priors, the challenges of computational workflows, and the potential of normalizing flows to enhance Bayesian inference. Get the full discussion here. Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#134 Bayesian Econometrics, State Space Models & Dynamic Regression, with David Kohns (1:40:55)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Setting appropriate priors is crucial to avoid overfitting in models. R-squared can be used effectively in Bayesian frameworks for model evaluation. Dynamic regression can incorporate time-varying coefficients to capture changing relationships. Predictively consistent priors enhance model interpretability and performance. Identifiability is a challenge in time series models. State space models provide structure compared to Gaussian processes. Priors influence the model's ability to explain variance. Starting with simple models can reveal interesting dynamics. Understanding the relationship between states and variance is key. State-space models allow for dynamic analysis of time series data. AI can enhance the process of prior elicitation in statistical models. Chapters: 10:09 Understanding State Space Models 14:53 Predictively Consistent Priors 20:02 Dynamic Regression and AR Models 25:08 Inflation Forecasting 50:49 Understanding Time Series Data and Economic Analysis 57:04 Exploring Dynamic Regression Models 01:05:52 The Role of Priors 01:15:36 Future Trends in Probabilistic Programming 01:20:05 Innovations in Bayesian Model Selection Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Links from the show: David's website: https://davkoh.github.io/ David on LinkedIn: https://www.linkedin.com/in/david-kohns-03984013b/ David on GitHub: https://github.com/davkoh David on Google Scholar: https://scholar.google.com/citations?user=9gKE8e4AAAAJ&hl=en Dynamic Regression Case Study: https://davkoh.github.io/case-studies/01_dyn_reg/dyn_reg_casestudy5.html ARR2 Paper: https://projecteuclid.org/journals/bayesian-analysis/advance-publication/The-ARR2-Prior--Flexible-Predictive-Prior-Definition-for-Bayesian/10.1214/25-BA1512.full ARR2 Paper GitHub repository: https://github.com/n-kall/arr2/tree/main ARR2 StanCon talk: https://www.youtube.com/watch?v=8XBe2jrOKvw&list=PLCrWEzJgSUqzNzh6mjWsWUu-lSK59VXP6&index=29 ARR2 Prior in PyMC: https://www.austinrochford.com/posts/r2-priors-pymc.html LBS #124 State Space Models & Structural Time Series, with Jesse Grabowski: https://learnbayesstats.com/episode/124-state-space-models-structural-time-series-jesse-grabowski LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt: https://learnbayesstats.com/episode/74-optimizing-nuts-developing-zerosumnormal-distribution-adrian-seyboldt Nutpie’s Normalizing Flows adaptation: https://pymc-devs.github.io/nutpie/nf-adapt.html Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
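To make the "time-varying coefficients" idea from the takeaways concrete, here is a tiny simulation of a dynamic regression where the slope follows a random walk, so a single static estimate can only recover an average of the drifting relationship. This is an illustrative sketch with made-up numbers, not David's model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dynamic regression: the coefficient follows a random walk, so the
# relationship between x and y drifts over time.
T = 200
x = rng.normal(size=T)
beta = np.cumsum(rng.normal(0.0, 0.05, size=T))   # time-varying slope
y = beta * x + rng.normal(0.0, 0.3, size=T)

# A static OLS fit can only recover an average of the drifting slope:
beta_ols = (x @ y) / (x @ x)
print("static estimate:", beta_ols)
print("true slope range:", beta.min(), "to", beta.max())
```

A state-space treatment instead estimates the whole path of beta (e.g. with a Kalman filter or a full Bayesian model), which is the kind of structure the episode contrasts with more free-form Gaussian processes.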
BITESIZE | Why Your Models Might Be Wrong & How to Fix it, with Sean Pinkney & Adrian Seyboldt (17:04)
Today’s clip is from episode 133 of the podcast, with Sean Pinkney & Adrian Seyboldt. The conversation delves into the concept of Zero-Sum Normal and its application in statistical modeling, particularly in hierarchical models. Alex, Sean and Adrian discuss the implications of using zero-sum constraints, the challenges of incorporating new data points, and the importance of distinguishing between sample and population effects. They also explore practical solutions for making predictions based on population parameters and the potential for developing tools to facilitate these processes. Get the full discussion here. Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#133 Making Models More Efficient & Flexible, with Sean Pinkney & Adrian Seyboldt (1:12:12)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Zero-sum constraints allow for better sampling and estimation in hierarchical models. Understanding the difference between population and sample means is crucial. A library for zero-sum normal effects would be beneficial. Practical solutions can yield decent predictions even with limitations. Cholesky parameterization can be adapted for positive correlation matrices. Understanding the geometry of sampling spaces is crucial. The relationship between eigenvalues and sampling is complex. Collaboration and sharing knowledge enhance research outcomes. Innovative approaches can simplify complex statistical problems. Chapters: 03:35 Sean Pinkney's Journey to Bayesian Modeling 11:21 The Zero-Sum Normal Project Explained 18:52 Technical Insights on Zero-Sum Constraints 32:04 Handling New Elements in Bayesian Models 36:19 Understanding Population Parameters and Predictions 49:11 Exploring Flexible Cholesky Parameterization 01:07:23 Closing Thoughts and Future Directions Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Links from the show:
Sean's website: https://spinkney.github.io/
Sean on LinkedIn: https://www.linkedin.com/in/sean-pinkney123/
Sean on GitHub: https://github.com/spinkney
Sean on BlueSky: https://bsky.app/profile/spinkney.bsky.social
Sean on Mastodon: https://fosstodon.org/@spinkney
Sean's talk at StanCon 2024: https://youtu.be/eE8Vqxs8OfQ?si=09-vNvCxpbz8enUj
Flexible Cholesky Parameterization of Correlation Matrices: https://arxiv.org/abs/2405.07286
Quantile Regressions in Stan: https://spinkney.github.io/posts/post-2-quantile-reg-series/post-2-quantile-reg-part-I/quantile-reg.html
LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt: https://learnbayesstats.com/episode/74-optimizing-nuts-developing-zerosumnormal-distribution-adrian-seyboldt

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
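For readers curious how the zero-sum constraint discussed in this episode works in practice: below is a minimal, dependency-free sketch of the idea behind PyMC's ZeroSumNormal and Stan's sum-to-zero vectors — mapping K−1 unconstrained parameters isometrically onto the sum-to-zero hyperplane in K dimensions via a Householder reflection. The function name `zero_sum_embed` is my own; the actual library implementations differ in details, so treat this as an illustration of the geometry, not the production code.

```python
import math

def zero_sum_embed(raw):
    """Map K-1 unconstrained values to K values that sum to zero.

    Uses the Householder reflection H = I - 2 v v^T / (v^T v) with
    v = e_1 - (1/sqrt(K)) * ones. Columns 2..K of H form an orthonormal
    basis of the sum-to-zero hyperplane, so the map is an isometry:
    all K coordinates get equal marginal scale, which is what makes
    this parameterization sample well in hierarchical models.
    """
    K = len(raw) + 1
    u = [1.0 / math.sqrt(K)] * K  # unit vector along the all-ones direction
    v = [(1.0 if i == 0 else 0.0) - u[i] for i in range(K)]
    vv = sum(vi * vi for vi in v)
    # x = H[:, 1:] @ raw, accumulated column by column
    x = [0.0] * K
    for j, r in enumerate(raw, start=1):
        for i in range(K):
            h_ij = (1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / vv
            x[i] += h_ij * r
    return x
```

For example, `zero_sum_embed([1.0, 2.0, 3.0])` returns four effects that sum to exactly zero while preserving the Euclidean norm of the input — the property that lets a standard normal prior on `raw` induce a well-behaved zero-sum prior on the effects.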
Learning Bayesian Statistics


BITESIZE | How AI is Redefining Human Interactions, with Tom Griffiths (22:06)
Today’s clip is from episode 132 of the podcast, with Tom Griffiths. Tom and Alex Andorra discuss the fundamental differences between human intelligence and artificial intelligence, emphasizing the constraints that shape human cognition, such as limited data, computational resources, and communication bandwidth. They explore how AI systems currently learn and the potential for aligning AI with human cognitive processes. The discussion also delves into the implications of AI in enhancing human decision-making and the importance of understanding human biases to create more effective AI systems. Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#132 Bayesian Cognition and the Future of Human-AI Interaction, with Tom Griffiths (1:30:15)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Check out Hugo’s latest episode with Fei-Fei Li, on How Human-Centered AI Actually Gets Built
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Computational cognitive science seeks to understand intelligence mathematically.
Bayesian statistics is crucial for understanding human cognition.
Inductive biases help explain how humans learn from limited data.
Eliciting prior distributions can reveal implicit beliefs.
The wisdom of individuals can provide richer insights than averaging group responses.
Generative AI can mimic human cognitive processes.
Human intelligence is shaped by constraints of data, computation, and communication.
AI systems operate under different constraints than human cognition.
Human intelligence differs fundamentally from machine intelligence.
Generative AI can complement and enhance human learning.
AI systems currently lack intrinsic human compatibility.
Language training in AI helps align its understanding with human perspectives.
Reinforcement learning from human feedback can lead to misalignment of AI goals.
Representational alignment can improve AI's understanding of human concepts.
AI can help humans make better decisions by providing relevant information.
Research should focus on solving problems rather than just methods.

Chapters:
00:00 Understanding Computational Cognitive Science
13:52 Bayesian Models and Human Cognition
29:50 Eliciting Implicit Prior Distributions
38:07 The Relationship Between Human and AI Intelligence
45:15 Aligning Human and Machine Preferences
50:26 Innovations in AI and Human Interaction
55:35 Resource Rationality in Decision Making
01:00:07 Language Learning in AI Models
01:06:04 Inductive Biases in Language Learning
01:11:55 Advice for Aspiring Cognitive Scientists
01:21:19 Future Trends in Cognitive Science and AI

Thank you to my Patrons for making this episode possible!

Links from the show:
Check out Hugo’s latest episode with Fei-Fei Li, on How Human-Centered AI Actually Gets Built: https://high-signal.delphina.ai/episode/fei-fei-on-how-human-centered-ai-actually-gets-built?utm_source=laplace&utm_medium=podcast&utm_campaign=feifei_launch
Tom's profile at Princeton University: https://psychology.princeton.edu/people/tom-griffiths
Computational Cognitive Science Lab: https://cocosci.princeton.edu/
Tom’s Google Scholar: https://scholar.google.com/citations?user=UAwKvEsAAAAJ&hl=en
Tom's latest book, Bayesian Models of Cognition: https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
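One of the takeaways above — that eliciting prior distributions can reveal implicit beliefs — is often illustrated in Bayesian models of cognition with conjugate updating. Here is a toy sketch (my own example, not from the episode): an elicited Beta prior over a binary outcome combined with observed data, showing how the posterior shrinks the observed rate toward the prior belief.

```python
def update_beta(alpha, beta, heads, tails):
    """Conjugate Beta-Binomial update: posterior is Beta(alpha + heads, beta + tails)."""
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Elicited prior: a learner who believes a coin is roughly fair, Beta(10, 10),
# then observes 9 heads and 1 tail.
a, b = update_beta(10.0, 10.0, heads=9, tails=1)
# Posterior mean sits between the observed rate (0.9) and the prior mean (0.5),
# reflecting how strongly the elicited prior tempers the data.
```

Running this gives a posterior of Beta(19, 11), with mean 19/30 ≈ 0.63 — the kind of graded, prior-tempered inference these cognitive models attribute to human learners.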
BITESIZE | Hacking Bayesian Models for Better Performance, with Luke Bornn (13:35)
Today’s clip is from episode 131 of the podcast, with Luke Bornn. Luke and Alex discuss the application of generative models in sports analytics. They emphasize the importance of Bayesian modeling to account for uncertainty and contextual variations in player data. The discussion also covers the challenges of balancing model complexity with computational efficiency, the innovative ways to hack Bayesian models for improved performance, and the significance of understanding model fitting and discretization in statistical modeling. Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#131 Decision-Making Under High Uncertainty, with Luke Bornn (1:31:46)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!

Takeaways:
Player tracking data revolutionized sports analytics.
Decision-making in sports involves managing uncertainty and budget constraints.
Luke emphasizes the importance of portfolio optimization in team management.
Clubs with high budgets can afford inefficiencies in player acquisition.
Statistical methods provide a probabilistic approach to player value.
Removing human bias is crucial in sports decision-making.
Understanding player performance distributions aids in contract decisions.
The goal is to maximize performance value per dollar spent.
Model validation in sports requires focusing on edge cases.
Generative models help account for uncertainty in player performance.
Computational efficiency is key in handling large datasets.
A diverse skill set enhances problem-solving in sports analytics.
Broader knowledge in data science leads to innovative solutions.
Integrating software engineering with statistics is crucial in sports analytics.
Model validation often requires more work than model fitting itself.
Understanding the context of data is essential for accurate predictions.
Continuous learning and adaptation are essential in analytics.

Chapters:
11:58 Transition from Academia to Sports Analytics
20:44 Evolution of Sports Analytics and Data Sources
23:53 Modeling Uncertainty in Decision Making
32:05 The Role of Statistical Models in Player Evaluation
39:20 Generative Models and Bayesian Framework in Sports
46:54 Hacking Bayesian Models for Better Performance
49:55 Understanding Computational Challenges in Bayesian Inference
52:44 Exploring Different Approaches to Model Fitting
56:30 Building a Comprehensive Statistical Toolbox
01:00:37 The Importance of Data Management in Modeling
01:03:21 Iterative Model Validation and Diagnostics
01:06:53 Uncovering Insights from Sports Data
01:16:47 Emerging Trends in Sports Analytics
01:21:30 Future Directions and Personal Aspirations

Links from the show:
Luke’s website: http://www.lukebornn.com/
Luke on LinkedIn: https://www.linkedin.com/in/lukebornn/
Luke on Wharton Moneyball: https://knowledge.wharton.upenn.edu/podcast/moneyball-highlights/luke-bornn-part-owner-of-ac-milan/
LBS #108 Modeling Sports & Extracting Player Values, with Paul Sabin: https://learnbayesstats.com/episode/108-modeling-sports-extracting-player-values-paul-sabin

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
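The takeaways above mention using probabilistic methods for player value and accounting for uncertainty in performance. A classic building block for this is normal-normal shrinkage: a player's observed average is pulled toward the league mean, with the pull weakening as more games are observed. The sketch below (function name and all numbers are hypothetical, not from the episode) illustrates that behavior.

```python
def shrunk_rating(league_mean, prior_sd, obs_mean, obs_sd, n_games):
    """Posterior mean and sd of a player's true rating under a normal-normal model.

    Prior:      rating ~ Normal(league_mean, prior_sd)
    Likelihood: each game ~ Normal(rating, obs_sd), with n_games games observed
                and sample mean obs_mean.
    """
    prior_prec = 1.0 / prior_sd ** 2        # precision of the prior
    data_prec = n_games / obs_sd ** 2       # precision contributed by the data
    post_mean = (prior_prec * league_mean + data_prec * obs_mean) / (prior_prec + data_prec)
    post_sd = (prior_prec + data_prec) ** -0.5
    return post_mean, post_sd

# A player averaging 2.0 above league average: after 4 noisy games the estimate
# stays close to the league mean; after 64 games it moves most of the way to 2.0.
m_few, s_few = shrunk_rating(0.0, 1.0, 2.0, 4.0, n_games=4)
m_many, s_many = shrunk_rating(0.0, 1.0, 2.0, 4.0, n_games=64)
```

This is the mechanism behind the "understanding player performance distributions aids in contract decisions" point: small samples should not be taken at face value, and the posterior quantifies exactly how much to discount them.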
BITESIZE | Real-World Applications of Models in Public Health, with Adam Kucharski (16:26)
Today’s clip is from episode 130 of the podcast, with epidemiological modeler Adam Kucharski. This conversation explores the critical role of pandemic modeling during COVID-19, highlighting how these models informed public health decisions and the relationship between modeling and policy. The discussion emphasizes the need for improved communication and understanding of data among the public and policymakers. Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#130 The Real-World Impact of Epidemiological Models, with Adam Kucharski (1:09:05)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!

Takeaways:
Epidemiology requires a blend of mathematical and statistical understanding.
Models are essential for informing public health decisions during epidemics.
The COVID-19 pandemic highlighted the importance of rapid modeling.
Misconceptions about data can lead to misunderstandings in public health.
Effective communication is crucial for conveying complex epidemiological concepts.
Epidemic thinking can be applied to various fields, including marketing and finance.
Public health policies should be informed by robust modeling and data analysis.
Automation can help streamline data analysis in epidemic response.
Understanding the limitations of models is key to effective decision-making.
Collaboration is key in developing complex models.
Uncertainty estimation is crucial for effective decision-making.
AI has the potential to enhance data interpretation in epidemiology.
Educational initiatives should focus on understanding exponential growth and lagged outcomes.
The complexity of modern epidemics requires a deeper understanding from the public.
Understanding the balance between perfection and practicality is essential in modeling.

Chapters:
00:00 Introduction to Epidemiological Modeling
05:16 The Role of Bayesian Methods in Epidemic Forecasting
11:29 Real-World Applications of Models in Public Health
19:07 Common Misconceptions About Epidemiological Data
27:43 Understanding the Spread of Ideas and Beliefs
32:55 Workflow and Collaboration in Epidemiological Modeling
34:51 Modeling Approaches in Epidemiology
40:04 Challenges in Model Development
45:55 Uncertainty in Epidemiological Models
48:46 The Impact of AI on Epidemiology
54:55 Educational Initiatives for Future Epidemiologists

Links from the show:
Adam’s website: https://kucharski.substack.com/
Adam on Google Scholar: https://scholar.google.com/citations?user=eIqfmHYAAAAJ&hl=en
Adam on LinkedIn: https://www.linkedin.com/in/adam-kucharski-1a1b0225b/
The Rules of Contagion: Why Things Spread - and Why They Stop: https://www.amazon.co.uk/Rules-Contagion-Things-Wellcome-Collection/dp/1788160207
Adam's next book, The Uncertain Science of Certainty: https://proof.kucharski.io/
LBS #50 Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter
LBS #51 Bernoulli’s Fallacy & the Crisis of Modern Science, with Aubrey Clayton: https://learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
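The takeaway about understanding exponential growth and lagged outcomes is easiest to see in the simplest compartmental model epidemiologists use. Below is a minimal discrete-time SIR simulation (my own toy sketch, not code from the episode): infections grow roughly exponentially at first, then peak and decline only after susceptibles are depleted — the lag that makes early epidemic numbers so easy to misread.

```python
def sir(beta, gamma, s0, i0, steps):
    """Discrete-time SIR model on population fractions.

    beta:  transmission rate per step; gamma: recovery rate per step.
    Returns the list of (susceptible, infected, recovered) states over time.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    path = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i   # new infections this step
        new_rec = gamma * i      # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path

# R0 = beta / gamma = 3: a large epidemic starting from 0.1% infected.
path = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, steps=200)
```

Early on, infections multiply by roughly 1 + beta·s − gamma ≈ 1.2 per step, yet the peak arrives dozens of steps later — exactly the "lagged outcomes" pattern the episode argues the public and policymakers need to internalize.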