AXREM Insights

Melanie Johnson / Sally Edgington

Unsubscribe
Unsubscribe
شهريا+
 
AXREM Insights brings you insights from within the industry. We'll be talking to our team and our members, delving into the people behind the products and services.
 
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
 
The latest episode of the AXREM Insights podcast dives into the upcoming International Imaging Congress (IIC) 2024, where hosts Melanie Johnson and Sally Edgington interview Dr. Ram Senasi, Chair of the IIC Advisory Board and Consultant Paediatric Radiologist. Dr. Senasi shares insights into his passion for education, the role of technology in healthcar…
 
Sometimes, people talk about transformers as having "world models" as a result of being trained to predict text data on the internet. But what does this even mean? In this episode, I talk with Adam Shai and Paul Riechers about their work applying computational mechanics, a sub-field of physics studying how to predict random processes, to neural net…
 
In this episode of AXREM Insights, host Melanie Johnson and co-host Sally Edgington sit down with Jemimah Eve, Director of Policy and Impact at the Institute of Physics and Engineering in Medicine (IPEM). Jemimah discusses her career journey, from a background in chemistry and surface science to her current leadership role at IPEM. She exp…
 
In this episode of AXREM Insights, Melanie Johnson and Sally Edgington sit down with Richard Evans, CEO of the Society of Radiographers, for a fascinating chat about his career journey—from hospital porter to radiography expert. Richard shares how a twist of fate led him into the world of radiography and how his passion for the profession has only …
 
The latest episode of the AXREM Insights podcast features a lively discussion with representatives of several healthcare trade associations: David Stockdale from the British Healthcare Trades Association (BHTA), Nikki from BAREMA, and Helen from BIVDA. The conversation, hosted by Melanie Johnson and Sally Edgington, focuses on the theme of partnerships in the hea…
 
In this episode of our Partnerships Podcast, Melanie and Sally sit down with Catherine Kirkpatrick, a seasoned professional in the ultrasound community. Catherine shares her journey and insights into the ultrasound field, detailing her multifaceted roles, including her work as a Consultant Sonographer at United Lincolnshire Hospitals and Developmen…
 
How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his re…
 
In this insightful episode, Melanie Johnson and Sally Edgington welcome Dr. Katherine Halliday, President of the Royal College of Radiologists (RCR). Dr. Halliday shares her inspiring journey from paediatric radiology to becoming a leader in the field. She delves into the challenges and opportunities within the radiology sector, focusing on workfor…
 
How can we figure out if AIs are capable enough to pose a threat to humans? When should we make a big effort to mitigate risks of catastrophic AI misbehaviour? In this episode, I chat with Beth Barnes, founder of and head of research at METR, about these questions and more. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast The transcript:…
 
Welcome to AXREM Insights, where hosts Melanie Johnson and Sally Edgington explore advancements in healthcare through MedTech and innovation. In this special episode on the AXREM Patient Monitoring Manifesto, they interview Yasmeen Mahmood, a business leader at Philips UKI. Yasmeen, who joined Philips through a graduate scheme, has extensive experi…
 
In this pre-election special episode of the podcast, Melanie Johnson and Sally Edgington discuss politics with Ila Dobson, AXREM's Government Affairs Director, and Daniel Laing, Senior Account Director at Tendo Consulting. Ila shares her extensive background in healthcare and long-term involvement with AXREM, while Daniel discusses his career in pu…
 
In this episode of AXREM Insights, hosts Melanie Johnson and Sally Edgington interview several key attendees live from the UKIO event. Dawn Phillips-Jarrett, with 20 years of experience in radiology, shares her journey from studying chemistry and working in energy and water conservation to her current role in healthcare imaging. She emphasizes the i…
 
Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott Emmons about his work categorizing the pro…
 
In the premiere of Season 2 of AXREM Insights, co-hosts Melanie Johnson and Sally Edgington dive into the world of diagnostic imaging and oncology with a special guest, Dr. Emma Hyde. As the President of UKIO and an Associate Professor of Diagnostic Imaging at the University of Derby, Dr. Hyde shares her journey from a student radiographer to a lea…
 
In this episode of AXREM Insights, Sarah Cowan and David Britton share their professional journeys and personal interests, illustrating the diverse paths within the medical technology industry. Sarah discusses her transition from marketing for a leisure centre to Siemens Medical, a company she has been with for 17 years, highlighting her role with AXREM
 
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast The transcript: axrp.net/episode/2024/05/30/epi…
 
In this engaging episode of AXREM Insights, hosts Melanie Johnson and Sally Edgington sit down with Huw Shurmer, the strategic and government relationships manager for Fujifilm UK and current vice chair of AXREM. The conversation unfolds as Huw shares his fascinating career trajectory, starting from his academic background in theology to his pivota…
 
In this episode of "Meet the Team," Jeevan Gunaratnam, Head of Government Affairs at Philips and current AXREM Chair, shares his journey in the medical technology field. Inspired by his uncle, a radiographer, Jeevan's early curiosity was piqued by medical devices, leading him from using a pacemaker as a paperweight to pursuing a career in engineeri…
 
In the inaugural episode of the AXREM Insights Podcast, host Melanie Johnson interviews her co-host and AXREM CEO, Sally Edgington. Sally shares her remarkable journey from a diverse career background to her current role, driven by a lifelong interest in healthcare stemming from personal experiences as a patient. Despite facing challenges and setba…
 
What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells …
 
Top labs use various forms of "safety training" on models before their release to make sure they don't do nasty stuff - but how robust is that? How can we ensure that the weights of powerful AIs don't get leaked or stolen? And what can AI even do these days? In this episode, I speak with Jeffrey Ladish about security and AI. Patreon: patreon.com/ax…
 
Welcome to AXREM Insights, where healthcare meets innovation! Join hosts Melanie Johnson and Sally Edgington as they dive into the world of MedTech with industry leaders and experts. From diagnostic imaging to patient monitoring, we're bringing you first-hand insights and intel straight from the heart of the industry. Get ready for Meet the Team, w…
 
In 2022, it was announced that a fairly simple method can be used to extract the true beliefs of a language model on any given topic, without having to actually understand the topic at hand. Earlier, in 2021, it was announced that neural networks sometimes 'grok': that is, when training them on certain tasks, they initially memorize their training …
 
How should the law govern AI? Those concerned about existential risks often push either for bans or for regulations meant to ensure that AI is developed safely - but another approach is possible. In this episode, Gabriel Weil talks about his proposal to modify tort law to enable people to sue AI companies for disasters that are "nearly catastrophic…
 
A lot of work to prevent AI existential risk takes the form of ensuring that AIs don't want to cause harm or take over the world - or in other words, ensuring that they're aligned. In this episode, I talk with Buck Shlegeris and Ryan Greenblatt about a different approach, called "AI control": ensuring that AI systems couldn't take over the world, e…
 
The events of this year have highlighted important questions about the governance of artificial intelligence. For instance, what does it mean to democratize AI? And how should we balance benefits and dangers of open-sourcing powerful AI systems such as large language models? In this episode, I speak with Elizabeth Seger about her research on these …
 
Imagine a world where there are many powerful AI systems, working at cross purposes. You could suppose that different governments use AIs to manage their militaries, or simply that many powerful AIs have their own wills. At any rate, it seems valuable for them to be able to cooperatively work together and minimize pointless conflict. How do we ensu…
 
Recently, OpenAI made a splash by announcing a new "Superalignment" team. Led by Jan Leike and Ilya Sutskever, the team would consist of top researchers, attempting to solve alignment for superintelligent AIs in four years by figuring out how to build a trustworthy human-level AI alignment researcher, and then using it to solve the rest of the pro…
 
Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that that will alert us to potential treacherous tur…
 
What can we learn about advanced deep learning systems by understanding how humans learn and form values over their lifetimes? Will superhuman AI look like ruthless coherent utility optimization, or more like a mishmash of contextually activated desires? This episode's guest, Quintin Pope, has been thinking about these questions as a leading resear…
 
Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benc…
 
How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity the…
 
How good are we at understanding the internal computation of advanced machine learning models, and do we have a hope at getting better? In this episode, Neel Nanda talks about the sub-field of mechanistic interpretability research, as well as papers he's contributed to that explore the basics of transformer circuits, induction heads, and grokking. …
 
I have a new podcast, where I interview whoever I want about whatever I want. It's called "The Filan Cabinet", and you can find it wherever you listen to podcasts. The first three episodes are about pandemic preparedness, God, and cryptocurrency. For more details, check out the podcast website (thefilancabinet.com), or search "The Filan Cabinet" in…
 
Concept extrapolation is the idea of taking concepts an AI has about the world - say, "mass" or "does this picture contain a hot dog" - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstr…
 
Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present…
 
Many people in the AI alignment space have heard of AI safety via debate - check out AXRP episode 6 (axrp.net/episode/2021/04/08/episode-6-debate-beth-barnes.html) if you need a primer. But how do we get language models to the stage where they can usefully implement debate? In this episode, I talk to Geoffrey Irving about the role of language model…
 
Why does anybody care about natural abstractions? Do they somehow relate to math, or value learning? How do E. coli bacteria find sources of sugar? All these questions and more will be answered in this interview with John Wentworth, where we talk about his research plan of understanding agency via natural abstractions. Topics we discuss, and timest…
 
Late last year, Vanessa Kosoy and Alexander Appel published some research under the heading of "Infra-Bayesian physicalism". But wait - what was infra-Bayesianism again? Why should we care? And what does any of this have to do with physicalism? In this episode, I talk with Vanessa Kosoy about these questions, and get a technical overview of how inf…
 
How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with E…
 
Why would advanced AI systems pose an existential risk, and what would it look like to develop safer systems? In this episode, I interview Paul Christiano about his views of how AI could be so dangerous, what bad AI scenarios could look like, and what he thinks about various techniques to reduce this risk. Topics we discuss, and timestamps: - 00:00…
 
Many scary stories about AI involve an AI system deceiving and subjugating humans in order to gain the ability to achieve its goals without us stopping it. This episode's guest, Alex Turner, will tell us about his research analyzing the notions of "attainable utility" and "power" that underlie these stories, so that we can better evaluate how likel…
 
When going about trying to ensure that AI does not cause an existential catastrophe, it's likely important to understand how AI will develop in the future, and why exactly it might or might not cause such a catastrophe. In this episode, I interview Katja Grace, researcher at AI Impacts, who's done work surveying AI researchers about when they expec…
 
Being an agent can get loopy quickly. For instance, imagine that we're playing chess and I'm trying to decide what move to make. Your next move influences the outcome of the game, and my guess of that influences my move, which influences your next move, which influences the outcome of the game. How can we model these dependencies in a general way, …
 
How should we think about the technical problem of building smarter-than-human AI that does what we want? When and how should AI systems defer to us? Should they have their own goals, and how should those goals be managed? In this episode, Dylan Hadfield-Menell talks about his work on assistance games that formalizes these questions. The first coup…
 
If you want to shape the development and forecast the consequences of powerful AI technology, it's important to know when it might appear. In this episode, I talk to Ajeya Cotra about her draft report "Forecasting Transformative AI from Biological Anchors" which aims to build a probabilistic model to answer this question. We talk about a variety of…
 
One way of thinking about how AI might pose an existential threat is by taking drastic actions to maximize its achievement of some objective function, such as taking control of the power supply or the world's computers. This might suggest a mitigation strategy of minimizing the degree to which AI systems have large effects on the world that are not…