“Activation space interpretability may be doomed” by bilalchughtai, Lucius Bushnaq
TL;DR: There may be a fundamental problem with interpretability work that attempts to understand neural networks by decomposing their individual activation spaces in isolation: it seems likely to find features of the activations (features that help explain the statistical structure of activation spaces) rather than features of the model (the features the model's own computations make use of).
Written at Apollo Research
Introduction
Claim: Activation space interpretability is likely to give us features of the activations, not features of the model, and this is a problem.
Let's walk through this claim.
What do we mean by activation space interpretability? Interpretability work that attempts to understand neural networks by explaining the inputs and outputs of their layers in isolation. In this post, we focus in particular on the problem of decomposing activations, via techniques such as sparse autoencoders (SAEs), PCA, or just by looking at individual neurons. This [...]
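For readers unfamiliar with what "decomposing activations" looks like in practice, here is a minimal sketch of a sparse autoencoder trained on a single layer's activations. The architecture, dimensions, and loss coefficients are illustrative assumptions, not the post's setup; the relevant point is that the training objective only sees the statistics of the activations themselves, never how the model's downstream computation uses them.

```python
# Minimal sketch of activation-space decomposition with a sparse autoencoder (SAE).
# Illustrative only: architecture and hyperparameters are assumptions, not the
# authors' setup. Note that the SAE is fit purely to one layer's activation
# statistics, with no reference to the rest of the model.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # activations -> dictionary coefficients
        self.decoder = nn.Linear(d_dict, d_model)   # dictionary coefficients -> reconstruction

    def forward(self, acts: torch.Tensor):
        codes = torch.relu(self.encoder(acts))      # sparse, non-negative feature activations
        recon = self.decoder(codes)
        return recon, codes


def sae_loss(acts, recon, codes, l1_coeff=1e-3):
    # Reconstruction error plus an L1 sparsity penalty: both terms only
    # constrain the statistical structure of the activations.
    return ((recon - acts) ** 2).mean() + l1_coeff * codes.abs().mean()


# Toy usage on stand-in data (random tensors in place of cached model activations).
sae = SparseAutoencoder(d_model=512, d_dict=4096)
acts = torch.randn(1024, 512)
recon, codes = sae(acts)
loss = sae_loss(acts, recon, codes)
loss.backward()
```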
---
Outline:
(00:33) Introduction
(02:40) Examples illustrating the general problem
(12:29) The general problem
(13:26) What can we do about this?
The original text contained 11 footnotes which were omitted from this narration.
---
First published: January 8th, 2025
Source: https://www.lesswrong.com/posts/gYfpPbww3wQRaxAFD/activation-space-interpretability-may-be-doomed
---
Narrated by TYPE III AUDIO.