Julian Minder
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL
Mainly interested in Language Model Interpretability.
Most recent work: https://openreview.net/forum?id=Igm9bbkzHC
MATS 7.0 Winter 2025 Scholar w/ Neel Nanda
jkminder.ch
@srishtiy.bsky.social
ELLIS PhD Fellow @belongielab.org | @aicentre.dk | University of Copenhagen | @amsterdamnlp.bsky.social | @ellis.eu | Currently visiting @cs.ubc.ca | Multi-modal ML | Alignment | Culture | News + Narratives | AI & Society | Web: https://www.srishti.dev/
@frasalvi.bsky.social
Researcher @EPFL / dlab | Computational Social Science, NLP, Network Science, Politics | he/him | https://frasalvi.github.io/
@anthropic.com
We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant Claude at Claude.ai.
@sedielem.bsky.social
Blog: https://sander.ai/ 🐦: https://x.com/sedielem Research Scientist at Google DeepMind (WaveNet, Imagen 3, Veo, ...). I tweet about deep learning (research + software), music, generative models (personal account).
@vedanglad.bsky.social
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
@jonling.bsky.social
Assistant Professor @HopkinsMedicine @JHUPath https://scholar.google.com/citations?user=dGBD72YAAAAJ
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@michaelwhanna.bsky.social
PhD Student at the ILLC / UvA doing work at the intersection of (mechanistic) interpretability and cognitive science. hannamw.github.io
@bleilab.bsky.social
Machine learning lab at Columbia University. Probabilistic modeling and approximate inference, embeddings, Bayesian deep learning, and recommendation systems. 🔗 https://www.cs.columbia.edu/~blei/ 🔗 https://github.com/blei-lab
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability. A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own. idlab.uantwerpen.be/~joramasmogrovejo
@jannikbrinkmann.bsky.social
@kaiserwholearns.bsky.social
Ph.D. student at @jhuclsp, human LM that hallucinates. Formerly @MetaAI, @uwnlp, and @AWS they/them🏳️🌈 #NLProc #NLP Crossposting on X.
@francescortu.bsky.social
NLP & Interpretability | PhD Student @ University of Trieste & Laboratory of Data Engineering of Area Science Park | Prev MPI-IS
@tomerullman.bsky.social
Assistant Professor, Department of Psychology, Harvard University. Computation, cognition, development.
@melaniemitchell.bsky.social
Professor, Santa Fe Institute. Research on AI, cognitive science, and complex systems. Website: https://melaniemitchell.me Substack: https://aiguide.substack.com/
@daniel-fried.bsky.social
Assistant prof at LTI CMU; Research scientist at Meta AI. Working on NLP: language interfaces, applied pragmatics, language-to-code, grounding. https://dpfried.github.io/
@zhuhao.me
AI researcher. Postdocing at Stanford NLP. Prev: PhD CMU LTI. Visit https://zhuhao.me Raising agents in the Opensocial.world
@hyunwoo-kim.bsky.social
Social Reasoning/Cognition + AI, Postdoc at NVIDIA | Previously @ai2.bsky.social | PhD from Seoul Natl Univ. http://hyunwookim.com
@melaniesclar.bsky.social
PhD student @uwnlp.bsky.social @uwcse.bsky.social | Visiting Researcher @MetaAI FAIR | Prev. Lead ML Engineer @ASAPP | 🇦🇷
@michael-j-black.bsky.social
Director, Max Planck Institute for Intelligent Systems; Chief Scientist Meshcapade; Speaker, Cyber Valley. Building 3D humans. https://ps.is.mpg.de/person/black https://meshcapade.com/ https://scholar.google.com/citations?user=6NjbexEAAAAJ&hl=en&oi=ao
@wendlerc.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University. Deep learning, LLMs, mechanistic interpretability.
@mimansaj.bsky.social
Robustness, Data & Annotations, Evaluation & Interpretability in LLMs http://mimansajaiswal.github.io/
@variint.bsky.social
Lost in translation | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕 variint.github.io
@jbarbosa.org
Junior PI @ INM (Paris) in computational neuroscience, interested in how computations enabling cognition are distributed across brain areas. Expect neuroscience and ML content. jbarbosa.org
@kylem.bsky.social
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT ‘20.
@bharathr98.com
Theoretical physicist at day. ML researcher at night. Currently split between CERN and UniGE. Ex - IISER-M, @caltech.edu https://scholar.google.com/citations?user=8BDAnVAAAAAJ
@jeku.bsky.social
Postdoc at Linköping University 🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into 🌋🏕️🚴.
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@mdhk.net
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam 🌐 https://mdhk.net/ 🐘 https://scholar.social/@mdhk 🐦 https://twitter.com/mariannedhk
@anneo.bsky.social
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈 https://anne-oeldorf-hirsch.uconn.edu
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
@stanislavfort.bsky.social
AI + security | Stanford PhD in AI & Cambridge physics | techno-optimism + alignment + progress + growth | 🇺🇸🇨🇿
@ericwtodd.bsky.social
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
@dashiells.bsky.social
Machine learning haruspex || Norbert Wiener is dead so we should just call it "cybernetics" now
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working in mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@ddjohnson.bsky.social
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him. www.danieldjohnson.com
@amakelov.bsky.social
Mechanistic interpretability. Creator of https://github.com/amakelov/mandala. Prev. Harvard/MIT: machine learning, theoretical computer science, competition math.