Tim van Erven
Associate professor in machine learning at the University of Amsterdam. Topics: (online) learning theory and the mathematics of interpretable AI.
www.timvanerven.nl
Theory of Interpretable AI seminar: https://tverven.github.io/tiai-seminar
@olgatticus.bsky.social
MSCA PhD student at AMLab with Erik Bekkers, interested in geometric deep learning, generative models and their intersection 🌐 ✨ K-hip hop scholar 🇰🇷 and dancer-in-the-making 💃 bit.ly/olga-zaghen
@sumedh-hindupur.bsky.social
Grad Student at Harvard SEAS. Interested in ML Interpretability, Computational Neuroscience, Signal Processing
@docmilanfar.bsky.social
Distinguished Scientist at Google. Computational Imaging, Machine Learning, and Vision. Posts are personal opinions. May change or disappear over time. http://milanfar.org
@mircomutti.bsky.social
Reinforcement learning, but without rewards. Postdoc at the Technion. PhD from Politecnico di Milano. https://muttimirco.github.io
@mdhk.net
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam 🌐 https://mdhk.net/ 🐘 https://scholar.social/@mdhk 🐦 https://twitter.com/mariannedhk
@lasha.bsky.social
✨On the faculty job market✨ Postdoc at UW, working on Natural Language Processing 🌐 https://lasharavichander.github.io/
@aliciacurth.bsky.social
Machine Learner by day, 🦮 Statistician at ❤️ In search of statistical intuition for modern ML & simple explanations for complex things 👀 Interested in the mysteries of modern ML, causality & all of stats. Opinions my own. https://aliciacurth.github.io
@jkminder.bsky.social
CS Student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda. jkminder.ch
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience Thoughts & opinions are my own and do not necessarily represent my employer.
@elianapastor.bsky.social
Assistant Professor at PoliTo 🇮🇹 | Currently visiting scholar at UCSC 🇺🇸 | she/her | TrustworthyAI, XAI, Fairness in AI https://elianap.github.io/
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@fionaewald.bsky.social
PhD Student @ LMU Munich, Munich Center for Machine Learning (MCML). Research in Interpretable ML / Explainable AI
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵♀️🥋
@mariaeckstein.bsky.social
Research scientist at Google DeepMind. Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
@neelrajani.bsky.social
PhD student in Responsible NLP at the University of Edinburgh, passionate about MechInterp
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own idlab.uantwerpen.be/~joramasmogrovejo
@mlamparth.bsky.social
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretable, safe, and ethical AI decision-making.
@ankareuel.bsky.social
Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own
@csinva.bsky.social
Senior researcher at Microsoft Research. Seeking good explanations with machine learning https://csinva.io/
@norabelrose.bsky.social
AI, philosophy, spirituality. Head of interpretability research at EleutherAI, but posts are my own views, not Eleuther’s.
@claudiashi.bsky.social
machine learning, causal inference, science of llm, ai safety, phd student @bleilab, keen bean https://www.claudiashi.com/
@katharinaprasse.bsky.social
PhD in Computer Vision, supervised and inspired by Prof. Dr.-Ing. Margret Keuper. Member of the Data & Web Science Group @ University of Mannheim.
@juliagrabinski.bsky.social
PhD Student in Computer Vision 💫 seeking Postdoc Opportunities 💫
@sukrutrao.bsky.social
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models Web: sukrutrao.github.io
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence 한american, she, iglee.me, likes ??= bookmarks
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working on mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io Postdoc @ai2.bsky.social & @uwnlp.bsky.social PhD from Georgia Tech Views my own, not my employer's.
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@sejdino.bsky.social
Professor of Statistical Machine Learning at the University of Adelaide. https://sejdino.github.io/
@jeku.bsky.social
Postdoc at Linköping University 🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into 🌋🏕️🚴.
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@kylem.bsky.social
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT ‘20.
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/