@malwaretech.com
Cybersecurity Specialist, Public Speaker, Ex-Hacker. https://marcushutchins.com
@wellerstein.bsky.social
Historian of science, technology, secrecy, and nukes. Professor. Creator of NUKEMAP. Author of RESTRICTED DATA. New book coming out in 2025 on Truman and the bomb. Working on a video game. Blogging at https://doomsdaymachines.net
@draknek.bsky.social
Personal website: https://alan.draknek.org/ Head Draknek at Draknek & Friends, making thinky puzzle games. He/him. https://www.draknek.org/ Our next game: Spooky Express https://store.steampowered.com/app/3352310/Spooky_Express/
@lucidrains.bsky.social
Boston transplant now living in San Francisco with a border collie / lab mix
@xkcd.com
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books. Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
@mimansaj.bsky.social
Robustness, Data & Annotations, Evaluation & Interpretability in LLMs http://mimansajaiswal.github.io/
@variint.bsky.social
Lost in translation | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕 variint.github.io
@mdlhx.bsky.social
NLP assistant prof at KU Leuven, PI @lagom-nlp.bsky.social. I like syntax more than most people. Also multilingual NLP, interpretability, mountains and beer. (She/her)
@stephaniebrandl.bsky.social
Assistant Professor in NLP (Fairness, Interpretability and lately interested in Political Science) at the University of Copenhagen ✨ Before: PostDoc in NLP at Uni of CPH, PhD student in ML at TU Berlin
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@jbarbosa.org
Junior PI @ INM (Paris) in computational neuroscience, interested in how computations enabling cognition are distributed across brain areas. Expect neuroscience and ML content. jbarbosa.org
@kylem.bsky.social
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT '20.
@bharathr98.com
Theoretical physicist by day. ML researcher by night. Currently split between CERN and UniGE. Ex - IISER-M, @caltech.edu https://scholar.google.com/citations?user=8BDAnVAAAAAJ
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@jeku.bsky.social
Postdoc at Linköping University 🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into 🌋🏕️🚴.
@markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
@sejdino.bsky.social
Professor of Statistical Machine Learning at the University of Adelaide. https://sejdino.github.io/
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io. Postdoc @ai2.bsky.social & @uwnlp.bsky.social. PhD from Georgia Tech. Views my own, not my employer's.
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working in mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence 한american, she, iglee.me, likes ??= bookmarks
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@mdhk.net
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam 🌐 https://mdhk.net/ 🐘 https://scholar.social/@mdhk 🐦 https://twitter.com/mariannedhk
@lasha.bsky.social
✨On the faculty job market✨ Postdoc at UW, working on Natural Language Processing 🌐 https://lasharavichander.github.io/
@anneo.bsky.social
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈 https://anne-oeldorf-hirsch.uconn.edu
@aliciacurth.bsky.social
Machine Learner by day, 🦮 Statistician at ❤️ In search of statistical intuition for modern ML & simple explanations for complex things👀 Interested in the mysteries of modern ML, causality & all of stats. Opinions my own. https://aliciacurth.github.io
@jkminder.bsky.social
CS Student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience. Thoughts & opinions are my own and do not necessarily represent my employer.
@elianapastor.bsky.social
Assistant Professor at PoliTo 🇮🇹 | Currently visiting scholar at UCSC 🇺🇸 | she/her | TrustworthyAI, XAI, Fairness in AI https://elianap.github.io/
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@fionaewald.bsky.social
PhD Student @ LMU Munich, Munich Center for Machine Learning (MCML). Research in Interpretable ML / Explainable AI
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@jasmijn.uk
Senior Research Scientist at Google DeepMind. Interested in (equitable) language technology, gender, interpretability, NLP. Views my own. She/her. 🌐 https://jasmijn.uk
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵♀️🥋
@mariaeckstein.bsky.social
Research scientist at Google DeepMind. Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
@neelrajani.bsky.social
PhD student in Responsible NLP at the University of Edinburgh, passionate about MechInterp
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability. A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own. idlab.uantwerpen.be/~joramasmogrovejo