Explainable AI
Explainable/Interpretable AI researchers and enthusiasts - DM to join the XAI Slack! Bluesky and Slack maintained by Nick Kroeger
@markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
@maximemeloux.bsky.social
PhD student @LIG | Causal abstraction, interpretability & LLMs
@adamdaviesnlp.bsky.social
PhD candidate @ UIUC | NLP, interpretability, cognitive science | http://ahdavies6.github.io
@sunniesuhyoung.bsky.social
PhD candidate at Princeton CS | AI + HCI | https://sunniesuhyoung.github.io/ Responsible AI, Human-AI interaction, AI safety/harms, Human-centered evaluation 🇰🇷→Yale→TTIC→Princeton🐯
@eoindelaney.bsky.social
Assistant Professor at Trinity College Dublin | Previously Oxford | Human-Centered Machine Learning https://e-delaney.github.io/
@kerstinbach.bsky.social
Professor @ NTNU, Research Director @ NorwAI. Research on AI, CBR, XAI, intelligent systems, AI+Health. Views are my own
@eberleoliver.bsky.social
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪 Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science
@davidvonthenen.com
AI/ML Engineer at @DigitalOcean.com | Keynote Speaker | Building Scalable ML Architectures & Conversational AI Solutions | Python | Go | C++
@giuseppe88.bsky.social
Senior Lecturer and Researcher @LMU_Muenchen working on #ExplainableAI / #interpretableML and #OpenML
@sqirllab.bsky.social
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec. Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning #ML #AI #XAI #mechinterp
@asilvaguilherme.bsky.social
Fairness • Explainable AI • AutoML http://guilhermealves.eti.br
@dnnslmr.bsky.social
Postdoctoral researcher at the Institute for Logic, Language and Computation at the University of Amsterdam. Previously PhD Student at NLPNorth at the IT University of Copenhagen, with internships at AWS, Parameter Lab, Pacmed. dennisulmer.eu
@jandubinski.bsky.social
PhD student in Machine Learning @Warsaw University of Technology and @IDEAS NCBR
@jurodemann.bsky.social
http://www.julian-rodemann.de | PhD student in statistics @LMU_Muenchen | currently @HarvardStats
@guidelabs.bsky.social
AI systems and models that are engineered to be interpretable and auditable. www.guidelabs.ai
@hthasarathan.bsky.social
PhD student @YorkUniversity @LassondeSchool, I work on computer vision and interpretability.
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@elinguyen.bsky.social
PhD Student in the STAI group at the University of Tübingen and IMPRS-IS | Volunteering at KI macht Schule and Viva con Agua | Currently visiting Vector Institute elisanguyen.github.io
@stellaathena.bsky.social
I make sure that OpenAI et al. aren't the only people who are able to study large scale AI systems.
@variint.bsky.social
Lost in translation | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕 variint.github.io
@mdlhx.bsky.social
NLP assistant prof at KU Leuven, PI @lagom-nlp.bsky.social. I like syntax more than most people. Also multilingual NLP, interpretability, mountains and beer. (She/her)
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@kylem.bsky.social
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT ‘20.
@jeku.bsky.social
Postdoc at Linköping University🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into🌋🏕️🚴.
@sejdino.bsky.social
Professor of Statistical Machine Learning at the University of Adelaide. https://sejdino.github.io/
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io Postdoc @ai2.bsky.social & @uwnlp.bsky.social PhD from Georgia Tech Views my own, not my employer's.
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence 한american, she, iglee.me, likes ??= bookmarks
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@mdhk.net
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam 🌐 https://mdhk.net/ 🐘 https://scholar.social/@mdhk 🐦 https://twitter.com/mariannedhk
@lasha.bsky.social
✨On the faculty job market✨ Postdoc at UW, working on Natural Language Processing 🌐 https://lasharavichander.github.io/
@anneo.bsky.social
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈 https://anne-oeldorf-hirsch.uconn.edu
@aliciacurth.bsky.social
Machine Learner by day, 🦮 Statistician at ❤️ In search of statistical intuition for modern ML & simple explanations for complex things👀 Interested in the mysteries of modern ML, causality & all of stats. Opinions my own. https://aliciacurth.github.io
@jkminder.bsky.social
CS Student at ETH Zürich, currently doing my masters thesis at the DLAB at EPFL Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience Thoughts & opinions are my own and do not necessarily represent my employer.
@elianapastor.bsky.social
Assistant Professor at PoliTo 🇮🇹 | Currently visiting scholar at UCSC 🇺🇸 | she/her | TrustworthyAI, XAI, Fairness in AI https://elianap.github.io/
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@rachel-law.bsky.social
Organic machine turning tea into theorems ☕️ AI @ Microsoft Research ➡️ Goal: Teach models (and humans) to reason better Let’s connect re: AI for social good, graphs & network dynamics, discrete math, logic 🧩, 🥾,🎨 Organizing for democracy.🗽 www.rlaw.me
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@fionaewald.bsky.social
PhD Student @ LMU Munich Munich Center for Machine Learning (MCML) Research in Interpretable ML / Explainable AI
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment