XAI Starter Pack
Researchers in Interpretable and Explainable AI
Created by
@jskirzynski.bsky.social
@sukrutrao.bsky.social
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models | Web: sukrutrao.github.io
@simoneschaub.bsky.social
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH Computer Vision & Deep Learning
@mtiezzi.bsky.social
PostDoc Researcher @ IIT, Continual and Lifelong Learning -> Robots, Graph Neural Networks, Sequence Processing | CoLLAs 2024 Local Chair 🏠 mtiezzi.github.io
@ankareuel.bsky.social
Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own
@mlamparth.bsky.social
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretable, safe, and ethical AI decision-making.
@lucasresck.bsky.social
PhD student in NLP at Cambridge | ELLIS PhD student https://lucasresck.github.io/
@sqirllab.bsky.social
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec. Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning #ML #AI #XAI #mechinterp
@allthingsapx.bsky.social
Product Marketing Lead @NVIDIA | PhD @UMBaltimore | omics, immuno/micro, AI/ML | 🇺🇸🇸🇰 | Posts are my own views, not those of my employer.
@conorosullyds.bsky.social
PhD in ML for coastal monitoring 🌊 South African 🇿🇦 living in Dublin 🇮🇪 I post content about XAI & remote sensing
@neelrajani.bsky.social
PhD student in Responsible NLP at the University of Edinburgh, passionate about MechInterp
@iaugenstein.bsky.social
Professor at the University of Copenhagen. Explainable AI, Natural Language Processing, ML. Head of copenlu.bsky.social lab. #NLProc #NLP #XAI http://isabelleaugenstein.github.io/
@sbordt.bsky.social
Understanding LLMs. Interpretable Machine Learning. Postdoc @ Uni Tuebingen. https://sbordt.github.io/
@sraval.bsky.social
Physics, Visualization and AI PhD @ Harvard | Embedding visualization and LLM interpretability | Love pretty visuals, math, physics and pets | Currently into manifolds. Wanna meet and chat? Book a meeting here: https://zcal.co/shivam-raval
@thserra.bsky.social
Assistant professor at University of Iowa, formerly at Bucknell University, mathematical optimizer with an #orms PhD from Carnegie Mellon University, curious about scaling up constraint learning, proud father of two
@adrhill.bsky.social
PhD student at @bifold.berlin, Machine Learning Group, TU Berlin. Automatic Differentiation, Explainable AI and #JuliaLang. Open source person: adrianhill.de/projects
@wzuidema.bsky.social
Associate Professor of Natural Language Processing & Explainable AI, University of Amsterdam, ILLC
@kirillbykov.bsky.social
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@kbeckh.bsky.social
Data Scientist at Fraunhofer IAIS | PhD Student at University of Bonn | Lamarr Institute | XAI, NLP, Human-centered AI
@dggoldst.bsky.social
Senior Principal Research Manager at Microsoft Research NYC. Economics and Computation Group. Distinguished Scholar at Wharton.
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@hectorkohler.bsky.social
PhD student in interpretable reinforcement learning at Inria Scool. http://Kohlerhector.github.io/homepage/
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@mlam.bsky.social
Stanford CS PhD student | hci, human-centered AI, social computing, responsible AI (+ dance, design, doodling!) michelle123lam.github.io
@polochau.bsky.social
Professor, Georgia Tech • ML+VIS • Director, Polo Club of AI 🚀 poloclub.gatech.edu • Carnegie Mellon alum. Covert designer, cellist, pianist faculty.cc.gatech.edu/~dchau
@domoritz.de
Visualization, data, AI/ML. Professor at CMU (@dig.cmu.edu, @hcii.cmu.edu) and researcher at Apple. Also sailboats ⛵️ and chocolate 🍫. www.domoritz.de
@johnegan.bsky.social
Albuquerque AI / Atomic Entropy | abqgpt.com | yourai.expert | Folks call me the 'AI expert'. Not chasing the $$$ or seeking the spotlight, just trying to help normal folks prosper with this tech in a safe and secure manner. My 1st tech startup was in 1995.
@dorsarohani.bsky.social
Deep learning @ NVIDIA, Vector. prev @ DeepGenomics dorsarohani.com
@marvinschmitt.bsky.social
🇪🇺 AI/ML, Member @ellis.eu 🤖 Generative NNs, ProbML, Uncertainty Quantification, Amortized Inference, Simulation Intelligence 🎓 PhD+MSc CS, MSc Psych 🏡 marvinschmitt.github.io ✨ On the job market, DMs open 📩
@stephaniebrandl.bsky.social
Assistant Professor in NLP (Fairness, Interpretability and lately interested in Political Science) at the University of Copenhagen ✨ Before: PostDoc in NLP at Uni of CPH, PhD student in ML at TU Berlin
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io Postdoc @ai2.bsky.social & @uwnlp.bsky.social PhD from Georgia Tech Views my own, not my employer's.
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@romapatel.bsky.social
research scientist @deepmind. language & multi-agent rl & interpretability. phd @BrownUniversity '22 under ellie pavlick (she/her) https://roma-patel.github.io
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab in Columbia University Working in mechanistic interpretability, nlp, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@jkminder.bsky.social
CS Student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience Thoughts & opinions are my own and do not necessarily represent my employer.
@ronitelman.bsky.social
O'Reilly Author, "Unifying Business, Data, and Code" (2024), and Apress author, "The Language of Innovation" (2025)
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence Korean-American, she, iglee.me, likes ??= bookmarks
@gsarti.com
PhD Student at @gronlp.bsky.social 🐮, core dev @inseq.org. Interpretability ∩ HCI ∩ #NLProc. gsarti.com
@amirrahnama.bsky.social
PhD Student at KTH Royal Institute of Technology. Researching Explainability and Interpretability in Machine Learning
@charlottemagister.bsky.social
PhD student @ University of Cambridge, focusing on Explainability and Interpretability for GNNs
@noahlegall.bsky.social
AppSci @ Dotmatics | Microbial Bioinformatics | Deep Learning & Explainability | Nextflow Ambassador | Author of 'The Microbialist' Substack | Thoughts are my own personal opinions and do not represent a third party
@aparafita.bsky.social
Senior Researcher at Barcelona Supercomputing Center | PhD in Causal Estimation with estimand-agnostic frameworks, working on Machine Learning Explainability Github: @aparafita
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own idlab.uantwerpen.be/~joramasmogrovejo
@tfjgeorge.bsky.social
Explainability of deep neural nets and causality https://tfjgeorge.github.io/
@sebastiendestercke.bsky.social
CS researcher in uncertainty reasoning (whenever it appears: risk analysis, AI, philosophy, ...), mostly mixing sets and probabilities. Posts mostly on this topic (french and english), and a bit about others. Personal account and opinions.
@qiaoyu-rosa.bsky.social
Final year NLP PhD student at UChicago. Explainability, reasoning, and hypothesis generation!
@e-giunchiglia.bsky.social
Assistant Professor at Imperial College London | EEE Department and I-X. Neuro-symbolic AI, Safe AI, Generative Models Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
@lawlessopt.bsky.social
Stanford MS&E Postdoc | Human-Centered AI & OR Prev: @CornellORIE @MSFTResearch, @IBMResearch, @uoftmie 🌈
@olegranmo.bsky.social
AI Professor and Founding Director @ https://cair.uia.no | Chair of Technical Steering Committee @ https://www.literal-labs.ai | Book: https://tsetlinmachine.org
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@begus.bsky.social
Assoc. Professor at UC Berkeley Artificial and biological intelligence and language Linguistics Lead at Project CETI 🐳 PI Berkeley SC Lab 🗣️ College Principal of Bowles Hall 🏰 https://www.gasperbegus.com
@dhadfieldmenell.bsky.social
Assistant Prof of AI & Decision-Making @MIT EECS I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL. I work on value (mis)alignment in AI systems. https://people.csail.mit.edu/dhm/
@besmiranushi.bsky.social
AI/ML, Responsible AI, Technology & Society @MicrosoftResearch
@fatemehc.bsky.social
PhD student at Utah NLP, Human-centered Interpretability, Trustworthy AI
@iislucas.bsky.social
Machine learning, interpretability, visualization, Language Models, People+AI research
@eberleoliver.bsky.social
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪 Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@diatkinson.bsky.social
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability. diatkinson.github.io
@vidhishab.bsky.social
AI Evaluation and Interpretability @MicrosoftResearch, Prev PhD @CMU.
@vedanglad.bsky.social
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
@alessiodevoto.bsky.social
PhD in ML/AI | Researching Efficient ML/AI (vision & language) 🍀 & Interpretability | @SapienzaRoma @EdinburghNLP | https://alessiodevoto.github.io/
@mariaeckstein.bsky.social
Research scientist at Google DeepMind. Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@fedeadolfi.bsky.social
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science https://fedeadolfi.github.io
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@wattenberg.bsky.social
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
@harrycheon.bsky.social
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC 18' 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
@loradrian.bsky.social
RE at Instadeep, PhD in computational neuroscience, MSc in CS, interested in ML for life sciences.
@asaakyan.bsky.social
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience asaakyan.github.io
@glima.bsky.social
PhD Researcher at #MPI_SP | MS and BS at KAIST | AI ethics, HCI, justice, accountability, fairness, explainability | he/him http://thegcamilo.github.io/
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@henstr.bsky.social
Senior Research Scientist at IBM Research and Explainability lead at the MIT-IBM AI Lab in Cambridge, MA. Interested in all things (X)AI, NLP, Visualization. Hobbies: Social chair at #NeurIPS, MiniConf, Mementor | http://hendrik.strobelt.com
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@michaelhind.bsky.social
IBM Distinguished RSM, working on AI transparency, governance, explainability, and fairness. Proud husband & dad, Soccer lover. Posts are my own.
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
@stephmilani.bsky.social
PhD Student in Machine Learning at CMU. On the academic job market! 🐦 twitter.com/steph_milani 🌐 stephmilani.github.io
@friedler.net
CS prof at Haverford, former tech policy at OSTP, research on fairness, accountability, and transparency of ML, @facct.bsky.social co-founder Also at: sorelle@mastodon.social 🦣 (formerly @kdphd 🐦) sorelle.friedler.net
@haldaume3.bsky.social
Human-centered AI #HCAI, NLP & ML. Director TRAILS (Trustworthy AI in Law & Society) and AIM (AI Interdisciplinary Institute at Maryland). Formerly Microsoft Research NYC. Fun: 🧗🧑‍🍳🧘⛷️🏕️. he/him.
@jessicahullman.bsky.social
Ginni Rometty Prof @NorthwesternCS | Fellow @NU_IPR | Uncertainty + decisions | Humans + AI/ML | Blog @statmodeling
@markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
@upolehsan.bsky.social
🎯 Making AI less evil= human-centered + explainable + responsible AI 💼 Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society 🏢 Prev-Georgia Tech, {Google, IBM, MSFT}Research 🔬 AI, HCI, Philosophy ☕ F1, memes 🌐 upolehsan.com
@jennwv.bsky.social
Sr. Principal Researcher at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // More at http://jennwv.com
@kgajos.bsky.social
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@harmankaur.bsky.social
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
@gully.bsky.social
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
@lowd.bsky.social
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
@pedroribeiro.bsky.social
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science