Visiting scholar @ UW-Madison & PhD student in machine learning @ QMUL. Interested in interpretability and AI safety.
https://james-oldfield.github.io/
@juand-r.bsky.social
CS PhD student at UT Austin in #NLP. Interested in language, reasoning, semantics and cognitive science. One day we'll have more efficient, interpretable and robust models! Other interests: math, philosophy, cinema https://www.juandiego-rodriguez.com/
@hylandsl.bsky.social
machine learning for health at microsoft research, based in cambridge UK 🌻 she/her
@lucasventura.com
PhD at Imagine (ENPC) and Willow (Inria) under the supervision of @gulvarol.bsky.social and Cordelia Schmid. Telecommunication Engineer from UPC.
@weitong8591.bsky.social
PhD student at Visual Recognition Group, Czech Technical University in Prague
@atlaswang.bsky.social
https://vita-group.github.io/ 👨🏫 UT Austin ML Professor (on leave) https://www.xtxmarkets.com/ 🏦 XTX Markets Research Director (NYC AI Lab) Superpower is trying everything 🪅 Newest focus: training next-generation super intelligence - Preview above 👶
@hildekuehne.bsky.social
Professor of CS at the Tuebingen AI Center and affiliated Professor at MIT-IBM Watson AI lab - Multimodal learning and video understanding - GC for ICCV 2025 - https://hildekuehne.github.io/
@gkordo.bsky.social
Postdoctoral Researcher at Visual Recognition Group, CTU in Prague - gkordo.github.io/
@emw123.bsky.social
BME PhD student @ Johns Hopkins | Co-advised by Adam S. Charles and Ji Yi | Computational Imaging/Neuroscience
@eberleoliver.bsky.social
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪 Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science
@kirillbykov.bsky.social
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
@nfel.bsky.social
Post-doctoral Researcher at BIFOLD / TU Berlin interested in interpretability and analysis of language models. Guest researcher at DFKI Berlin. https://nfelnlp.github.io/
@wzuidema.bsky.social
Associate Professor of Natural Language Processing & Explainable AI, University of Amsterdam, ILLC
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@qiaw99.bsky.social
First-year PhD student at XplaiNLP group @TU Berlin: interpretability & explainability. Website: https://qiaw99.github.io
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨ AI/NLP transparency and interpretability / 📷🎨 photography, painting
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
@berkustun.bsky.social
Assistant Prof at UCSD. I work on interpretability, fairness, and safety in ML. www.berkustun.com
@kbeckh.bsky.social
Data Scientist at Fraunhofer IAIS | PhD Student at University of Bonn | Lamarr Institute | XAI, NLP, Human-centered AI
@lorenzlinhardt.bsky.social
PhD Student at the TU Berlin ML group + BIFOLD | Model robustness/correction 🤖🔧 | Understanding representation spaces 🌌✨
@fionaewald.bsky.social
PhD Student @ LMU Munich | Munich Center for Machine Learning (MCML) | Research in Interpretable ML / Explainable AI
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@thserra.bsky.social
Assistant professor at University of Iowa, formerly at Bucknell University, mathematical optimizer with an #orms PhD from Carnegie Mellon University, curious about scaling up constraint learning, proud father of two
@ribana.bsky.social
Professor of Data Science for Crop Systems at Forschungszentrum Jülich and University of Bonn Working on Explainable ML🔍, Data-centric ML🐿️, Sustainable Agriculture🌾, Earth Observation Data Analysis🌍, and more...
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@butanium.bsky.social
Master's student at ENS Paris-Saclay / aspiring AI safety researcher / improviser. Prev research intern @ EPFL w/ wendlerc.bsky.social and Robert West. MATS Winter 7.0 Scholar w/ neelnanda.bsky.social https://butanium.github.io
@niklasstoehr.bsky.social
Research Scientist at Google DeepMind and PhD Student at ETH Zurich
@dashiells.bsky.social
Machine learning haruspex || Norbert Wiener is dead so we should just call it "cybernetics" now
@joestacey.bsky.social
NLP PhD student at Imperial College London and Apple AI/ML Scholar. My research is on model robustness and interpretability. #NLP #NLProc
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@ddjohnson.bsky.social
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him. www.danieldjohnson.com
@amakelov.bsky.social
Mechanistic interpretability. Creator of https://github.com/amakelov/mandala | Prev: Harvard/MIT machine learning, theoretical computer science, competition math.
@ajyl.bsky.social
Post-doc @ Harvard. PhD UMich. Spent time at FAIR and MSR. ML/NLP/Interpretability
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@fedeadolfi.bsky.social
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science https://fedeadolfi.github.io
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵♀️🥋
@jkminder.bsky.social
CS Student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC | MATS 7.0 Winter 2025 Scholar w/ Neel Nanda | jkminder.ch
@nsubramani23.bsky.social
PhD student @CMU LTI - working on model #interpretability; prev predoc @ai2; intern @MSFT nishantsubramani.github.io
@ericwtodd.bsky.social
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
@wendlerc.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University. Deep learning, LLMs, mechanistic interpretability
@vedanglad.bsky.social
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics