@lesiasemenova.bsky.social
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
@csinva.bsky.social
Senior researcher at Microsoft Research. Seeking good explanations with machine learning https://csinva.io/
@tmiller-uq.bsky.social
Professor in Artificial Intelligence, The University of Queensland, Australia Human-Centred AI, Decision support, Human-agent interaction, Explainable AI https://uqtmiller.github.io
@umangsbhatt.bsky.social
Incoming Assistant Professor @ University of Cambridge. Responsible AI. Human-AI Collaboration. Interactive Evaluation. umangsbhatt.github.io
@stefanherzog.bsky.social
Senior Researcher @arc-mpib.bsky.social, Max Planck Institute for Human Development @mpib-berlin.bsky.social; group leader #BOOSTING decisions: cognitive science, hybrid collective intelligence, AI, behavioral public policy, misinfo; stefanherzog.org scienceofboosting.org
@fionaewald.bsky.social
PhD Student @ LMU Munich Munich Center for Machine Learning (MCML) Research in Interpretable ML / Explainable AI
@ryanchankh.bsky.social
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern @ Apple.
@pedroribeiro.bsky.social
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
@harmankaur.bsky.social
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@kgajos.bsky.social
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.
@jennwv.bsky.social
Sr. Principal Researcher at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // More at http://jennwv.com
@upolehsan.bsky.social
🎯 Making AI less evil = human-centered + explainable + responsible AI 💼 Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society 🏢 Prev: Georgia Tech, {Google, IBM, MSFT} Research 🔬 AI, HCI, Philosophy ☕ F1, memes 🌐 upolehsan.com
@friedler.net
CS prof at Haverford, former tech policy at OSTP, research on fairness, accountability, and transparency of ML, @facct.bsky.social co-founder Also at: sorelle@mastodon.social 🦣 (formerly @kdphd 🐦) sorelle.friedler.net
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨ AI/NLP transparency and interpretability / 📷🎨 photography, painting
@michaelhind.bsky.social
IBM Distinguished Research Staff Member, working on AI transparency, governance, explainability, and fairness. Proud husband & dad, soccer lover. Posts are my own.
@henstr.bsky.social
Senior Research Scientist at IBM Research and Explainability lead at the MIT-IBM AI Lab in Cambridge, MA. Interested in all things (X)AI, NLP, Visualization. Hobbies: Social chair at #NeurIPS, MiniConf, Mementor. http://hendrik.strobelt.com
@glima.bsky.social
PhD Researcher at #MPI_SP | MS and BS at KAIST | AI ethics, HCI, justice, accountability, fairness, explainability | he/him http://thegcamilo.github.io/
@asaakyan.bsky.social
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience asaakyan.github.io
@loradrian.bsky.social
RE at Instadeep, PhD in computational neuroscience, MSc in CS, interested in ML for life sciences.
@harrycheon.bsky.social
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC '18 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
@mariaeckstein.bsky.social
Research scientist at Google DeepMind. Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
@alessiodevoto.bsky.social
PhD in ML/AI | Researching Efficient ML/AI (vision & language) 🍀 & Interpretability | @SapienzaRoma @EdinburghNLP | https://alessiodevoto.github.io/
@vidhishab.bsky.social
AI Evaluation and Interpretability @MicrosoftResearch, Prev PhD @CMU.
@diatkinson.bsky.social
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability. diatkinson.github.io
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@eberleoliver.bsky.social
Senior Researcher in Machine Learning at BIFOLD | TU Berlin 🇩🇪 | Prev at IPAM | UCLA | BCCN | Interpretability | XAI | NLP & Humanities | ML for Science
@iislucas.bsky.social
Machine learning, interpretability, visualization, Language Models, People+AI research
@fatemehc.bsky.social
PhD student at Utah NLP, Human-centered Interpretability, Trustworthy AI
@besmiranushi.bsky.social
AI/ML, Responsible AI, Technology & Society @MicrosoftResearch
@dhadfieldmenell.bsky.social
Assistant Prof of AI & Decision-Making @MIT EECS I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL. I work on value (mis)alignment in AI systems. https://people.csail.mit.edu/dhm/
@begus.bsky.social
Assoc. Professor at UC Berkeley Artificial and biological intelligence and language Linguistics Lead at Project CETI 🐳 PI Berkeley SC Lab 🗣️ College Principal of Bowles Hall 🏰 https://www.gasperbegus.com
@lawlessopt.bsky.social
Stanford MS&E Postdoc | Human-Centered AI & OR | Prev: @CornellORIE @MSFTResearch, @IBMResearch, @uoftmie 🌈
@e-giunchiglia.bsky.social
Assistant Professor at Imperial College London | EEE Department and I-X. Neuro-symbolic AI, Safe AI, Generative Models Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
@qiaoyu-rosa.bsky.social
Final year NLP PhD student at UChicago. Explainability, reasoning, and hypothesis generation!
@sebastiendestercke.bsky.social
CS researcher in uncertainty reasoning (wherever it appears: risk analysis, AI, philosophy, ...), mostly mixing sets and probabilities. Posts mostly on this topic (French and English), and a bit about others. Personal account and opinions.
@tfjgeorge.bsky.social
Explainability of deep neural nets and causality https://tfjgeorge.github.io/
@aparafita.bsky.social
Senior Researcher at Barcelona Supercomputing Center | PhD in Causal Estimation with estimand-agnostic frameworks, working on Machine Learning Explainability Github: @aparafita
@noahlegall.bsky.social
AppSci @ Dotmatics | Microbial Bioinformatics | Deep Learning & Explainability | Nextflow Ambassador | Author of 'The Microbialist' Substack | Thoughts are my own personal opinions and do not represent a third party
@charlottemagister.bsky.social
PhD student @ University of Cambridge, focusing on Explainability and Interpretability for GNNs
@amirrahnama.bsky.social
PhD Student at KTH Royal Institute of Technology. Researching Explainability and Interpretability in Machine Learning
@ronitelman.bsky.social
O'Reilly Author, "Unifying Business, Data, and Code" (2024), and Apress author, "The Language of Innovation" (2025)
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience Thoughts & opinions are my own and do not necessarily represent my employer.
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@dorsarohani.bsky.social
Deep learning @ NVIDIA, Vector. prev @ DeepGenomics dorsarohani.com
@johnegan.bsky.social
Albuquerque AI / Atomic Entropy abqgpt.com yourai.expert. Folks call me the 'AI expert'; not chasing the $$$ or seeking the spotlight, just trying to help normal folks prosper with this tech in a safe and secure manner. My first tech startup was in 1995.