@manoelhortaribeiro.bsky.social
Assistant Professor @ Princeton. Previously: EPFL 🇨🇭, UFMG 🇧🇷. Interests: Computational Social Science, Platforms, GenAI, Moderation
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@berkustun.bsky.social
Assistant Prof at UCSD. I work on interpretability, fairness, and safety in ML. www.berkustun.com
@haileyjoren.bsky.social
PhD Student @ UC San Diego. Researching reliable, interpretable, and human-aligned ML/AI
@eml-munich.bsky.social
Institute for Explainable Machine Learning at @www.helmholtz-munich.de and Interpretable and Reliable Machine Learning group at Technical University of Munich and part of @munichcenterml.bsky.social
@zootime.bsky.social
I work on explainable AI at a German research facility.
@juffi-jku.bsky.social
Researcher in Machine Learning & Data Mining, Professor of Computational Data Analytics @jkulinz.bsky.social, Austria.
@juliusad.bsky.social
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
@elglassman.bsky.social
Assistant Professor @ Harvard SEAS specializing in human-computer interaction. Also interested in visualization, digital humanities, urban design.
@chhaviyadav.bsky.social
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml chhaviyadav.org
@lesiasemenova.bsky.social
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
@csinva.bsky.social
Senior researcher at Microsoft Research. Seeking good explanations with machine learning https://csinva.io/
@tmiller-uq.bsky.social
Professor in Artificial Intelligence, The University of Queensland, Australia Human-Centred AI, Decision support, Human-agent interaction, Explainable AI https://uqtmiller.github.io
@umangsbhatt.bsky.social
Assistant Professor & Faculty Fellow @ NYU. Responsible AI. Human-AI Collaboration. Interactive Evaluation. umangsbhatt.github.io
@stefanherzog.bsky.social
Senior Researcher @arc-mpib.bsky.social @Max Planck Hum. Developm. @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, hybrid collective intelligence, AI, behavioral public policy, misinfo; stefanherzog.org scienceofboosting.org
@fionaewald.bsky.social
PhD Student @ LMU Munich, Munich Center for Machine Learning (MCML). Research in Interpretable ML / Explainable AI
@ryanchankh.bsky.social
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern @ Apple.
@pedroribeiro.bsky.social
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
@lowd.bsky.social
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
@gully.bsky.social
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
@harmankaur.bsky.social
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@kgajos.bsky.social
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.
@jennwv.bsky.social
Sr. Principal Researcher at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // More at http://jennwv.com
@upolehsan.bsky.social
🎯 Making AI less evil= human-centered + explainable + responsible AI 💼 Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society 🏢 Prev-Georgia Tech, {Google, IBM, MSFT}Research 🔬 AI, HCI, Philosophy ☕ F1, memes 🌐 upolehsan.com
@markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
@jessicahullman.bsky.social
Ginni Rometty Prof @NorthwesternCS | Fellow @NU_IPR | Uncertainty + decisions | Humans + AI/ML | Blog @statmodeling
@haldaume3.bsky.social
Human-centered AI #HCAI, NLP & ML. Director TRAILS (Trustworthy AI in Law & Society) and AIM (AI Interdisciplinary Institute at Maryland). Formerly Microsoft Research NYC. Fun: 🧗🧑🍳🧘⛷️🏕️. he/him.
@friedler.net
CS prof at Haverford, former tech policy at OSTP, research on fairness, accountability, and transparency of ML, @facct.bsky.social co-founder Also at: sorelle@mastodon.social 🦣 (formerly @kdphd 🐦) sorelle.friedler.net
@stephmilani.bsky.social
PhD Student in Machine Learning at CMU. On the academic job market! 🐦 twitter.com/steph_milani 🌐 stephmilani.github.io
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
@michaelhind.bsky.social
IBM Distinguished RSM, working on AI transparency, governance, explainability, and fairness. Proud husband & dad, Soccer lover. Posts are my own.
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@henstr.bsky.social
Senior Research Scientist at IBM Research and Explainability lead at the MIT-IBM AI Lab in Cambridge, MA. Interested in all things (X)AI, NLP, Visualization. Hobbies: Social chair at #NeurIPS, MiniConf, Mementor-- http://hendrik.strobelt.com
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@glima.bsky.social
PhD Researcher at #MPI_SP | MS and BS at KAIST | AI ethics, HCI, justice, accountability, fairness, explainability | he/him http://thegcamilo.github.io/
@asaakyan.bsky.social
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience asaakyan.github.io
@loradrian.bsky.social
RE at Instadeep, PhD in computational neuroscience, MSc in CS, interested in ML for life sciences.
@harrycheon.bsky.social
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC '18 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
@wattenberg.bsky.social
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@fedeadolfi.bsky.social
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science https://fedeadolfi.github.io
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@mariaeckstein.bsky.social
Research scientist at Google DeepMind. Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
@alessiodevoto.bsky.social
PhD in ML/AI | Researching Efficient ML/AI (vision & language) 🍀 & Interpretability | @SapienzaRoma @EdinburghNLP | https://alessiodevoto.github.io/
@vedanglad.bsky.social
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics