Moritz Weckbecker
PhD candidate in Interpretable AI @ Fraunhofer HHI Berlin
@qiaoyu-rosa.bsky.social
Final year NLP PhD student at UChicago. Explainability, reasoning, and hypothesis generation!
@sebastiendestercke.bsky.social
CS researcher in uncertainty reasoning (wherever it appears: risk analysis, AI, philosophy, ...), mostly mixing sets and probabilities. Posts mostly on this topic (French and English), and a bit about others. Personal account and opinions.
@tfjgeorge.bsky.social
Explainability of deep neural nets and causality https://tfjgeorge.github.io/
@aparafita.bsky.social
Senior Researcher at Barcelona Supercomputing Center | PhD in Causal Estimation with estimand-agnostic frameworks, working on Machine Learning Explainability. GitHub: @aparafita
@noahlegall.bsky.social
AppSci @ Dotmatics | Microbial Bioinformatics | Deep Learning & Explainability | Nextflow Ambassador | Author of 'The Microbialist' Substack | Thoughts are my own personal opinions and do not represent a third party
@charlottemagister.bsky.social
PhD student @ University of Cambridge, focusing on Explainability and Interpretability for GNNs
@amirrahnama.bsky.social
PhD Student at KTH Royal Institute of Technology. Researching Explainability and Interpretability in Machine Learning
@ronitelman.bsky.social
O'Reilly Author, "Unifying Business, Data, and Code" (2024), and Apress author, "The Language of Innovation" (2025)
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience. Thoughts & opinions are my own and do not necessarily represent my employer.
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@romapatel.bsky.social
research scientist @deepmind. language & multi-agent rl & interpretability. phd @BrownUniversity '22 under ellie pavlick (she/her) https://roma-patel.github.io
@stephaniebrandl.bsky.social
Assistant Professor in NLP (Fairness, Interpretability and lately interested in Political Science) at the University of Copenhagen ✨ Before: PostDoc in NLP at Uni of CPH, PhD student in ML at TU Berlin
@marvinschmitt.bsky.social
🇪🇺 AI/ML, Member @ellis.eu 🤖 Generative NNs, ProbML, Uncertainty Quantification, Amortized Inference, Simulation Intelligence 🎓 PhD+MSc CS, MSc Psych 🏡 marvinschmitt.github.io ✨ On the job market, DMs open 📩
@johnegan.bsky.social
Albuquerque AI / Atomic Entropy. abqgpt.com, yourai.expert. Folks call me the 'AI expert'; not chasing the $$$ or seeking the spotlight, just trying to help normal folks prosper with this tech in a safe and secure manner. My 1st tech startup was in 1995.
@domoritz.de
Visualization, data, AI/ML. Professor at CMU (@dig.cmu.edu, @hcii.cmu.edu) and researcher at Apple. Also sailboats ⛵️ and chocolate 🍫. www.domoritz.de
@polochau.bsky.social
Professor, Georgia Tech • ML+VIS • Director, Polo Club of AI 🚀 poloclub.gatech.edu • Carnegie Mellon alum. Covert designer, cellist, pianist faculty.cc.gatech.edu/~dchau
@mlam.bsky.social
Stanford CS PhD student | hci, human-centered AI, social computing, responsible AI (+ dance, design, doodling!) michelle123lam.github.io
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@hectorkohler.bsky.social
PhD student in interpretable reinforcement learning at Inria Scool. http://Kohlerhector.github.io/homepage/
@dggoldst.bsky.social
Senior Principal Research Manager at Microsoft Research NYC. Economics and Computation Group. Distinguished Scholar at Wharton.
@adrhill.bsky.social
PhD student at @bifold.berlin, Machine Learning Group, TU Berlin. Automatic Differentiation, Explainable AI and #JuliaLang. Open source person: adrianhill.de/projects
@iaugenstein.bsky.social
Professor at the University of Copenhagen. Explainable AI, Natural Language Processing, ML. Head of copenlu.bsky.social lab. #NLProc #NLP #XAI http://isabelleaugenstein.github.io/
@allthingsapx.bsky.social
Product Marketing Lead @NVIDIA | PhD @UMBaltimore | omics, immuno/micro, AI/ML | 🇺🇸🇸🇰 | Posts are my own views, not those of my employer.
@sqirllab.bsky.social
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec. Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning #ML #AI #XAI #mechinterp
@mtiezzi.bsky.social
PostDoc Researcher @ IIT, Continual and Lifelong Learning -> Robots, Graph Neural Networks, Sequence Processing | CoLLAs 2024 Local Chair 🏠 mtiezzi.github.io
@simoneschaub.bsky.social
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH Computer Vision & Deep Learning
@sukrutrao.bsky.social
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models Web: sukrutrao.github.io
@guidelabs.bsky.social
AI systems and models that are engineered to be interpretable and auditable. www.guidelabs.ai
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@eberleoliver.bsky.social
Senior Researcher in Machine Learning at BIFOLD | TU Berlin 🇩🇪. Prev. at IPAM | UCLA | BCCN. Interpretability | XAI | NLP & Humanities | ML for Science
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵♀️🥋
@sarahwiegreffe.bsky.social
Research in NLP (mostly LM interpretability & explainability). Incoming assistant prof at UMD CS + CLIP. Current postdoc @ai2.bsky.social & @uwnlp.bsky.social. Views my own. sarahwie.github.io
@nsaphra.bsky.social
Waiting on a robot body. All opinions are universal and held by both employers and family. Recruiting students to start my lab! ML/NLP/they/she.
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence, Korean-American, she, iglee.me, likes ??= bookmarks
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, Model #Interpretability & #Explainability. A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own. idlab.uantwerpen.be/~joramasmogrovejo
@gsarti.com
PhD Student at @gronlp.bsky.social 🐮, core dev @inseq.org. Interpretability ∩ HCI ∩ #NLProc. gsarti.com
@kirillbykov.bsky.social
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
@nfel.bsky.social
Post-doctoral Researcher at BIFOLD / TU Berlin interested in interpretability and analysis of language models. Guest researcher at DFKI Berlin. https://nfelnlp.github.io/
@wzuidema.bsky.social
Associate Professor of Natural Language Processing & Explainable AI, University of Amsterdam, ILLC
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@jasmijn.bastings.me
Senior Research Scientist at Google DeepMind. Interested in (equitable) language technology, gender, interpretability, NLP. Views my own. She/her. 🌐 jasmijn.bastings.me
@colah.bsky.social
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.