Lorenz Linhardt
PhD Student at the TU Berlin ML group + BIFOLD
Model robustness/correction 🤖🔧
Understanding representation spaces 🌌✨
@lpirch.bsky.social
PhD at BIFOLD, TU-Berlin • Vulnerability Discovery & Graph-based Machine Learning • 🎹🎸
@climatechangeai.bsky.social
Tackling climate change with machine learning. We facilitate cooperation and provide resources for those working in this area. Sharing is not endorsement. // https://www.climatechange.ai/
@emergentgarden.bsky.social
creator of the emergent garden youtube channel, mindcraft, and the life engine. i like weird programs.
@lauritzthamsen.org
Computer systems faculty at Glasgow, driving research on resource-efficient and carbon-aware distributed computing systems, @glasgowc3lab.bsky.social, lauritzthamsen.org ☁️💻🌱
@jandubinski.bsky.social
PhD student in Machine Learning @Warsaw University of Technology and @IDEAS NCBR
@mitfund2a.bsky.social
@sukrutrao.bsky.social
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models Web: sukrutrao.github.io
@giuseppe88.bsky.social
Senior Lecturer and Researcher @LMU_Muenchen working on #ExplainableAI / #interpretableML and #OpenML
@philippwiesner.bsky.social
PhD student at TU Berlin researching on sustainable computing systems. https://philippwiesner.org
@moritzweckbecker.bsky.social
PhD candidate for Interpretable AI @ Fraunhofer HHI Berlin
@mplaue.bsky.social
Math professor, data scientist. Author of textbooks on applied math and data science. Personal interests include quantum weirdness, time travel, epistemology.
@ellisunitjena.bsky.social
https://ellis-jena.eu is developing+applying #AI #ML in #earth system, #climate & #environmental research. Partner: @uni-jena.de, https://bgc-jena.mpg.de/en, @dlr-spaceagency.bsky.social, @carlzeissstiftung.bsky.social, https://aiforgood.itu.int
@navatintarev.bsky.social
(she/her) Full Professor of Explainable AI, University of Maastricht, NL. Lab director of the lab on trustworthy AI in Media (TAIM). Director of Research at the Department of Advanced Computing Sciences. IPN board member (incoming 2026).
@teresa-klatzer.bsky.social
PhD candidate @ University of Edinburgh Bayesian Stats | Machine Learning | Uncertainty Quantification | ML4Science | Scientific Imaging https://teresa-klatzer.github.io/
@gregko.bsky.social
CTO, Huma.ai. Carbon-based LLM, known to hallucinate at times but knowledge current as of today. I'm here to learn more of how AI can unlock knowledge, solve challenges, and uplift humanity.
@fatimapillosu.bsky.social
Hydro-meteorologist | PhD student @ReadingUni | Visiting scientist @ECMWF | Ensemble forecasting | NWP post-processing | Natural hazards | Disaster risk | Climate Ambassador | SciComm | Advocate for people readiness in disaster preparedness/response/recovery
@albertvilella.bsky.social
Bioinformatics Scientist / Next Generation Sequencing, Single Cell and Spatial Biology, Next Generation Proteomics, Liquid Biopsy, SynBio, Compute Acceleration in biotech // http://albertvilella.substack.com
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵♀️🥋
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence Korean-american, she, iglee.me, likes ??= bookmarks
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability. A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own. idlab.uantwerpen.be/~joramasmogrovejo
@wzuidema.bsky.social
Associate Professor of Natural Language Processing & Explainable AI, University of Amsterdam, ILLC
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@jasmijn.uk
Senior Research Scientist at Google DeepMind. Interested in (equitable) language technology, gender, interpretability, NLP. Views my own. She/her. 🌐 https://jasmijn.uk
@colah.bsky.social
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@michaelwhanna.bsky.social
PhD Student at the ILLC / UvA doing work at the intersection of (mechanistic) interpretability and cognitive science. hannamw.github.io
@diatkinson.bsky.social
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability. diatkinson.github.io
@ddjohnson.bsky.social
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him. www.danieldjohnson.com
@amuuueller.bsky.social
Postdoc at Northeastern and incoming Asst. Prof. at Boston U. Working on NLP, interpretability, causality. Previously: JHU, Meta, AWS
@amakelov.bsky.social
Mechanistic interpretability. Creator of https://github.com/amakelov/mandala. Prev. Harvard/MIT: machine learning, theoretical computer science, competition math.
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
@berkustun.bsky.social
Assistant Prof at UCSD. I work on interpretability, fairness, and safety in ML. www.berkustun.com
@wendlerc.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University, deep learning, LLMs, mechanistic interpretability
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working on mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@kbeckh.bsky.social
Data Scientist at Fraunhofer IAIS PhD Student at University of Bonn Lamarr Institute XAI, NLP, Human-centered AI
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@thserra.bsky.social
Assistant professor at University of Iowa, formerly at Bucknell University, mathematical optimizer with an #orms PhD from Carnegie Mellon University, curious about scaling up constraint learning, proud father of two
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@harrycheon.bsky.social
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC '18 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
@jkminder.bsky.social
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in language model interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch