@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI. 🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍 Berlin, Germany | dilyabareeva.github.io
@tpimentel.bsky.social
Postdoc at ETH. Formerly a PhD student at the University of Cambridge :)
@carl-allen.bsky.social
Laplace Junior Chair in Machine Learning, ENS Paris (prev. ETH Zurich, Edinburgh, Oxford…). Working on mathematical foundations/probabilistic interpretability of ML (what NNs learn 🤷‍♂️, disentanglement 🤔, king-man+woman=queen? 👌…)
@natalieshapira.bsky.social
Tell me about challenges, the unbelievable, the human mind and artificial intelligence, thoughts, social life, family life, science and philosophy.
@kaiserwholearns.bsky.social
Ph.D. student at @jhuclsp, a human LM that hallucinates. Formerly @MetaAI, @uwnlp, and @AWS. they/them 🏳️‍🌈 #NLProc #NLP
@francescortu.bsky.social
NLP & Interpretability | PhD Student @ University of Trieste & Laboratory of Data Engineering of Area Science Park | Prev MPI-IS
@jannikbrinkmann.bsky.social
@jaom7.bsky.social
Associate Professor @UAntwerp, sqIRL/IDLab, imec. #RepresentationLearning, #Model #Interpretability & #Explainability. A guy who plays with toy bricks, enjoys research and gaming. Opinions are my own. idlab.uantwerpen.be/~joramasmogrovejo
@shan23chen.bsky.social
PhDing @AIM_Harvard @MassGenBrigham | PhD Fellow @Google | Previously @Bos_CHIP @BrandeisU. More robustness and explainability 🧐 for Health AI. shanchen.dev
@jonling.bsky.social
Assistant Professor @HopkinsMedicine @JHUPath https://scholar.google.com/citations?user=dGBD72YAAAAJ
@vedanglad.bsky.social
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
@wendlerc.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University. Deep learning, LLMs, mechanistic interpretability.
@ericwtodd.bsky.social
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
@nsubramani23.bsky.social
PhD student @CMU LTI - working on model #interpretability; prev predoc @ai2; intern @MSFT nishantsubramani.github.io
@jkminder.bsky.social
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC. MATS 7.0 Winter 2025 Scholar w/ Neel Nanda. jkminder.ch
@kayoyin.bsky.social
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io 🏂🎹🚵‍♀️🥋
@colah.bsky.social
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
@fedeadolfi.bsky.social
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science https://fedeadolfi.github.io
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@wordscompute.bsky.social
nlp/ml phding @ usc, interpretability & reasoning & pretraining & emergence. Korean-American, she, iglee.me, likes ??= bookmarks
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her. https://martinagvilas.github.io/
@ajyl.bsky.social
Post-doc @ Harvard. PhD UMich. Spent time at FAIR and MSR. ML/NLP/Interpretability
@amakelov.bsky.social
Mechanistic interpretability. Creator of https://github.com/amakelov/mandala. Prev. Harvard/MIT: machine learning, theoretical computer science, competition math.
@ddjohnson.bsky.social
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him. www.danieldjohnson.com
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working in mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@joestacey.bsky.social
NLP PhD student at Imperial College London and Apple AI/ML Scholar. My research is on model robustness and interpretability. #NLP #NLProc
@dashiells.bsky.social
Machine learning haruspex || Norbert Wiener is dead so we should just call it "cybernetics" now
@gsarti.com
PhD Student at @gronlp.bsky.social 🐮, core dev @inseq.org. Interpretability ∩ HCI ∩ #NLProc. gsarti.com
@niklasstoehr.bsky.social
Research Scientist at Google DeepMind and PhD Student at ETH Zurich
@amuuueller.bsky.social
Postdoc at Northeastern and incoming Asst. Prof. at Boston U. Working on NLP, interpretability, causality. Previously: JHU, Meta, AWS
@butanium.bsky.social
Master's student at ENS Paris-Saclay / aspiring AI safety researcher / improviser. Prev. research intern @ EPFL w/ wendlerc.bsky.social and Robert West. MATS Winter 7.0 Scholar w/ neelnanda.bsky.social. https://butanium.github.io
@michaelhoffman.bsky.social
Chair, Computational Biology and Medicine Program, Princess Margaret Cancer Centre, University Health Network. Associate Professor, Medical Biophysics, University of Toronto. Disclosures: https://github.com/michaelmhoffman/disclosure/
@jeffreybigham.com
Professor of HCII and LTI at Carnegie Mellon School of Computer Science. jeffreybigham.com
@moberst.bsky.social
Assistant Prof. of CS at Johns Hopkins. Visiting Scientist at Abridge AI. Causality & Machine Learning in Healthcare. Prev: PhD at MIT, Postdoc at CMU
@jascha.sohldickstein.com
Recently a principal scientist at Google DeepMind. Joining Anthropic. Most (in)famous for inventing diffusion models. AI + physics + neuroscience + dynamical systems.
@chrmanning.bsky.social
Stanford Linguistics and Computer Science. Director, Stanford AI Lab. Founder of @stanfordnlp.bsky.social. #NLP https://nlp.stanford.edu/~manning/
@roydanroy.bsky.social
Research Director, Founding Faculty, Canada CIFAR AI Chair @VectorInst. Full Prof @UofT - Statistics and Computer Sci. (x-appt) danroy.org I study assumption-free prediction and decision making under uncertainty, with inference emerging from optimality.