BlackboxNLP
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China
blackboxnlp.github.io
@jkminder.bsky.social
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda. jkminder.ch
@elianapastor.bsky.social
Assistant Professor at PoliTo 🇮🇹 | Currently visiting scholar at UCSC 🇺🇸 | she/her | TrustworthyAI, XAI, Fairness in AI https://elianap.github.io/
@wendlerc.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University, deep learning, LLMs, mechanistic interpretability
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books. Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
@vernadankers.bsky.social
#NLProc PhD student in #Edinburgh 🏴 interpretability x memorisation x (non-)compositionality. she/her 👩💻 🇳🇱
@geiongallego.bsky.social
@sscardapane.bsky.social
I fall in love with a new #machinelearning topic every month 🙄 Ass. Prof. Sapienza (Rome) | Author: Alice in a differentiable wonderland (https://www.sscardapane.it/alice-book/)
@lauraruis.bsky.social
PhD supervised by Tim Rocktäschel and Ed Grefenstette, part time at Cohere. Language and LLMs. Spent time at FAIR, Google, and NYU (with Brenden Lake). She/her.
@leon-lang.bsky.social
PhD Candidate at the University of Amsterdam. AI Alignment and safety research. Formerly multivariate information theory and equivariant deep learning. Master's degrees in both maths and AI. https://langleon.github.io/
@eberleoliver.bsky.social
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪 Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science
@neurokim.bsky.social
Neuro + AI Research Scientist at DeepMind; Affiliate Professor at Columbia Center for Theoretical Neuroscience. Likes studying learning+memory, hippocampi, and other things brains have and do, too. she/her.
@jiruiqi.bsky.social
Ph.D. Candidate @GroNLP, University of Groningen. #NLProc https://betswish.github.io
@gneubig.bsky.social
Associate professor at CMU, studying natural language processing and machine learning. Co-founder of All Hands AI.
@anasedova.bsky.social
Research Intern @Apple MLR • PhD Student @Uni Vienna • prev: @CisLMU, alumna @DAAD_Germany #NLProc
@adinawilliams.bsky.social
NLP, Linguistics, Cognitive Science, AI, etc. Job currently: Research Scientist (NYC). Job formerly: NYU Linguistics, MSU Linguistics.
@yevgenm.bsky.social
Assistant professor @GroNLP, Center for Language and Cognition Groningen https://yevgen.web.rug.nl/ Language acquisition, computational cognitive modelling, computational linguistics, cross-language transfer, human speakers vs language models
@yoshuabengio.bsky.social
Full professor at UdeM, Founder and Scientific Advisor at Mila - Quebec AI Institute, A.M. Turing Award Recipient. Working towards the safe development of AI for the benefit of all. Website and blog: https://yoshuabengio.org/
@sfeucht.bsky.social
PhD student doing LLM interpretability with @davidbau.bsky.social and @byron.bsky.social. (they/them) https://sfeucht.github.io
@lisaalaz.bsky.social
#ML & #NLP PhD student at Imperial College London. Prev. research intern @ Cohere and Google Research. Reasoning and planning with foundation models 🧠 She/her
@mtutek.bsky.social
Postdoc @ TakeLab, UniZG | previously: Technion; TU Darmstadt | PhD @ TakeLab, UniZG Faithful explainability, controllability & safety of LLMs. 🔎 On the academic job market 🔎 https://mttk.github.io/
@lukezettlemoyer.bsky.social
Professor at UW; Researcher at Meta. LMs, NLP, ML. PNW life.
@frap98.bsky.social
1st-year PhD student at @gronlp.bsky.social 🐮 - University of Groningen. Language Acquisition - NLP
@kirillbykov.bsky.social
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
@fionaewald.bsky.social
PhD Student @ LMU Munich, Munich Center for Machine Learning (MCML). Research in Interpretable ML / Explainable AI.
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@lorenzlinhardt.bsky.social
PhD Student at the TU Berlin ML group + BIFOLD Model robustness/correction 🤖🔧 Understanding representation spaces 🌌✨
@fatemehc.bsky.social
PhD student at Utah NLP, Human-centered Interpretability, Trustworthy AI
@iislucas.bsky.social
Machine learning, interpretability, visualization, Language Models, People+AI research
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@koyena.bsky.social
CS Ph.D. Candidate @ Northeastern | Interpretability + Data Science | BS/MS @ Brown koyenapal.github.io
@juliushense.bsky.social
PhD Student @ https://bifold.berlin/, TU Berlin. Computational Pathology, XAI, Multimodal ML, Representation Learning. Github: https://github.com/bifold-pathomics
@elglassman.bsky.social
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.