Julian Skirzynski
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Starter Packs
Created by Julian Skirzynski
XAI Starter Pack
Researchers in Interpretable and Explainable AI
@wanlingcai.bsky.social
Postdoctoral Researcher in Human-Computer Interaction @ Trinity College Dublin | Research Interest: HCI, Digital Health, Human-Centered AI/Computing
@sunniesuhyoung.bsky.social
PhD candidate at Princeton CS | AI + HCI | https://sunniesuhyoung.github.io/ Responsible AI, Human-AI interaction, AI safety/harms, Human-centered evaluation 🇰🇷→Yale→TTIC→Princeton🐯
@besmiranushi.bsky.social
AI/ML, Responsible AI, Technology & Society @MicrosoftResearch
@angusjnic.bsky.social
DPhil student at University of Oxford. Researcher in interpretable AI for medical imaging. Supervised by Alison Noble and Yarin Gal.
@jandubinski.bsky.social
PhD student in Machine Learning @Warsaw University of Technology and @IDEAS NCBR
@xai-research.bsky.social
Explainable/Interpretable AI researchers and enthusiasts - DM to join the XAI Slack! Blue Sky and Slack maintained by Nick Kroeger
@sukrutrao.bsky.social
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models Web: sukrutrao.github.io
@markar.bsky.social
#nlp researcher interested in evaluation including: multilingual models, long-form input/output, processing/generation of creative texts previous: postdoc @ umass_nlp phd from utokyo https://marzenakrp.github.io/
@simoneschaub.bsky.social
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH Computer Vision & Deep Learning
@jakehofman.bsky.social
senior principal researcher at msr nyc, adjunct professor at columbia
@erata.bsky.social
PhD student @Yale • Applied Scientist @AWS AI • Automated Reasoning • Neuro-Symbolic AI • Alignment • Security & Privacy • Views my own • https://ferhat.ai
@marleneberke.bsky.social
PhD candidate @ YalePsychology | Computational Modeling | Metacognition | Social Cognition | Perception | Women’s Health Advocacy marleneberke.github.io
@giosuebaggio.bsky.social
Cognitive scientist @NTNU · Author of ‘Meaning in the Brain’ and ‘Neurolinguistics’ @mitpress · www.ntnu.edu/employees/giosue.baggio
@mlamparth.bsky.social
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretable, safe, and ethical AI decision-making.
@lucasresck.bsky.social
PhD student in NLP at Cambridge | ELLIS PhD student https://lucasresck.github.io/
@sqirllab.bsky.social
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec. Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning #ML #AI #XAI #mechinterp
@conorosullyds.bsky.social
PhD in ML for coastal monitoring 🌊 South African 🇿🇦 living in Dublin 🇮🇪 I post content about XAI & remote sensing
@neelrajani.bsky.social
PhD student in Responsible NLP at the University of Edinburgh, passionate about MechInterp
@francescortu.bsky.social
NLP & Interpretability | PhD Student @ University of Trieste & Laboratory of Data Engineering of Area Science Park | Prev MPI-IS
@stevemacn.bsky.social
Assistant Professor, HCI Lab Director, Temple University Currently passionate about computing education, assistive technology, and undergraduate research.
@iaugenstein.bsky.social
Professor at the University of Copenhagen. Explainable AI, Natural Language Processing, ML. Head of copenlu.bsky.social lab. #NLProc #NLP #XAI http://isabelleaugenstein.github.io/
@sbordt.bsky.social
Understanding LLMs. Interpretable Machine Learning. Postdoc @ Uni Tuebingen. https://sbordt.github.io/
@sraval.bsky.social
Physics, Visualization and AI PhD @ Harvard | Embedding visualization and LLM interpretability | Love pretty visuals, math, physics and pets | Currently into manifolds Wanna meet and chat? Book a meeting here: https://zcal.co/shivam-raval
@adrhill.bsky.social
PhD student at @bifold.berlin, Machine Learning Group, TU Berlin. Automatic Differentiation, Explainable AI and #JuliaLang. Open source person: adrianhill.de/projects
@neuripsconf.bsky.social
The Thirty-Eighth Annual Conference on Neural Information Processing Systems will be held in Vancouver Convention Center, on Tuesday, Dec 10 through Sunday, Dec 15. https://neurips.cc/
@zeynepakata.bsky.social
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
@kirillbykov.bsky.social
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
@mateuszpach.bsky.social
ELLIS PhD Student @ TU Munich and Helmholtz AI 🔍⚙️ Interpretability 🖼️📚 Multimodal ML ✨🎨 Generative AI
@kbeckh.bsky.social
Data Scientist at Fraunhofer IAIS PhD Student at University of Bonn Lamarr Institute XAI, NLP, Human-centered AI
@dggoldst.bsky.social
Senior Principal Research Manager at Microsoft Research NYC. Economics and Computation Group. Distinguished Scholar at Wharton.
@tiagotorrent.com
Cognitive Linguist doing research on Natural Language Processing with Frames and Constructions at FrameNetBrasil and GlobalFrameNet (he/him). https://tiagotorrent.com
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@hectorkohler.bsky.social
PhD student in interpretable reinforcement learning at Inria Scool. http://Kohlerhector.github.io/homepage/
@domoritz.de
Visualization, data, AI/ML. Professor at CMU (@dig.cmu.edu, @hcii.cmu.edu) and researcher at Apple. Also sailboats ⛵️ and chocolate 🍫. www.domoritz.de
@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
@dorsarohani.bsky.social
Deep learning @ NVIDIA, Vector. prev @ DeepGenomics dorsarohani.com
@variint.bsky.social
Lost in translation | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕 variint.github.io
@marvinschmitt.bsky.social
🇪🇺 AI/ML, Member @ellis.eu 🤖 Generative NNs, ProbML, Uncertainty Quantification, Amortized Inference, Simulation Intelligence 🎓 PhD+MSc CS, MSc Psych 🏡 marvinschmitt.github.io ✨ On the job market, DMs open 📩
@stephaniebrandl.bsky.social
Assistant Professor in NLP (Fairness, Interpretability and lately interested in Political Science) at the University of Copenhagen ✨ Before: PostDoc in NLP at Uni of CPH, PhD student in ML at TU Berlin
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io Postdoc @ai2.bsky.social & @uwnlp.bsky.social PhD from Georgia Tech Views my own, not my employer's.
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab in Columbia University Working in mechanistic interpretability, nlp, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@jkminder.bsky.social
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience Thoughts & opinions are my own and do not necessarily represent my employer.
@romapatel.bsky.social
research scientist @deepmind. language & multi-agent rl & interpretability. phd @BrownUniversity '22 under ellie pavlick (she/her) https://roma-patel.github.io