@intelligence.org
The Machine Intelligence Research Institute (MIRI) exists to maximize the probability that the creation of smarter-than-human intelligence has a positive impact.
@ronak69.bsky.social
i believe the propaganda i read u believe the propaganda u read x.com/ronax69
@aisupremacy.bsky.social
Canadian in Taiwan. Emerging tech writer and analyst with a flagship newsletter called A.I. Supremacy reaching 115k readers. Also watching Semis, China, robotics, Quantum, BigTech, open-source AI and Gen AI tools. https://www.ai-supremacy.com/archive
@mlamparth.bsky.social
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretable, safe, and ethical AI decision-making.
@diatkinson.bsky.social
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability. diatkinson.github.io
@roman.technology
CS undergrad at UT Dallas trying to help the singularity go well. Into software engineering, Effective Altruism, AI research, weightlifting, personal knowledge management, consciousness, and longevity. https://roman.technology
@yulislavutsky.bsky.social
Stats Postdoc at Columbia, @bleilab.bsky.social Statistical ML, Generalization, Uncertainty, Empirical Bayes https://yulisl.github.io/
@claudiashi.bsky.social
machine learning, causal inference, science of llm, ai safety, phd student @bleilab, keen bean https://www.claudiashi.com/
@pbarnett.bsky.social
Trying to ensure the future is bright. Technical governance research at MIRI
@lauraruis.bsky.social
PhD supervised by Tim Rocktäschel and Ed Grefenstette, part time at Cohere. Language and LLMs. Spent time at FAIR, Google, and NYU (with Brenden Lake). She/her.
@orpheuslummis.info
Building software & events for AI safety, collective intelligence, civ resilience – https://orpheuslummis.info – 📍Montréal
@notthatcomplicated.bsky.social
computer science, politics, a few opinions here and there. i don't always think them through
@tarasteele.bsky.social
AI safety for children | Founder, The Safe AI for Children Alliance | Exploring AI’s potential for both harm and good! (Please note that my BlueSky direct messages are not always checked regularly)
@fedeadolfi.bsky.social
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science https://fedeadolfi.github.io
@martinagvilas.bsky.social
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. 🇦🇷 living in 🇩🇪, she/her https://martinagvilas.github.io/
@jbarbosa.org
Junior PI @ INM (Paris) in computational neuroscience, interested in how computations enabling cognition are distributed across brain areas. Expect neuroscience and ML content. jbarbosa.org
@kylem.bsky.social
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT ‘20.
@annarogers.bsky.social
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
@jeku.bsky.social
Postdoc at Linköping University🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into🌋🏕️🚴.
@markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
@sejdino.bsky.social
Professor of Statistical Machine Learning at the University of Adelaide. https://sejdino.github.io/
@thomasfel.bsky.social
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me
@panisson.bsky.social
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
@apepa.bsky.social
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
@sarah-nlp.bsky.social
Research in LM explainability & interpretability since 2017. sarahwie.github.io Postdoc @ai2.bsky.social & @uwnlp.bsky.social PhD from Georgia Tech Views my own, not my employer's.
@swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University. Working in mechanistic interpretability, NLP, causal inference, and probabilistic modeling! Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams. 🔗 www.sweta.dev
@velezbeltran.bsky.social
Machine Learning PhD Student @ Blei Lab & Columbia University. Working on probabilistic ML | uncertainty quantification | LLM interpretability. Excited about everything ML, AI and engineering!
@mdhk.net
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam 🌐 https://mdhk.net/ 🐘 https://scholar.social/@mdhk 🐦 https://twitter.com/mariannedhk
@anneo.bsky.social
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈 https://anne-oeldorf-hirsch.uconn.edu
@aliciacurth.bsky.social
Machine Learner by day, 🦮 Statistician at ❤️ In search of statistical intuition for modern ML & simple explanations for complex things👀 Interested in the mysteries of modern ML, causality & all of stats. Opinions my own. https://aliciacurth.github.io
@jkminder.bsky.social
CS student at ETH Zürich, currently doing my master's thesis at the DLAB at EPFL. Mainly interested in Language Model Interpretability. Most recent work: https://openreview.net/forum?id=Igm9bbkzHC MATS 7.0 Winter 2025 Scholar w/ Neel Nanda jkminder.ch
@elianapastor.bsky.social
Assistant Professor at PoliTo 🇮🇹 | Currently visiting scholar at UCSC 🇺🇸 | she/her | TrustworthyAI, XAI, Fairness in AI https://elianap.github.io/
@dilya.bsky.social
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany dilyabareeva.github.io
@rachel-law.bsky.social
Organic machine turning tea into theorems ☕️ AI @ Microsoft Research ➡️ Goal: Teach models (and humans) to reason better Let’s connect re: AI for social good, graphs & network dynamics, discrete math, logic 🧩, 🥾,🎨 Organizing for democracy.🗽 www.rlaw.me
@peyrardmax.bsky.social
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU https://peyrardm.github.io
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@fionaewald.bsky.social
PhD Student @ LMU Munich | Munich Center for Machine Learning (MCML) | Research in Interpretable ML / Explainable AI
@simonschrodi.bsky.social
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg 💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science https://simonschrodi.github.io/
@fernbear.bsky.social
Neural network speedrunner and community-funded open source researcher. Set the CIFAR-10 record several times. Send me consulting/contracting work! she/they❤️
@marcmarone.com
PhD student at JHU. @Databricks MosaicML, Microsoft Semantic Machines/Translate, Georgia Tech. I like datasets! https://marcmarone.com/
@kesnet50.bsky.social
PhD candidate in NLP, CV at JHU. Previously robotics at UC Berkeley. I work on video-language understanding, transparent reasoning, information extraction, & uncertainty. #NLProc