Natalie Shapira
Tell me about challenges, the unbelievable, the human mind and artificial intelligence, thoughts, social life, family life, science and philosophy.
@lampinen.bsky.social
Interested in cognition and artificial intelligence. Research Scientist at Google DeepMind. Previously cognitive science at Stanford. Posts are mine. lampinen.github.io
@pkhdipraja.bsky.social
PhD student @ Fraunhofer HHI. Interpretability, incremental NLP, and NLU. https://pkhdipraja.github.io/
@lambdaviking.bsky.social
Will irl - PhD student @ NYU on the academic job market! Using complexity theory and formal languages to understand the power and limits of LLMs https://lambdaviking.com/ https://github.com/viking-sudo-rm
@andreasopedal.bsky.social
PhD student at ETH Zurich & MPI-IS in NLP & ML Language, Reasoning and Cognition https://opedal.github.io
@datatherapist.bsky.social
#NLP / #NLProc , #dataScience, #AI / #ArtificialIntelligence, #linguistics (#syntax, #semantics, …), occasional #parenting, #gardening, & what not. PhD. Adjunct prof once in a full red moon. Industry / technical mentor. Not my opinion, never my employer’s
@nfel.bsky.social
Post-doctoral Researcher at BIFOLD / TU Berlin interested in interpretability and analysis of language models. Guest researcher at DFKI Berlin. https://nfelnlp.github.io/
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@qiaw99.bsky.social
First-year PhD student at XplaiNLP group @TU Berlin: interpretability & explainability Website: https://qiaw99.github.io
@lorenzlinhardt.bsky.social
PhD Student at the TU Berlin ML group + BIFOLD Model robustness/correction 🤖🔧 Understanding representation spaces 🌌✨
@ribana.bsky.social
Professor of Data Science for Crop Systems at Forschungszentrum Jülich and University of Bonn Working on Explainable ML🔍, Data-centric ML🐿️, Sustainable Agriculture🌾, Earth Observation Data Analysis🌍, and more...
@danielsc4.it
🦖 PhD student, Interpretability & NLP @unimib 🇮🇹 & @gronlp.bsky.social 🇳🇱 danielsc4.it
@farnoushrj.bsky.social
ML Ph.D. Candidate @tuberlin.bsky.social and @bifold.berlin | Explainable AI, Interpretability, Efficient Machine Learning farnoushrj.github.io
@sparsity.bsky.social
Professor of Machine Learning at TU Berlin, group leader at PTB. Lab account: @qailabs.bsky.social. @[email protected] tu.berlin/uniml/about/head-of-group
@naturecomputes.bsky.social
Searching for principles of neural representation | Neuro + AI @ enigmaproject.ai | Stanford | sophiasanborn.com
@ruizheli.bsky.social
Assistant Professor at University of Aberdeen | Postdoc at UCL | PhD at University of Sheffield | mechanistic interpretability & multimodal LLMs | https://www.ruizhe.space
@gowthami.bsky.social
PhD-ing at UMD. Knows a little about multimodal generative models. Check out my website to know more - https://somepago.github.io/
@sarahooker.bsky.social
I lead Cohere For AI. Formerly research at Google Brain. ML Efficiency, LLMs, @trustworthy_ml.
@rosanneliu.com
Founder, executive, community builder, organizer & researcher at ML Collective (mlcollective.org). Google DeepMind. rosanneliu.com
@judyh.bsky.social
Prof at Georgia Tech https://faculty.cc.gatech.edu/~judy/ Machine Learning and Computer Vision Researcher
@mimansaj.bsky.social
Robustness, Data & Annotations, Evaluation & Interpretability in LLMs http://mimansajaiswal.github.io/
@ronisen.bsky.social
Asst. Prof. UNC Chapel Hill CS Computer Vision & Graphics. https://www.cs.unc.edu/~ronisen/
@hildekuehne.bsky.social
Professor for CS at the Tuebingen AI Center and affiliated Professor at MIT-IBM Watson AI lab - Multimodal learning and video understanding - GC for ICCV 2025 - https://hildekuehne.github.io/
@alexiajm.bsky.social
AI Researcher at the Samsung SAIT AI Lab 🐱💻 I build generative models for images, videos, text, tabular data, NN weights, molecules, and now video games!
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
@haileyjoren.bsky.social
PhD Student @ UC San Diego Researching reliable, interpretable, and human-aligned ML/AI
@eml-munich.bsky.social
Institute for Explainable Machine Learning at @www.helmholtz-munich.de and Interpretable and Reliable Machine Learning group at Technical University of Munich and part of @munichcenterml.bsky.social
@zootime.bsky.social
I work on explainable AI at a German research facility.
@juffi-jku.bsky.social
Researcher in Machine Learning & Data Mining; Professor of Computational Data Analytics @jkulinz.bsky.social, Austria.
@juliusad.bsky.social
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
@elglassman.bsky.social
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.
@chhaviyadav.bsky.social
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml chhaviyadav.org
@lesiasemenova.bsky.social
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
@csinva.bsky.social
Senior researcher at Microsoft Research. Seeking good explanations with machine learning https://csinva.io/
@tmiller-uq.bsky.social
Professor in Artificial Intelligence, The University of Queensland, Australia Human-Centred AI, Decision support, Human-agent interaction, Explainable AI https://uqtmiller.github.io
@umangsbhatt.bsky.social
Incoming Assistant Professor @ University of Cambridge. Responsible AI. Human-AI Collaboration. Interactive Evaluation. umangsbhatt.github.io
@stefanherzog.bsky.social
Senior Researcher @arc-mpib.bsky.social MaxPlanck @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, comput. social science, misinfo; stefanherzog.org scienceofboosting.org
@fionaewald.bsky.social
PhD Student @ LMU Munich Munich Center for Machine Learning (MCML) Research in Interpretable ML / Explainable AI
@ryanchankh.bsky.social
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern@Apple.
@pedroribeiro.bsky.social
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
@lowd.bsky.social
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
@gully.bsky.social
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
@harmankaur.bsky.social
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@kgajos.bsky.social
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.