J. Llarena
Independent Researcher, NLP/ASR Engineer looking for a PhD position in Computational Neuro/Psycho/linguistics. He/him.
josellarena.github.io
@sussillodavid.bsky.social
Neural reverse engineer, scientist at Meta Reality Labs, Adjunct Prof at Stanford.
@benjamingagl.bsky.social
Assistant Professor for Self Learning Systems @UniCologne #Reading #NeuroCognition #ComputationalModels https://selflearningsystems.uni-koeln.de/
@angie-chen.bsky.social
@_angie_chen at the other place PhD student @NYU, formerly at @Princeton 🐅 Interested in LLMs/NLP, pastries, and running. She/her.
@nouhadziri.bsky.social
Research Scientist at Ai2, PhD in NLP 🤖 UofA. Ex GoogleDeepMind, MSFTResearch, MilaQuebec https://nouhadziri.github.io/
@nfel.bsky.social
Post-doctoral Researcher at BIFOLD / TU Berlin interested in interpretability and analysis of language models. Guest researcher at DFKI Berlin. https://nfelnlp.github.io/
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
@andreasmadsen.bsky.social
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
@qiaw99.bsky.social
First-year PhD student at XplaiNLP group @TU Berlin: interpretability & explainability. Website: https://qiaw99.github.io
@lorenzlinhardt.bsky.social
PhD Student at the TU Berlin ML group + BIFOLD. Model robustness/correction 🤖🔧 Understanding representation spaces 🌌✨
@ribana.bsky.social
Professor of Data Science for Crop Systems at Forschungszentrum Jülich and University of Bonn Working on Explainable ML🔍, Data-centric ML🐿️, Sustainable Agriculture🌾, Earth Observation Data Analysis🌍, and more...
@sparsity.bsky.social
Professor of Machine Learning at TU Berlin, group leader at PTB. Lab account: @qailabs.bsky.social. @[email protected] tu.berlin/uniml/about/head-of-group
@farnoushrj.bsky.social
ML Ph.D. Candidate @tuberlin.bsky.social and @bifold.berlin | Explainable AI, Interpretability, Efficient Machine Learning farnoushrj.github.io
@andreasopedal.bsky.social
PhD student at ETH Zurich & MPI-IS in NLP & ML Language, Reasoning and Cognition https://opedal.github.io
@berkustun.bsky.social
Assistant Prof at UCSD. I work on interpretability, fairness, and safety in ML. www.berkustun.com
@haileyjoren.bsky.social
PhD Student @ UC San Diego Researching reliable, interpretable, and human-aligned ML/AI
@eml-munich.bsky.social
Institute for Explainable Machine Learning at @www.helmholtz-munich.de and Interpretable and Reliable Machine Learning group at Technical University of Munich and part of @munichcenterml.bsky.social
@zootime.bsky.social
I work on explainable AI at a German research facility
@juliusad.bsky.social
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
@chhaviyadav.bsky.social
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml chhaviyadav.org
@lesiasemenova.bsky.social
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
@csinva.bsky.social
Senior researcher at Microsoft Research. Seeking good explanations with machine learning https://csinva.io/
@tmiller-uq.bsky.social
Professor in Artificial Intelligence, The University of Queensland, Australia Human-Centred AI, Decision support, Human-agent interaction, Explainable AI https://uqtmiller.github.io
@umangsbhatt.bsky.social
Incoming Assistant Professor @ University of Cambridge. Responsible AI. Human-AI Collaboration. Interactive Evaluation. umangsbhatt.github.io
@ryanchankh.bsky.social
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern@Apple.
@pedroribeiro.bsky.social
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
@lowd.bsky.social
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
@gully.bsky.social
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
@harmankaur.bsky.social
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
@zbucinca.bsky.social
PhD Candidate @Harvard; Human-AI Interaction, Responsible AI zbucinca.github.io
@kgajos.bsky.social
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.
@friedler.net
CS prof at Haverford, former tech policy at OSTP, research on fairness, accountability, and transparency of ML, @facct.bsky.social co-founder Also at: [email protected] 🦣 (formerly @kdphd 🐦) sorelle.friedler.net
@stephmilani.bsky.social
PhD Student in Machine Learning at CMU. On the academic job market! 🐦 twitter.com/steph_milani 🌐 stephmilani.github.io
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
@michaelhind.bsky.social
IBM Distinguished RSM, working on AI transparency, governance, explainability, and fairness. Proud husband & dad, Soccer lover. Posts are my own.
@henstr.bsky.social
Senior Research Scientist at IBM Research and Explainability lead at the MIT-IBM AI Lab in Cambridge, MA. Interested in all things (X)AI, NLP, Visualization. Hobbies: Social chair at #NeurIPS, MiniConf, Mementor. http://hendrik.strobelt.com
@glima.bsky.social
PhD Researcher at #MPI_SP | MS and BS at KAIST | AI ethics, HCI, justice, accountability, fairness, explainability | he/him http://thegcamilo.github.io/
@asaakyan.bsky.social
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience asaakyan.github.io
@loradrian.bsky.social
RE at Instadeep, PhD in computational neuroscience, MSc in CS, interested in ML for life sciences.
@harrycheon.bsky.social
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC 18' 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
@wattenberg.bsky.social
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
@eberleoliver.bsky.social
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪 Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science
@iislucas.bsky.social
Machine learning, interpretability, visualization, Language Models, People+AI research
@fatemehc.bsky.social
PhD student at Utah NLP, Human-centered Interpretability, Trustworthy AI
@dhadfieldmenell.bsky.social
Assistant Prof of AI & Decision-Making @MIT EECS I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL. I work on value (mis)alignment in AI systems. https://people.csail.mit.edu/dhm/