Tristan Josef Legg
Researching Deep Reinforcement Learning at Stellenbosch University
@finbarr.bsky.social
building the future of research at midjourney, deepmind. slinging ai hot takes 🥞 at artfintel.com
@babak-heydari.bsky.social
Associate Prof at Northeastern University CoE and Network Science Institute | Building models and doing Interdisciplinary Research (AI/Network Science/Sociotechnical Systems)
@modanesh.bsky.social
CS PhD @mcgill.ca and @mila-quebec.bsky.social, working on 🍒 and 🤖 stuff. Ex: @LetsUnifyAI, @NUSComputing, @EngineeringOSU. https://modanesh.github.io/ 📍 Montreal, Canada
@mircomutti.bsky.social
Reinforcement learning, but without rewards. Postdoc at the Technion. PhD from Politecnico di Milano. https://muttimirco.github.io
@rl-conference.bsky.social
Information and updates about RLC 2025 at the University of Alberta from Aug. 5th to 8th! https://rl-conference.cc
@dylancope.bsky.social
Researching multi-agent RL, emergent communication, and evolutionary computation. Incoming Postdoc at FLAIR Oxford. PhD from Safe and Trusted AI CDT @ KCL/Imperial. Previously visiting researcher at CHAI U.C. Berkeley. dylancope.com he/him London 🇬🇧
@pcastr.bsky.social
Señor swesearcher @ Google DeepMind, adjunct prof at Université de Montréal and Mila. Musician. From 🇪🇨 living in 🇨🇦. https://psc-g.github.io/
@djfoster.bsky.social
Principal Researcher in AI/ML/RL Theory @ Microsoft Research NE/NYC. Previously @ MIT, Cornell. http://dylanfoster.net RL Theory Lecture Notes: https://arxiv.org/abs/2312.16730
@clarkai.bsky.social
PhD student studying AI for tactical decision making in complex ocean environments @uniofbath.bsky.social. Distinctly amateur cyclist, triathlete, sailor, climber. https://scholar.google.com/citations?user=WE-zZMoAAAAJ&hl=en
@ftudisco.bsky.social
Machine Learning @ University of Edinburgh | AI4Science | optimization | numerics | networks | co-founder @ MiniML.ai | ftudisco.gitlab.io
@dtiapkin.bsky.social
PhD student at École polytechnique and Université Paris-Saclay 🇫🇷. Reinforcement learning enjoyer, sometimes even with human feedback. Ex-student-researcher at Google DeepMind Paris. 🌐 https://d-tiapkin.github.io/
@vinfl.bsky.social
Assistant Professor in machine learning @VUAmsterdam. Abstract representations + reinforcement learning.
@rupspace.bsky.social
AI Researcher @ NNAISENSE. (Co)developed Highway Networks, Upside-Down RL, Bayesian Flow Networks, EvoTorch 📜 Learning is compression https://rupeshks.cc/
@akanksha-saran.bsky.social
AI. RL. Robots+Humans. Building general purpose agents. Research Scientist in the Gaming and Interactive Agents Group at Sony AI. Prev: MSFT Research, UT Austin, CMU, IIT Jodhpur. https://scholar.google.com/citations?user=zZhWSQ0AAAAJ&hl=en
@dabelcs.bsky.social
Researcher @ Google DeepMind and Honorary Fellow @ U of Edinburgh. RL, philosophy, foundations, AI. https://david-abel.github.io
@daniel-brown.bsky.social
CS assistant prof @Utah. Researches human-robot interaction, human-in-the-loop ML, AI safety and alignment. https://users.cs.utah.edu/~dsbrown/
@mwulfmeier.bsky.social
Large-Scale Robot Decision Making @GoogleDeepMind. European @ELLISforEurope. Imitation, interaction, transfer. Priors: @oxfordrobots @berkeley_ai @ETH @MIT
@florentdelgrange.bsky.social
Postdoc @ AI lab, Vrije Universiteit Brussel, working on reliable and verifiable AI mechanisms. #RL & formal methods. delgrange.me
@cong-ml.bsky.social
Research Scientist @ Google DeepMind, working on open-ended learning and AI for Scientific Discovery.
@katjahofmann.bsky.social
At Microsoft Research. Lead of https://aka.ms/game-intelligence - we drive innovation in machine learning with applications in games. ICLR board member (https://iclr.cc).
@miguelsuau.bsky.social
Machine Teacher. Research Scientist at Phaidra. PhD from TU Delft. Previously JP Morgan, Huawei, Unity. https://www.suau.io/
@cathywu.bsky.social
MIT Associate Professor | AI & Transportation. Using machine learning, optimization, and reinforcement learning to empower sociotechnical decision makers. A bit wary of tech hype. http://www.wucathy.com
@marloscmachado.bsky.social
Assistant Professor at the University of Alberta. Amii Fellow, Canada CIFAR AI chair. Machine learning researcher. All things reinforcement learning. 📍 Edmonton, Canada 🇨🇦 🔗 https://webdocs.cs.ualberta.ca/~machado/ 🗓️ Joined November, 2024
@jfisac.bsky.social
Assistant Professor @ Princeton ECE. Safe Human-Centered Robotics and AI.
@skiandsolve.bsky.social
⛷️ ML Theorist carving equations and mountain trails | 🚴‍♂️ Biker, Climber, Adventurer | 🧠 Reinforcement Learning: Always seeking higher peaks, steeper walls and better policies. https://ualberta.ca/~szepesva
@audurand.bsky.social
Associate professor @ Université Laval - IID - Mila. Interested in reinforcement learning, bandits, partial monitoring, active learning, ... anything that learns by getting its own data from the environment!
@hardmaru.bsky.social
I work at Sakana AI 🐟🐠🐡 → @sakanaai.bsky.social https://sakana.ai/careers
@harshitsikchi.bsky.social
I study Reinforcement Learning. PhD from UT Austin. Previously FAIR Paris, Meta US, NVIDIA, CMU, and IIT Kharagpur. Website: https://hari-sikchi.github.io/
@stonet2000.bsky.social
PhDing @UCSanDiego @HaoSuLabUCSD @hillbot_ai on scalable robot learning, reinforcement learning, and embodied AI. Co-founded @LuxAIChallenge to build AI competitions. @NSF GRFP fellow http://stoneztao.com
@yus167.bsky.social
PhD at Machine Learning Department, Carnegie Mellon University | Interactive Decision Making | https://yudasong.github.io
@willemropke.bsky.social
PhD student | Interested in all things decision-making and learning
@upiter.bsky.social
PhD at NYU studying reasoning, decision-making, and open-endedness. MIT alum | prev: Google, MSR, MIT CoCoSci. https://upiterbarg.github.io/
@ahana.bsky.social
Reinforcement Learning PhD student, UPF Barcelona. Uncertain in the face of optimism. ahanadeb.github.io
@orrkrup.bsky.social
Robot Learning Research | Prev: PhD @Technion More data isn't all we need 🔭🦾 🌍
@amsks96.bsky.social
PhD student working on generalization and state abstractions in #RL, #MetaLearning, and #AutoRL. amsks.github.io
@theeimer.bsky.social
RL researcher looking for DACs // What is this AutoRL anyway? she/her. Currently: Leibniz Uni Hannover. Previously: Uni Freiburg (Master's) | Meta AI London (Intern). Always & Forever: AutoRL.org
@allenanie.bsky.social
Stanford CS PhD working on RL and LLMs with Emma Brunskill and Chris Piech. Co-creator of Trace. Prev @GoogleDeepMind @MicrosoftResearch. Specifically: offline RL, in-context RL, causality. https://anie.me/about Unverified hot takes go to this account.
@cvoelcker.bsky.social
Reinforcement Learning @ UofT/Vector Institute, political agitation @ Queer in AI If I seem very angry, check if I have been watered in the last 24 hours. For professional, see https://cvoelcker.de
@marcelhussing.bsky.social
PhD student at the University of Pennsylvania; currently an intern at MSR. Interested in reliable and replicable reinforcement learning and using it for knowledge discovery: https://marcelhussing.github.io/ All posts are my own.
@artemzholus.bsky.social
Visiting Researcher at Meta; PhD student @mila.quebec. Ex: Intern @GoogleDeepMind, Intern @ EPFL, MSc@MIPT; artemzholus.github.io
@onnoeberhard.com
PhD Student in Tübingen (MPI-IS & Uni Tü), interested in reinforcement learning. Ex research intern at Google Research. https://onnoeberhard.com/
@cassidylaidlaw.bsky.social
PhD student at UC Berkeley studying RL and AI safety. https://cassidylaidlaw.com
@yihe-deng.bsky.social
CS PhD candidate @UCLA | Prev. Research Intern @MSFTResearch, Applied Scientist Intern @AWS | LLM post-training, multi-modal learning https://yihedeng9.github.io
@hyperpotatoneo.bsky.social
PhD student at Mila | Diffusion models and reinforcement learning 🧐 | hyperpotatoneo.github.io