David Atkinson
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability.
diatkinson.github.io
@hibaahsan.bsky.social
PhD student @ Northeastern University, Clinical NLP https://hibaahsan.github.io/ she/her
@hyesunyun.bsky.social
PhD candidate in CS at Northeastern University | NLP + HCI for health | she/her 🏃‍♀️🧅🌈
@yidachen.bsky.social
CS PhD student at Harvard. Interested in Interpretability 🔍, Visualizations 📊, Human-AI Interaction 🧍🤖. All opinions are mine. https://yc015.github.io/
@juliusad.bsky.social
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
@maximemeloux.bsky.social
PhD student @LIG | Causal abstraction, interpretability & LLMs
@claudiashi.bsky.social
machine learning, causal inference, science of llm, ai safety, phd student @bleilab, keen bean https://www.claudiashi.com/
@timhua.bsky.social
Helping people is good, I guess. Trying to do AI interp and control. Used to do economics. timhua.me
@koyena.bsky.social
CS Ph.D. Candidate @ Northeastern | Interpretability + Data Science | BS/MS @ Brown koyenapal.github.io
@benstew.bsky.social
Research Fellow at Open Philanthropy, on catastrophic risks from AI and biology. Own views. 🔸 giving 10% of my lifetime income to effective charities via Giving What We Can
@lynettebye.bsky.social
@bshlgrs.bsky.social
@vidurkapur.bsky.social
Superforecaster at Good Judgment. Also forecasting at Swift Centre, Samotsvety, RAND and a hedge fund. Impartial beneficence enthusiast.
@ryancbriggs.net
Raising kids & bread & grant money. Cleaning data & diapers & fish. EA (bed nets, not light cone). Social scientist. typos. twitter.com/ryancbriggs
@thetetra.space
@carlrobi.bsky.social
Program Officer on nuclear policy at Longview Philanthropy (http://longview.org). Opinions are my own.
@astralcodexten.com.web.brid.gy
P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary. Click to read Astral Codex Ten, by Scott Alexander, a […] [bridged from astralcodexten.com on the web: https://fed.brid.gy/web/astralcodexten.com ]
@sebfar.bsky.social
Senior Research Scientist at Google DeepMind. AGI Alignment researcher. Views my dog's.
@garrisonlovely.bsky.social
Omidyar Network - Reporter in Residence + freelance journalist. Covers: The Nation, Jacobin. Bylines: NYT, Nature, BBC, Guardian, TIME, The Verge, Vox, Thomson Reuters Foundation, + others.
@rossaokod.bsky.social
Research @ Open Philanthropy. Formerly economist at GPI / Nuffield College, Oxford. Interests: development econ, animal welfare, global catastrophic risks
@aarongertler.bsky.social
Comms officer @ Open Philanthropy, former Magic pro, webfiction connoisseur. https://aarongertler.net/
@aaronbergman18.bsky.social
👎: suffering | 👍: EA, AI alignment, decoupling, R, cringe, amateur pharmacology + programming | Georgetown '22 (math+econ+phil) | Career status: 🤷‍♂️
@weden.bsky.social
@ankareuel.bsky.social
Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own
@binksmith.com
Building tools for forecasting and understanding AI at https://sage-future.org 🔭 Effective altruism! https://binksmith.com
@epochai.bsky.social
We are a research institute investigating the trajectory of AI for the benefit of society. epoch.ai
@yonashav.bsky.social
policy for v smart things @openai. Past: PhD @HarvardSEAS/@SchmidtFutures/@MIT_CSAIL. Posts my own; on my head be it
@metr.org
METR is a research nonprofit that builds evaluations to empirically test AI systems for capabilities that could threaten catastrophic harm to society.
@elifland.bsky.social
@tobyord.bsky.social
Senior Researcher at Oxford University. Author — The Precipice: Existential Risk and the Future of Humanity. tobyord.com
@trevorlevin.bsky.social
Trying to help the world navigate the potential craziness of the 21st century, currently via AI Governance and Policy at @openphil; dad rock enjoyer; he/him