Zhaofeng Lin
PhD student @Trinity College Dublin | Multimodal speech recognition
https://chaufanglin.github.io/
@rogerkmoore.bsky.social
Professor of Spoken Language Processing, runner & photographer. Editor-in-Chief of Computer Speech and Language.
@icmi2025official.bsky.social
ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research.
@msoleymani.bsky.social
Research associate professor of computer science at USC; Affective Computing, social AI, multimodal stuff
@ejsorourke.bsky.social
#neural #FPGA #inniskeen #astronomy #semiconductor #physics #asic #vlsi #radio #sdr 🔭 📻 🛰 📡 🪐 🖖 ⛵️ RT ≠ endorsement - My opinions, not my employer's.
@dklement.bsky.social
Speech Researcher @ BUT SPEECH. Visiting student @ CLSP, Johns Hopkins University. GitHub: https://github.com/domklement LinkedIn: https://www.linkedin.com/in/dominik-klement/
@mathfontaine.bsky.social
Associate professor at Télécom Paris in machine listening and audio applied to extended reality
@reihanehamooie.bsky.social
Speech/Language enthusiast. PhD student at Rijksuniversiteit Groningen (SLG: Speech Lab Groningen). Tehran
@yoavgo.bsky.social
@hildekuehne.bsky.social
Professor of CS at the Tuebingen AI Center and affiliated Professor at MIT-IBM Watson AI lab - Multimodal learning and video understanding - GC for ICCV 2025 - https://hildekuehne.github.io/
@wanchichen.bsky.social
PhD Student @ltiatcmu.bsky.social I work in speech processing. wanchichen.github.io
@larryniven4.bsky.social
Lecturer at the University of Edinburgh. Member of the Centre for Speech Technology Research (CSTR).
@cdminix.bsky.social
PhD Student @ University of Edinburgh. Working on Synthetic Speech Evaluation at the moment. 🇳🇴 Oslo 🏴 Edinburgh 🇦🇹 Graz
@zouharvi.bsky.social
PhD student @ ETH Zürich | all aspects of NLP but mostly evaluation and MT | go vegan | https://vilda.net
@beluticona.bsky.social
CS PhD student @GeorgeMasonU @ComputacionUBA NLP, Speech &🤎 Language Technologies for Crisis Response, AI + Indigenous People 🌱 http://beluticona.github.io
@eyeo1.bsky.social
https://eunjung31.github.io/ Visiting Scholar at ChangeLingLab, LTI, CMU, specializing in computational phonetics and phonology with a particular focus on clinical speech analysis and applications.
@jiaruizhang.bsky.social
USC CS Ph.D. student. Prev. Tsinghua Uni. NLP, Multimodal Learning, AI for Science https://saccharomycetes.github.io/
@badralabsi.bsky.social
Computational Linguistics, Speech Technology Postdoc @ Saarland University 🦉
@tanelalumae.bsky.social
Associate Professor of Speech Processing, Tallinn University of Technology, Estonia
@delphine-charuau.bsky.social
PhD in Language Sciences and Phonetics. Postdoctoral researcher at Trinity College Dublin. Dublin
@wietsedv.nl
Postdoc on low-resource speech tech at the University of Groningen 🇳🇱 🐝 🍯 🍄 🧗 ⛸️ 🐈 📕 https://wietsedv.nl @gronlp.bsky.social
@faroit.bsky.social
AudioML research scientist at https://audioshake.ai, before: post-doc @inria@social.numerique.gouv.fr, Editor at https://bsky.app/profile/joss-openjournals.bsky.social All in 17.68% of grey, located in Frankfurt (Germany)
@gallilmaimon.bsky.social
PhD student @CseHuji; Audio Processing, Speech Language Modelling
@grzegorz.chrupala.me
Speech • Language • Learning https://grzegorz.chrupala.me @ Tilburg University
@docmilanfar.bsky.social
Distinguished Scientist at Google. Computational Imaging, Machine Learning, and Vision. Posts are personal opinions. May change or disappear over time. http://milanfar.org
@shinjiw.bsky.social
I'm working at CMU (2021-). Previously, I worked at NTT (2001-2011), MERL (2012-2017), and JHU (2017-2020). Speech and Audio Processing is my main research topic.
@jordiponsdotme.bsky.social
Music, audio, and deep learning research at Stability AI ~ Building bridges between audio signal processing wisdom and deep learning. artintech.substack.com www.jordipons.me
@arxiv-sound.bsky.social
Automated posting of sound-related articles uploaded to arxiv.org (eess.AS + cs.SD) Source: https://github.com/dsuedholt/bsky-paperbot-sound/ Inspired by @paperposterbot.bsky.social and https://twitter.com/ArxivSound
@rdesh26.bsky.social
Research Scientist @ Meta GenAI in NYC. Working on audio/speech for LLaMA. Previously: PhD @ JHU CLSP desh2608.github.io
@avsp.bsky.social
The official(ish) account of the Auditory-Visual Speech Association (AVISA) AV 👄 👓 speech references, but mostly what interests me avisa.loria.fr
@catlai.bsky.social
Lecturer in speech and language technology, CSTR, University of Edinburgh. https://homepages.inf.ed.ac.uk/clai/
@jonathanleroux.bsky.social
Speech and audio research scientist @MERL. saneworkshop.org co-founder. IguanaTex developer. 🌐 jonathanleroux.org 🐙 github.com/Jonathan-LeRoux/ 🎓 scholar.google.com/citations?user=aUpxty8AAAAJ&hl=en
@odettes.bsky.social
Associate professor of inclusive speech technology at TU Delft, The Netherlands. President of the International Speech Communication Association (ISCA). General Chair of @interspeech.bsky.social Rotterdam, 2025. Mother of 3🌈
@interspeech.bsky.social
Welcome to the 26th Interspeech Conference, the premier global event on spoken language processing technology, held August 17-21, 2025, in Rotterdam, NL.