sineadwilliamson.bsky.social
@sineadwilliamson.bsky.social
Pinned
📢 We’re looking for a researcher in cogsci, neuroscience, linguistics, or a related discipline to work with us at Apple Machine Learning Research! We're hiring a one-year interdisciplinary AIML Resident to work on understanding reasoning and decision making in LLMs. 🧵
Reposted
If you are a senior researcher at #NeurIPS2025 (i.e., roughly full professor level or later), and you're interested in moving to the University of Waterloo (best CS program in Canada) for a CERC (biggest chair position in Canada), email me. Discretion guaranteed.
December 3, 2025 at 2:51 PM
Reposted
If you like our @deisenroth.bsky.social #mathematics for #machinelearning textbook, and want a hard copy, Cambridge is providing a 30% discount for #NeurIPS2025.

Of course you can buy other books, and also download the PDF from: mml-book.com

www.cambridge.org/us/universit...
Discount code: 107890
Mathematics for Machine Learning
Companion webpage to the book “Mathematics for Machine Learning”. Copyright 2020 by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong. Published by Cambridge University Press.
mml-book.com
November 26, 2025 at 1:14 AM
📢 We’re looking for a researcher in cogsci, neuroscience, linguistics, or a related discipline to work with us at Apple Machine Learning Research! We're hiring a one-year interdisciplinary AIML Resident to work on understanding reasoning and decision making in LLMs. 🧵
November 7, 2025 at 9:19 PM
Reposted
We have been working with Michal Klein on pushing a module to train *flow matching* models using JAX. This is shipped as part of our new release of the OTT-JAX toolbox (github.com/ott-jax/ott)

The tutorial to do so is here: ott-jax.readthedocs.io/tutorials/ne...
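For intuition on what such a module trains, here is a minimal pure-NumPy sketch of the flow-matching objective itself (this is an illustration of the technique, not the OTT-JAX API; the tutorial above shows the real JAX interface). A velocity model is regressed onto the straight-line velocity between paired source and target samples:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # data dimension

# Toy problem: learn a velocity field transporting N(0, I) to N(3, I).
def sample_pairs(n):
    x0 = rng.normal(size=(n, d))           # source samples
    x1 = rng.normal(loc=3.0, size=(n, d))  # target samples
    return x0, x1

# Linear velocity model v(x, t) = x @ W + t * b, a stand-in for a neural net.
W = np.zeros((d, d))
b = np.zeros(d)
lr = 0.05
losses = []

for step in range(500):
    x0, x1 = sample_pairs(256)
    t = rng.uniform(size=(256, 1))
    xt = (1 - t) * x0 + t * x1  # straight-line interpolant between the pair
    target = x1 - x0            # conditional velocity along that line
    pred = xt @ W + t * b
    err = pred - target
    losses.append(float((err ** 2).mean()))
    # Gradient step on the mean-squared flow-matching loss.
    W -= lr * xt.T @ err / len(xt)
    b -= lr * (t * err).mean(axis=0)

# Generate: integrate the learned velocity field with Euler steps.
x = rng.normal(size=(5, d))
for k in range(20):
    tt = np.full((len(x), 1), k / 20)
    x = x + (x @ W + tt * b) / 20
```

The regression target never requires simulating the flow during training, which is the appeal of flow matching over older continuous normalizing flow training.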
November 5, 2025 at 2:04 PM
Really glad to have been a part of this super cool project... LLMs can verbalize more than just a single confidence number, and we can evaluate their ability to do so!
Many treat uncertainty as just a single number. At Apple, we're rethinking this: LLMs should output strings that reveal the full information in their internal distributions. We find that reasoning, SFT, and CoT can't do it - yet. To get there, we introduce the SelfReflect benchmark.

arxiv.org/pdf/2505.20295
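As a toy illustration of the idea (this is not the paper's metric; the scoring function and names here are hypothetical), one can sample several answers from a model, form the empirical answer distribution, and score a verbalized summary by how much probability mass it explicitly mentions:

```python
from collections import Counter

def answer_distribution(samples):
    """Empirical distribution over sampled answers."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def coverage_score(summary, dist):
    """Probability mass of answers the summary names (naive substring match)."""
    return sum(p for a, p in dist.items() if a.lower() in summary.lower())

# Four sampled answers to the same question.
samples = ["Paris", "Paris", "Paris", "Lyon"]
dist = answer_distribution(samples)  # {'Paris': 0.75, 'Lyon': 0.25}

confident_only = "The answer is Paris."
full_summary = "Probably Paris, though it could also be Lyon."
```

A summary naming only the top answer covers 0.75 of the mass, while one that also verbalizes the alternative covers all of it; the actual benchmark compares summaries to the answer distribution far more carefully, but this is the intuition.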
October 2, 2025 at 7:39 PM
Reposted
Many treat uncertainty as just a single number. At Apple, we're rethinking this: LLMs should output strings that reveal the full information in their internal distributions. We find that reasoning, SFT, and CoT can't do it - yet. To get there, we introduce the SelfReflect benchmark.

arxiv.org/pdf/2505.20295
October 1, 2025 at 9:53 AM
Reposted
Now that @interspeech.bsky.social registration is open, time for some shameless promo!

Sign-up and join our Interspeech tutorial: Speech Technology Meets Early Language Acquisition: How Interdisciplinary Efforts Benefit Both Fields. 🗣️👶

www.interspeech2025.org/tutorials

⬇️ (1/2)
May 27, 2025 at 4:14 PM