Daniel Arteaga
dnlrtg.bsky.social
Physicist. Audio and deep learning research at Dolby Labs. Physics, audio, AI, science, technology and society.

Personal account @contraidees.bsky.social
The question isn't whether AI doom is likely.

It's whether the expected harm is significant enough to act on.

Given the math? The answer is clearly yes.

We don't need certainty to justify precaution. We need responsible risk assessment.
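For concreteness, the expected-value arithmetic can be sketched with made-up illustrative numbers (none of these figures come from the thread):

```python
# Illustrative only: expected-harm reasoning with hypothetical numbers.
p_doom = 0.01       # hypothetical probability of catastrophe
harm = 1e9          # hypothetical harm, arbitrary units
threshold = 1e6     # hypothetical "worth acting on" threshold

expected_harm = p_doom * harm   # 1e7: small probability, huge stakes
print(expected_harm > threshold)  # → True
```

Even a low-probability outcome can dominate the expected harm when the stakes are large enough.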

(7/7)
November 26, 2025 at 6:05 PM
We explore 4 DAC-based models:
1️⃣ AR w/ cross-attention
2️⃣ AR w/ classifier guidance
3️⃣ MaskGIT w/ adaptive layer norm
4️⃣ Flow matching

The MaskGIT model achieves the best subjective quality (avg. 70 MUSHRA score), outperforming state-of-the-art baselines.
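For readers unfamiliar with MaskGIT, here is a toy sketch (not the paper's code) of its confidence-based iterative decoding over discrete tokens; `toy_model`, the cosine schedule, and all sizes are placeholders:

```python
# Toy MaskGIT-style decoding over a sequence of discrete audio tokens
# (e.g., DAC codes). The real network is replaced by random logits.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ, STEPS, MASK = 1024, 32, 8, -1

def toy_model(tokens):
    # Stand-in for the trained network: random logits per position.
    return rng.standard_normal((len(tokens), VOCAB))

tokens = np.full(SEQ, MASK)  # start fully masked
for step in range(STEPS):
    logits = toy_model(tokens)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    pred = probs.argmax(-1)            # best token per position
    conf = probs.max(-1)               # confidence per position
    conf[tokens != MASK] = np.inf      # already-decoded tokens stay fixed
    # Cosine schedule: how many positions remain masked after this step
    keep_masked = int(SEQ * np.cos(np.pi / 2 * (step + 1) / STEPS))
    order = np.argsort(conf)           # least confident first
    tokens = np.where(tokens == MASK, pred, tokens)
    tokens[order[:keep_masked]] = MASK  # re-mask low-confidence guesses
```

Each pass fills in the positions the model is most confident about and re-masks the rest, so the whole sequence is decoded in a handful of parallel steps rather than one token at a time.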
July 18, 2025 at 8:13 AM
Instead of simulating room geometry, we train four different generative models to produce RIRs conditioned on acoustic attributes (T30, T15, EDT, D50, C80, and source-receiver distance).
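As a reminder of what one of these attributes measures, here is a hedged sketch (not the paper's code) of estimating T30 from an RIR via Schroeder backward integration, demoed on a synthetic decaying-noise RIR:

```python
# Estimate T30 (reverberation time) from a room impulse response by
# fitting the -5 to -35 dB span of the Schroeder decay curve.
import numpy as np

def estimate_t30(rir, fs):
    energy = rir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]           # Schroeder decay curve
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    i5 = np.argmax(edc_db <= -5)                  # fit region: -5 dB ...
    i35 = np.argmax(edc_db <= -35)                # ... to -35 dB
    t = np.arange(len(rir)) / fs
    slope, _ = np.polyfit(t[i5:i35], edc_db[i5:i35], 1)
    return -60.0 / slope                          # time to decay 60 dB

# Toy RIR: exponentially decaying noise with a target T60 of 0.5 s
fs = 16000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
rir = rng.standard_normal(fs) * np.exp(-3 * np.log(10) * t / 0.5)
print(round(estimate_t30(rir, fs), 2))
```

The estimate should land close to the 0.5 s decay built into the toy signal.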
July 18, 2025 at 8:09 AM
New paper!
We're introducing a new way to generate realistic room impulse responses not from room geometry, but by directly controlling acoustic parameters like reverb time and direct-to-reverb ratio.

🔗 Demo: silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136
July 18, 2025 at 8:03 AM
The key issue isn't the most likely outcome — it's the worst-case scenario we must be prepared for.

arxiv.org/abs/2401.02843
May 28, 2025 at 8:42 AM
Spatial aliasing occurs when microphone spacing in an array is too large relative to the wavelength of sound, degrading the accuracy of beamforming.
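The spatial Nyquist condition makes this concrete: a uniform linear array can only beamform without aliasing up to f = c / (2d). With a hypothetical 8 cm spacing:

```python
# Spatial-aliasing limit for a uniform linear array (spatial Nyquist).
c = 343.0   # speed of sound in air, m/s
d = 0.08    # microphone spacing, m (hypothetical example)

f_max = c / (2 * d)          # aliasing-free upper frequency
print(f"{f_max:.0f} Hz")     # → 2144 Hz
```

Above that frequency, grating lobes appear and beamforming accuracy degrades, which is the regime the paper targets.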

As far as we know, this is the first deep learning paper to address this problem directly (prior approaches handle it only indirectly).
May 27, 2025 at 2:36 PM