That’s the question we explore in our new paper, just accepted at #ISMIR2025! 📣
📄 Paper: arxiv.org/pdf/2507.03599
📌 Openness leaderboard: roserbatlleroca.github.io/MusGO_framew...
🧵👇
if you have 20 minutes to spare, it'd be great to get your input on how people perceive sound and recognise similarities between audio clips 🎶
👇
help us understand how you identify similarities and relationships between sounds 🧠🎶
available both in English and Catalan:
🔗 en - mtg.upf.edu/similarity-e...
🔗 cat - mtg.upf.edu/similarity-e...
thank you for your time and support!
More info here👇🏽
Join us at HCMIR, a satellite workshop of ISMIR 2025, exploring ethics, UX/UI, AI collaboration & more in MIR!
📅 Submission opens: June 2, 2025
📌 Deadline: June 18, 2025
🔗 Details: sites.google.com/view/hcmir25/
#MIR #ISMIR2025 #AI #MusicTech
We now analyse 107 publications in a two-stage review, uncovering gaps in transparency for generative AI in music. Jump to section 5.2 for the TL;DR ✨
doi.org/10.21203/rs....
Sketch2Sound can create sounds from sonic imitations (i.e., a vocal imitation or a reference sound) via interpretable, time-varying control signals.
paper: arxiv.org/abs/2412.08550
web: hugofloresgarcia.art/sketch2sound
Have you ever wondered what an open model means? Help us shape the definition of open models in generative AI for music by taking our survey — just 10 minutes!
👉 forms.gle/Z48t6HPBXwWC3r…
thank you 💫