Also at mastodon.social/@matrig
I just verified my 🦋Bluesky handle.
So expect shitposting to massively decrease in proportion to the decrease in plausible deniability of my content 😅
📍 Helsinki 🇫🇮
📅 Apply by Feb 5th
🔗 https://bit.ly/4jYDoO0
We are looking for people to help us pioneer the next generation of AI—building from Japan to the world.
Join us: sakana.ai/careers
www.cell.com/current-biol...
We are looking for talented Cognitive Neuroscientists to join our team at Trinity College Dublin for postdoc positions funded by a European Research Council (ERC) Consolidator grant.
www.jobs.ac.uk/job/DPV296/r...
www.ktsetsoslab.net/_files/ugd/0...
Find out more & register for the information webinar 👉 www.ucl.ac.uk/life-science...
Current LLM agents lack reliability, creating a gap between demos and production. We solve this by automating the complex workflow of debugging, evaluation, and iteration required to make agents robust. 👇
We found that if you simply delete them after pretraining and recalibrate for <1% of the original budget, you unlock massive context windows. Smarter, not harder.
We found embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity.
arxiv.org/abs/2512.12167
pub.sakana.ai/DroPE
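The "temporary scaffold" idea can be sketched in a few lines. This is a toy numpy illustration under my own assumptions, not the paper's implementation: the same attention runs with RoPE during pretraining and with the rotation simply switched off afterwards (the brief recalibration finetune at <1% of the original budget is not shown).

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    # Apply a Rotary Position Embedding (RoPE)-style rotation
    # to a (seq_len, dim) array. Illustrative, not the paper's code.
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)        # (half,)
    angles = positions[:, None] * freqs[None, :]     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def attention_scores(q, k, positions, use_rope=True):
    # With use_rope=False the positional rotation is deleted and the
    # scores depend only on content -- the "scaffold removed" regime.
    if use_rope:
        q = rope_rotate(q, positions)
        k = rope_rotate(k, positions)
    return q @ k.T / np.sqrt(q.shape[-1])
```

With `use_rope=False` the sequence positions no longer enter the scores at all, which is why (per the thread) a short recalibration is needed before the model generalizes to much longer contexts.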
We are hiring. Join our team in Tokyo.
sakana.ai/careers/#sof...
Not in a bad way: k-NN is a surprisingly powerful method.
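For readers who haven't met it, a minimal sketch of k-nearest-neighbors classification (stdlib only; the data here is made up for illustration):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    # train: list of (features, label) pairs. Classify query by
    # majority vote among its k nearest neighbors (Euclidean distance).
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

points = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(points, (1, 1)))  # → a
```

No training step at all, yet with a good distance metric it is a strong baseline.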
For Gemini 2.5 Pro and Grok 3, we _didn't_ need to jailbreak, and got 76.8% and 70.3%, respectively.
It was relatively simple to evade guardrails with two steps:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other 🦾🔥💥
Submission deadline: 30 Jan 2026
When? 26 or 27 April 2026
Where? Rio de Janeiro, Brazil
Call for papers, schedule, invited speakers & more:
ucrl-iclr26.github.io
Looking forward to your submissions!
phd.tech.au.dk/for-applican...
rdcu.be/eVZ1A
⏳ Apply by 28th February 2026
Details: www.haberkernlab.de/docs/ENPostd...
#neuroscience #academicjobs #postdoc