Joachim W Pedersen
@joachimwpedersen.bsky.social
Bio-inspired AI, meta-learning, evolution, self-organization, developmental algorithms, and structural flexibility.
Postdoc @ IT University of Copenhagen (ITU).
https://scholar.google.com/citations?user=QVN3iv8AAAAJ&hl=en
Pinned
In deep learning research, we often categorize meta-learning approaches as either gradient-based or black-box meta-learning. In my PhD thesis, I argued that it can sometimes be useful to classify approaches based on how the outer-loop optimization affects the inner-loop optimization.
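Not from the thesis, but a toy numpy sketch of the distinction: in the gradient-based flavor the outer loop only shapes what the inner loop starts from (here, an initialization), while in the black-box flavor the outer loop directly parameterizes the inner-loop update rule itself. The toy task, the random-search outer loop, and both inner loops are illustrative stand-ins.

```python
# Toy sketch contrasting two ways the outer loop can affect the inner loop.
import numpy as np

rng = np.random.default_rng(0)

def task():
    """Sample a toy task: fit y = a * x with a random slope a."""
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1, 1, size=16)
    return x, a * x

def inner_gradient(theta0, x, y, lr=0.1, steps=5):
    """Gradient-based inner loop: the outer loop only supplies the
    initialization theta0; adaptation itself is plain gradient descent."""
    theta = float(theta0)
    for _ in range(steps):
        grad = np.mean(2 * (theta * x - y) * x)
        theta -= lr * grad
    return np.mean((theta * x - y) ** 2)

def inner_blackbox(w, x, y):
    """Black-box inner loop: the outer loop supplies the parameters w of a
    tiny learned update rule (a linear map from task statistics to a slope);
    no gradients are taken in the inner loop."""
    stats = np.array([np.mean(x * y), np.mean(x * x), 1.0])
    theta = stats @ w
    return np.mean((theta * x - y) ** 2)

def outer_search(inner, init, sigma=0.1, iters=300):
    """Outer loop: random search stands in for evolution or meta-gradients."""
    best = np.asarray(init, dtype=float)
    best_loss = np.mean([inner(best, *task()) for _ in range(8)])
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal(best.shape)
        loss = np.mean([inner(cand, *task()) for _ in range(8)])
        if loss < best_loss:
            best, best_loss = cand, loss
    return best_loss

print("gradient-based inner loop, meta-learned init:", outer_search(inner_gradient, 0.0))
print("black-box inner loop, meta-learned rule:     ", outer_search(inner_blackbox, np.zeros(3)))
```

The point is only the structural difference: in the first case the outer loop's output is consumed by a gradient-based inner optimizer, in the second the outer loop's output *is* the inner optimizer.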
Reposted by Joachim W Pedersen
Neural Network Quines: training a model to output its own weights

arxiv.org/abs/1803.05859
Neural Network Quine
Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The n...
arxiv.org
October 13, 2025 at 10:18 PM
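A minimal sketch of the self-replication objective as I read the abstract: a small network is queried with a coordinate embedding of each of its own weights and asked to output that weight's value, and the "quine loss" measures the mismatch. The architecture and training procedure in the paper itself (arXiv:1803.05859) are more involved.

```python
# Minimal sketch of a "quine loss" for a tiny MLP that predicts its own weights.
import numpy as np

rng = np.random.default_rng(0)

H, EMB = 32, 16                              # hidden size, coordinate-embedding size
W1 = rng.standard_normal((EMB, H)) * 0.1
W2 = rng.standard_normal((H, 1)) * 0.1

def flat_weights():
    return np.concatenate([W1.ravel(), W2.ravel()])

# Fixed random projection turning "index of weight i" into an embedding vector.
n_weights = flat_weights().size
coord_embedding = rng.standard_normal((n_weights, EMB)) / np.sqrt(EMB)

def predict_own_weights():
    """Forward pass: for every weight index, the network predicts a value."""
    h = np.tanh(coord_embedding @ W1)         # (n_weights, H)
    return (h @ W2).ravel()                   # (n_weights,)

def quine_loss():
    """Self-replication objective: predicted weights should equal the actual ones."""
    return np.mean((predict_own_weights() - flat_weights()) ** 2)

print("quine loss of a random network:", quine_loss())
```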
Looking forward to this!
We're excited to announce the final program of the @alife2025.bsky.social SONI session,
which will host a panel discussion with @blaiseaguera.bsky.social, @risi.bsky.social, @emilydolson.bsky.social & Sidney Pontes-Filho

Check out the full program: sites.google.com/view/soni-al...

See you in Kyoto ⛩️
September 11, 2025 at 9:40 AM
These perspectives formulated by Joe Hudson resonate a lot with me as an AI researcher with a background in psychology
every.to/thesis/knowl...
Knowledge Work Is Dying—Here’s What Comes Next
While AI devours information-based roles, OpenAI, Alphabet, and Apple are investing in wisdom work—and you can, too
every.to
June 29, 2025 at 6:42 PM
Reposted by Joachim W Pedersen
Introducing The Darwin Gödel Machine

sakana.ai/dgm

The Darwin Gödel Machine is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants, allowing for open-ended exploration of the vast design space of such self-improving agents.
May 30, 2025 at 2:29 AM
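A rough schematic (not Sakana's code) of the archive-based loop the announcement describes: keep an ever-growing archive of agent variants, pick a parent, let a model propose a modification to the agent's own code, evaluate it, and add the child back to the archive. `propose_code_change` and `evaluate_on_benchmark` are hypothetical stand-ins.

```python
# Schematic of an ever-expanding archive of self-modifying agent variants.
import random

def propose_code_change(parent_code: str) -> str:
    """Stand-in for a model editing the agent's own source code."""
    return parent_code + f"\n# variant {random.randint(0, 10**6)}"

def evaluate_on_benchmark(code: str) -> float:
    """Stand-in for running the agent on coding benchmarks."""
    return random.random()

archive = [{"code": "# seed agent", "score": evaluate_on_benchmark("# seed agent")}]

for step in range(20):
    # Parent selection biased toward higher-scoring variants, but every
    # archived variant keeps some probability of being chosen.
    weights = [0.1 + a["score"] for a in archive]
    parent = random.choices(archive, weights=weights, k=1)[0]

    child_code = propose_code_change(parent["code"])
    child = {"code": child_code, "score": evaluate_on_benchmark(child_code)}
    archive.append(child)              # the lineage only ever grows

print("archive size:", len(archive), "best score:", max(a["score"] for a in archive))
```

Keeping every child in the archive, rather than only the current best, is what makes the exploration open-ended rather than hill-climbing.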
Reposted by Joachim W Pedersen
“Continuous Thought Machines”

Blog → sakana.ai/ctm

Modern AI is powerful, but it's still distinct from human-like flexible intelligence. We believe neural timing is key. Our Continuous Thought Machine is built from the ground up to use neural dynamics as a powerful representation for intelligence.
May 12, 2025 at 2:33 AM
New submission deadline: April 2nd!
So there is still time to put your interesting thoughts on Evolving Self-Organisation together!

Also: We are very fortunate to have the great Risto Miikkulainen as the keynote speaker at the workshop!

Can't wait to see you all there! 🤩🙌
#Evolution #Gecco #ALife
Join us for the Evolving Self-Organisation workshop at #GECCO this year! Great chance to submit your favourite ideas concerning self-organisation processes and evolution, and how they interact.
Relevant for Alifers #ALife and anyone interested in #evolution, #self-organisation, and #ComplexSystems.
We're excited to announce the first Evolving Self-organisation workshop at GECCO 2025!

Submission deadline: March 26, 2025

More information: evolving-self-organisation-workshop.github.io
March 26, 2025 at 9:51 AM
www.youtube.com/watch?v=jnoa...
Bio-Inspired Plastic Neural Nets that continually adapt their own synaptic strengths can make for extremely robust locomotion policies!
Trained exclusively in simulation, the plastic networks transfer easily to the real world, even under a variety of out-of-distribution (OOD) conditions.
[IROS25] Bio-Inspired Plastic Neural Nets for Zero-Shot Out-of-Distribution Generalization in Robots
YouTube video by Worasuchad Haomachai
www.youtube.com
March 19, 2025 at 10:24 AM
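For a flavor of what "continually adapting their own synaptic strengths" can look like, here is a generic Hebbian "ABCD" plastic layer in numpy: the weights change at every timestep as a function of pre- and post-synaptic activity, and the plasticity coefficients are what an outer loop (e.g. evolution) would optimize. The exact rule used in the IROS paper may differ.

```python
# Generic plastic layer with a Hebbian "ABCD" update rule.
import numpy as np

rng = np.random.default_rng(0)

class PlasticLayer:
    def __init__(self, n_in, n_out):
        self.w = rng.standard_normal((n_in, n_out)) * 0.1
        # Evolvable plasticity parameters, one set per synapse.
        self.A = rng.standard_normal((n_in, n_out)) * 0.01
        self.B = rng.standard_normal((n_in, n_out)) * 0.01
        self.C = rng.standard_normal((n_in, n_out)) * 0.01
        self.D = rng.standard_normal((n_in, n_out)) * 0.01
        self.eta = 0.05

    def forward(self, pre):
        post = np.tanh(pre @ self.w)
        # Hebbian update applied at every timestep while the robot acts.
        hebb = np.outer(pre, post)
        dw = self.A * hebb + self.B * pre[:, None] + self.C * post[None, :] + self.D
        self.w += self.eta * dw
        np.clip(self.w, -3.0, 3.0, out=self.w)     # keep weights bounded
        return post

layer = PlasticLayer(n_in=8, n_out=4)
for t in range(100):                               # e.g. one observation per control step
    obs = rng.standard_normal(8)
    action = layer.forward(obs)
print("weights after 100 steps of online adaptation:\n", layer.w.round(2))
```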
Remember that 4-page submissions of early results are also welcome!

Also, does anyone know if #GECCO has an official 🦋 account? I cannot seem to find it...
Join us for the Evolving Self-Organisation workshop at #GECCO this year! Great chance to submit your favourite ideas concerning self-organisation processes and evolution, and how they interact.
Relevant for Alifers #ALife and anyone interested in #evolution, #self-organisation, and #ComplexSystems.
We're excited to announce the first Evolving Self-organisation workshop at GECCO 2025!

Submission deadline: March 26, 2025

More information: evolving-self-organisation-workshop.github.io
February 26, 2025 at 6:35 PM
Join us for the Evolving Self-Organisation workshop at #GECCO this year! Great chance to submit your favourite ideas concerning self-organisation processes and evolution, and how they interact.
Relevant for Alifers #ALife and anyone interested in #evolution, #self-organisation, and #ComplexSystems.
We're excited to announce the first Evolving Self-organisation workshop at GECCO 2025!

Submission deadline: March 26, 2025

More information: evolving-self-organisation-workshop.github.io
February 10, 2025 at 1:46 PM
Reposted by Joachim W Pedersen
Ever wish you could coordinate thousands of units in games such as StarCraft through natural language alone?

We are excited to present our HIVE approach, a framework and benchmark for LLM-driven multi-agent control.
January 21, 2025 at 12:39 PM
With all the research coming from Sakana AI, this figure needs to be updated fast! direct.mit.edu/isal/proceed...

#LLM #ALife #ArtificialIntelligence
January 15, 2025 at 1:03 PM
Reposted by Joachim W Pedersen
Transformer²: Self-adaptive LLMs

arxiv.org/abs/2501.06252

Check out the new paper from Sakana AI (@sakanaai.bsky.social). We show the power of an LLM that can self-adapt its weights to its environment!
January 15, 2025 at 5:56 AM
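My rough, schematic reading of the weight-adaptation idea: decompose a frozen weight matrix with an SVD and let a small task-specific vector rescale its singular values, giving a cheap per-task "expert". The dispatch step that chooses the vector at inference time and the training of those vectors are omitted; treat the snippet below as an illustration rather than the paper's method.

```python
# Schematic: adapt a frozen weight matrix by rescaling its singular values.
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((64, 64))              # a frozen base weight matrix
U, S, Vt = np.linalg.svd(W, full_matrices=False)

def adapt(z):
    """Return a task-adapted matrix W' = U diag(z * S) V^T."""
    return (U * (z * S)) @ Vt

z_task = np.ones_like(S)
z_task[:8] *= 1.5                              # hypothetical: amplify a few components
W_adapted = adapt(z_task)

x = rng.standard_normal(64)
print("base output norm:   ", np.linalg.norm(W @ x))
print("adapted output norm:", np.linalg.norm(W_adapted @ x))
```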
Reposted by Joachim W Pedersen
We have put together a starter pack of researchers and representatives from ITU on Bluesky. Meet them here 👇
go.bsky.app/E8WJwXS
January 14, 2025 at 1:03 PM
Reposted by Joachim W Pedersen
Can Dynamic Neural Networks boost Computer Vision and Sensor Fusion?
We are very happy to share this awesome collection of papers on the topic!
January 8, 2025 at 9:33 AM
Reposted by Joachim W Pedersen
If microchip ~= silicon
then AGI ~= huge pile of sand
December 22, 2024 at 9:42 AM
Reposted by Joachim W Pedersen
Neural Attention Memory Models are evolved to optimize the performance of Transformers by actively pruning the KV cache memory. Surprisingly, we find that NAMMs are able to zero-shot transfer their performance gains across architectures, input modalities and even task domains! arxiv.org/abs/2410.13166
An Evolved Universal Transformer Memory

sakana.ai/namm/

Introducing Neural Attention Memory Models (NAMMs), a new kind of neural memory system for Transformers that not only boosts their performance and efficiency but is also transferable to other foundation models without any additional training!
December 10, 2024 at 1:41 AM
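A toy sketch (not the actual NAMM architecture) of the core idea: a small scoring model looks at each cached token's attention history and the lowest-scoring entries are evicted from the KV cache. In the paper the scorer operates on richer features of the attention values and is trained with evolution; the linear scorer below is a hypothetical stand-in.

```python
# Toy KV-cache pruning driven by a learned per-token memory score.
import numpy as np

rng = np.random.default_rng(0)

seq_len, n_heads, d = 32, 4, 16
kv_cache = rng.standard_normal((seq_len, 2, n_heads, d))         # (token, K/V, head, dim)
attn_history = np.abs(rng.standard_normal((seq_len, n_heads)))   # per-token attention statistics

# Hypothetical evolved scorer: a tiny linear model over attention statistics.
scorer_w = rng.standard_normal(n_heads)

def memory_scores(history):
    """Score each cached token; higher means 'worth keeping'."""
    return history @ scorer_w

def prune_kv_cache(cache, history, keep_ratio=0.5):
    scores = memory_scores(history)
    k = int(len(scores) * keep_ratio)
    keep = np.sort(np.argsort(scores)[-k:])    # indices of the top-k tokens, in order
    return cache[keep], history[keep]

pruned_cache, pruned_history = prune_kv_cache(kv_cache, attn_history)
print("cache size:", kv_cache.shape[0], "->", pruned_cache.shape[0])
```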
In deep learning research, we often categorize meta-learning approaches as either gradient-based or black-box meta-learning. In my PhD thesis, I argued that it can sometimes be useful to classify approaches based on how the outer-loop optimization affects the inner-loop optimization.
December 4, 2024 at 3:21 AM
Reposted by Joachim W Pedersen
Like 130,000 others, I made a starter pack. This one is for people working on or with evolutionary computation in its many forms: genetic algorithms, genetic programming, evolution strategies.

If you'd like to be added, or want to suggest someone else, message me or reply to this post.
November 28, 2024 at 4:58 AM
Reposted by Joachim W Pedersen
For the Blueskyers interested in #NeuroAI 🧠🤖,
I created a starter pack! Please comment on this post if you work in this field and are not yet on the list 🙂

go.bsky.app/CscFTAr
October 17, 2024 at 8:04 PM
Reposted by Joachim W Pedersen
A visionary Ross Ashby wrote this 62 years ago:

“A type of system that deserves much more thorough investigation is the large system that is built of parts that have many states of equilibrium”

csis.pace.edu/~marchese/CS...

#ComplexSystems #Cybernetics
November 16, 2024 at 5:04 PM
Reposted by Joachim W Pedersen
Here's an attempt to bring together ALife researchers and enthusiasts here. Who am I missing?

go.bsky.app/LgoJN2N
November 13, 2024 at 10:33 AM
Nothing like a countryside writing retreat in cozy surroundings! Expect new things from the Grow-AI project soon!
November 14, 2024 at 1:01 PM