Dylan Cope
dylancope.bsky.social
Researching multi-agent RL, emergent communication, and evolutionary computation.

Postdoc at FLAIR Oxford. PhD from Safe and Trusted AI CDT @ KCL/Imperial. Previously visiting researcher at CHAI U.C. Berkeley.

dylancope.com

he/him
London 🇬🇧
@ordinarythings.bsky.social has a better understanding of the social impacts of AI than many of the people in the industry, and is doing a great job clearly explaining these issues in an entertaining way. This is the kind of public outreach the world needs more of.
June 27, 2025 at 9:09 AM
Mmh I don't know if I would say they're Sagans of our time. I think it's people like Vsauce, Hank Green, 3blue1brown, Physicsgirl, smartereveryday, Simone Giertz, Veritasium, MinutePhysics, etc.
December 5, 2024 at 9:45 PM
I think some people are annoyed and the baby bird response is a form of condescension. I don't like it.

I think it's good to be considerate and express gratitude if a reviewer has put in time. But you also have to make actual arguments.
December 4, 2024 at 12:22 AM
I never knew a photo of someone holding a hedgehog could feel so inspirational. This looks like it should be on a political poster or something!
November 27, 2024 at 1:30 PM
👋🏻
November 24, 2024 at 2:56 PM
I think the LLMs would generally write JAX that isn't compatible with jit - lots of non-concrete shape issues. But if you know a couple of patterns for doing branchless conditionals in SIMD settings it's not too hard to fix.

Or you could try aggressively prompting the LLMs 😂
November 23, 2024 at 3:39 PM
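A minimal sketch of the kind of branchless pattern the post above alludes to, under my own assumptions (the clamp task is illustrative, not from the post): a Python `if` on a traced value fails under `jit`, but `jnp.where` evaluates both branches and selects, which traces cleanly.

```python
import jax
import jax.numpy as jnp

# A Python `if` on a traced array raises a tracer error under jit,
# because the branch condition is not concrete at trace time.
# jnp.where computes both branches and selects elementwise instead.
@jax.jit
def clipped_update(x, delta, limit):
    candidate = x + delta
    # Branchless: select between the clamped and unclamped value.
    return jnp.where(jnp.abs(candidate) > limit,
                     jnp.sign(candidate) * limit,
                     candidate)

print(clipped_update(jnp.float32(0.5), jnp.float32(2.0), jnp.float32(1.0)))  # 1.0
```

The same idea generalises to `jax.lax.select` and `jax.lax.cond` when the branches are more expensive than a cheap elementwise computation.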
For my domains it is night and day! Easily 10x speed-ups. I've been using JAX for the last 8 months, and I was using RLlib before which was very slow for my purposes.

Writing custom environments in JAX can be a bit of a pain though.
November 22, 2024 at 5:12 PM
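A toy illustration of where speed-ups like those mentioned above typically come from; this assumes nothing about the author's actual environments. `jax.vmap` batches a pure step function over many parallel environment states, and `jax.jit` compiles the whole batch into one fused program.

```python
import jax
import jax.numpy as jnp

# Toy "environment" step: a pure function (state, action) -> new state.
# Illustrative only -- not a real RL environment.
def step(state, action):
    return state + jnp.tanh(action)

# vmap vectorises the step across a batch of parallel environments;
# jit compiles the batched function once, amortising Python overhead.
batched_step = jax.jit(jax.vmap(step))

states = jnp.zeros(1024)   # 1024 parallel environment states
actions = jnp.ones(1024)
new_states = batched_step(states, actions)
print(new_states.shape)  # (1024,)
```

Compared with frameworks that step environments one at a time in Python, keeping the whole rollout on-device like this is often where the order-of-magnitude wins come from.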
Currently I'm using:

- Custom gymnax env
- PureJAXRL
- PPO
- GRU RNNs
- wandb
- praying that my choice of hyperparameters is fine
November 22, 2024 at 1:15 PM
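A bare-bones sketch of the functional pattern that gymnax-style custom environments follow, under my own assumptions (the real gymnax base class has a richer interface; this toy 1-D "move toward the origin" task is invented for illustration): state is an explicit value, and `reset`/`step` are pure functions of it, so rollouts can be jitted and vmapped end to end.

```python
import jax
import jax.numpy as jnp

# Pure-functional env sketch: no hidden mutable state, so the whole
# thing composes with jit/vmap. Toy task, not a real gymnax env.

def reset(key):
    state = jax.random.uniform(key, (), minval=-1.0, maxval=1.0)
    return state, state  # (obs, state)

def step(state, action):
    new_state = state + 0.1 * jnp.sign(action)
    reward = -jnp.abs(new_state)        # reward: closeness to origin
    done = jnp.abs(new_state) < 0.05    # episode ends near the origin
    return new_state, new_state, reward, done  # (obs, state, reward, done)

# One jitted step across a batch of 8 parallel environments:
batched_step = jax.jit(jax.vmap(step))
keys = jax.random.split(jax.random.PRNGKey(0), 8)
obs, state = jax.vmap(reset)(keys)
obs, state, reward, done = batched_step(state, -jnp.sign(obs))
print(obs.shape, reward.shape)  # (8,) (8,)
```

The pain point mentioned in the post usually shows up here: every shape must be static and every branch expressible without Python control flow, which takes some getting used to.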
My hopeful interpretation is that tweet is getting less engagement because we're all over here now, and not looking at Twitter!

But it also wouldn't remotely surprise me if Musk is suppressing mentions of Bluesky over there.
November 22, 2024 at 11:48 AM
I really hope it lasts! Feels very refreshing to see so many interesting things on the feed.
November 22, 2024 at 1:17 AM
Put differently - LLM pre-training is imitation learning, and so maybe they will imitate our ability to adapt OOD?

Imo the problem is that IL is notoriously bad OOD. Not yet convinced "just scale" fixes the fundamental issue of biased demo data/compounding errors.
November 20, 2024 at 7:52 PM
Managed to stump it with a drop that relies on correcting your balance with the wall. Wasn't too hard for me to get it but the agents don't get it!
November 20, 2024 at 12:37 PM
Could you add me! :)
November 20, 2024 at 9:51 AM
Please get the others from Novara on too 😅
November 14, 2024 at 1:47 PM
It works better than Twitter!
November 14, 2024 at 1:41 PM