Dileep George @dileeplearning
dileeplearning.bsky.social
AGI research @DeepMind.
Ex-cofounder & CTO of Vicarious AI (acquired by Alphabet),
Cofounder of Numenta.
Triple EE (BTech IIT-Mumbai, MS & PhD Stanford). #AGIComics
blog.dileeplearning.com
Can AIs be conscious? Should we consider them persons? Here are my current thoughts...

blog.dileeplearning.com/p/ai-conscio...
AI consciousness, qualia, and personhood.
A note on my current positions, in a FAQ format.
blog.dileeplearning.com
November 7, 2025 at 4:35 PM
blog.dileeplearning.com/p/quick-note...

TLDR: It was fun and the process felt 'magical' at times. If you have lots of small project ideas you want to prototype, vibe-coding is a fun way to do that as long as you are willing to settle for 'good enough'.
Quick notes from vibe-coding a comic website
Hate less, vibe more!
blog.dileeplearning.com
October 20, 2025 at 4:19 PM
Those who think there's an AI bubble are unaware of a recent breakthrough...
www.agicomics.net/c/ag-breakth...
AGI Comics — #2: Artificial General Breakthrough
Self-congratulatory learning would be a cool name for a real learning algorithm.
www.agicomics.net
October 20, 2025 at 12:41 AM
New and improved and 10000% vibe-coded! Check out www.agicomics.net
AGI Comics — #1: Artificial General Productivity
A comic series.
www.agicomics.net
October 20, 2025 at 12:25 AM
Reposted by Dileep George @dileeplearning
1/4) I’m excited to announce that I have joined the Paradigms of Intelligence team at Google (github.com/paradigms-of...)! Our team, led by @blaiseaguera.bsky.social, is bringing forward the next stage of AI by pushing on some of the assumptions that underpin current ML.

#MLSky #AI #neuroscience
Paradigms of Intelligence Team
Advance our understanding of how intelligence evolves to develop new technologies for the benefit of humanity and other sentient life - Paradigms of Intelligence Team
github.com
September 23, 2025 at 3:06 PM
Reposted by Dileep George @dileeplearning
Jesus Christ.
September 17, 2025 at 5:46 PM
Reposted by Dileep George @dileeplearning
1/
🚨 New preprint! 🚨

Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇

📄 www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩

#neuroskyence
September 17, 2025 at 7:33 PM
#AGIComics now has a website! And it is 100% vibe coded!

Check out agicomics.net
AGI Comics — #23: Artificial General Productivity
A comic series.
agicomics.net
September 17, 2025 at 10:01 AM
Reposted by Dileep George @dileeplearning
12 leading neuroscientists tackle a big question: Will we ever understand the brain?

Their reflections span philosophy, complexity, and the limits of scientific explanation.

www.sainsburywellcome.org/web/blog/wil...

Illustration by @gilcosta.bsky.social & @joanagcc.bsky.social
August 6, 2025 at 8:41 AM
🎯
June 5, 2025 at 9:53 PM
Hmm…I don’t think it’s impossible.

Evolution could create structures in the brain that are in correspondence with structure in the world.
Dear neuroscientists,

The brain cannot generate information about the world de novo; it's impossible.

All the brain can do is:

1. Selectively remove info that is irrelevant.
2. Re-emit info previously absorbed via evolution or memory.

Our brain never "creates" information. Never.

🧠📈 🧪
May 15, 2025 at 6:02 PM
This paper turned up on a feed, I was intrigued by it and started reading...

...but then I was quite baffled, because our CSCG work seems to have tackled many of these problems in a more general setting and it's not even mentioned!

So I asked ChatGPT... I'm impressed by the answer. 1/🧵
May 15, 2025 at 1:22 AM
Wow, very cool to see this work from Alla Karpova's lab. She had shown me the results when I visited @hhmijanelia.bsky.social and I was blown away.

www.biorxiv.org/content/10.1...

1/
April 29, 2025 at 12:05 AM
Reposted by Dileep George @dileeplearning
How should we define and determine a brain region's importance?
We introduce the idea of "importance" in terms of the extent to which a region's signals steer/contribute to brain dynamics as a function of brain state.
Work by @codejoydo.bsky.social
elifesciences.org/reviewed-pre...
Brain dynamics and spatiotemporal trajectories during threat processing
elifesciences.org
April 27, 2025 at 5:17 PM
It's kinda obvious. #AGIComics has already figured out which brain region is the most important. 😇
April 27, 2025 at 8:56 PM
Ohh... yes... this is exactly what I think after reading some of the "deep research" reports... written by a committee.
Worthwhile reading.

Some of the features that John rightly criticizes in AI writing are shared by the sort of committee reports and consensus papers that emerge from workshops and symposia because someone felt that there had to be a “product” associated with the meeting.
I wrote a very long blog post about AI writing. I hope you'll read it.

meresophistry.substack.com/p/the-mental...
March 30, 2025 at 1:30 AM
Reposted by Dileep George @dileeplearning
jumping on the Gemini 2.5 bandwagon... it's an incredible model. really feels like an(other) inflection point. talking to Claude 3.7 feels like talking to a competent colleague who knows about everything, but makes mistakes. Gemini 2.5 feels like talking to a world-class expert with A+ intuitions
March 28, 2025 at 5:16 PM
Give me 10 billion dollars and I’ll do it. 1 billion for developing the hardware and 9 billion to pay for my opportunity cost 😇
Okay, random example:

Can you give me hardware for creating 100B parameter Boltzmann machines?
March 26, 2025 at 10:15 PM
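For context on why this exchange is about hardware: a minimal sketch of a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). The sizes and names here are illustrative toy assumptions, not anything from the thread; the point is that each update needs dense matmuls plus stochastic Gibbs sampling, which is what makes a 100B-parameter version a genuine hardware question.

```python
# Toy restricted Boltzmann machine with CD-1 (illustrative scale only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 16, 8
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence update on a batch of visible vectors."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities and a sample given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visible, then to hidden.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Approximate gradient of the log-likelihood.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

data = (rng.random((32, n_visible)) < 0.5).astype(float)
for _ in range(100):
    cd1_step(data)
```

Every training step is two dense matmuls plus per-unit random sampling, and deeper Boltzmann machines need many sequential Gibbs sweeps; that sampling-heavy, sequential workload is the alleged mismatch with matmul-optimized accelerators.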
Nope. It is an engineering problem. Give me an algorithm you think is not being scaled because of a hardware mismatch, and I can make the hardware (chip + interconnect + datacenter) given enough money. Purely an engineering problem.
No you can't - that is super non-trivial.

Building hardware to run a specific algorithm is hard enough.

Building hardware that can run a specific algorithm *and* scale up to billions of parameters is super duper hard.

It's not just a matter of money... It's a scientific problem!
March 26, 2025 at 10:00 PM
ok...in that case which other existing model would you make a bet on scaling up? Pick one. I'd be happy to raise money for it.
Well, that's the kind of thing one would have to demonstrate.

As to SOTA - transformers are likely SOTA because they scale so well, which is largely because they won the "hardware lottery".

I am willing to bet other models would be SOTA if we could train them at similar scales.
March 26, 2025 at 7:29 PM
This is great....this is a hypothesis.

What other such hypotheses are there? Is there a space of such hypotheses, or is this just one?

(Also, how does this match with Konrad's @kordinglab.bsky.social argument about being hypothesis-free?)
There are numerous differences at the algorithmic level between cortical circuits and transformers (and you know that, of course).

I'll give just one example:

In the brain, attention mechanisms often operate in a top-down manner, something missing from most transformer models.
March 26, 2025 at 6:42 PM
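The distinction in the post above can be sketched in a few lines: in self-attention the queries come from the same input sequence, whereas in a "top-down" scheme a separate goal or task signal supplies the query that gates bottom-up features. All names and dimensions below (`goal`, `features`, `d`) are illustrative assumptions, not a model from the thread.

```python
# Sketch of top-down attention: the query comes from a goal signal,
# not from the input sequence itself (unlike transformer self-attention).
import numpy as np

rng = np.random.default_rng(1)
d = 32                                 # feature dimension (illustrative)
features = rng.normal(size=(10, d))    # bottom-up features at 10 locations
goal = rng.normal(size=(d,))           # top-down task/goal signal

Wq, Wk, Wv = (rng.normal(0, d ** -0.5, (d, d)) for _ in range(3))

def top_down_attend(goal, feats):
    q = goal @ Wq                      # query from the top-down signal
    k, v = feats @ Wk, feats @ Wv      # keys/values from bottom-up input
    scores = k @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return w @ v                       # goal-weighted readout of the scene

out = top_down_attend(goal, features)
```

Swapping `goal` for the features themselves (one query per location) recovers ordinary self-attention, which is exactly the contrast the post is drawing.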
Reposted by Dileep George @dileeplearning
But why would you stop there?

Why aren't you convinced this is how it works? It's SOTA on all the tasks....

You are not convinced because it doesn't match some brain data?
March 26, 2025 at 6:10 PM
Here's my question to @kordinglab.bsky.social, who argues the brain is too complex to understand...

How do you know that current AI systems are NOT how the brain works?
March 26, 2025 at 6:07 PM
Wait... so you think the brain is NOT a large transformer?

How do you know that? The current empirical studies strongly support the idea that the brain is a large transformer...
Yeah, that's not the point, but you do you... 👍
March 26, 2025 at 6:01 PM
TLDR: We need large amounts of data to finally show that the brain is a large transformer that we cannot understand. 😇😝
Eva Dyer and I wrote an opinion piece for @thetransmitter.bsky.social on why neuroscience needs to embrace complexity and accept the "bitter lesson" by using a data-driven regime at scale.

With commentary from several wonderful researchers!

🧠📈 #NeuroAI 🧪
How can we make progress in developing a general model of neural computation rather than a series of disjointed models tied to specific experimental circumstances, ask Eva Dyer and @tyrellturing.bsky.social in the latest entry in our NeuroAI series.

www.thetransmitter.org/neuroai/acce...
March 26, 2025 at 5:53 PM