Curt Welch
@curtwelch.bsky.social
Robotics, AI, RL, Applied Impact Robotics, Nova Labs Metal Shop Steward, Software Engineer, Blacksmith, Maker, KZ4GD, old fart
I’m surprised it’s only around 20x. Tesla has far more employees than I would have guessed.
January 29, 2025 at 2:42 PM
I’ve got one stuck on my lap as well!
December 2, 2024 at 2:15 AM
As obvious as that idea is to most of us, the truth is that the voters don’t agree with it. In a democracy, ignorant voters get their turn to learn from their mistakes so they can become less ignorant. It’s their turn now.
November 23, 2024 at 9:54 AM
The media didn’t do this. The people did it. They stopped paying for real news and decided all they wanted was sensationalism, rage and fires. All the real journalists lost their jobs; media was bought out by capitalists; the free press is now only a capitalistic click bait factory.
November 22, 2024 at 1:29 PM
My prime interest is solving the AGI puzzle, and since human intelligence exists in a brain that is a real-time reactive learning system, I see no option for duplicating it other than signal-processing parallel networks, a.k.a. neural networks.
November 21, 2024 at 1:17 AM
So do you understand how LeCun thinks supervised learning works in the brain? I don’t, but I have not followed his work that closely. Or do you have ideas of how that would work in the brain?
November 18, 2024 at 2:59 PM
Interesting that I misread “ruin” as “run” and thought it was an odd idea. Then caught my error.

But in hindsight I see a deeper truth in my error. People who aggressively run their own lives are the most blind to the needs of others around them. And blame everyone else for their problems.
November 18, 2024 at 2:52 PM
Of course that’s how it works. And anything bad that happens will be because of some Dem. Or brown-skinned dude.
November 17, 2024 at 6:04 PM
But what I have not had the time to do is feed it more complex real-world data and see if the features it creates are actually useful and valid features for driving behavior. Or did I just create a useless random function map? I’d love to work on this full time, but life hasn’t allowed it.
November 17, 2024 at 5:39 PM
I created this back in 2017 and worked more on it during COVID, but haven’t had time in my life to do more. The algorithm is fast and efficient for extracting features. And creating multiple layers of nodes works great for creating higher-level abstractions. It all seems good.
November 17, 2024 at 5:39 PM
It is using a Bayesian-like inference logic to define the meaning of each node, so the inputs to each node are treated as evidence for the truth of the feature being part of the current environment. The algorithm uses temporal correlation as its main learning tool. It’s totally generic.
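A minimal sketch of that evidence idea (illustrative assumptions, not the actual algorithm): each active input shifts a node’s log-odds belief that its feature is present, naive-Bayes style.

```python
import math

def feature_belief(inputs, log_likelihood_ratios, prior_log_odds=0.0):
    """Return P(feature present) given binary input evidence."""
    log_odds = prior_log_odds
    for x, llr in zip(inputs, log_likelihood_ratios):
        if x:
            log_odds += llr  # an active input shifts belief by its evidence weight
    return 1.0 / (1.0 + math.exp(-log_odds))  # log-odds -> probability

# Temporal-correlation learning would tune each ratio toward the observed
# co-occurrence of that input with the node's own activity.
print(feature_belief([1, 0, 1], [1.2, -0.5, 0.8]))  # ~0.88
```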
November 17, 2024 at 5:39 PM
The network of these memory nodes forms a self-organizing map that balances activity across the net, but in doing so it defines roughly equal-probability features. Which means it’s information-maximizing. It self-organizes to represent as much low-frequency state as it can.
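A toy sketch of the activity-balancing idea, where every size and learning rate is an assumption: each node’s bias is nudged so all nodes win equally often, which pushes the layer toward equal-probability features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_nodes = 16, 8
W = rng.normal(scale=0.1, size=(n_nodes, n_inputs))  # feature weights
bias = np.zeros(n_nodes)                 # homeostatic boost per node
rate = np.full(n_nodes, 1.0 / n_nodes)   # running estimate of win rate
target = 1.0 / n_nodes                   # every node should win equally often

for step in range(10_000):
    x = (rng.random(n_inputs) < 0.2).astype(float)  # sparse binary input
    winner = np.argmax(W @ x + bias)                # competitive activation
    fired = np.zeros(n_nodes)
    fired[winner] = 1.0
    rate += 0.01 * (fired - rate)        # track how often each node fires
    bias += 0.05 * (target - rate)       # boost under-used nodes, suppress busy ones
    W[winner] += 0.01 * (x - W[winner])  # move the winner toward the input
```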
November 17, 2024 at 5:39 PM
So what I’ve done for the first layer is to incorporate exponentially decaying memory into the neurons, making them model a concept of recent past state. So if it’s a “cat” cell, we can understand it to represent the odds that a cat is part of our environment.
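A toy sketch of that decaying memory (the decay rate is an assumption): the cell’s output is a leaky trace of its instantaneous evidence, so it answers “was a cat recently part of the scene” rather than “is a cat in this exact frame.”

```python
def update_trace(trace, evidence, decay=0.95):
    # Fresh evidence pushes the trace up; otherwise it decays toward zero.
    return max(evidence, decay * trace)

trace = 0.0
for frame in [0.0, 0.9, 0.0, 0.0, 0.0]:  # cat glimpsed once, in frame 2
    trace = update_trace(trace, frame)
    print(round(trace, 3))  # 0.0, 0.9, 0.855, 0.812, 0.772
```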
November 17, 2024 at 5:39 PM
The hard work of the first layer identified that we just pressed a button. The second layer learns whether we should do more or less of that in the future.
November 17, 2024 at 5:03 PM
The first layer is temporal association learning. It’s classical conditioning, and it’s what the cerebral cortex does for us. Once the input is simplified to a very large but sparse feature space, the RL becomes simple and fast.
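A toy illustration of temporal association learning (the bell/food task and learning rate are made up): a feature active now learns to predict the features that historically follow it, from co-occurrence alone, with no reward or error signal.

```python
import numpy as np

n_features = 5
assoc = np.zeros((n_features, n_features))  # assoc[i, j]: feature i at t predicts j at t+1

def observe(prev_active, next_active, lr=0.05):
    for i in prev_active:
        for j in range(n_features):
            target = 1.0 if j in next_active else 0.0
            assoc[i, j] += lr * (target - assoc[i, j])  # running co-occurrence average

# Pavlov's setup: a bell (feature 0) is repeatedly followed by food (feature 3).
for _ in range(200):
    observe({0}, {3})
print(assoc[0].round(2))  # feature 0 now strongly predicts feature 3
```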

The second layer is the operant conditioning step.
November 17, 2024 at 5:03 PM
I don’t have a full system working, but I suspect the RL step might be reduced to a single-layer linear map from state to actions. The first layer gives the brain its understanding of how the world evolves but doesn’t decide how we respond. The RL layer controls how we respond.
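A sketch of what that single linear RL layer might look like, with the toy task and all sizes as assumptions: a softmax policy over the sparse feature vector from layer one, trained by a REINFORCE-style reward-weighted update, with nothing propagated back into perception.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_actions = 8, 3
W = np.zeros((n_actions, n_features))   # the entire RL layer: one linear map

for step in range(5000):
    state = rng.integers(n_features)
    x = np.zeros(n_features)
    x[state] = 1.0                      # sparse "grandmother" feature from layer one
    prefs = W @ x
    p = np.exp(prefs - prefs.max())
    p /= p.sum()                        # softmax over actions
    a = rng.choice(n_actions, p=p)
    reward = 1.0 if a == state % n_actions else 0.0  # toy task: one right action per state
    grad = -p[:, None] * x[None, :]     # gradient of log pi(a|x) for a softmax policy
    grad[a] += x
    W += 0.1 * reward * grad            # reward alone scales the update
```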
November 17, 2024 at 5:03 PM
But I’ve made good progress on that first layer, and I’m seeing that it’s far more significant than I realized. Once the first layer extracts a simplified representation of the environment from the raw data, the RL becomes simple.
November 17, 2024 at 5:03 PM
So I have for a very long time (20 years) believed we needed a two-layer cake. But I was thinking the first layer would be something simple and small, like the cake plate, and the real cake was all RL. Or at best the two layers were equal in size and complexity.
November 17, 2024 at 5:03 PM
The first layer of the cake is the last missing piece of the AGI puzzle. And it’s why we could use RL so effectively to solve problems that have an already simplified environment, like board games, but not apply it well to high-dimensional problems. Get the first layer right and RL becomes easy.
November 17, 2024 at 4:44 PM
The first step does all the heavy lifting of transforming the chaos of raw data into a highly simplified set of low-frequency feature signals (grandmother neurons). Once the world is so simplified, the RL step becomes so simple it is just a cherry on top. It’s why RL is so fast for us.
November 17, 2024 at 4:44 PM
As I see it, we have two layers: the unsupervised perception and prediction system, which dynamically learns to translate raw sensory data into invariant state features, and the second layer, which maps these state signals into actions with RL.
November 17, 2024 at 4:44 PM
His second layer is just wrong, however. We have no supervised learning hardware. He’s conflating how we use the hardware with what the hardware does. We teach each other using supervision, but the cake doesn’t have a mimic-learning hardware layer.
November 17, 2024 at 4:44 PM
Being a big RL believer since the 80s, I was put off by his cake theme reducing it to an insignificant cherry. To me it was the real meat of the problem. But I’m far more in tune with his thinking now.
November 17, 2024 at 4:44 PM
I mostly agree with the cake idea, but ChatGPT is not the cake. Not at all. But we aren’t far away from having the cake and ChatGPT 2.

What ChatGPT shows us is just how much human knowledge can be encoded in neural networks.
November 17, 2024 at 4:24 PM
But then the main training of ChatGPT is backprop training, which the brain doesn’t use at all. Nothing is telling the brain that it computed the wrong muscle twitch.

It’s all trial and error, with rewards telling us what is right and wrong.
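To make the contrast concrete, a toy sketch with a made-up task: backprop needs a per-output error signal, but a trial-and-error learner only ever sees a scalar reward, keeping random variations that earn more of it.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=4)                    # the learner's "muscle" weights
target = np.array([1.0, -2.0, 0.5, 3.0])  # never shown to the learner

def reward(w):
    return -np.sum((w - target) ** 2)     # only this scalar ever reaches the learner

best = reward(w)
for step in range(20_000):
    trial = w + rng.normal(scale=0.1, size=4)  # try a random variation
    r = reward(trial)
    if r > best:                          # keep changes that earned more reward
        w, best = trial, r
print(w.round(2))  # ends up near target with no gradient or error signal
```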
November 17, 2024 at 4:24 PM