Abel_TM
@abeltm.bsky.social
Research Scientist. Implementing reasoning in AI. Theory and implementation of open-ended reasoning algorithms for long-term planning, robotics, math, protein design, and science.
Important questions on AI addressed in this series. My comments, shared with the (somewhat quieter) Bluesky audience:

- Should we teach AI like children? Learning like children requires the proper cognitive architecture, which AIs lack (akin to raising a chimp as a child)
1/
Sixth (and last) episode of our "Nature of Intelligence" podcast is out!

This time I'm in the hot seat -- Abha interviews me about "AI's changing seasons" and lots of other AI-related stuff.

www.santafe.edu/culture/podc...
December 12, 2024 at 1:46 PM
AGI understood as “universal human replacement technology” is technically, socially and epistemologically rubbish
December 3, 2024 at 9:39 AM
Huge “foundation” models are the antithesis of a general problem-solving intelligence: their solipsistic thinking pushes a single perspective, while new discoveries are based on novel approaches to data
December 1, 2024 at 6:17 PM
Any computational system is limited by its resources and architecture. Some (e.g. LeCun) say this means there is no such thing as general intelligence (GI), including in humans. In fact, GI is the capacity to create tools that overcome one's own architectural limitations… like humans do
December 1, 2024 at 2:49 PM
So glad that there is much less of the hyped positive and negative AGI/ASI forecasting here than on ex-Twitter. People seem more focused on research. 👍
November 30, 2024 at 5:21 PM
Almost AGI?

- Took a simple planning task on a chessboard (steps to reach a given square, per piece)

- Ran it 3 times: original, mirrored positions (same result), again original

- None of:
GPT-4o
o1-mini/preview
Sonnet-3.5-new
Gemini-1.5-Pro-002

got it right in any of the trials and...
November 29, 2024 at 10:17 AM
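For reference, the baseline the models were compared against is trivial to compute classically: breadth-first search gives the minimum number of moves for a piece to reach a target square. A minimal sketch (the exact prompt isn't given in the post; the move tables and function name here are my own, for illustration):

```python
from collections import deque

# Illustrative move sets; only leaper-style pieces shown for brevity.
MOVES = {
    "knight": [(1, 2), (2, 1), (-1, 2), (-2, 1),
               (1, -2), (2, -1), (-1, -2), (-2, -1)],
    "king": [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)],
}

def min_moves(piece, start, target):
    """Fewest moves for `piece` from `start` to `target` on an 8x8 board."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        if (x, y) == target:
            return dist
        for dx, dy in MOVES[piece]:
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < 8 and 0 <= nxt[1] < 8 and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable (cannot happen for knight/king on 8x8)
```

Mirroring the board, as in the trials above, must leave the answer unchanged, e.g. `min_moves("knight", (0, 0), (3, 4)) == min_moves("knight", (7, 7), (4, 3))` — a cheap consistency check the models failed.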
AI systems that learn from first principles are good; systems that only learn from first principles are bad.
November 28, 2024 at 5:01 PM
.@swarat.bsky.social do you have a different perspective on AI reasoning?
Reasoning is goal-driven planning with guarantees of correctness.

This covers both the what and the how. It doesn't make sense to do open-ended exploration to solve a concrete problem. At the same time, introducing 'reasoning' mistakes at even a few steps invalidates the overall result
November 28, 2024 at 2:58 PM
🤔 We are still taking shortcuts on AI reasoning with a mindset of stepwise progress. That will be useful for many tasks but will not solve the reasoning challenge.
Ultimately, we'll need to face it with a conceptual understanding. It isn't so much difficult as a road not taken
I had a great chat with Tim Scarfe of Machine Learning Street Talk (x.com/mlstreettalk) on AI reasoning, program synthesis, and AI for math and science. Here's the video -- please reach out if you would like to discuss these topics more! www.youtube.com/watch?v=XFMk...
What is “reasoning” in modern AI?
November 26, 2024 at 2:59 PM
Current AIs don't even have a criterion for deciding whether they have achieved their target! That's what you get with purely empirical approaches: blind and futile reliance on the right architecture being emergent
November 23, 2024 at 12:05 AM
Imagine having a definition of understanding and intelligence under which we could assess LLMs. Attributing or excluding those properties is relevant only as a way of estimating these systems' current and future capabilities under the applied definition and its underlying mechanisms.
November 21, 2024 at 4:50 PM
People say AI is about to reach and surpass human intelligence, measured by the way AI answers questions.

Wonder if intelligence isn't better measured by the way you ask questions when facing new problems.

How close are we to that?
November 21, 2024 at 8:08 AM