Manuel Sánchez
manuel-sh.bsky.social
Building AI at scale in the enterprise world.

manuelsh.github.io
As someone deeply involved in AI, I wanted to take a closer look at the AI bubble. Are we heading towards a financial meltdown or just a price correction? And what would be the impact on the Big Tech firms?

See my latest post: manuelsh.github.io/blog/2025/ra...
Rationalizing the AI bubble | Manuel Sánchez Hernández
An analysis of the AI bubble through financial data: examining revenue gaps, circular deals, and whether we're heading for a meltdown or just a price correction
manuelsh.github.io
November 11, 2025 at 4:36 PM
I just published "Beyond Tokens: The Context-Window Perspective on LLMs, Memory, and Mind", a didactic exploration of the bridge between next-word prediction, agent frameworks, and the limits of current LLMs' consciousness (very limited!)

manuelsh.github.io/blog/2025/un...

#LLM #AI #ML
Beyond Tokens: The Context-Window Perspective on LLMs, Memory, and Mind | Manuel Sánchez Hernández
Exploring the bridge between next-word prediction, agent frameworks, and the limits of current LLMs' consciousness
manuelsh.github.io
July 3, 2025 at 11:24 AM
Launching TheorIA Dataset: if we want #AI models to reason about #physics, we first need to give them physics they can actually read.

manuelsh.github.io/blog/2025/la...
Launching TheorIA: A Machine-Readable Atlas of Theoretical Physics | Manuel Sánchez Hernández
If we want AI models to reason about physics, we first need to give them physics they can actually read.
manuelsh.github.io
May 25, 2025 at 9:31 PM
Reducing the level of hallucinations in LLMs is key to making usable agents.

According to benchmarks, the best model (Gemini 2.0 Flash-001) has a 0.7% hallucination rate. Of course, this depends on the task, context, etc.; real rates can be lower or higher. (1/3)
April 17, 2025 at 8:19 AM
There is a lack of curated datasets in theoretical physics to train better machine learning models. But what exactly is missing and how can we fill the gaps?
New post: manuelsh.github.io/blog/2025/datasets-for-advancing-Theoretical-Physics/
April 13, 2025 at 9:55 PM
#NeurIPS 2024 just released videos of all events! To help you find the best ones, check out my freshly published post with a curated selection of top ideas.
manuelsh.github.io/blog/2025/Se...
Selected ideas from NeurIPS 2024 | Manuel Sánchez Hernández
NeurIPS 2024, the largest AI research conference, provides a glimpse into the next frontiers. Here are some of the most exciting ideas presented.
manuelsh.github.io
February 1, 2025 at 12:32 PM
Consider that we humans process an estimated 50-100 terabytes of raw data annually (through all our senses) with a brain that consumes only ~20 watts.

This establishes a difficult-to-beat benchmark for the efficiency of intelligence.
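The figures above can be sanity-checked with a quick back-of-envelope calculation (the 50-100 TB/year and ~20 W numbers are the post's own estimates, not measured values):

```python
# Back-of-envelope: bytes of raw sensory data processed per joule of brain energy.
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds
POWER_W = 20                          # estimated brain power draw in watts

energy_j = POWER_W * SECONDS_PER_YEAR  # joules consumed per year, ~6.3e8 J

for tb_per_year in (50, 100):          # low and high ends of the estimate
    total_bytes = tb_per_year * 1e12
    print(f"{tb_per_year} TB/yr -> ~{total_bytes / energy_j:,.0f} bytes per joule")
```

Under these assumptions the brain handles on the order of 10^5 bytes of raw input per joule, which is the efficiency bar the post refers to.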
January 21, 2025 at 2:55 PM
Such an amazing chart presented at #NeurIPS2024 by @phillipisola.bsky.social, showing that the representations of image and text models converge as the models become more powerful. This has a lot of implications! It really deepens our understanding of intelligence.
January 12, 2025 at 11:12 PM
Just reviewed the talk that @eringrant.bsky.social gave at a #NeurIPS2024 tutorial where she beautifully and clearly explained how neural "universal" representations are built and what type of data is needed (non-Gaussian).

I hope the folks of NeurIPS publish it soon.
January 12, 2025 at 10:54 PM
See my notes on the amazing #NeurIPS2024 tutorial on building LLMs by @kylelo.bsky.social , @akshitab.bsky.social and @natolambert.bsky.social

Practical tips, key takeaways, and insights all in one place! 🚀

Dive in:
manuelsh.github.io/blog/2025/NI...
Opening the LLM pipeline | Manuel Sánchez Hernández
My notes on a great tutorial at NeurIPS 2024 on how to build a Large Language Model, with many practical tips.
manuelsh.github.io
January 7, 2025 at 1:47 PM