Stanislav Fort
stanislavfort.bsky.social
AI + security | Stanford PhD in AI & Cambridge physics | techno-optimism + alignment + progress + growth | 🇺🇸🇨🇿
What are they worried about?
October 15, 2025 at 6:53 PM
💯 this
October 3, 2025 at 7:40 PM
I doubt the AI overviews are a big deal in the total numbers tbh. Gemini is extremely useful and, for example, I alone am running well over 1B tokens a day through it.
July 26, 2025 at 10:46 PM
I totally disagree. Bluesky has an unproductive anti-AI mindset, often propagated by people who are nominally experts (e.g. professors) but who have not kept up with the pace of change in AI and are therefore practically useless at judging its potential. The discourse on here is surprisingly bad when it comes to AI.
July 19, 2025 at 4:15 PM
This is obviously not correct. "The wealthy" are not responsible for climate change. Industrial civilization as a whole is, but because it also produces so much net positive value for humans, it's a good trade-off to have made. The zero-sum mindset you're displaying misdiagnoses the issue.
May 8, 2025 at 1:13 AM
This is a very weak argument, likely based on vibes. SpaceX is both very efficient (its price per ton to orbit is very low, hence the demand from customers) and does things that no other company or government can (massive reusability of orbital rockets). You should check out the Falcon 9 track record.
March 8, 2025 at 2:03 AM
In a narrow subfield it generally correlates with that, yes. But that's off topic, you should address the point about functional equivalence I made if you want to continue the discussion.
February 13, 2025 at 10:08 AM
Successfully acting as if it had knowledge is functionally equivalent to having knowledge. The distinction you are making is a selective call for rigor that even humans would have a hard time passing.
February 13, 2025 at 10:05 AM
Yet you misread a simple plot, drew an obviously wrong conclusion, and ran with it because it supported your biases.
February 13, 2025 at 10:03 AM
I disagree and happen to be a co-author on an early paper addressing this very question: arxiv.org/abs/2207.05221
Language Models (Mostly) Know What They Know
February 13, 2025 at 10:01 AM
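The paper's claim that larger models are "well-calibrated" can be illustrated with a small sketch. This is not the paper's evaluation code, just a minimal illustration of what calibration means: a model's stated confidence should match its empirical accuracy. The helper name `calibration_gaps` is made up for this example.

```python
# Illustrative sketch (not the paper's actual code): bin predictions by
# stated confidence, then compare each bin's average confidence to the
# observed accuracy in that bin. Small gaps mean good calibration.
def calibration_gaps(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, ok))
    gaps = []
    for b in bins:
        if b:
            mean_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            gaps.append(abs(mean_conf - accuracy))
    return gaps

# Perfectly calibrated toy data: confidence 0.8, and 8 of 10 answers correct.
conf = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(max(calibration_gaps(conf, hits)))  # gap is ~0 for calibrated data
```

A miscalibrated model (say, confidence 0.9 with 50% accuracy) would show a large gap in the corresponding bin.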
I think you are confusing knowing things with being sentient. These are very different concepts. In the end, I do not practically care whether the LLM has qualia as long as it functionally performs as if it knew things (and it does exactly that).
February 13, 2025 at 9:59 AM
I literally use AI (mainly o1 pro) daily in my research. It is genuinely helpful on the level of a graduate student research assistant. Many highly technical people agree, see for example: marginalrevolution.com/marginalrevo...
o1 pro - Marginal REVOLUTION
February 13, 2025 at 9:57 AM
It can, and in practice would, just use a calculator or a Python interpreter and get 100%. Here they were only testing how well it can do math in its head. The fact that it struggles with numbers of 10 digits and above is no surprise -- humans are even weaker at this.
February 13, 2025 at 6:29 AM
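The tool-call point above can be sketched in a few lines. This is a hypothetical `calculator_tool` helper, not any specific model API: the model emits an arithmetic expression as text, and a trusted interpreter evaluates it exactly, so multi-digit multiplication stops being a test of mental math at all.

```python
# Minimal sketch of delegating arithmetic to a tool: instead of doing
# the math "in its head," a model would emit an expression string and a
# calculator tool evaluates it with Python's arbitrary-precision ints.
def calculator_tool(expression: str) -> int:
    """Evaluate a simple 'A op B' integer expression exactly."""
    a, op, b = expression.split()
    a, b = int(a), int(b)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

# Two 10-digit operands: trivial for the tool, hard to do mentally.
result = calculator_tool("9876543210 * 1234567890")
print(result)  # exact product, no approximation
```

Because Python integers are arbitrary precision, the same call is exact at 100 digits as at 10.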
You're reading the graph wrong. These are the **numbers of digits** in the numbers. It's multiplying two numbers each of which has more than 10 digits. Can you do that in your head?
February 13, 2025 at 6:24 AM
Can you multiply 10-digit numbers in your head while also having PhD-level knowledge in basically any field? If anything, this mind seems superior to essentially any human in almost anything, including mental math. And of course it can always make a tool call to a calculator and get 100% accuracy.
February 13, 2025 at 6:22 AM
Nothing wrong with that, that's why I'm on social media in the first place. If they've done a great job, I want to hear about it!
February 9, 2025 at 1:34 PM
What do you teach? Whatever it is, chances are a typical student will be much better off knowing how to use AI than retaining whatever minimal factual knowledge of your field they'd actually remember long term.
February 5, 2025 at 6:55 PM
If you really think that "AI is almost entirely a scam", your opinion on anything technical can safely be discarded. I know that Bluesky is a bit of an echo chamber in its anti-AI sentiment. The "explosion" comment is just a (misinformed) cherry on top. Falcon 9 boosters land 20+ times each; no one else has landed one even once.
January 20, 2025 at 6:28 AM
Sure, the fragmentation argument is a solid one. It has many downsides, e.g. lower "state" capacity of Europe as a whole, but it certainly limits the reach of powerful individuals.
January 19, 2025 at 10:28 PM
How well did humans do on this tho?
January 19, 2025 at 10:25 PM
A strong European example: a billionaire was the Czech prime minister, and he'll likely be one again after the next election. That is an even more direct level of influence.
January 19, 2025 at 10:23 PM
Why do you think Europe is better in terms of oligarchy? It doesn't seem meaningfully different from the US to me, and both perform among the best in the world on this metric anyway. It certainly isn't "worth" Europe's lack of future-defining industries.
January 19, 2025 at 10:02 PM
It's a huge issue! Look at the technologies that will define the future: batteries, access to space, AI. Europe is not leading in any of them. We need a strong wealth-generation engine to sustain social welfare. Regulation is also much easier if its targets are home-grown.
January 19, 2025 at 9:58 PM
There is the practical problem of teachers being expensive. AI is way cheaper. No wonder it looks like a more plausible approach to many.
January 19, 2025 at 2:23 PM