warpefelt.com
Henrik Warpefelt
@warpefelt.com
Science mercenary and magical thinking rock enthusiast. I mostly talk about data, AI, and games.

Consulting inquiries: https://www.warpefelt.com/
Overall, I think we'll continue to see a lot of productivity gains from genAI, but as it stands it's not going to upend entire industries. Like any technological revolution, it will change how people do their jobs and which jobs they do. Think plate setting in the analog vs. digital era.
September 29, 2025 at 8:45 PM
One thing I think is missing from this report is the impact of AI on small- to medium-sized businesses. AI can assist in many ways (marketing etc., as above), but rolling out dedicated solutions for smaller businesses is still problematic. A non-giant company can't really bet big on a 5% chance of success.
September 29, 2025 at 8:45 PM
Finally: I remain moderately bearish on generative AI as a complete disruptor of every single industry. It's having a big impact on tech and media, but I strongly suspect we'll see those winds shift in the next few years as we discover the limits of current genAI tech.
September 29, 2025 at 8:45 PM
8. Employment in most industries isn't actually that affected by generative AI. Tech and media are, as mentioned, being hit heavily, but other industries aren't expected to see much change. The promised AI revolution is still not truly on the horizon. AI seems to be a good tool but not a killer app.
September 29, 2025 at 8:45 PM
7. The way to win in the AI race is to land small and visible wins in narrow workflows. Fast deployment and integration are preferable to massive systems. Extrapolating from the report, this is likely connected to the widespread shadow usage of LLMs: it's super easy to just open a webpage and do "LLM stuff".
September 29, 2025 at 8:45 PM
6b. This really hammers home an important message: Data is a commodity and having it leak can be disastrous for both people and companies. What happens to stuff put into LLMs is a huge security concern, which makes the potential shadow usage a lot more problematic.
September 29, 2025 at 8:45 PM
6. Trust is a key concern for enterprises. Companies want to trust the vendor, trust that their data is handled properly, and trust that the vendor understands the company's workflow and adapts as needed. Companies also want to see improvement over time and minimal disruption to existing tools.
September 29, 2025 at 8:45 PM
5. The main blocker for AI tool adoption is that the tools are kind of bad. The UX is poor, the output is of poor quality, or tools don't work as expected. The report also highlights the lack of adaptability in tools as a major issue. Basically, ChatGPT is still better than internal tools.
September 29, 2025 at 8:45 PM
4. Most AI investment (50%) is in sales and marketing. This tracks anecdotally: LLMs are good at generating derivative text and advertising is pretty repetitive. It's probably easier for companies to get good ROI on investments here.
September 29, 2025 at 8:45 PM
3. There's a HUGE shadow economy for AI usage! Only about 40% of companies surveyed have an enterprise LLM subscription, but workers from 90% of the companies surveyed use LLMs regularly. People seem ready to adopt LLMs as tools, but corporations are lagging. This could be a massive infosec problem!
September 29, 2025 at 8:45 PM
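A quick back-of-the-envelope sketch of why that gap implies so much shadow usage. The 40% and 90% figures are the report's; the lower bound assumes every company with a subscription is also among the companies whose workers use LLMs:

```python
# Lower bound on companies with "shadow" LLM usage, from the report's figures.
with_subscription = 0.40   # companies with an enterprise LLM subscription
with_worker_usage = 0.90   # companies whose workers use LLMs regularly

# Even if every subscribed company also has worker usage, unsanctioned
# usage must occur at the remaining companies.
shadow_lower_bound = with_worker_usage - with_subscription
print(f"Shadow LLM usage at >= {shadow_lower_bound:.0%} of companies")  # >= 50%
```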
2b. General purpose AI projects (think GPT/Claude wrapper) do much better - about a 40% success rate. That actually beats the overall software project success rate, although these are measured using different metrics so comparability is difficult.
September 29, 2025 at 8:45 PM
2. AI projects just aren't very successful. As per the report, only 5% of custom enterprise AI tools actually go into production. The Standish Group reported in 2020 that about 31% of software projects succeed, so 5% is pretty dire even by software standards, even accounting for different measurement modes.
September 29, 2025 at 8:45 PM
1. Most industries are seeing low disruption, with the exceptions of Tech & Media. Not entirely shocking considering what kinds of AI services exist, i.e. text and media generation. Tech is also not super regulated, and has a tradition of moving fast and breaking things.
September 29, 2025 at 8:45 PM
Paper tl;dr: We construct 3 nested concepts (landmarks, monuments, beacons) that help us describe how complex generated artifacts are perceived by players, and how these artifacts can be composed to support player understanding of the game world.
September 24, 2025 at 2:09 PM
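As a rough illustration only: assuming the nesting runs from broadest to narrowest (my reading of the order in the tl;dr, not a definition taken from the paper), the three concepts could be sketched as a type hierarchy:

```python
# Sketch of the three nested concepts as a type hierarchy. The nesting
# direction (every monument is a landmark, every beacon is a monument) is
# an assumption read off the tl;dr, not the paper's formal definitions.
class Landmark: ...
class Monument(Landmark): ...
class Beacon(Monument): ...

# Composition: a generated game world can mix artifacts from all three levels.
world: list[Landmark] = [Landmark(), Monument(), Beacon()]
```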
Holy crap. That's a really, really good point. That makes this even more of an ethics nightmare.
May 21, 2025 at 5:46 PM
The ethical way of doing this would be as some variant of participant observation, possibly with some kind of digital aid like scraping. However, acquiring consent from these communities is CRITICAL for a study like this. In this case consent was obviously not acquired, which is deeply problematic.
May 21, 2025 at 5:18 PM
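A minimal sketch of what a consent-aware digital aid could look like; fetch_messages and the opt-in registry are hypothetical stand-ins for a real consent workflow and a ToS-compliant data source:

```python
# Keep only messages from users who explicitly opted in to the study.
# `fetch_messages` and `opted_in` are hypothetical stand-ins, not a real API.
def collect_consented(fetch_messages, opted_in):
    kept = []
    for msg in fetch_messages():
        if msg["author_id"] in opted_in:
            kept.append({"author_id": msg["author_id"], "text": msg["text"]})
    return kept
```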
In essence, the social contract is that you join a Discord *community*. The idea is that you participate on equal terms with the other people in that community. However, these researchers didn't participate. They just scraped the data and didn't contribute to the community.
May 21, 2025 at 5:18 PM
In their defense the authors say that they used public Discord servers and anonymized user data, but this only prevents part of the harm to the users in these servers. There could be an argument here for this being publicly available data, but I don't think that holds water.
May 21, 2025 at 5:18 PM
To clarify the problem: This paper uses scraped data from a bunch of Discord servers in violation of Discord's data scraping policies. The authors claim to have gotten consent as per the arXiv paper checklist, but the word "consent" doesn't appear in the paper.
May 21, 2025 at 5:18 PM