Luiza Jarovsky, PhD
luizajarovsky.bsky.social
Co-founder of www.aitechprivacy.com (1,300+ participants). Author of www.luizasnewsletter.com (87,000+ subscribers). Mother of 3.
Actually, THERE IS an easy and cheap way for Sam Altman to make Elon Musk drop the lawsuit and maybe save OpenAI from bankruptcy: 😱
January 19, 2026 at 11:46 AM
The autocompletion of the soul 😓
January 18, 2026 at 1:37 PM
I sincerely didn't think OpenAI would go bankrupt soon.

But with the launch of ChatGPT ads, Sam Altman's announcement of his Neuralink-like Merge Labs, and Musk's Friday filing in his lawsuit against OpenAI, all in the SAME WEEK...

I think they might not survive 2026.
January 17, 2026 at 7:33 PM
The state of AI detection in 2026:
January 17, 2026 at 6:18 PM
BREAKING: OpenAI has officially launched ads on ChatGPT.

In light of the clip below, is the company... dying?
January 17, 2026 at 4:31 PM
It is
January 16, 2026 at 12:11 PM
BREAKING: According to the Wall Street Journal, Meta laid off 1,500 people in its Metaverse division.

I remember when we were told that the future of work was meetings with avatar participants.

A lot of what we hear today about 'the future of AI' is wishful projection.
January 15, 2026 at 12:31 PM
Maybe Meta should now rename itself "superintelligence"?

They should keep renaming the company until it works...
January 14, 2026 at 4:18 PM
Daniel is right
January 14, 2026 at 3:05 PM
The state of AI hype in 2026:
January 14, 2026 at 2:15 PM
🚨 Most people did not pay attention, but China's proposed law on AI anthropomorphism is one of the world's STRICTEST AI laws.

(and it helps demystify the idea that China does not regulate AI or that the only way to be a competitive player in the AI race is through deregulation)

My article below.
January 13, 2026 at 9:56 PM
Unpopular opinion: fragmenting general-purpose AI systems is a POSITIVE development.

The future of AI chatbots lies in niche, targeted chatbot applications that can address the risks of specific contexts and the vulnerabilities of specific user groups.

My article below.
January 12, 2026 at 9:43 PM
Existing guardrails are insufficient to embed AI chatbots into toys.

The proposed ban makes sense.
January 12, 2026 at 8:29 PM
In 2026, AI companies will need to prove that AI is not a sophisticated marketing gimmick and that it can meaningfully support people and society.

The year has barely started, and we are already observing an interesting manifestation of AI pragmatism.

More details below.
January 12, 2026 at 4:10 PM
Many in AI don't realize that the mantra "the fault lies with the user" ignores the very reason the law exists.

Yes, there will be liability if a user acts illegally, but technology is an enabling factor.

Rules, barriers, and oversight shape behavior and DRASTICALLY reduce harm.
January 11, 2026 at 6:25 PM
This is about as BIZARRE as it gets
January 11, 2026 at 1:21 PM
Update on the Grok scandal:
January 9, 2026 at 1:37 PM
AI is a tool
January 7, 2026 at 4:16 PM
Most people haven't realized it yet, but the time to regulate AI is now.
January 7, 2026 at 2:09 PM
The age of scams is here.
January 6, 2026 at 8:00 PM
The case for AI regulation (and why you should care).

Link below.
January 6, 2026 at 4:32 PM
The AI flood.

Ironically, the account in the screenshot below is also AI.

Welcome to 2026
January 5, 2026 at 7:10 PM
Many in AI seem to think that liability rules do not apply to AI systems.

Also, I often read comments assuming that AI companies are immune and never have to compensate anyone.

This is wishful thinking.

Here's all you need to know about AI liability in 2026 (link below).
January 5, 2026 at 2:37 PM
Correct
January 4, 2026 at 5:44 PM
Isn't it strange that he is an obsessive notetaker (a good habit), yet for everyone else he wants to push AI devices that remember everything?
January 4, 2026 at 4:07 PM