The New Oil
thenewoil.org
The New Oil
@thenewoil.org
Practical #privacy and simple #cybersecurity for everyone.

Articles posted =/= endorsement/agreement.

This account is no longer monitored. Please contact us […]

🌉 bridged from ⁂ https://mastodon.thenewoil.org/@thenewoil, follow @ap.brid.gy to interact
#apple and #google agree to change app stores after 'effective duopoly' claim

https://www.bbc.com/news/articles/c626rng1v63o

#uk #cma #politics
Apple and Google agree to change app stores after 'effective duopoly' claim
The UK's markets regulator says the proposed commitments "will boost the UK's app economy".
www.bbc.com
February 11, 2026 at 7:01 AM
Agents of Misfortune
People are putting way too much trust in AI systems, giving them the ability to act on their behalf. It’s a recipe for disaster. Don’t do it… at least not yet.

Artificial Intelligence, as a concept, is old. I wrote about this in a recent article that laid out the privacy risks associated with AI chat bots (including some solutions). But these large language models are now being given agency – that is, people are deputizing these systems to act _on their behalf,_ with little or no supervision. We’ve gone beyond asking ChatGPT to help us plan a trip and are now asking AI agents to book everything. We’re empowering AI bots to go shopping for us. To be clear, we’re giving AI our payment methods and allowing them to buy stuff for us. What could possibly go wrong?

Now let me just make a quick aside here. I would **love** to have a true AI assistant. I grew up with Star Trek, Star Wars and The Jetsons. I read most of Isaac Asimov’s robot books. I watched the movie _Her_ (which was very prescient and well worth a watch) and just thought how cool it would be to have that technology (minus the romantic aspects). I get why this stuff is compelling. And I actually believe (because I am an optimistic technologist) that some day we’ll realize the dream of safe, private, helpful AI assistants. But we’re not there yet.

## Agentic AI

The rise of the Large Language Model (or LLM) has been stunning. In just a couple of short years, chat bots like ChatGPT, Gemini, Claude and Copilot – and generative AI generally – have taken the world by storm. Everyone is racing to embed “AI” features in their products. It’s a tech gold rush like we’ve never seen – and I don’t believe I’m being the least bit hyperbolic when I say that. Tech companies are investing _hundreds_ of billions of dollars in AI tech.

The massive rush for AI tech is starting to have serious secondary impacts. Computer memory costs have skyrocketed. Electricity demand for AI data centers is soaring. People are being laid off and entire professions are being threatened. And tech stock prices have become extremely volatile based on AI news. But we’re just getting started. Even with the frenzy of new ways to use existing AI tech, we’re still coming up with novel uses for it. Buckle up – it’s going to be a bumpy ride. AI is a truly disruptive technology and we’ve only seen the tip of this iceberg.

Today I’m going to focus on one particular use of LLMs: agentic AI. This means giving AI _agency_ – allowing it to act autonomously and effect change in the real world. We have already developed protocols that allow chat bots like ChatGPT to not only interact with other applications on our computers and smartphones, but to actually control them and _do things_. (Did we learn nothing from decades of sci-fi movies and books?)

## AI Gone Wild

Have you heard of OpenClaw? (Previously ClawdBot and Moltbot.) It’s been all the rage since it debuted last November. You install this AI software and give it permissions to control your entire computer. Then you give it access to your online accounts and credit cards. This AI agent can buy things, send messages, read emails, tweak system settings, and control other apps, including web browsing – all on your behalf, _as you_. People are buying dedicated computers to run it. And it spawned a whole market, almost overnight, for plugins (called skills) that give it even more abilities. People are raving about it and actively throwing all caution to the wind, embracing the YOLO mindset.
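For readers wondering what “giving an AI agency” or a “skill” actually amounts to under the hood: in most agent frameworks it’s just an ordinary function whose name and description are handed to the model, and whatever the model decides to call gets executed on your machine, as you. Here’s a minimal, hypothetical sketch in Python – the names `Skill`, `buy_item` and `run_agent_step` are mine, not from OpenClaw or any real framework – showing how little stands between the model’s decision and a real purchase.

```python
# Hypothetical sketch of how a "skill" gets wired into an agent loop.
# Names are illustrative only, not taken from any real framework.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    description: str          # what the model reads when deciding to call it
    func: Callable[..., str]  # what actually runs on YOUR machine, as YOU

def buy_item(item: str, max_price_usd: float) -> str:
    # In a real agent this would hit a retailer API using your stored card.
    return f"Ordered '{item}' (up to ${max_price_usd:.2f})."

SKILLS = {"buy_item": Skill("buy_item", "Purchase an item online", buy_item)}

def run_agent_step(model_decision: dict) -> str:
    """Execute whatever tool call the model emitted.
    Note what's missing: no confirmation prompt, no spending cap,
    no allowlist -- the model's judgment is the only control."""
    skill = SKILLS[model_decision["tool"]]
    return skill.func(**model_decision["args"])

# A model response that says "buy it" is executed immediately:
print(run_agent_step({"tool": "buy_item",
                      "args": {"item": "USB hub", "max_price_usd": 40}}))
```

Notice that the only safety mechanism in this pattern is the model’s own judgment; everything that follows flows from that observation.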
As with every other viral product, the bad guys were quick to capitalize on the frenzy by creating malicious skills. And get this… someone created a social network for these bots called Moltbook, which was riddled with security problems that exposed people’s credentials. (I joked with my friends that I tried to join Moltbook, but failed the CAPTCHA to prove that I _was_ a bot.) There’s even a sketchy report (not verified, that I know of) that an instance of OpenClaw actually created a phone number for itself and synthesized speech so it could call its master. But here’s the thing… even if that particular story is not true, it actually _could_ be.

## Just Say No (for Now)

The AI space is crazy right now. All caution has been thrown to the wind. We’re moving way too fast and breaking lots of things. I’m not saying you should totally avoid all AI, though some people have definitely taken that stance. These new generative AI tools are truly game-changing. They’re powerful tools for productivity and I use them all the time. But you need to understand the risks and use them with care. See my previous article for helpful tips.

However, I would probably avoid most _agentic_ AI features until they can be made much safer. For example, I would not empower AI tools to send emails or messages, manage financial accounts, buy things, make stock trades, or manage your device settings. I wouldn’t be surprised if credit card companies and banks begin to alter their terms of service to specifically carve out liability exceptions when you use AI agents. That is, if you lose a lot of money or run up a huge bill because your AI bot went off the rails, you’ll still be accountable for the transactions. I do believe that some day this will be safe and even normal… but we’re not there yet.

## AI Agent as a Separate Persona

As I was thinking seriously about setting up my own ClawdBot instance on a dedicated Mac mini, I had a plan for making it safe… or _safer_. Again, as a software engineer, I love automating things. If I have to do something more than twice, I’ll find some way to automate it. It’s in my nature. And having a “smart” system that can understand complex commands written in plain English and also has a near-perfect memory is extremely compelling. But there was no _way_ I was going to give this system direct access to all my accounts and devices with full permissions.

My plan was to create a whole new persona. Not another “me”, but a new entity that would be given limited permissions to act on my behalf – like a virtual executive assistant. For example, I could create a new Apple iCloud account, associated with a new email address and maybe even a new phone number, and then maybe add that virtual user to my Apple Family group. I could then communicate with my agent via Messages or iCloud email. I could share read-only access to some of my calendars, share some files, etc. – just like I could with another human.

Many online services, like WordPress websites and social media accounts, already have the ability to give access to other humans with various levels of permissions, allowing them limited abilities to do things for you and sometimes _as_ you. Why not formalize the notion of AI agent access? AI agents could actually have _programmatic_ interfaces – machine interfaces that would be more efficient than graphical user interfaces geared towards humans. And frankly, we should formalize interfaces for bot agents to talk to each other while we’re at it. (“Have your bot call my bot.”)
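As a thought experiment, here’s what a formalized “agent access” grant could look like: a machine-readable statement of what the persona may read, what it may do, and what always requires human sign-off, much like OAuth scopes for third-party apps today. This is purely a hypothetical sketch in Python; none of these names or fields correspond to an existing Apple, WordPress or social media API.

```python
# Hypothetical "agent access grant" -- all names are invented for
# illustration; this is not an existing service's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentGrant:
    agent_id: str                    # the agent's own identity, not yours
    read_scopes: frozenset[str]      # things it may look at
    act_scopes: frozenset[str]       # things it may actually do on its own
    spend_limit_usd: float = 0.0     # hard cap per transaction
    requires_approval: frozenset[str] = frozenset()  # always ask a human first

# The "virtual executive assistant" persona: read-only calendar, mail and
# shared files; may draft mail but must ask before sending; spends nothing.
assistant_grant = AgentGrant(
    agent_id="assistant@example-family.test",
    read_scopes=frozenset({"calendar:read", "mail:read", "files:shared:read"}),
    act_scopes=frozenset({"mail:draft", "calendar:propose"}),
    spend_limit_usd=0.0,
    requires_approval=frozenset({"mail:send"}),
)

def is_allowed(grant: AgentGrant, action: str) -> str:
    """Classify a requested action as allowed, needs-approval, or denied."""
    if action in grant.requires_approval:
        return "needs human approval"
    if action in grant.act_scopes or action in grant.read_scopes:
        return "allowed"
    return "denied"

print(is_allowed(assistant_grant, "calendar:read"))    # allowed
print(is_allowed(assistant_grant, "mail:send"))        # needs human approval
print(is_allowed(assistant_grant, "payments:charge"))  # denied
```

The important design choice here is that the agent’s identity is its own, not yours, so everything it does can be attributed, constrained and revoked independently of your accounts.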
I think we need a new paradigm for our systems that specifically takes AI agents into account. Even before agentic AI, we had financial aggregation services like Mint.com, to which we wanted to give limited, read-only access to our bank and credit account balances and transactions, so they could tell us what we were spending and present a unified view of our finances. We need to extend this paradigm for AI agents and build in guardrails, and I think we should treat these agents as distinct personas, separate from ourselves.
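To make the guardrails idea concrete, here’s one hedged sketch of how that aggregator model could be extended for an agent persona: the agent gets the same read-only view an aggregator would, but anything that would move money is downgraded to a proposal awaiting human approval, and every action is logged under the agent’s own identity. The class and method names below are invented for illustration; this is not any bank’s or vendor’s actual API.

```python
# Hypothetical sketch: the read-only aggregator model extended with
# guardrails for an agent treated as its own persona. Names are made up.

import datetime

class GuardedBankAccess:
    """Read-only financial view for an agent, plus an approval queue.
    The agent can see balances (like an aggregator), but anything that
    moves money becomes a pending request attributed to the agent,
    for a human to approve or reject later."""

    def __init__(self, agent_id: str, balances: dict[str, float]):
        self.agent_id = agent_id
        self._balances = balances
        self.pending_requests: list[dict] = []
        self.audit_log: list[str] = []

    def get_balances(self) -> dict[str, float]:
        self._audit("read balances")
        return dict(self._balances)      # a copy: the agent can't mutate state

    def request_payment(self, payee: str, amount_usd: float) -> str:
        # The agent never executes the payment itself -- it only proposes it.
        self.pending_requests.append(
            {"agent": self.agent_id, "payee": payee, "amount_usd": amount_usd}
        )
        self._audit(f"requested payment of ${amount_usd:.2f} to {payee}")
        return "queued for human approval"

    def _audit(self, action: str) -> None:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        self.audit_log.append(f"{stamp} [{self.agent_id}] {action}")

# The agent persona gets its own handle; every action is logged under it.
access = GuardedBankAccess("assistant-bot", {"checking": 1523.42})
print(access.get_balances())
print(access.request_payment("Example Utility Co.", 89.50))
print(*access.audit_log, sep="\n")
```

An approval queue plus an audit trail won’t stop an agent from proposing something dumb, but it does keep a hallucinating bot from spending your money unsupervised – which is the whole point of treating it as a separate persona.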
firewallsdontstopdragons.com
February 10, 2026 at 7:00 AM