Running a finished LLM is very cheap. For the same energy cost as an hour of streaming video, you could ask ChatGPT 300 questions. Training models uses more, but amortized across how much usage they get, they're still cheap.
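A back-of-envelope check of that comparison. Both figures are rough assumptions for illustration — real per-query and per-stream numbers vary widely by model, resolution, and study:

```python
# Assumed figures, not measurements -- estimates in the literature vary a lot.
wh_per_chatgpt_query = 0.3     # assumed energy per query, in watt-hours
wh_per_hour_streaming = 90.0   # assumed energy for one hour of video streaming

# How many queries fit in one streaming-hour's energy budget?
queries_per_streaming_hour = wh_per_hour_streaming / wh_per_chatgpt_query
print(queries_per_streaming_hour)  # 300.0
```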
In the past I've written at length about how much good we can do with the technology despite the bad character of the people in charge. (Kind of like Edison and electricity.) These days I mostly keep my mouth shut. Tired of anger, pessimism, and outrage.
August 18, 2025 at 11:17 PM
Maybe it depends on the extent to which you and the other person are part of a shared community—how much you expect to see them again, the degree of trust shared between you? Arguing with strangers feels unproductive if you're not an Influencer.
August 18, 2025 at 4:12 PM
I am pondering how much I should engage with people spreading misinformation about AI on the internet. I commonly see people repeating untruths about how much water & electricity they use. But I've kind of burned myself out having arguments with people being Wrong On The Internet.
August 18, 2025 at 4:09 PM
But companies have also achieved success by being trustworthy and acting in the customer's interest, and they've built massive customer loyalty by doing so (e.g. Valve, Costco, Apple). For a product that handles extremely sensitive personal information, I think that's the way to go.
May 7, 2025 at 2:25 PM
AI companies optimizing for addictiveness in a race to the bottom is definitely a major risk. Many companies have achieved success by doing that (e.g. Facebook, TikTok).
May 7, 2025 at 2:15 PM
(It could be that training an LLM on an ethics textbook to create an AI conscience is overkill. We have guardrails, the OpenAI Model Spec, and Claude's Constitution, and those work…most of the time. Maybe stronger measures are needed?)
May 7, 2025 at 2:15 PM
You don't want to be overbearing with AI ethics. AI shouldn't be preachy and it should defer to human judgement up to a certain point. But there must be lines it won't cross. We trust humans based on our assessment of what they do and what they refuse to do, and I think the same will be true of AI.
May 7, 2025 at 2:15 PM
Having AIs that are trustworthy could be an important competitive advantage, especially given the sensitivity of the information they'll work with and the power they'll have in people's lives.
May 7, 2025 at 2:14 PM
Maybe the same is true for AI ethics. People want AIs that will follow their every command. But AIs without a strong moral compass will repeatedly fail their owners. They'll get bamboozled into revealing secrets and doing harm, and people will regret using them.
May 7, 2025 at 2:14 PM
I've heard it said that "security *is* capability". If you release an insecure system, before too long someone will exploit it and you'll be worse off than the competitor that took the time to get it right.
May 7, 2025 at 2:14 PM
I wonder if we could design agents to be virtuous. Like, train an LLM on some particular school of moral philosophy and stick it in the agent's decision-making loop. If a plan is unethical, require the agent to make a different plan.
May 7, 2025 at 2:14 PM
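A minimal sketch of that decision-making loop. Here `ethics_check` stands in for the hypothetical LLM trained on a school of moral philosophy, and `agent_plan` for the planning agent; the names, signatures, and retry limit are all made up for illustration:

```python
def plan_with_conscience(agent_plan, ethics_check, max_attempts=5):
    """Keep asking the agent for plans until the ethics model approves one.

    agent_plan(feedback) -> str: proposes a plan, optionally revising
        based on feedback about why the last plan was rejected.
    ethics_check(plan) -> (bool, str): hypothetical ethics-LLM verdict
        plus a reason when the plan is rejected.
    """
    feedback = None
    for _ in range(max_attempts):
        plan = agent_plan(feedback)
        approved, reason = ethics_check(plan)
        if approved:
            return plan
        feedback = reason  # require the agent to make a different plan
    raise RuntimeError("No acceptable plan found; escalate to a human.")
```

The point of the loop is that the ethics model sits outside the planner and can only veto, not act — rejections feed back as constraints on the next plan.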
I'm subscribed to his Google group. groups.google.com/g/komoroske-... I don't remember how I originally found him. But he publishes weekly updates that are kind of abstract and speculative but also thought-provoking and sometimes inspiring. He has big ideas.
Addictive & unhealthy AI will certainly happen. Facebook will build it if no one else does. All the more important that prosocial AI seize the initiative and try to outcompete it.