James Harris
@dawnpaladin.bsky.social
Software engineer in North Carolina. He/him.
Looking forward to seeing your social media agent!
January 16, 2026 at 1:28 PM
Signals are amazing. Angular baked fetch() into a signal and it makes my life so easy. angular.dev/guide/signal... Moar declarative is moar better.
Angular
The web development framework for building modern apps.
angular.dev
September 17, 2025 at 12:43 PM
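The "fetch as a signal" pattern from the post above can be sketched framework-free. This is an illustrative TypeScript toy, not Angular's actual resource API; the `asyncValue` helper and the stubbed fetcher are invented for the example. The point it shows: a request's status and value become plain synchronous reads, so consuming code stays declarative instead of juggling callbacks.

```typescript
type Status = "loading" | "resolved" | "error";

interface AsyncValue<T> {
  status: () => Status;          // synchronous read of request state
  value: () => T | undefined;    // synchronous read of the result
  done: Promise<void>;           // settles when the request finishes
}

// Wrap an async call so its progress is readable declaratively.
// `fetcher` stands in for an HTTP request; it is stubbed below so the
// example runs without a network.
function asyncValue<T>(fetcher: () => Promise<T>): AsyncValue<T> {
  let status: Status = "loading";
  let value: T | undefined;
  const done = fetcher()
    .then((v) => { value = v; status = "resolved"; })
    .catch(() => { status = "error"; });
  return { status: () => status, value: () => value, done };
}

async function demo(): Promise<void> {
  const user = asyncValue(async () => ({ name: "Ada" })); // stubbed fetch
  console.log(user.status());                    // "loading"
  await user.done;
  console.log(user.status(), user.value()?.name); // "resolved" "Ada"
}
demo();
```

Angular's real version additionally re-runs the request when the signals it depends on change; the toy above only covers the read side.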
There are a few books named "Nexus" - is the one by Yuval Noah Harari the one you're talking about?
September 8, 2025 at 2:37 PM
Running a finished LLM is very cheap. For the same energy cost as an hour of streaming video, you could ask ChatGPT 300 questions. Training models uses more, but when you amortize across how much usage they get, they're still cheap.

what-uses-more.com

andymasley.substack.com/p/a-cheat-sh...
August 19, 2025 at 1:40 AM
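The ratio in the post above is easy to sanity-check arithmetically. The streaming figure below is an assumed round number for illustration only, not a measurement; the post's actual claim is just the 300:1 ratio.

```typescript
// Illustrative arithmetic only: if an hour of streaming used ~100 Wh
// (an assumption, not a sourced figure), the post's 300:1 ratio implies
// roughly a third of a watt-hour per question.
const streamingWhPerHour = 100;        // assumed for illustration
const questionsPerStreamingHour = 300; // ratio from the post
const whPerQuestion = streamingWhPerHour / questionsPerStreamingHour;
console.log(whPerQuestion.toFixed(2)); // "0.33"
```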
In the past I've written at length about how much good we can do with the technology despite the bad character of the people in charge. (Kind of like Edison and electricity.) These days I mostly keep my mouth shut. Tired of anger, pessimism, and outrage.
August 18, 2025 at 11:17 PM
That using ChatGPT destroys the environment through using a ton of water and electricity.
August 18, 2025 at 10:59 PM
Maybe it depends on the extent to which you and the other person are part of a shared community—how much you expect to see them again, the degree of trust shared between you? Arguing with strangers feels unproductive if you're not an Influencer.
August 18, 2025 at 4:12 PM
I am pondering how much I should engage with people spreading misinformation about AI on the internet. I commonly see people repeating untruths about how much water & electricity they use. But I've kind of burned myself out having arguments with people being Wrong On The Internet.
August 18, 2025 at 4:09 PM
Lawful Good, for sure.
July 22, 2025 at 12:38 PM
Ooh, pick me!
May 16, 2025 at 2:29 PM
IDK, killing bad social networks sounds like something that would be worth doing for free. 😄 I can think of worse ways to spend my time.
May 8, 2025 at 12:40 AM
But companies have also achieved success by being trustworthy and acting in the customer's interest, and they've built massive customer loyalty by doing so (e.g. Valve, Costco, Apple). For a product that handles extremely sensitive personal information, I think that's the way to go.
May 7, 2025 at 2:25 PM
AI companies optimizing for addictiveness in a race to the bottom is definitely a major risk. Many companies have achieved success by doing that (e.g. Facebook, TikTok).
May 7, 2025 at 2:15 PM
(It could be that training an LLM on an ethics textbook to create an AI conscience is overkill. We have guardrails, the OpenAI Model Spec, and Claude's Constitution, and those work…most of the time. Maybe stronger measures are needed?)
May 7, 2025 at 2:15 PM
You don't want to be overbearing with AI ethics. AI shouldn't be preachy and it should defer to human judgement up to a certain point. But there must be lines it won't cross. We trust humans based on our assessment of what they do and what they refuse to do, and I think the same will be true of AI.
May 7, 2025 at 2:15 PM
Having AIs that are trustworthy could be an important competitive advantage, especially given the sensitivity of the information they'll work with and the power they'll have in people's lives.
May 7, 2025 at 2:14 PM
Maybe the same is true for AI ethics. People want AIs that will follow their every command. But AIs without a strong moral compass will repeatedly fail their owners. They'll get bamboozled into revealing secrets and doing harm, and people will regret using them.
May 7, 2025 at 2:14 PM
I've heard it said that "security *is* capability". If you release an insecure system, before too long someone will exploit it and you'll be worse off than the competitor that took the time to get it right.
May 7, 2025 at 2:14 PM
I wonder if we could design agents to be virtuous. Like, train an LLM on some particular school of moral philosophy and stick it in the agent's decision-making loop. If a plan is unethical, require the agent to make a different plan.
May 7, 2025 at 2:14 PM
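The decision loop described in the post above can be sketched. Everything here is a hypothetical illustration: the `isEthical` stub stands in for the imagined ethics-trained LLM, and the action names and retry budget are invented for the example.

```typescript
type Plan = { action: string };

// Stub conscience: in the post's design this would be an LLM trained on
// a school of moral philosophy; here it's a hard-coded deny-list.
function isEthical(plan: Plan): boolean {
  const forbidden = ["exfiltrate-secrets", "deceive-user"];
  return !forbidden.includes(plan.action);
}

// The agent proposes plans in order of preference; the conscience can
// veto each one, forcing a new plan, up to a retry budget. If nothing
// acceptable is found, the agent refuses rather than acts.
function decide(
  propose: (attempt: number) => Plan,
  maxAttempts = 3
): Plan | null {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const plan = propose(attempt);
    if (isEthical(plan)) return plan;
  }
  return null;
}

const candidates: Plan[] = [
  { action: "exfiltrate-secrets" }, // vetoed
  { action: "summarize-report" },   // accepted
];
const chosen = decide((i) => candidates[Math.min(i, candidates.length - 1)]);
console.log(chosen?.action); // "summarize-report"
```

The interesting design choice is the `null` branch: refusing when no plan passes is exactly the "lines it won't cross" behavior argued for in the posts above.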
I'm subscribed to his Google group. groups.google.com/g/komoroske-... I don't remember how I originally found him. But he publishes weekly updates that are kind of abstract and speculative but also thought-provoking and sometimes inspiring. He has big ideas.
About
groups.google.com
May 7, 2025 at 1:07 PM
Addictive & unhealthy AI will certainly happen. Facebook will build it if no-one else does. All the more important that prosocial AI seize the initiative and try to outcompete it.

This guy has some interesting thoughts on how that might work: docs.google.com/document/d/1...
komoroske.com/bits-and-bobs
Author : Alex Komoroske alex@komoroske.com What is this? During the week I take notes on ideas that catch my attention during conversations. Once a week I take a few hours to take a step back and try...
docs.google.com
May 7, 2025 at 12:31 AM