Jared Harris
Jared Harris
@jedharris.bsky.social
Infovore, devoted grandparent, practical cook, housing remodeler and maintainer, walkable community master developer
Going through all the comments and many of the accounts I learned a lot about the mental ecology of BlueSky
October 20, 2025 at 6:05 AM
In the end I muted 130 accounts that just made content-free negative or insulting comments. I looked at each of their accounts and only muted ones that I wouldn't miss. Some were toxic but most were just quoting others.
October 20, 2025 at 6:00 AM
I went through every comment. I found a couple of people I followed, about 30 people who made interesting but not technical comments, and 11 people who posted content-free insults but who were otherwise interesting.
October 20, 2025 at 5:54 AM
Then you aren't in the debate. He said "there are only two positions in the debate about AI". You aren't advocating for either, so the statement doesn't apply to you.

The 800+ people replying to him are ranting a lot louder! Plus they clog up the sidewalk.
October 17, 2025 at 7:13 AM
It did produce an incredibly target-rich thread for muting
October 17, 2025 at 7:08 AM
I tend to agree. I prefer muting; I only block when the poster seems likely to be very aggressive.
May 6, 2025 at 5:18 PM
I have been aggressively muting (and occasionally blocking) AI Haters (those who have nothing interesting to say, which is most of them). This has helped a lot to clean up my feed. I will investigate how to turn my mutes / blocks into a list others can use.
May 2, 2025 at 7:36 PM
Good points! What are the most important recent paradigm shifts (and some of the papers)?
April 30, 2025 at 8:16 PM
this sort of "language use" is what the "AI is dumb" crowd would point out as evidence of "AI parroting"
March 13, 2025 at 9:50 PM
This is great! BlueSky needs more research discovery tools
February 21, 2025 at 6:07 PM
Are you serious? As long as we have Wikipedia, do we need doctors? Will you be calling up your literature professor friends to talk over poems whenever you have questions?

We have *both* AI *and* people. AI can help us by complementing people and making us all more effective.
January 27, 2025 at 6:51 PM
We're also empowered to have this conversation.

Sounds like on the whole you are not a fan of giving people more power.
January 27, 2025 at 6:41 PM
Do you trust Goldman Sachs?
January 27, 2025 at 6:38 PM
Great to know that from your perspective we have all the understanding of diseases and inflation that we want or need. Unfortunately that isn't how things look from where I sit.

Should we cancel all the literature classes that study poetry?
January 27, 2025 at 5:53 PM
Do you agree that if / when AI empowers individuals, that is democratizing?
January 27, 2025 at 5:47 PM
The newest DeepSeek model matches the best previous models but is 45X cheaper to train.

I am running a version of this model on my home computer without any special hardware or high power consumption.

Any tech is less efficient at the beginning.
January 27, 2025 at 5:45 PM
Individuals can also use the tech to help them understand poems, or math puzzles, or graphs of diseases or inflation.

As the tech becomes widely, cheaply available it *can* empower people. Should we trust them to use it to do good things?
January 27, 2025 at 5:39 PM
How can we tell if AI will do more to empower individuals trying to do good things?

Open source models let enormous numbers of people use AI to accomplish their goals. I believe that most people are good.
January 27, 2025 at 5:30 PM
AI models are rapidly getting cheaper to run, and open source ones are rapidly catching up to proprietary ones in ability. This democratizes access.

People find that AI empowers them, as individuals, to accomplish things they couldn't do otherwise.

These are current facts.
January 27, 2025 at 5:26 PM
Recent "reasoning" models have more ability to be self-critical and catch and fix their mistakes. The capitalist imperative will push the tech toward correctness and more creative solutions because that will be worth more money.

So maybe these problems are growing pains?
January 27, 2025 at 5:18 PM
How will the oligarchs control the DeepSeek models? Or the Llama family? Or the Qwen family?

Worrying about the oligarchs is important. But we must not think of them as having magical powers.
January 27, 2025 at 5:13 PM
Absolutely yes! Surprises are a big part of the package. Right now OpenAI et al. are very surprised at how good open source models have gotten.

Any given technology reduces the *cost* of some activities. Then *people* decide what they want to do with it.
January 27, 2025 at 5:08 PM
The compute will never be free but it is getting much cheaper. I'm running one of the newest models on my home machine now (it isn't a special machine). People will soon be able to run open source models on their phones, customize them, etc.
January 27, 2025 at 5:05 PM
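For anyone wondering what running an open model on an ordinary home machine looks like in practice, here is a minimal sketch using the llama-cpp-python library; the model file name is a placeholder for whatever quantized open-weight model you download, not a record of the exact setup described in these posts.

# Minimal sketch: load a locally downloaded, quantized open-weight model (GGUF)
# and generate a completion on an ordinary home machine, no special hardware.
# The file name below is a placeholder, not the specific model from the post.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/open-model-7b-q4.gguf",  # any quantized GGUF model
    n_ctx=2048,  # modest context window to keep memory use low
)

result = llm(
    "Explain in one sentence why open-weight models lower the cost of access.",
    max_tokens=64,
)
print(result["choices"][0]["text"])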
Maybe you are referring to the algorithms used by e.g. Facebook to show users content? Those do apparently try to "maximize engagement", promoting addiction. However, they are not large language models and don't have the same design or capabilities at all. Plus they are not open source.
January 27, 2025 at 4:59 PM
So partly this is a theological argument?

I would very much like to see your analysis of how the design of these AI models is targeted to get users addicted. I have seen a lot of discussion (pro and con) of the designs but have never seen an explanation of how the design is set up to achieve this.
January 27, 2025 at 4:55 PM