Tiarnán de Burca
@nycdubliner.bsky.social
Building a Twitter lifeboat
As it ever is. (Though I'm not sure letting Peter Thiel speak is the win he thinks it is. :))

I think CS could do with a Foreign Affairs-style publication: Wired with Notions. Not quite academic, not quite the New York Times.

(Notions Definition: evoke.ie/2017/01/20/s...)
Why Are Irish People So Obsessed With Having Notions?
Having notions is an Irish obsession but is it time we stopped caring about what other people think of us?
evoke.ie
September 14, 2025 at 2:04 PM
I'll try to take another swing during the week; reading it on my phone at a music festival probably didn't give it the attention it deserved.

Enjoy the rest of your weekend.
September 14, 2025 at 1:58 PM
Maybe I'm hanging too much on 'colonial', 'extractive', etc.
September 14, 2025 at 1:56 PM
My areas are professional tech (a paper should tell me why this is useful) and International Relations (plenty of moral arguments, but usually isolated from specific technological means unless it's landmines or nuclear weapons, and even then it's trying to make the most generalized statement).
September 14, 2025 at 1:55 PM
I'm looking for the answer to "what are we to do with these new tools?" One side says "use them everywhere!" The other says "they're evil!"

Neither side seems to help me.

Educating or helping me probably wasn't the author's intent, but I'm still disappointed. :)
September 14, 2025 at 1:51 PM
The point I was going for:
The AI boosters seem to aim broad and high.
These tools are amazing! Capabilities unbound!
Hark, public! Listen to me!

The intelligent critiques are densely written, self-referential, and appear to be aimed at a narrower, specialist audience.

The public will only see one.
September 14, 2025 at 1:51 PM
I could have omitted it from that sentence, I suppose; "inward-looking" would have sufficed.
September 14, 2025 at 1:51 PM
Those are quoted from someone else, but I think I take your point. Arguments on technology from moral positions are a little unusual, so maybe I'm parsing the paper wrong.
September 14, 2025 at 1:41 PM
Fair enough. I still think it'd be better to come up with a different term than name-squat on something with another meaning, but I see how you get there.
September 14, 2025 at 1:37 PM
Very.
September 14, 2025 at 1:16 PM
It takes one to know one I suppose. :)
September 14, 2025 at 1:15 PM
Bluesky/Twitter have this problem: interested civilians can wander into an academic discussion not aimed at them. The ideas in your paper are interesting, if more pugnacious than I think is useful, but it's not my paper.

Have a good day.
September 14, 2025 at 1:11 PM
I don't know the answers.
I was hoping to find some in your paper.
Either through my failing or yours, I didn't.
Maybe I was hoping for a non-niche paper.
Maybe I was hoping for too much.
September 14, 2025 at 1:08 PM
This definition, including "the number of parameters exceeding a certain threshold". Again, I don't think it applies to Siri.
September 14, 2025 at 1:05 PM
Have a great day. I'm just sceptical that the point you want to make will matter when it's densely written for a very narrow audience.

I'm not a big AI fan; I'm here to read about the dangers, and I'm getting a screed. It's not terribly useful.

I could be wrong. Good luck.
September 14, 2025 at 12:43 PM
Sorry if this is a term of art I'm not familiar with; I read "The Affiliations" as a bunch of Dutch AI academics?

The stuff I've read from that group, mostly via Iris, all seems to have the same general feel.
(Kinda similar to the postmodern stuff in IR, meant only to be read by peers.)
September 14, 2025 at 12:41 PM
Reading the text of your table, I don't think it describes Siri, and I don't think anybody who works with the technology will either. Very odd choice.
September 14, 2025 at 12:22 PM
Yeah, but I see that as a suggestion that this bit of academia is an inward-looking cult with limited interest in engaging with the world, while you think of it as a badge of honor.

There are lots of worthwhile critiques of "AI" technologies, this presentation communicates none of them.
September 14, 2025 at 12:20 PM
Refusing to use words with their established meanings makes your paper unreadable outside your existing community. Is that the point?
September 14, 2025 at 12:16 PM
I used to think enough of us were the best, but it's getting harder to maintain.
September 14, 2025 at 11:54 AM
Excited to read this, but in Fig. 1 Siri is listed as an LLM. As far as I know, it's not.

Apple have been trying (and failing) to launch a new Apple Intelligence version, but the only one anyone's used is traditional, rules-based.

I could be wrong, but it's worth checking.
September 14, 2025 at 11:45 AM
It seems deeply unlikely in this day and age that fairies can keep a secret... like, it'd be on their Instagram.
September 4, 2025 at 8:44 AM