Per Arne Godejord
banner
pagodejord.bsky.social
Per Arne Godejord
@pagodejord.bsky.social
A Norwegian senior lecturer in Social Informatics, who focuses on Social Cybersecurity and Digital Preparedness, finds the idea that chatbots are AI hysterically ridiculous. #NAFO member since 2023.

https://www.nord.no/om/ansatte/per-arne-godejord
Pinned
The current idea that chatbots are "AI" and claims that computers can now truly think is a reflection of the imbalance between rapid technological progress and the widespread lack of scientific literacy. In the past, people fervently believed in demons and UFOs. Today, it's AI.
Vi står i en tekno utvikling som svekker vår individuelle dømmekraft og devaluere den politiske og kulturelle verdien av selvstendig menneskelig refleksjon. Og våre politikere og deler av akademia er tilnærmet bevisstløs i møtet med dette...

www.forskning.no/data-etikk-i...
– Bare tanken på at politikere kan tenkes å stole på KI, er ganske skremmende
Trodde du at KI har god etikk og moral? Det har forskere undersøkt.
www.forskning.no
November 12, 2025 at 12:23 PM
So-called "Generative AI" was never a path towards the theoretical dream of AGI, but I suspect that even this short and to the point read by @garymarcus.bsky.social will not move the enthusiasts ...

garymarcus.substack.com/p/5-recent-o...
5 recent, ominous signs for Generative AI
November isn’t even half over
garymarcus.substack.com
November 12, 2025 at 8:22 AM
Kjedelig og amatørmessig fra Forsvaret? Ja, bevares. Dette husker jeg også fra egen tjeneste i Sivilforsvaret. Løsning? miltrad.no! Riktignok koster det noen kroner, men likevel. Litt personlig initiativ er ikke av veien...

www.forsvaretsforum.no/bardufoss-fo...
Soldater må lage navnelappene sine selv
Åsmund Hessevik Eikland kom inn som vernepliktig i april. Sju måneder senere mangler han fortsatt navnelapp.
www.forsvaretsforum.no
November 12, 2025 at 7:44 AM
Amidst AI hype in higher education, we must move beyond efficiency talk to civic and existential reflection. As Carl Sagan urged, critical thinking is key. Without it, we risk raising users, not citizens. #AIethics #HigherEd #BaloneyDetection #CriticalPedagogy

hogreutbildning.se/index.php/hu...
Beyond the Hype: Towards a Critical Debate About AI Chatbots in Swedish Higher Education | Högre utbildning
hogreutbildning.se
November 7, 2025 at 9:53 AM
Carl Sagan’s baloney‑detection—logical consistency, proportional evidence, willingness to revise—should guide research on So-called "AI". "Understanding what generative AI is really like requires rigorous critical thinking" writes C. Louro in this excellent read
theconversation.com/generative-a...
Generative AI is not a ‘calculator for words’. 5 reasons why this idea is misleading
Big tech wants generative AI systems to seem like neutral, reliable tools – but the reality is far more complicated.
theconversation.com
November 7, 2025 at 7:10 AM
Agentic so-called "AI browsers", where LLMs act across sites, collapse the browser’s control and data planes, enabling practical prompt‑injection and data‑leakage attacks. Avoid them like the plague!

www.xda-developers.com/please-stop-...
Please stop using AI browsers
Agentic AI browsers are dangerous, and even some of the biggest browser companies think so.
www.xda-developers.com
November 6, 2025 at 5:07 PM
"The collision of ballooning budgets and researchers quick to label chatbots as ‘intelligent’ is a powerful thing" wrote University of Cambridge researcher Harry Law in 2023.
Now its 2025 and the Science Fiction circus is still active...

aibusiness.com/ml/untitled#...
There’s No Such Thing as ‘Generative AI’
What, exactly, do we mean by generative AI?
aibusiness.com
November 5, 2025 at 11:38 AM
"We should not simply assume that a task is suitable to be performed by Generative AI just because the people selling it say so. We should demand empirical evidence. Check that this stuff actually works before spending all your money on it" wrote Colin Fraser 1 yr ago

medium.com/@colin.frase...
Generative AI is a hammer and no one knows what is and isn’t a nail
This analogy is going to seem a bit tortured but bear with me. Imagine a world without hammers. You’re driving nails into the wall with…
medium.com
November 5, 2025 at 11:32 AM
Språkmodeller lager ikke tekst basert på fakta og årsakssammenhenger, de lager tekst som tilsynelatende kan være riktig skriver @jilltxt.bsky.social i en glimrende kronikk i @aftenposten.no og peker på noe flere av oss har prøvd å løfte frem siden 2022

www.aftenposten.no/meninger/deb...
Teknologien truer kunnskapssystemet vårt. Men den norske staten har slukt tekgigantenes PR rått.
Les kronikken.
www.aftenposten.no
November 5, 2025 at 7:00 AM
Chatbots pushing Russian propaganda? Since there is no such thing as a thinking machine, this is classic: Garbage in, garbage out. #AI #Propaganda #CriticalThinking

www.wired.com/story/chatbo...
Chatbots Are Pushing Sanctioned Russian Propaganda
ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds.
www.wired.com
November 4, 2025 at 1:52 PM
Stanford finds 24 leading AI models can’t tell belief from fact. Brilliant for law, medicine and journalism. I mean, what could possibly go wrong? It’s AI, isn’t it?

www.independent.co.uk/tech/chatgpt...
ChatGPT can’t tell the difference between beliefs and facts
New study exposes critical flaw that could have profound implications in high-stakes areas like law, medicine, or journalism
www.independent.co.uk
November 4, 2025 at 1:47 PM
ChatGPT offers linguistic fluency, not empathy. This is no surprise: AI does not “exist” as a conscious being. It’s pattern-matching code, not a mind. Expecting genuine care from algorithms is a category error. #AIethics #CriticalThinking

www.independent.co.uk/tech/chatgpt...
ChatGPT is not your friend. But people are behaving like it is
Hundreds of thousands of people are showing ‘possible signs of mental health emergencies’ in their chats with ChatGPT, and millions more are probably oversharing. But there is something fundamental ab...
www.independent.co.uk
November 4, 2025 at 1:43 PM
Western strategists underestimate China’s Unrestricted Warfare—a Maoist-inspired doctrine spanning finance, culture, tech and ideology. Xi’s neo-Maoism rejects liberal norms, exploiting Western openness as a strategic vulnerability.

www.stratagem.no/the-continua...
The Continuation of Mao - Beyond Western Imagination
Western strategists are bound by historical traditions. This acts as an intellectual hogtie in which the Chinese Communist Party (CCP) accepts as an invitation to instigate global perpetual havoc.[1] ...
www.stratagem.no
November 4, 2025 at 12:44 PM
«The talk about existential risk from AGI is a magician’s distraction from what’s going on right in front of us – not a mechanical uprising, but a silent campaign to devalue the political and cultural currency of humane thought.»
– Professor Shannon Vallor, 2023

www.sciencefocus.com/future-techn...
Why is AI so dangerous? | BBC Science Focus Magazine
What if ‘will AIs pose an existential threat if they become sentient?’ is the wrong question? What if the threat to humanity is not that today’s AIs become sentient, but the fact that they won’t?
www.sciencefocus.com
November 3, 2025 at 8:48 AM
ChatGPT’s scraping of Google can surface users’ full prompts as Google Search Console impressions, exposing potentially sensitive conversational queries. This raises urgent privacy, governance and accountability questions for so-called "AI" platforms.

www.quantable.com/ai/the-old-r...
ChatGPT Scrapes Google and Leaks Your Prompts - Quantable Analytics
Whether it’s the new masters of the internet like OpenAI and Anthropic, or the old masters like Google or Microsoft operating under a new playbook — the internet doesn’t work like it used to. That’s n...
www.quantable.com
November 3, 2025 at 8:18 AM
As a computer user I couldn’t agree more. As a lecturer in social informatics and fan of Sagan’s baloney‑detector: forced so-called "AI" risks cognitive overload and legitimacy loss. Perhaps prioritise controlled, user‑centred evaluation before widescale rollout?

decrypt.co/346170/ai-ba...
AI Backlash Is Here: Even Sophisticated Users Are Getting Sick of Tech's Latest Obsession - Decrypt
Forced AI software features are stirring real user frustration, while studies on fatigue are piling up. Is a full-blown revolt in our future?
decrypt.co
October 29, 2025 at 6:54 AM
Kravet om obligatorisk menneskelig inngripen i høyrisiko-KI gir ofte falsk trygghet. Virkelig sikkerhet krever å gjøre KI-systemene robuste og pålitelige, ikke symbolsk tilsyn. God kronikk fra @ckrogh.bsky.social og Morten Irgens

www.dagsavisen.no/debatt/vil-e...
Vil en lov gjøre at vi kan stole på kunstig intelligens?
Forslaget om den såkalte KI-loven er velment, men feiler på et viktig punkt.
www.dagsavisen.no
October 29, 2025 at 6:40 AM
Nord University supports Ukrainian universities and lecturers with hybrid teaching, micro‑credentials and targeted support for internally displaced students — building resilient higher education, pathways for veterans, and scalable models for European response.

www.forskning.no/krig-og-fred...
Universitetet til Kateryna har blitt bombet tre ganger: – Vi underviser videre
Ukrainske lærere får hjelp av norsk universitet til å holde undervisningen i gang under krigen.
www.forskning.no
October 29, 2025 at 6:28 AM
Som teknolog og tidligere stabsbefal kjenner jeg meg igjen i flere av poengene Aleksander Fredriksen løfter frem. Og at vurderingsevne og faglig dømmekraft er ytterst avgjørende når samtaleroboter trekkes inn som verktøy er spot on!

www.stratagem.no/ki-stabsoffi...
KI – stabsoffiserens styrkemultiplikator eller starten på kognitiv fallitt?
Kunstig intelligens er allerede en del av Forsvaret. Spørsmålet er ikke om vi skal bruke teknologien, men hvordan vi gjør det. For stabsoffiseren kan KI gi bedre tempo, struktur og språk, samtidig som...
www.stratagem.no
October 27, 2025 at 7:18 AM