Liz Suelzer
@esuelzer.bsky.social
Milwaukean, medical librarian, crafter.
#AI #Copilot question. My institution uses Copilot & Copilot has access to our internal documents & data. The LLM piece of Copilot is creating answers based on what is found during RAG.

*Is the LLM being trained on that content, and does that training go outside our environment?*
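For context on the question: a minimal sketch of the RAG flow described above, using made-up helper functions (this is not Microsoft's actual Copilot pipeline). The point it illustrates: retrieved documents are pasted into the prompt at answer time; nothing in this step updates the model's weights, i.e. retrieval itself is not training.

```python
# Hypothetical RAG sketch. Retrieval ranks documents, then the winners
# are inserted into the prompt as read-only context. No weight updates
# happen here; whether content is later used for training is a separate
# contractual/configuration question.

def retrieve(query, index):
    """Rank internal documents by naive keyword overlap with the query."""
    scored = [(sum(w in doc.lower() for w in query.lower().split()), doc)
              for doc in index]
    return [doc for score, doc in sorted(scored, reverse=True) if score][:3]

def build_prompt(query, docs):
    """The LLM only *reads* these docs as context at inference time."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

index = [
    "Policy: remote work requires manager approval.",
    "Cafeteria menu for March.",
    "Policy: approval requests go through HR portal.",
]
prompt = build_prompt("How do I get remote work approval?",
                      retrieve("remote work approval", index))
```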
November 14, 2025 at 2:24 PM
Reposted by Liz Suelzer
Brilliant satire that doubles as a primer on the ongoing construction boom of supersized data centers across small-town America—promoted as job creators even as the AI they enable is…wiping out jobs.

By comedian and Emmy Award-winning journalist, Charlie Berens:

www.youtube.com/shorts/ILAh2...
How AI data centers were invented #shorts
YouTube video by Charlie Berens
www.youtube.com
November 13, 2025 at 5:18 PM
Reposted by Liz Suelzer
And to think once upon a time we taught students that government data was a reliable, reputable source of information. Next we'll use the Walgreens Report to measure health in the US.

Fuck. . .

#InfoLit #GOP #PartyOfStupid #GovDocs
They’ve laid off so many people that the government is now getting its economic data from DoorDash.
November 12, 2025 at 1:19 PM
When toasting things, once you smell it, it's done. Saved many a pan of almonds w/ this hack.
Not ADHD, but I shared this trick with an ADHD friend and she uses it all the time.
It took way too long to realize I am actually a smart person who figures out interesting solutions to lots of things, and not just a person with severe, undiagnosed AuDHD. For example, when I reheat pizza in a pan, I put shredded cheese on top, so when it has melted I know the pizza is ready.
November 8, 2025 at 4:04 PM
October 27, 2025 at 2:09 PM
What did we get from trick or treating? It's sweet and spicy and the spoon was wrapped in plastic wrap.
October 26, 2025 at 3:02 AM
So hallucinations are a thing because ChatGPT was trained to provide confident wrong answers over saying IDK.

Tell me there are few women working on this project without telling me there are few women working on this project.
openai.com/index/why-la...
Why language models hallucinate
OpenAI’s new research explains why language models hallucinate. The findings show how improved evaluations can enhance AI reliability, honesty, and safety.
openai.com
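A toy illustration of the incentive the post (and the linked OpenAI piece) points at: under accuracy-only grading, "I don't know" always scores 0, while a guess scores its probability of being right, so a model tuned to maximize such benchmarks should never abstain. Numbers here are illustrative, not from the paper.

```python
# Binary accuracy grading: 1 point if right, 0 if wrong OR abstaining.
# Expected score of guessing is p_correct; abstaining is always 0.
# So guessing weakly dominates saying IDK under this rubric.

def expected_score(p_correct, abstain):
    """Expected benchmark score under accuracy-only grading."""
    return 0.0 if abstain else p_correct

p = 0.2  # model is only 20% confident in its answer
guess = expected_score(p, abstain=False)   # 0.2 — guessing still pays
idk = expected_score(p, abstain=True)      # 0.0 — honesty scores nothing
```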
September 30, 2025 at 4:28 PM
my friend, have you learned about **the art of the prompt**?
Commercial AI is explicitly designed to be easy enough for a platypus to use. WHAT IS THERE TO LEARN?
September 28, 2025 at 12:16 PM
Acceptable distraction.
September 26, 2025 at 6:43 PM
Reposted by Liz Suelzer
Trump's UN speech was an embarrassing shitshow that brought disgrace upon the United States. Congrats, America.
September 23, 2025 at 3:06 PM
Watching my cat watch a squirrel, I could sit here all day watching these two.
September 19, 2025 at 12:45 PM
Do any other librarians out there find comfort in reading Marshall Breeding's Library Technology Guides? May the interface never, ever change.
September 12, 2025 at 6:56 PM
Reposted by Liz Suelzer
🧵I couldn't quite make myself look at the new MAHA report, so I settled for today's opinion piece in the Washington Post: wapo.st/4mf4Ma7

As a #medlibs and #SystematicReview / #EvidenceSynthesis person, I of course had a look at the links/citations they include. Let's go over them, shall we?
Opinion | Linda McMahon and RFK Jr.: Children need natural sources of mental health
Overzealous use of therapy can cause the crises it claims to cure.
wapo.st
September 10, 2025 at 4:26 PM
Primo & Summon support for Research Assistant said that they are working to fix the content filtering from Azure, but they don't seem to be getting rid of it.

Which begs the question, what kind of content in this academic database needs to be filtered?
So, Research Assistant (AI feature, Primo) is censoring the following topics: Genocide in Palestine
Gaza war
Rwandan genocide
Armenian genocide
Genocides across the world
History of genocides
lynching
lynching in the united states
lynchings in the united states
january 6
covid
covid data
COVID-19
August 25, 2025 at 5:31 PM
Sure enough, the censorship is explained here.

Why do **academic databases** that already utilize **peer review** need this extra filtering? Who is Alma trying to help?

learn.microsoft.com/en-us/azure/...
August 22, 2025 at 9:05 PM
Here's a post I made a while back about content filters. The content filters in your AI product may filter out search results for "high risk" content.

IMO, this kind of filtering has no place in academics or research.

learn.microsoft.com/en-us/azure/...
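A hypothetical sketch of the kind of blunt, category-based filtering the post describes (not Azure's actual implementation): results whose text matches a "high risk" term list are dropped before the user ever sees them, regardless of scholarly context, which is how peer-reviewed history ends up suppressed.

```python
# Illustrative blocklist-style filter, context-blind by design.
# Any result mentioning a flagged term is removed, even when the
# "high risk" term is the subject of legitimate academic research.

HIGH_RISK_TERMS = {"genocide", "lynching"}  # made-up example list

def filter_results(results):
    """Drop any result mentioning a blocklisted term."""
    return [r for r in results
            if not any(t in r.lower() for t in HIGH_RISK_TERMS)]

results = [
    "The Armenian genocide: a historiographical review",
    "Crochet patterns for beginners",
]
visible = filter_results(results)  # the history article silently vanishes
```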
August 22, 2025 at 9:01 PM
All this and the loss of JoAnns makes it rough for crafters. JoAnns was affordable, close by, and wasn't owned by a person who sued to take birth control away from its employees.
I am so sad that the revival of knitting, crochet and all the other needlecrafts is really going to suffer because of Shitler's desire to pulverize our economy. The removal of the $800 import limit is also going to negatively affect every LYS in the country, and that's already a tough business :( 🧶
August 19, 2025 at 12:49 PM
I think about this Christmas special all the time but it still feels like a fever dream.
August 11, 2025 at 12:45 AM
Is this on Roblox?
You know that classic Edward Hopper painting evoking isolation and despair? We used AI to make it look terrible for no reason
August 9, 2025 at 2:11 AM
Reposted by Liz Suelzer
ACADEMIC READING ALREADY COMES WITH A SUMMARY IT IS CALLED THE ABSTRACT
The Appendix on an AI policy is actually quite bad. Having an AI deliver a summary before reading has major implications in terms of the experience of student learning. What we want students to do and how they do it is the question. The experience of reading is not the same as reading a summary.
August 5, 2025 at 8:29 PM
This is my parenting approach in a nutshell.
Might they make mistakes? Yeah.

And the way you handle that is to let them make mistakes in a controlled fashion, to not gloat about their mistakes, and to let them learn.

You want people to do their falling when there’s a trampoline underneath to catch them.
July 31, 2025 at 2:43 PM
Reposted by Liz Suelzer
Once again: LLMs are word-association algorithms. They are incapable of grokking that the words they parse correspond to actual objects & phenomena outside of their own algorithms. To an LLM, its statements depend not on facts but on how often words occur together in the material they're trained on.
Grok spread dangerous misinformation during the tsunami last night.

It confidently and persistently argued that there were no tsunami warnings in effect for various areas when such warnings were in effect.

Many were using Grok for updates since the ability to locate reliable data on X is degraded.
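The "word association" claim above can be shown with a toy bigram model: the next word is chosen purely from co-occurrence counts in training text, with no model of the world behind the words. (Real LLMs are far more sophisticated, but the underlying signal is still statistical.)

```python
# Toy bigram generator: pick the most frequent follower of a word.
# Output reflects frequency in the corpus, not facts about the world.
from collections import Counter, defaultdict

corpus = "no tsunami warnings in effect . tsunami warnings in effect now .".split()

bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def next_word(w):
    """Most common word following w in the corpus — frequency, not truth."""
    return bigrams[w].most_common(1)[0][0]

word = next_word("tsunami")  # "warnings", because that pairing is common
```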
July 30, 2025 at 7:17 PM
Sunday morning reading w/ my dog.

Jesus Wept, fantastic book.

The things they don't teach you in CCD.

www.penguinrandomhouse.com/books/258603...
July 27, 2025 at 1:12 PM
Reposted by Liz Suelzer
I wish guys like this asked themselves questions like, "What if you, a person who has apparently never visited or used a library, were not an expert on the future of libraries?"
July 12, 2025 at 4:41 PM