Dave Bloom
drumkitdave.bsky.social
Music stuff, librarian/info lit stuff, other duties as assigned
My bandmates and I were *really* into that show. A few years later, a club offered my ex-bandmates' new band an opening spot for either Flickerstick or the Arcade Fire, who were just about to receive their Pitchfork 10.0 for Funeral. Guess which show my friends picked . . .
April 29, 2025 at 2:35 AM
In classes with an AI component in info lit instruction, I cover ethics but focus on pragmatic issues, e.g., databases and search engines are better tools for finding credible sources you can assess for authority, saving you work & time. This undercuts overall assumptions about AI and ease.
April 4, 2025 at 10:31 PM
Would love a follow-up that evaluates sourcing on more typical user queries! Anecdotally, your findings square roughly with what I've noticed about sourcing for more standard information-seeking prompts, but 'find this exact quote' is pretty distinct from 'answer this and provide support'.
March 13, 2025 at 7:27 PM
Although this study would suggest that you might have a difficult time verifying the output for that prompt, because the sourcing may be incorrect, incomplete, or misleading. And if you're not an expert on drugs or genetics, you definitely shouldn't trust the output without verification.
March 13, 2025 at 7:03 PM
That phrasing got me at first, too, but the queries in the study were "here's a quote, find me the source," so it mostly just establishes that sourcing is garbage. This is a very big deal, but shouldn't be extrapolated out to reliability of all output.
March 13, 2025 at 6:59 PM
Definitely worrying that an expert in assessment and digital learning is so overwhelmed with the narrative around AI as an efficiency engine that he steps right past a well-established, more reliable solution.
February 28, 2025 at 5:40 PM
Excellent list! Would also recommend The Thin Man (1934) for both Christmas adjacency and William Powell-Myrna Loy chemistry.
December 24, 2024 at 1:00 AM
Can you say more about 'you want to know what some objections to an idea might be?' I'd think that to judge whether an objection to an idea is credible (rather than just possible), you need a source to evaluate, meaning this info need, too, would benefit much more from search than generative AI.
December 14, 2024 at 11:49 PM
3. Making searching and info lit best practices resonate with people isn't easy even in an academic environment where librarians can actually teach the stuff. People take cues from the tools, and if the tools encourage sloppy use, any counterprogramming is at a huge disadvantage. /end
November 27, 2024 at 5:22 PM
2. No rhetoric around generative AI best practices like 'you just need to get good at prompting' will overcome the messaging and appearances of the apps that encourage search engine-like use.
November 27, 2024 at 5:12 PM
Failing generative AI isn't noteworthy, but notice these things:

1. People have poor search and source evaluation skills. People seldom go past the first page of results and most would absolutely take that initial, unsourced output as credible.
November 27, 2024 at 5:07 PM
And, finally, I point out that the source it used was not one of the recommended sources; it tries again and does poorly.
November 27, 2024 at 4:53 PM
And I ask it to look up real data in the sources mentioned and create a pie chart from it.

Here's the source it provides:
www.oberlo.com/statistics/u...
And since that source isn't much of a source, here's the source for the source: gs.statcounter.com/vendor-marke...
November 27, 2024 at 4:50 PM
So I inquire further . . .
November 27, 2024 at 4:44 PM
The output: a chart with no sourcing!
November 27, 2024 at 4:40 PM
Here's the list of suggested prompts currently included in the UW's institutional instance of Copilot. I went with the pie chart suggestion.
November 27, 2024 at 4:37 PM