Matt Keogh
@matsaukeo.bsky.social
Co-Director at Liquid Light, a strategic digital agency working with organisations who want to make a difference
Reposted by Matt Keogh
Useful site for #branding folk, if you don't already know about it: brandingstyleguides.com
The branding style guidelines documents archive
Welcome to the brand design manual documents directory. Search our handpicked worldwide collection of style assets and access PDF documents for inspiration.
brandingstyleguides.com
December 17, 2024 at 4:46 PM
Bluesky feels like following Web design Twitter from the noughties (which I’m all for)…

Big question: Is there a fold? 😉
November 23, 2024 at 10:12 AM
Marketeers - I was reminded of this today: awareness is just the start. The real challenge is making your brand the one people think of when they need what you offer.
November 21, 2024 at 5:57 PM
Who should I be following on Bluesky for good brand and design content?
November 14, 2024 at 10:47 PM
AI isn't just digital noise now. On Halloween, crowds swarmed streets for a non-existent parade.

Police had to tell people to go home. Why? A content farm's AI stories hit Google's first page.

AI slop is now warping reality.

www.bbc.co.uk/news/article...
Dublin: Hundreds gather in streets for hoax Halloween parade - BBC News
It's understood a rumour circulated online that a parade was due to take place on O'Connell Street.
www.bbc.co.uk
November 2, 2024 at 5:57 PM
Reposted by Matt Keogh
Unsaid
I went to the UX Brighton conference yesterday.

The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.

But…

The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what _wasn’t_ said at a conference.

Not a single speaker addressed where the training data for current large language models comes from (it comes from scraping other people’s copyrighted creative works).

Not a single speaker addressed the energy requirements for current large language models (the requirements are absolutely mahoosive—not just for the training, but for each and every query).

My charitable reading of the situation yesterday was that every speaker assumed that someone else would cover those issues. The less charitable reading is that this was a deliberate decision.

Whenever the issue of ethics came up, it was only ever in relation to how we might use these tools: considering user needs, being transparent, all that good stuff. But never once did the question arise of whether it’s ethical to even use these tools.

In fact, the message was often the opposite: words like “responsibility” and “duty” came up, but only in the admonition that UX designers have a responsibility and duty to use these tools! And if that carrot didn’t work, there’s always the stick of scaring you into using these tools for fear of being left behind and having a machine replace you.

I was left feeling somewhat depressed about the deliberately narrow focus. Maggie’s talk was the only one that dealt with any externalities, looking at how the firehose of slop is blasting away at society. But again, the focus was only ever on how these tools are used or abused; nobody addressed the possibility of deliberately choosing not to use them.

If audience members weren’t yet using generative tools in their daily work, the assumption was that they were lagging behind and it was only a matter of time before they’d get on board the hype train. There was no room for the idea that someone might examine the roots of these tools and make a conscious choice not to fund their development.

There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating:

> Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

But none of the speakers at UX Brighton chose to examine the larger context of the tools they were encouraging us to use.

One speaker told us “Be curious!”, but clearly that curiosity should not extend to the foundations of the tools themselves. Ignore what’s behind the curtain. Instead look at all the cool stuff we can do now. Don’t worry about the fact that everything you do with these tools is built on a bedrock of exploitation and environmental harm. We should instead blithely build a new generation of user interfaces on the burial ground of human culture.

Whenever I get into a discussion about these issues, it always seems to come back ’round to whether these tools are actually any good or not. People point to the genuinely useful tasks they can accomplish.

But that’s not my issue. There are absolutely smart and efficient ways to use large language models—in some situations, it’s like suddenly having a superpower. But as Molly White puts it:

> The benefits, though extant, seem to pale in comparison to the costs.

There are no ethical uses of current large language models. And if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to _stop_ using the current crop of exploitative large language models.

Anyway, like I said, all the talks at UX Brighton were very good. But I just wish one of them had addressed the underlying questions that any good UX designer should ask: “Where did this data come from? What are the second-order effects of deploying this technology?”

Having a talk on those topics would’ve been nice, but I would’ve settled for having five minutes of one talk, or even one minute. But there was nothing.

There’s one possible explanation for this glaring absence that’s quite depressing to consider. It may be that these topics weren’t covered because there’s an assumption that everybody already knows about them, and frankly, doesn’t care.

To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”
adactio.com
November 2, 2024 at 10:56 AM
Considering I’ve been on X for years without saying anything, it seems strange to start now!

Glad to be off that app though!
November 2, 2024 at 1:02 PM