Giuseppe Mazzapica
@gmazzap.phpc.social.ap.brid.gy
Captivated by WordPress development during his architecture studies, Giuseppe couldn't escape the web dev industry for the next two decades.

WP expert and eager […]

[bridged from https://phpc.social/@gmazzap on the fediverse by https://fed.brid.gy/ ]
Reposted by Giuseppe Mazzapica
The biggest challenge with most of the AI discourse is untangling what people are actually observing from what they want to be the case.
January 22, 2026 at 6:40 PM
Reposted by Giuseppe Mazzapica
January 18, 2026 at 10:59 PM
Reposted by Giuseppe Mazzapica
History is making fun of us
January 16, 2026 at 11:36 PM
Reposted by Giuseppe Mazzapica
"Software is no longer seen as an asset, as something to care for, to maybe even take pride in. It’s a throw-away product. Like a napkin. Just get one quick, wipe your mouth and throw it away. Like a novelty t-shirt."

(Original title: Software as Fast Fashion) […]
Original post on tldr.nettime.org
January 15, 2026 at 10:39 PM
Reposted by Giuseppe Mazzapica
All of this, right here…👇

Enjoy. 😎✨
December 9, 2025 at 6:27 AM
Reposted by Giuseppe Mazzapica
I'm thinking of getting a real job after years of freelancing. (The stress and overhead aren't worth it with a toddler; I'd rather keep things calm.)

If you know someone who needs a #swiftlang dev with 10+ years of experience, macOS and iOS, also some C and #rust, who can self-manage and also […]
Original post on mastodon.social
November 6, 2025 at 3:12 PM
Reposted by Giuseppe Mazzapica
While cleaning a storage room, our staff found this tape containing #unix v4 from Bell Labs, circa 1973

Apparently no other complete copies are known to exist: https://gunkies.org/wiki/UNIX_Fourth_Edition

We have arranged to deliver it to the Computer History Museum

#retrocomputing
November 6, 2025 at 8:50 PM
Reposted by Giuseppe Mazzapica
I know there are a lot of CSS magicians on the fediverse. Is anyone open to freelance work? We're looking for some help with https://wizardzines.com

share a link to your site?
wizard zines
wizardzines.com
October 21, 2025 at 2:17 PM
Reposted by Giuseppe Mazzapica
**I'm looking for work!**

I'm a **high-level infra and devops engineer** and **team lead.**

I've previously **run my own team**, and have worked at **Mozilla** and **Facebook**. I'm looking for **infra/devops lead** or **senior infra/devops engineer** positions.

I'm not looking for […]
Original post on basilisk.gallery
October 3, 2025 at 6:28 PM
Reposted by Giuseppe Mazzapica
The proper way to drink whiskey - and live.
October 6, 2025 at 2:32 PM
Reposted by Giuseppe Mazzapica
Government Workers Say Their Out-of-Office Replies Were Forcibly Changed to Blame Democrats for Shutdown | WIRED – Propaganda auto-replies […]
Original post on mastodon.uno
October 3, 2025 at 9:07 AM
Reposted by Giuseppe Mazzapica
AI-assisted Consensus Building – I never thought we'd end up in the top ten… https://blog.quintarelli.it/2025/10/ai-assisted-consensus-building-mai-avrei-pensato-che-saremmo-finiti-nella-top-ten/
AI-assisted Consensus Building – I never thought we'd end up in the top ten…
…and yet our paper "E Pluribus Unum: AI-assisted Consensus Building", written with Leonardo Becchetti, Giovanni Cerase and Enrico Fagnoni, is climbing the ranks of the top ten most downloaded articles in the SSRN Social Stratification, Social Mobility & Inequality eJournal. The paper will be presented tomorrow, Saturday 4 October, at the Festival nazionale dell'economia civile in Florence, in the Salone dei Cinquecento at Palazzo Vecchio at 11:45, with the excellent Elisabetta Soglio of Corriere della Sera interviewing Nando Pagnoncelli and me. Streaming here: https://www.festivalnazionaleeconomiacivile.it/streaming/

This is the abstract: This study explores the potential of Large Language Models (LLMs) to foster political consensus by simulating deliberation among AI-generated virtual citizens. We construct a virtual sample mimicking the Italian population and initiate iterative AI-led debates on politically divisive issues: migration, environmental transition, and economic inequality. The process yields high-consensus statements among AI personas, which are then validated by administering them to a representative real-people sample. Our findings show that AI-derived statements among virtual citizens increase agreement rates among humans by more than 37 percentage points over initial seed statements, while policies inspired by them raise support by more than 25 percentage points. We find mild evidence that making AI authorship explicit slightly reduces consensus for some demographic groups, revealing a modest AI-aversion effect. Our findings contribute to the debate on the pros and cons of AI by showing that the virtual consensus-building framework offers a scalable, efficient alternative to traditional human deliberation, providing actionable insights for policy design and digital democracy.

If you like this post, please consider sharing it.
blog.quintarelli.it
October 3, 2025 at 10:45 AM
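The abstract above describes an iterative, persona-based deliberation loop: build a virtual sample, have AI personas debate and refine statements, then measure agreement. As a rough illustration only, here is a minimal Python sketch of that kind of loop; every name in it (Persona, generate_statement, agreement_score, deliberate) is invented for this example, and the model calls are stubbed with random placeholders, so it reflects none of the paper's actual prompts, models, or sampling design.

```python
# Hypothetical sketch of an iterative persona-deliberation loop.
# All names and the scoring logic are invented; LLM calls are stubbed.
import random
from dataclasses import dataclass


@dataclass
class Persona:
    """A virtual citizen with a few toy demographic traits (hypothetical schema)."""
    age: int
    region: str
    leaning: float  # -1.0 .. 1.0, a toy proxy for political leaning


def sample_population(n: int) -> list[Persona]:
    """Draw a toy virtual sample; a real study would match census marginals."""
    regions = ["North", "Centre", "South"]
    return [Persona(random.randint(18, 85), random.choice(regions),
                    random.uniform(-1, 1)) for _ in range(n)]


def generate_statement(persona: Persona, topic: str, prior: str) -> str:
    """Stub standing in for an LLM call: a persona proposes a revision of the prior statement."""
    return f"{prior} (revised on {topic} by a {persona.age}-year-old from the {persona.region})"


def agreement_score(personas: list[Persona], statement: str) -> float:
    """Stub standing in for polling every persona; returns an agreement share in [0, 1]."""
    return sum(random.random() > abs(p.leaning) * 0.5 for p in personas) / len(personas)


def deliberate(topic: str, seed_statement: str, rounds: int = 3) -> str:
    """Iteratively refine a seed statement, keeping the highest-consensus candidate."""
    personas = sample_population(100)
    best, best_score = seed_statement, agreement_score(personas, seed_statement)
    for _ in range(rounds):
        candidates = [generate_statement(p, topic, best)
                      for p in random.sample(personas, 5)]
        for cand in candidates:
            score = agreement_score(personas, cand)
            if score > best_score:
                best, best_score = cand, score
    return best


if __name__ == "__main__":
    print(deliberate("migration", "Migration policy should balance openness and control."))
```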
Reposted by Giuseppe Mazzapica
I never thought that our paper "E Pluribus Unum: AI-assisted Consensus Building", written with Leonardo Becchetti, Giovanni Cerase and Enrico Fagnoni, would be in the top ten of the most downloaded articles in the SSRN Social Stratification, Social Mobility & Inequality eJournal
(it's in sixth place!)

The […]
Original post on mastodon.uno
October 3, 2025 at 10:46 AM
Reposted by Giuseppe Mazzapica
To be frank, I’ve become extremely frustrated since the acquisition because now the environment combines the worst aspects of a make-it-up-as-you-go small company with the worst aspects of a faceless corporate overlord. I love my coworkers, I don’t love the general approach to projects and […]
Original post on infosec.exchange
September 18, 2025 at 8:59 AM
Reposted by Giuseppe Mazzapica
To my English mother-tongue friends, as I cannot work out the meaning: what does it mean when Mr. Hegseth says "The world and as the chairman mentioned, our enemies get a vote."?

https://www.youtube.com/live/Jj3zJyHayIg?si=rcjyc_TTuvVy4749&t=931

Thanks!
October 2, 2025 at 10:31 AM
Reposted by Giuseppe Mazzapica
September 12, 2025 at 6:25 PM
Reposted by Giuseppe Mazzapica
September 7, 2025 at 10:14 AM
Reposted by Giuseppe Mazzapica
September 4, 2025 at 6:43 AM
Reposted by Giuseppe Mazzapica
You asked, I wrote. Here's how I do online mentorship. Please help me. I am booked until November. https://tisiphone.net/2025/09/01/open-online-mentoring-guide/
tisiphone.net
September 2, 2025 at 3:58 AM
Demystify: Artificial intelligence has its uses, but it is the harms that should concern us.

Most of us know at least one slopper. They're the people who use ChatGPT to reply to Tinder matches, choose items from the restaurant menu and write creepily generic replies to office emails. Then there's the undergraduate slopfest that's wreaking havoc at universities, to say nothing of the barrage of suspiciously em-dash-laden papers polluting the inboxes of academic journal editors.

Not content to merely participate in the ongoing game of slop roulette, the botlicker is a more proactive creature who is usually to be found confidently holding forth like some subpar regional TED Talk speaker about how "this changes everything". Confidence notwithstanding, in most cases Synergy Greg from marketing and his fellow botlickers are dangerously ignorant about their subject matter — contemporary machine learning technologies — and are thus prone to cycling rapidly between awe and terror. Indeed, for the botlicker, who possibly also has strong views on crypto, "AI" is simultaneously the worst and the best thing we've ever invented. It's destroying the labour market and threatening us all with techno-fascism, but it's also delivering us to a fully automated leisure society free of what David Graeber once rightly called "bullshit jobs".

You'll notice that I'm using scare quotes around the term "AI". That's because, as computational linguist Emily Bender and former Google research scientist Alex Hanna argue in their excellent recent book, The AI Con, there is nothing inherently intelligent about these technologies, which they describe with the more accurate term "synthetic text extrusion machines". The acronym STEM is already taken, alas, but there's another equally apt acronym we can use: Salami, or systematic approaches to learning algorithms and machine inferences. The image of machine learning as a ground-up pile of random bits and pieces that is later squashed into a sausage-shaped receptacle to be consumed by people who haven't read the health warnings is probably vastly more apposite than the notion that doing some clever — and highly computationally and ecologically expensive — maths on some big datasets somehow constitutes "intelligence".

That said, perhaps we shouldn't be so hard on those who, when confronted with the misleading vividness of ChatGPT and Co's language-shaped outputs, resort to imputing all sorts of cognitive properties to what Bender and Hanna, also the hosts of the essential Mystery AI Hype Theater 3000 podcast, cheekily described as "mathy maths". After all, as sci-fi author Arthur C Clarke reminded us, "any sufficiently advanced technology is indistinguishable from magic", and in our disenchanted age some of us are desperate for a little more magic in our lives.

Slopholm Syndrome notwithstanding, if we are to have useful conversations about machine learning then it's crucial that instead of succumbing to the cheap parlour tricks of Silicon Valley marketing strategies — which are, tellingly, constructed around the exact-same mix of infinite promise and terrifying existential risk their pro-bono shills the botlickers always invoke — we pay attention to the men behind the curtain and expose "AI" for what it is: normal technology.
This, of course, means steering away both from hyperbolic claims about the imminent emergence of "AGI" (artificial general intelligence) that will solve all of humanity's most pressing problems as well as from the crude Terminator-style dystopian sci-fi scenarios that populate the fever dreams of the irrational rationalists (beware, traveller, for this way lie Roko's Basilisk and the Zizians).

More fundamentally, it also means taking a step back to examine some of the underlying social drivers that have caused such widespread apophenia (a kind of hallucination where you see patterns that aren't there — it's not just the "AI" that hallucinates, it's causing us to see things too). Most obviously in this regard, when confronted with the seemingly intractable and compounded social and ecological crises of the current moment, deferring to techno-solutionism is a reasonable strategy to ward off the inevitable existential dread of life in the Anthropocene. For many people, things are as the philosopher of technology Heidegger once said at the end of his late-life interview: "Only a God can save us." Albeit in this case a bizarre techno-theological object built from maths, server farms full of expensive graphics cards and other people's dubiously obtained data.

Beyond this, we should acknowledge that the increasing social, political, technological and ethical complexity of the world can leave us all scrambling for ways to stabilise our meaning-making practices. As the rug of certainty is pulled from under our feet at an ever-accelerating pace, it's no wonder that we tend to experience an increased need for some sense of certainty, whether grounded in fascist demagoguery, phobic responses to the leakiness and fluidity of socially constructed categories or the synthetic dulcet tones of chatbots that have, here in the Eremocene (the Age of Loneliness), become our friends, partners, therapists and infallible tomes of wisdom.

From the Pythia who served as the Oracles of Delphi, allaying the fears of ancient Greeks during times of unrest, to the Python code that allows us to interface with our new oracles, the desire for existential certainty is far from new. In a time where a sense of agency and sufficient grasp on the world has been wrested from most of us, however — where our feeds are a never-ending barrage of wars, genocides and ecological collapses we feel powerless to stop — the desire for some source of stable knowledge, some all-knowing benevolent force that grants us a sense of vicarious power if we can learn to master it (just prompt better), has possibly never been stronger.

ChatGPT, Grok, Claude, Gemini and Co, however, are not oracles. They are mathematically sophisticated games played with giant statistical databases. Recall in this regard that very few people assume any kind of intelligence, reasoning or sensory experience when using Midjourney and other early image generators that are built using the same contemporary machine learning paradigm as LLMs. We know they are just clever code. But if we don't regard Midjourney as some kind of sentient algorithmic overlord simply because it produces outputs that cluster pixels together in interesting ways, why would we regard LLMs as more than maths and datasets just because they produce outputs that cluster syntax together in interesting ways?
Just as a picture of a bird cannot fly no matter how realistically it is drawn, so too is a picture of the language-using faculties of human beings not language and thus not reflective of anything deeper than next token prediction, hence Bender and Hanna's delightful term "language-shaped".

In light of the above, I'd like to suggest that we approach these novel technologies from at least two angles. On the one hand, it's urgent that we demystify them. The more we succumb to a contemporary narcissism-fuelled variation of the Barnum effect, the less we'll be able to reach informed decisions about regulating "AI" and the more we'll be stochastically parroting the good-cop, bad-cop variants of Silicon Valley boosterism to further line the pockets of the billionaire tech oligarchs riding the current speculative bubble while they bankroll neofascism.

On the other hand, we should start paying less attention to the TESCREALists (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism — you know the type) and their "AI" shock doctrine and focus more on the current real-world harms being caused by the zealous adoption of commercial "AI" products by every middle manager, university bureaucrat or confused citizen who doesn't want to be left behind (which if left unchecked tends to lead to what critical "AI" theorist Dan McQuillan terms algorithmic Thatcherism).

These two tasks need to be approached together. It's no use trying to mitigate actual ethical harms — the violence caused by algorithmic bias, for instance — if we do not have at least a rudimentary grasp of what synthetic text extrusion machines do, and vice-versa.

In approaching these tasks, we should also challenge the rhetoric of inevitability. No technology, whether laser discs, blockchain, VR or LLMs, necessarily ends up being adopted by society in the form intended by its most enthusiastic proselytes, and the history of technology is also a history of failures and resistance.

Finally, and perhaps most importantly, we should take great care not to fall into the trap of believing that critical thought, whether at universities, in the workplace or in the halls of power, is something that can or should be algorithmically optimised. Despite the increasing neoliberalisation of these sectors, which itself encourages the logic of automation and quantifiable outputs, critical thought — real, difficult thought grounded in uncertainty, finitude and everything else that makes us human — has perhaps never been so important.
mg.co.za
August 18, 2025 at 10:54 PM
Reposted by Giuseppe Mazzapica
Medic! Man down! Man down!
August 14, 2025 at 10:23 PM
Reposted by Giuseppe Mazzapica
#phanpysocial changelog ✨

💬 Better display support for Mastodon v4.4's native quote posts
🧮 Math formatting for LaTeX
🐛 Bug fixes

🔗 https://phanpy.social/
💬 https://matrix.to/#/%23phanpy:matrix.org
Phanpy
Minimalistic opinionated Mastodon web client
phanpy.social
July 18, 2025 at 12:50 PM