Luca Pezzullo
@techatlas.bsky.social
AI, Security studies, Cognitive Security, and technical stuff... University of Padua; President - Veneto Board of Psychologists
Reposted by Luca Pezzullo
Your dignity honors the bravery of the Ukrainian people.

Be strong, be brave, be fearless.
You are never alone, dear President Zelenskyy.

We will continue working with you for a just and lasting peace.
February 28, 2025 at 9:06 PM
Reposted by Luca Pezzullo
A new large language of life model (LLLM) for the transcriptome that predicts gene expression within and across human cell types www.nature.com/articles/s41... @nature.com
For more on the remarkable surge of LLLMs see erictopol.substack.com/p/learning-t...
A foundation model of transcription across human cell types - Nature
A foundation model learns transcriptional regulatory syntax from chromatin accessibility and sequence data across a range of cell types to predict gene expression and transcription factor interactions...
www.nature.com
January 8, 2025 at 4:27 PM
Reposted by Luca Pezzullo
A working paper studying freelancers argues AI creates an "inflection point" for each job type.

Before that point, AI boosts freelancer earnings (web devs saw a +65% increase). After it, AI replaces freelancers (translators saw a 30% drop). They suggest that once AI starts replacing a job, it doesn't go back.
January 5, 2025 at 6:40 PM
Reposted by Luca Pezzullo
“Through the screen, I found something unexpected: the chance for technology to offer a different — and sometimes deeper — interaction with patients.”
—@helenouyang.bsky.social
@nytopinion.nytimes.com
gift link www.nytimes.com/2024/12/27/o...
January 2, 2025 at 7:22 PM
Reposted by Luca Pezzullo
A message for our time from Homer’s Iliad.
December 22, 2024 at 8:55 AM
Reposted by Luca Pezzullo
‼️"o1-preview demonstrates superhuman performance in differential diagnosis, diagnostic clinical reasoning, and management reasoning, superior in multiple domains compared to prior model generations and human physicians."

And this is using vignettes, not multiple choice. arxiv.org/pdf/2412.10849
December 17, 2024 at 5:52 PM
Reposted by Luca Pezzullo
Human behavior happens at a surprisingly slow 10 bits/second or so, even though our sensory systems gather 8 orders of magnitude more data. Plus, we can only think about one thing at a time. We don't know why.

(In LLM terms, human behavior happens at less than a token/sec). arxiv.org/abs/2408.10234
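As a rough sanity check on the "less than a token/sec" comparison, here is a minimal sketch; the ~50,000-entry vocabulary is an assumed typical LLM vocabulary size, not a figure from the paper.

```python
import math

# Back-of-the-envelope check (assumed numbers, not from the paper):
# with a ~50,000-token vocabulary, one token carries about log2(50,000) ≈ 15.6 bits,
# so a ~10 bit/s behavioral throughput is well under one token per second.
behavioral_rate_bits_per_s = 10        # figure cited in the post
vocab_size = 50_000                    # assumed typical LLM vocabulary size
bits_per_token = math.log2(vocab_size)

tokens_per_s = behavioral_rate_bits_per_s / bits_per_token
print(f"{bits_per_token:.1f} bits/token -> {tokens_per_s:.2f} tokens/sec")  # ~0.64
```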
December 20, 2024 at 2:49 PM
Reposted by Luca Pezzullo
Basically think of the o3 results as validating Douglas Adams as the science fiction author most right about AI.

When given longer to think, the AI can generate answers to very hard questions, but the cost is very high, it is hard to verify, & you have to make sure you ask the right question first.
December 21, 2024 at 5:33 AM
Reposted by Luca Pezzullo
Among ~9 million Americans, representing over 440 occupations, who had the lowest proportion of #Alzheimer's disease related deaths?
Ambulance and taxi drivers
www.bmj.com/content/387/...
Possibly a benefit of spatial and navigational processing?
December 17, 2024 at 2:35 PM
Reposted by Luca Pezzullo
Endlessly terrible news cycle around this company. Previously:

Can A.I. Be Blamed for a Teen’s Suicide? www.nytimes.com/2024/10/23/t...

An AI companion suggested he kill his parents. www.washingtonpost.com/technology/2...
December 17, 2024 at 4:32 PM
Reposted by Luca Pezzullo
Just a reminder that none of the people who make LLMs, no matter how smart, actually know what specific tasks LLMs will be good or bad at. We are barely benchmarking these systems at all on any sorts of tasks.

You should explore in areas of your expertise to try to figure it out for your use cases.
December 16, 2024 at 3:14 AM
"Quantity", when at sufficient high levels, becomes "Quality" (a change in the underlying processes): Generative AI produces functional and operational changes in this way.
A whole bunch of systems that depend on effort being costly are going to be breaking.

Academic journals are seeing this happen already.
December 13, 2024 at 8:54 PM
Perfect explanation :-)
Misinformation?
Disinformation?
What's the difference?

A very simple and festive explainer. 🎅

Credit:
🎨 @felixuncia.bsky.social
🧠 @brentlee.bsky.social
December 5, 2024 at 10:42 AM
Perfect 😅
December 1, 2024 at 4:25 PM
A very good observation from @emollick.bsky.social: technical/procedural competence and functional/context-focused competence operate on different levels in LLM use cases...
An observation is that managers and teachers are often much better at “getting” LLMs than coders.

Coders deal with deterministic systems. Managers and teachers are very experienced at working with fundamentally unreliable people to get things done, not perfectly, but within acceptable tolerances.
November 28, 2024 at 7:34 PM
So, this result seems to confirm earlier observations: general-purpose frontier LLMs produce results of similar quality to medically adapted ones when used in the medical field...
Medically adapted foundation models (think Med-*) turn out to be more hot air than hot stuff. Correcting for fatal flaws in evaluation, the current crop are no better on balance than generic foundation models, even on the very tasks for which benefits are claimed.
arxiv.org/abs/2411.04118
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?
Several recent works seek to develop foundation models specifically for medical applications, adapting general-purpose large language models (LLMs) and vision-language models (VLMs) via continued pret...
arxiv.org
November 26, 2024 at 10:42 PM
These open questions are just as valid for the impact of AI on psychological practice...
When the Alpha Omega Alpha medical society asks about the impact of #AI on the medical profession
alphaomegaalpha.org/wp-content/u...
November 25, 2024 at 10:26 PM
Reposted by Luca Pezzullo
Just wait until conspiracy theorists discover they’re part of a conspiracy to use conspiracy theorists to spread disinformation via conspiracy theories.
November 24, 2024 at 8:12 AM
The "Personal Tutor" use case is one of the most powerful LLM application to date.
It has the potential to substantially restructure and innovate the Education field.
AI can help learning... when it isn't a crutch.

There are now multiple controlled experiments showing that using AI to get answers to problems hurts student learning (even though students think they are learning), but that students who use well-prompted LLMs as a tutor perform better on tests.
November 23, 2024 at 6:29 AM
Scientific communities are shifting their "Ideas & News exchange forum" from X to Bluesky... and seemingly at a rapid pace.

www.science.org/content/arti...
Like ‘old Twitter’: The scientific community finds a new home on Bluesky
After recent changes to Elon Musk’s X, a gradual migration turns into a stampede
www.science.org
November 22, 2024 at 9:04 PM
Reposted by Luca Pezzullo
I mostly agree.

It’s definitely an escalation and designed to send a message to Kyiv

No, it’s not nuclear armageddon

Ukraine doesn't have exoatmospheric interception capability, but they do have some terminal phase interceptors that may (heavy emphasis on may) be of some use vs Russian IRBMs/ICBMs.
November 21, 2024 at 5:34 PM
Interesting, but this isn't the first time...
November 20, 2024 at 9:37 PM
Reposted by Luca Pezzullo
The fact that UCC hid a 3000-year-old mummy under the floorboards of a lecture theatre on a campus where two and a half tonnes of uranium rods were being stored in a basement nearby and DIDN'T end up with a zombie-pharaoh apocalypse is both a mystery and endlessly disappointing
November 18, 2024 at 9:17 AM