Barry Pollard
@tunetheweb.com
tunetheweb.com
Web Performance Developer Advocate at Google Chrome helping to make the web go faster! All opinions my own.
🆕 The 202601 Chrome User Experience (CrUX) release is now live on BigQuery!

Check out the announcement post for the full info:
groups.google.com/a/chromium.o...

Highlights below 👇

🧵 1/4
February 10, 2026 at 3:03 PM
Reposted by Barry Pollard
🛠️ This feature was surprisingly very fun to build:

Using existing Synthetic Chrome monitoring, I built a web spider that builds a graph of the most important pages on each domain (based on internal links, appearance in navigation, and several other factors)
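A toy sketch of the ranking idea described above: count the internal links pointing at each page and sort. Calibre's real scoring uses additional signals (navigation placement and others), so the function name and shape here are illustrative only.

```typescript
// Toy illustration of ranking pages by inbound internal links.
// Calibre's actual algorithm uses more signals; this is a sketch.
function rankByInternalLinks(links: Array<[string, string]>): string[] {
  const counts = new Map<string, number>();
  for (const [, to] of links) {
    counts.set(to, (counts.get(to) ?? 0) + 1); // tally inbound links
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most-linked pages first
    .map(([page]) => page);
}
```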
🆕 Calibre now spiders your Site and collects Google Chrome User Experience Report data for all your pages!

Zero config & no manual checks.

Metrics for all your most important pages.

calibreapp.com/changelog/ar...
CrUX Pages: Automatic page discovery
Calibre automatically discovers and tracks your most important pages in CrUX
calibreapp.com
February 9, 2026 at 6:22 AM
Reposted by Barry Pollard
These were actually some of my first contributions to the area of web browsers: just finding and fixing errors in the support data. Normally just ones I stumbled across as a web developer, rather than anything deliberate.
Btw you don’t need to be a web spec author or work for a browser vendor to contribute to this.

If you spot some incorrect data (either it says something is supported when it isn't, or that it isn't when it is) then open an issue — or better yet a PR! Make things better for everyone, including future you!!
The browser-compat-data project (used by MDN, caniuse, and tools) now has:

20,000 commits
1,123 contributors
465 releases
19,148 data entries

That's what comprehensive web compat data looks like.

And it takes well funded teams at @openwebdocs.org and @mozilla.org plus amazing contributors.
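For a feel of what one of those ~19,000 data entries looks like, here is a simplified sketch of the BCD JSON shape, expressed as a TypeScript object. Field names follow the project's schema (`__compat`, `support`, `version_added`, `status`), but the feature and browser versions shown are placeholders, not real data.

```typescript
// Simplified shape of a browser-compat-data entry.
// Version numbers and the URL are made up for illustration.
const compatEntry = {
  __compat: {
    mdn_url: "https://developer.mozilla.org/docs/Web/CSS/example",
    support: {
      chrome: { version_added: "100" },
      firefox: { version_added: "100" },
      safari: { version_added: "16" },
    },
    status: { experimental: false, standard_track: true, deprecated: false },
  },
};
```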
February 9, 2026 at 6:33 AM
Reposted by Barry Pollard
It's no surprise open source maintainers are rethinking their contributions process. Maintainers can barely keep up with code contributions by humans, bots will make it worse.
February 7, 2026 at 5:26 PM
Reposted by Barry Pollard
Blink: Intent to Prototype: Lazy loading for video and audio elements
groups.google.com
February 5, 2026 at 7:00 PM
Reposted by Barry Pollard
Blink: Intent to Prototype: External <script type=speculationrules>
groups.google.com
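Today Speculation Rules have to be inlined in a `<script type="speculationrules">` block; this intent adds loading the same JSON from an external file via `src`. A minimal sketch of the payload, where the URL pattern is an illustrative assumption:

```typescript
// Builds a Speculation Rules JSON payload of the kind that currently
// must be inlined, and that this intent would let you serve externally.
// The href pattern below is an illustrative assumption.
function speculationRules(): string {
  return JSON.stringify({
    prerender: [
      { where: { href_matches: "/articles/*" }, eagerness: "moderate" },
    ],
  });
}
```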
February 4, 2026 at 10:30 PM
Reposted by Barry Pollard
Announcing The CSS Selection!

📰 www.projectwallace.com/the-css-sele...

📚 100,000 websites
⏱️ 100+ metrics
🔖 7 chapters

The biggest deep-dive ever into real-world use of CSS across the globe. Dive in and find out some hidden gems. Also, see how much of 'the new CSS' is actually used!
The CSS Selection - 2026 Edition - Project Wallace
The CSS Selection shows real-world CSS usage from over 100,000 websites and looks at the most important metrics.
www.projectwallace.com
February 6, 2026 at 10:08 AM
Something I've been helping @yoav.ws with. A proposal to allow you to measure Speculation Rules API usage.

Are you over-speculating? Under-speculating?

Until now you've only really been able to measure secondary impact, but with this proposal you'll have better data.
February 4, 2026 at 1:43 PM
From Chrome 145 (on general release next week!), DevTools will start to show so-called "soft" navigations and "Soft LCP" in Performance Panel traces.

These are for SPAs which don't do a full page load, but instead "fake it" by updating the current page and pushing a new history entry.

1/5 🧵
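A minimal sketch of the SPA pattern described above, with the history object passed in as a parameter so the idea is visible outside a browser. Real apps call `history.pushState` directly; the `HistoryLike` interface is a simplification for illustration.

```typescript
// A "soft" navigation: no full page load — the app pushes a new
// history entry and re-renders the current page in place.
interface HistoryLike {
  pushState(data: unknown, unused: string, url: string): void;
}

function softNavigate(h: HistoryLike, url: string, render: () => void): void {
  h.pushState({}, "", url); // new history entry, no network navigation
  render();                 // update the page content in place
}
```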
February 4, 2026 at 9:41 AM
Reposted by Barry Pollard
Been saying this for a while but the really exciting stuff on the web right now is all about how things we used to need JS for can now be done with pure CSS.

There are a lot of opportunities to cut out client side bloat.
February 4, 2026 at 4:32 AM
Reposted by Barry Pollard
“Imagine if we had to follow the law”
February 3, 2026 at 5:16 PM
Ah GitHub, will you ever learn to update that PR number in the tab after merges?

Are you sure being an MPA wouldn't be a better option for you? I honestly think it might...
February 3, 2026 at 4:33 PM
Reposted by Barry Pollard
The latest from Anthropic: using Anthropic's products makes you worse at your job
How AI assistance impacts the formation of coding skills
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
www.anthropic.com
January 30, 2026 at 10:42 PM
Reposted by Barry Pollard
Token usage in LLM applications is a huge topic. It dictates response speed, cost efficiency - or the quota left on your plan 😄 🪫 Learn how the Chrome DevTools team squeezed a full Performance trace into Gemini in DevTools developer.chrome.com/blog/designi...
Designing DevTools: Efficient token usage in AI assistance  |  Blog  |  Chrome for Developers
Learn how the Chrome Tooling team optimized token usage for AI assistance, and discover techniques for more efficient data usage with LLMs.
developer.chrome.com
January 30, 2026 at 11:44 AM
Reposted by Barry Pollard
Videos from our meetup yesterday are now live!

www.youtube.com/playlist?lis...

#webperf
Episode 9 | PerformanceObserver - YouTube
In this episode, Robin Marx will explain how TTFB isn't what you think it is, and Morgan Murrah will dig into the compositor.
www.youtube.com
January 29, 2026 at 9:39 PM
Say what you like about Medium, this is good:
January 29, 2026 at 10:03 PM
Reposted by Barry Pollard
I wrote this song on Saturday, recorded it yesterday and released it to you today in response to the state terror being visited on the city of Minneapolis. It’s dedicated to the people of Minneapolis, our innocent immigrant neighbors and in memory of Alex Pretti and Renee Good.

Stay free
Bruce Springsteen - Streets Of Minneapolis (Official Audio)
YouTube video by Bruce Springsteen
youtu.be
January 28, 2026 at 5:02 PM
Reposted by Barry Pollard
My book, Accessibility For Everyone, is now free and online as a website.

accessibilityforeveryone.site

The book was first published by A Book Apart in 2017 but it holds up! It covers web accessibility for designers, developers, content folks, and really everyone who works in tech.
Accessibility For Everyone by Laura Kalbag
Read the book online for free.
accessibilityforeveryone.site
January 27, 2026 at 1:14 PM
Reposted by Barry Pollard
The end of the curl bug-bounty
tldr: an attempt to reduce the _terror reporting_. **There is no longer a curl bug-bounty program.** It officially stops on January 31, 2026.

After having had a few half-baked previous takes, in April 2019 we kicked off the first real curl bug-bounty with the help of Hackerone, and while it stumbled a bit at first it has been quite successful I think. We attracted skilled researchers who reported plenty of actual vulnerabilities for which we paid fine monetary rewards. We have certainly made curl better as a direct result of this: **87 confirmed vulnerabilities and over 100,000 USD** paid as rewards to researchers. I'm quite happy and proud of this accomplishment.

I would like to especially highlight the awesome Internet Bug Bounty project, which has paid the bounties for us for many years. We could not have done this without them. Also of course Hackerone, who has graciously hosted us and been our partner through these years. Thanks!

## How we got here

Looking back, I think we can say that the downfall of the bug-bounty program started slowly in the second half of 2024 but accelerated badly in 2025. We saw an explosion in AI slop reports combined with a lower quality even in the reports that were not obvious slop – presumably because they too were actually misled by AI but with that fact just hidden better. Maybe the first five years made it possible for researchers to find and report the low hanging fruit.

Previous years we have had a rate of somewhere north of 15% of the submissions ending up confirmed vulnerabilities. Starting 2025, the confirmed-rate plummeted to below 5%. Not even one in twenty was _real_.

The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.
I have also started to get the feeling that a lot of the security reporters submit reports with a _bad faith attitude._ These "helpers" try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually _improve_ curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don't think we need more of that.

There are these three bad trends combined that makes us take this step: the mind-numbing AI slop, humans doing worse than ever and the apparent will to poke holes rather than to help.

## Actions

In an attempt to do something about the sorry state of curl security reports, this is what we do:

* We no longer offer any monetary rewards for security reports – no matter which severity. In an attempt to remove the incentives for submitting made up lies.
* We stop using Hackerone as the recommended channel to report security problems. To make the change immediately obvious and because without a bug-bounty program we don't need it.
* We refer everyone to submit suspected curl security problems on GitHub using their _Private vulnerability reporting_ feature.
* We continue to immediately _ban and publicly ridicule_ everyone who submits AI slop to the project.

## Maintain curl security

We believe that we can maintain and continue to evolve curl security in spite of this change. Maybe even improve thanks to this, as hopefully this step helps prevent more people pouring sand into the machine. Ideally we reduce the amount of wasted time and effort. I believe the best and our most valued security reporters still will tell us when they find security vulnerabilities.

## Instead

If you suspect a security problem in curl going forward, we advise you to head over to GitHub and submit them there. Alternatively, you send an email with the full report to `security @ curl.se`.
In both cases, the report is received and handled privately by the curl security team. But with _no monetary reward offered_.

## Leaving Hackerone

Hackerone was good to us and they have graciously allowed us to run our program on their platform for free for many years. We thank them for that service. As we now drop the rewards, we feel it makes a clear cut and displays a clearer message to everyone involved by also moving away from Hackerone as a platform for vulnerability reporting. It makes the change more visible.

## Future disclosures

It is probably going to be harder for us to publicly disclose every incoming security report in the same way we have done it on Hackerone for the last year. We need to work out something to make sure that we can keep doing it at least imperfectly, because I believe in the goodness of such transparency.

## We stay on GitHub

Let me emphasize that this change does not impact our presence and mode of operation with the curl repository and its hosting on GitHub. We hear about projects having problems with low-quality AI slop submissions on GitHub as well, in the form of issues and pull-requests, but for curl we have not (yet) seen this – and frankly I don't think switching to a GitHub alternative saves us from that.

## Other projects do better

Compared to others, we seem to be affected by the sloppy security reports to a higher degree than the average Open Source project. With the help of Hackerone, we got numbers of how the curl bug-bounty has compared with other programs over the last year. It turns out curl's program has seen more volume and noise than other public open source bug bounty programs in the same cohort. Over the past four quarters, curl's inbound report volume has risen sharply, while other bounty-paying open source programs in the cohort, such as Ruby, Node, and Rails, have not seen a meaningful increase and have remained mostly flat or declined slightly.
In the chart, the pink line represents curl's report volume, and the gray line reflects the broader cohort.

[Chart: Inbound Report Volume on Hackerone – curl compared to OSS peers]

We suspect the idea of getting money for it is a big part of the explanation. It brings in real reports, but makes it too easy to be annoying with little to no penalty to the user. The reputation system and available program settings were not sufficient for us to prevent sand from getting into the machine. The exact reason why we suffer more of this abuse than others remains a subject for further speculation and research.

## If the volume keeps up

There is a non-zero risk that our guesses are wrong and that the volume and security report frequency will keep up even after these changes go into effect. If that happens, we will deal with it then and take further appropriate steps. I prefer not to overdo things or _overplan_ already now for something that ideally does not happen.

## We won't charge

People keep suggesting that one way to deal with the report tsunami is to _charge_ security researchers a small amount of money for the privilege of submitting a vulnerability report to us. A _curl reporters security club_ with an entrance fee. I think that is a less good solution than just dropping the bounty. Some of the reasons include:

* Charging people money in an international context is complicated and a maintenance burden.
* Dealing with charge-backs, returns and other complaints and friction add work.
* It would limit who could or would submit issues. Even some who actually find legitimate issues.

Maybe we need to do this later anyway, but we stay away from it for now.

## Pull requests are less of a problem

We have seen other projects and repositories see similar AI-induced problems for pull requests, but this has not been a problem for the curl project.
I believe for PRs we have much better means to sort out the weeds automatically, since we have tools, tests and scanners to verify such contributions. We don't need to waste any human time on pull requests until the quality is good enough to get green check-marks from 200 CI jobs.

## Related

I will do a talk at FOSDEM 2026 titled Open Source Security in spite of AI that of course will touch on this subject.

## Future

We never say never. This is now and we might have reasons to reconsider and make a different decision in the future. If we do, we will let you know. These changes are applied now with the hope that they will have a positive effect for the project and its maintainers. If that turns out to not be the outcome, we will of course continue and apply further changes later.

## Media

Since I created the pull request for updating the bug-bounty information for curl on January 14, almost two weeks before we merged it, various media picked up the news and published articles. Long before I posted this blog post.

* The Register: Curl shutters bug bounty program to remove incentive for submitting AI slop
* Elektroniktidningen: cURL removes bug bounties
* Heise online: curl: Projekt beendet Bug-Bounty-Programm
* Neowin: Beloved tool, cURL is shutting down its bug bounty over AI slop reports
* Golem: Curl-Entwickler dreht dem "KI-Schrott" den Geldhahn zu
* Linux Easy: cURL chiude il programma bug bounty: troppi report generati dall'AI
* Bleeping Computer: Curl ending bug bounty program after flood of AI slop reports
* The New Stack: Drowning in AI slop, cURL ends bug bounties
* Ars Technica: Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"
* PressMind Labs: cURL kończy program bug bounty – czy to koniec jakości zgłoszeń?
* Socket: curl Shuts Down Bug Bounty Program After Flood of AI Slop Reports

Also discussed (indirectly) on Hacker News.
daniel.haxx.se
January 26, 2026 at 7:25 AM
Reposted by Barry Pollard
The killing of Alex Pretti is a heartbreaking tragedy. It should also be a wake-up call to every American, regardless of party, that many of our core values as a nation are increasingly under assault.
January 25, 2026 at 5:39 PM
Reposted by Barry Pollard
Don't be disappointed: only 20 in-person tickets left for #SotB26

2026.stateofthebrowser.com/tickets/
Tickets | State of the Browser
More information and buying options for tickets.
2026.stateofthebrowser.com
January 23, 2026 at 7:07 PM
I just merged the code change for this. We'll now have new Device Memory API limits from Chrome 146 and other Chromium-based browsers:

- Android: 2, 4, 8
- Others: 2, 4, 8, 16, 32

Replacing the old values of 0.25, 0.5, 1, 2, 4, 8 which have grown outdated.

Next up, updating MDN documentation...
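A sketch of how reported values could map onto the new buckets. The bucket lists come straight from the post; the clamp-up-then-round-down behavior is my assumption for illustration, not the spec text.

```typescript
// New Device Memory API buckets from the post (Chrome 146 onwards).
const ANDROID_BUCKETS = [2, 4, 8];
const OTHER_BUCKETS = [2, 4, 8, 16, 32];

// Assumed mapping: clamp up to the smallest bucket, otherwise round
// down to the largest bucket not exceeding the device's actual RAM.
function bucketDeviceMemory(ramGiB: number, buckets: number[]): number {
  let result = buckets[0];
  for (const b of buckets) {
    if (ramGiB >= b) result = b; // keep the largest bucket we qualify for
  }
  return result;
}
```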
January 23, 2026 at 6:56 PM
Reposted by Barry Pollard
Debugging web performance with Chrome DevTools. @nucliweb.net shows how to triage big assets in Network, read CrUX live Core Web Vitals, and use Performance Insights and traces to improve LCP, CLS, and INP. #performance #devtools #chrome

calendar.perfplanet.com/2025/chrome-...
January 7, 2026 at 12:00 PM
👀
I'm asking because I'm building an alternative to npmjs.com, including the admin ui piece

I have a working mvp, although of course it's very 🚧

if this is something you'd like to contribute to, and you've experienced any of these pain points, let me know - always more fun to build together! 🙏
🙋‍♂️ so ... for reasons:

I would love to know people's frustrations with:

- the current npmjs.com
- admin user flows on npm web ui (and cli, locally)

🙏
January 23, 2026 at 6:04 PM