David Nowak
@davidnowak.me
I bridge technical expertise with human understanding. Built solutions for millions. I help organizations question assumptions before costly mistakes. Connecting dots, creating impact.

🌐 davidnowak.me
🗞️ thestrategiccodex.com
Googlers gotta Google 🤡
November 13, 2025 at 4:53 AM
When you've survived decades in a system that never stopped abusing you, you recognize the pattern: detection isn't the problem. Willingness to act is. Technology can't automate courage. We need governance reform, not algorithmic distraction.
November 13, 2025 at 2:02 AM
Real accountability requires: lay-majority medical boards with enforcement power, mandatory reporting with whistleblower protection, transparent discipline outcomes, individual liability for documented abuse, accessible complaint pathways. None involve AI.
November 13, 2025 at 2:02 AM
AI doesn't fix accountability—it provides cover. Deploy automated scribes, efficiency metrics climb, leadership points to innovation. Denials spike via algorithms, but the algorithm becomes the target instead of policy. Technology substitutes for governance.
November 13, 2025 at 2:02 AM
Government healthcare creates another shield: qualified immunity. Sue for negligence? You're suing the state, not the clinician. The government pays quietly. The individual faces minimal consequence. The board moves slowly or not at all. Pattern repeats.
November 13, 2025 at 2:02 AM
Hospitals fail to capture half of all patient harm events. When adverse events are documented, reporting rates to authorities: 0%. Staff coordinate obstruction. The system isn't broken—it's designed this way: silence costs nothing, speaking costs everything.
November 13, 2025 at 2:02 AM
State medical boards took just 0.81 disciplinary actions per 1,000 physicians in 2021-23. But Ohio's rate was 11x higher than Indiana's. Physician misconduct doesn't vary 11-fold by state—willingness to act does. Boards are run by doctors protecting their own.
November 13, 2025 at 2:02 AM
There are a lot more horrendous things happening in NZ that no one wants to talk about. 👽
November 12, 2025 at 7:06 PM
Here's the question I keep turning over: does Meta donate that scam revenue to fraud education and reset the narrative? Or does it wait for regulators to force disgorgement? Either way, the next move reveals whether we've learned anything about accountability.
November 12, 2025 at 1:39 PM
42 state attorneys general sent Meta a letter demanding action. CFPB opened an investigation. Forrest's lawsuit survived dismissal. EU issued preliminary DSA violations. The pressure is building from multiple directions because $16B in scam revenue creates unavoidable scrutiny.
November 12, 2025 at 1:39 PM
What gets me about Rob Leathern and Rob Goldman leaving to build CollectiveMetrics: they knew the incentives wouldn't align. External pressure isn't a nice-to-have for accountability. It's the only mechanism that works when your core model depends on the problem persisting.
November 12, 2025 at 1:39 PM
Their system learns which users click scams. Then serves them more. So a victim becomes a repeated target for the same attack. Meta profits. The scammer profits. The victim loses again. That's not a moderation failure. That's a business model with externalities baked in.
November 12, 2025 at 1:39 PM
The internal memo said: don't enforce fraud if it costs more than 0.15% of revenue. That's not a guideline. That's a profit floor. And what kills me is how explicit it is. They didn't have to write it down. They chose to. That's where the real story lives.
November 12, 2025 at 1:39 PM
The pattern's clear and scholars have the word for it: authoritarian. Mass surveillance without judicial oversight, weaponized enforcement against dissent, systematic constitutional violations. The infrastructure's being built right now.
November 12, 2025 at 1:49 AM
Tucker's framing cuts to the bone: "Immigration powers are being used to justify mass surveillance of everybody." This administration is willing to break whatever laws exist to build surveillance infrastructure that can be weaponized for any purpose those in power decide.
November 12, 2025 at 1:49 AM
Who's speaking up? Not politicians playing games. Emily Tucker at Georgetown Law's Privacy Center, Jeramie Scott at EPIC, community organizers like Ron Gochez running defense patrols—people whose work actually protects others without seeking power or profit.
November 12, 2025 at 1:49 AM
The Fourth Amendment gap is real. Because facial recognition monitors public spaces without physical intrusion, courts struggle to call it a "search" requiring protection. That legal gray zone is being weaponized to justify mass surveillance of the entire population.
November 12, 2025 at 1:49 AM
Here's what gets under my skin: Georgetown Law found ICE can already locate 3 out of 4 U.S. adults through utility records. They've scanned a third of Americans' driver's license photos. This dragnet hits everyone, not just immigrants facing deportation.
November 12, 2025 at 1:49 AM
The tech itself is chilling: Mobile Fortify scans faces against DHS databases instantly. Paragon's Graphite spyware accesses everything on your phone—encrypted messages, location, contacts—just by sending you a text. You don't even need to click anything.
November 12, 2025 at 1:49 AM
The Cyber Threat Alliance nailed it: GenAI isn't making adversaries smarter—it's making them more efficient. That's a different problem. Lower barriers to entry, higher volume, but not higher sophistication. The question is how long that gap remains open.
November 11, 2025 at 7:40 PM
One finding stands out: threat actors successfully bypassed Gemini's guardrails by posing as white-hat researchers in capture-the-flag competitions. Google has since patched this. But it raises questions about how safety measures hold up against persistent, creative adversaries.
November 11, 2025 at 7:40 PM
The real tension: AI companies like Anthropic and OpenAI trumpet these threats while seeking funding rounds. Their reports often bury disclaimers about limitations. Meanwhile, independent researchers and Google's own technical analysis show no breakthrough capabilities—yet.
November 11, 2025 at 7:40 PM