Albert Biketi
albertbfx.bsky.social
{something flirty} when quantum is hot, some pulse code modulation helps between takes. Chips and scalable oversight through mathematics *AND* ethics for AI Alignment. VP/GM - Cyber warrior. HP -> Atalla -> Splunk -> Google + Mandiant -> Tahiti 🤓
@basicappleguy.com Apple, why isn’t there a part of my MacBook that can charge my watch and my phone at rest? Safety? Do it for the people who forget cables. Did you forget your historical disdain for cables? Thanks, a happy customer.
April 25, 2025 at 7:51 AM
This is from San Francisco as I prepare to board after a productive week of learning. So much help from friends along the way. Learning from customers is everything; communicating that understanding and revealing potential misunderstandings is delicate, even when specs are clear. Why?
April 15, 2025 at 4:42 AM
People really don’t grasp the meaning of the words “software is eating everything” until they see what a new form of tech (AI) can do to a much older form of tech (government). Musk is the messenger of a very difficult truth, first seen with the Twitter layoffs. DOGE recommends, Trump decides.
February 4, 2025 at 2:09 AM
I will note separately that when BlueSky adds a BETA! 😆 Trending feature, every user should ask what’s in the algorithm, and think about that every day. Otherwise all the old problems will repeat.

Animal Farm. If you don’t pay for BlueSky’s servers, you may not think about it. Someone does!
January 16, 2025 at 1:54 AM
Some of the identical-prompt tests I regularly run across ChatGPT, Gemini, Claude, Grok, and Llama reveal fascinating differences. I’m sad to say the results from Gemini are almost always a revelation of ways of thinking that are, let’s just say, distinct.
January 14, 2025 at 3:31 AM
The right is at its best when it fights for walls that need to be put up, and the left is at its best when it tears down walls that need to be torn down.

A healthy society needs both forces, in thoughtful balance, with less demonization, for the betterment of all. We’re so far from that, it’s sad.
January 11, 2025 at 7:48 PM
I’ve been running experiments for 5 months on the behavior of the major LLMs. The most fascinating thing is the friction they persistently introduce in human-to-human translation. It’s so odd that they can’t help breaking roles. Can someone relate/explain?
January 2, 2025 at 8:18 PM