Carlos Phoenix
@iamcarlosphoenix.bsky.social
Cybersecurity expert and inventor. I ride horses.
Also, here is a picture of some of the popular clip designs.
February 9, 2025 at 4:34 AM
Exactly right! This gelding came from Idaho and grew a very thick winter coat. We are in SoCal, so the weather is much warmer. That’s why we clipped him (body trace pattern) so he doesn’t overheat. He is much less sweaty and cools out quickly.
February 9, 2025 at 4:15 AM
I resent the onslaught of pressure to use tech that’s a party trick. The data hygiene is missing! It isn’t following the innovation curve. And this will break the technology I depend on for work and life skills. Corps are taking too many shortcuts. We are not ready for the consequences. #AI 6/6
January 28, 2025 at 7:44 AM
Turning AI features off is almost impossible. I specifically bought a laptop before they added the “AI Key.” What if AI is the new Clippy? I don’t want to use it. I want to use the tech that has been working for me for the last 30+ years. Yeah, it has evolved, but it evolved collaboratively. 5/
January 28, 2025 at 7:44 AM
So I wrote all of that to make the point that AI is being pushed on consumers and workers. Normally innovations catch fire and companies adopt them last. This is reversed! And remember the data pre-req? Well, I’m struggling to see how this isn’t a bubble. 4/
January 28, 2025 at 7:44 AM
Now before anyone points out AI is not LLM, I am not equating the technologies. LLMs need good data to work. AI does way better with more data to train the models. They both need good data. Investing in good data is like going to the dentist regularly: we know we should, but we don’t always do it. 3/
January 28, 2025 at 7:44 AM
And you know what most companies dislike? Data Librarians. Yes, those people who sort through data sets, remove dups, apply labels so others can find them, et cetera. After 2002, I stopped seeing these titles. I knew AI was going to require clean data, and the data was dirty. 2/
January 28, 2025 at 7:44 AM
No… it got worse. Now a formal payment freeze is in effect without a timeline. I’ve resorted to sending checks via carrier pigeon.
December 22, 2024 at 3:00 AM
Looks like this could be helpful, but where in the process does the evaluation get used? Where in the AI Safety Model is this needed? To someone who is not building these models, this amount of detail raises more questions than it answers.
December 17, 2024 at 5:20 PM
If so, how accurate is this? Which categories and areas have been validated for accuracy? How much auditing work is required to make this model effective? How much time does this save? How do you balance energy consumption with output/accuracy?
December 14, 2024 at 5:13 AM
So if I understand this, you use RETRO to make the model more efficient, you use RAG to adapt the model to the task, and you are using PaLM (not PaLM 2) by Google, which is a 2022 release.
December 14, 2024 at 5:12 AM
Could you please link me to the arXiv article? I could not find it.
December 14, 2024 at 5:10 AM
Sounds promising, but where is the academic research? What computer science theory have you been using? I don’t want a trial so much as I’d like to learn from the methods you mentioned. I think when we skip this step, we skip the valuable discourse. Your website doesn’t go into much detail either.
December 13, 2024 at 9:46 PM
I think your point of knowledge creation is spot on. At the same time, I’m curious to see the experiment at UCLA. Hopefully the metrics are identified in advance and surveys collected at the end. The reality is many students use AI already. I think knowledge structure and normalization may be lost.
December 13, 2024 at 9:43 PM
@jenova-ai.bsky.social Do you have any links to resources that dive deeper into this? I’m curious to see what’s new and how it works.
December 13, 2024 at 6:35 PM