Alex de Pablos
alexdepablos.com
@alexdepablos.com
🌟 Software Engineer at S|ngular
💡 Builder of small, clever tech tools that make life easier.
✍️ Sharing insights on AI, software development, and my pursuit of meaningful work-life balance.
🏋️‍♂️ Fitness enthusiast, lifelong learner, and family-first thinker.
Adapt or stay behind. Your move.
February 18, 2025 at 2:27 AM
The real question isn’t who can code, but who can build real solutions.

AI makes programming more accessible, but writing code was never the hard part. The real challenge is structuring it, scaling it, and making it actually work.

If anyone can code, what really matters is who can think.
February 18, 2025 at 2:27 AM
Overall, Operator shows promise for focused, in-depth research. It might not replace bulk automation, but with clear prompts and human oversight, it could free you for higher-value work.

Thoughts on using AI this way?
February 3, 2025 at 2:10 AM
Some noted limitations: Operator may lose track when facing CAPTCHAs and can be slow for mass prospecting or CRM updates. Its deep research is impressive, but manual verification remains essential to ensure accuracy.
February 3, 2025 at 2:10 AM
Key uses include: competitor analysis (comparing prices, features & contract terms), prospect research (mining insights from sites & LinkedIn), crafting sales materials, market intel, and scheduling meetings. Each has its ups and downs.
February 3, 2025 at 2:10 AM
Operator works as a navigation agent that browses websites, LinkedIn, press releases, and more—opening tabs and extracting data in real time. It even handles CAPTCHAs with a little human help when needed.
February 3, 2025 at 2:10 AM
Bottom line: OpenAI’s o3‑mini outperforms o1/o1‑mini on coding benchmarks while remaining very cost‑efficient. Choose high mode when extra depth is needed (~20–30% extra tokens).
February 1, 2025 at 3:22 AM
DeepSeek R1 is extremely cost‑friendly with very low token prices, but may trade off some consistency & performance versus OpenAI’s models, which deliver higher accuracy and faster responses for STEM and coding tasks.
February 1, 2025 at 3:22 AM
All o3‑mini modes share the same per‑token cost. However, high mode “thinks” longer—using roughly 20–30% more tokens for complex prompts than medium mode.

That extra chain‑of‑thought yields improved detail/accuracy at a modest extra token cost.
February 1, 2025 at 3:22 AM
Pricing per million tokens (USD):
• OpenAI full o1: ~$15 input / ~$60 output
• OpenAI o1‑mini: ~$3 input / ~$12 output
• OpenAI o3‑mini: ~$1.10 input / ~$4.40 output
• DeepSeek R1 (deepseek‑reasoner): ~$0.14 input* / ~$2.19 output
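A quick back-of-the-envelope sketch of what those rates mean per request, using the approximate prices listed above; the 2k-input / 8k-output workload and the ~25% high-mode overhead are illustrative assumptions, not measured numbers:

```python
# Approximate per-million-token prices (USD) from the list above.
PRICES = {
    "o1":          {"input": 15.00, "output": 60.00},
    "o1-mini":     {"input": 3.00,  "output": 12.00},
    "o3-mini":     {"input": 1.10,  "output": 4.40},
    "deepseek-r1": {"input": 0.14,  "output": 2.19},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request, given token counts for a model in PRICES."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 2k input tokens, 8k output tokens per request.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 2_000, 8_000):.4f} per request")

# o3-mini high shares the same per-token price but "thinks" longer,
# so model it as ~25% more output tokens on the same request.
print(f"o3-mini high (~25% more tokens): "
      f"${cost_usd('o3-mini', 2_000, 10_000):.4f} per request")
```

Even with the high-mode token overhead, o3-mini stays well under the o1 and o1-mini price points for the same workload.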
February 1, 2025 at 3:22 AM
Coding performance on Codeforces (a proxy for coding skill):
• o3‑mini high: ~2130 Elo
• o3‑mini medium: ~2036 Elo
• o3‑mini low: ~1831 Elo
• Full o1: ~1891 Elo
• o1‑mini: ~1650 Elo
• DeepSeek R1 (est.): ~1900 Elo
(Source: Simon Willison’s Weblog)
February 1, 2025 at 3:22 AM
I’m no AI expert, but as a software engineer, I’ve seen how tools like GitHub Copilot make coding faster, better, and less frustrating. $500B for AI is cool and all, but for me, the real game-changer is when it solves the annoying stuff, and when it does that well. That’s where the magic is.
January 23, 2025 at 4:08 AM
Along the way, we’ll still need ways to verify that what gets generated actually matches the original intent.
January 18, 2025 at 3:49 PM
I think the future goes further: there will come a time when we don’t have to ‘program’ anything at all; we’ll just specify what we want in natural language (avoiding ambiguity) and the AI itself will handle the rest.
January 18, 2025 at 3:49 PM
Honestly, this reminds me a lot of good old TDD. We define the behavior first, then implement it. What changes now is that AI can help us “bang out the code” once the specifications or requirements are defined.
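That spec-first loop can be sketched in a few lines; `slugify` is a made-up example function here, not anything from the thread:

```python
import re

# Step 1: the spec. A test pins down the intended behavior before
# any implementation exists (human- or AI-written).
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  everywhere ") == "spaces-everywhere"

# Step 2: the implementation, written (or generated) to satisfy the spec.
def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3: run the spec against the implementation.
test_slugify()
```

The test is the contract; whether the body under it was typed by hand or produced by a model, it only ships once the spec passes.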
January 18, 2025 at 3:49 PM
Just something to think about.

Efficiency is great and all, but shouldn’t we have more clarity about the tools we’re using? Or maybe it’s just me overthinking things…
January 18, 2025 at 2:21 AM
Why would they do this?

Running giant models is expensive. Using a big one to improve the smaller ones could cut costs. It’s logical—but it also makes me curious about what else they’re not telling us.
January 18, 2025 at 2:21 AM
The challenge for all of us:
AI can either amplify our thinking or replace it.

Tools like ChatGPT shouldn’t be shortcuts to avoid effort, but partners that push us to think deeper.

The question isn’t just what AI can do, but how we’re using it to shape the next generation.
January 17, 2025 at 2:10 AM
When AI guides learning, the results can be transformative: in one program, 6 weeks of AI-supported study produced gains equivalent to roughly 2 years of typical learning.

The catch? It works best when paired with human teachers who shape the process, not when AI is left to run the show.

blogs.worldbank.org/en/education...
From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time
"AI helps us to learn, it can serve as a tutor, it can be anything you want it to be, depending on the prompt you write," says Omorogbe Uyiosa, known as "Uyi" by his friends, a student from the Edo Bo...
January 17, 2025 at 2:10 AM