AI By the Bay
@aibythebay.bsky.social
Developer Conference: Functional Programming | Cloud | Data | AI | Open-Source Science: 60+ talks & panels http://functional.tv #scalebythebay
November 17-19, 2025, Oakland
CFP: https://sessionize.com/by-the-bay
Early Birds: https://ai.bythebay.io/register
𝘈𝘯 𝘦𝘹𝘤𝘪𝘵𝘦𝘥 𝘈𝘐 𝘥𝘦𝘷𝘦𝘭𝘰𝘱𝘦𝘳 𝘢𝘥𝘷𝘰𝘤𝘢𝘵𝘦 @jbaru.ch, and 𝘢 𝘤𝘺𝘯𝘪𝘤𝘢𝘭 𝘴𝘦𝘯𝘪𝘰𝘳 𝘦𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘪𝘯𝘨 𝘮𝘢𝘯𝘢𝘨𝘦𝘳, Leonid Igolnik, take the stage to debate whether AI-driven development is finally ready for prime time or just another way to get things wrong.
April 3, 2025 at 4:44 PM
Maybe we’ve been 👀 at this the wrong way. AI might be trustworthy but only if we rethink how we guide it. What if there were a way to ensure it understands intent before it writes a single line of code? A way to catch mistakes before they happen instead of fixing them afterward?
April 3, 2025 at 4:44 PM
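One concrete reading of "ensure it understands intent first": a human writes the intent down as an executable check before any code is generated, and the AI's output only counts as done once it passes. A minimal sketch in Python, assuming a hypothetical parse_invoice_total function and CSV format (not from the post):

# Intent captured as an executable contract before any code is generated.
# (parse_invoice_total and its CSV line format are hypothetical examples.)
import pytest

def test_total_sums_line_items():
    from invoices import parse_invoice_total  # implementation to be generated
    assert parse_invoice_total("Widget,2,9.99\nGadget,1,5.00") == pytest.approx(24.98)

def test_rejects_malformed_quantities():
    from invoices import parse_invoice_total
    with pytest.raises(ValueError):
        parse_invoice_total("Widget,two,9.99")

Until a generated invoices module satisfies both tests, nothing ships; review starts from the stated intent rather than from the diff.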
That leaves us manually checking everything. 🧐 The safest bet is to assume it’s wrong and review every line yourself, which doesn’t exactly scream “𝘱𝘳𝘰𝘥𝘶𝘤𝘵𝘪𝘷𝘪𝘵𝘺 𝘣𝘰𝘰𝘴𝘵.”

So what’s the alternative? 🤔
April 3, 2025 at 4:44 PM
Having AI test its own work doesn’t help. If we can’t trust it to write code, why would we trust it to write tests after the fact? That’s not verification; it’s an echo chamber.
April 3, 2025 at 4:44 PM
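One way to avoid that echo chamber is to keep verification independent of the generated code: a human derives the checks from the spec, for example as property-based tests. A minimal sketch with Hypothesis, using a hypothetical slugify function as a stand-in for AI-generated code:

# Properties come from the requirement ("lowercase, no spaces"),
# not from reading the generated implementation.
from hypothesis import given, strategies as st

def slugify(title: str) -> str:
    # Stand-in for an AI-generated implementation under review (hypothetical).
    return "-".join(title.lower().split())

@given(st.text())
def test_slug_is_lowercase_and_free_of_spaces(title):
    slug = slugify(title)
    assert slug == slug.lower()
    assert " " not in slug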
If your organization values ✅ correctness, ⚡ velocity, and 🔍 clarity in the age of AI, this is a conversation you need to be part of.

(the link is below. you know what to do)
ai.bythebay.io/register
Register | AI By the Bay
The conference always sells out. Do not hesitate; book your ticket now.
March 31, 2025 at 7:07 PM
“Human-in-the-loop” is no longer a sufficient standard.
This is about keeping a human in charge, because long-term maintainability, trust, and resilience depend on it.
March 31, 2025 at 7:07 PM
🧬 The use of Zig for ultra-efficient inference, including breakthroughs from ZL.
🔐 Type-safe systems, advanced patterns, and the enduring value of correctness.
🔁 The implications for CI/CD, observability, compliance — and developer accountability — in context-driven IDEs.
March 31, 2025 at 7:07 PM
This track addresses what it means to engineer software with rigor in an AI-native world.

We’ll examine:

🚀 The growing role of Rust-backed Python tooling like uv and Pydantic, reshaping performance and reliability (a small sketch follows this post).
👇
March 31, 2025 at 7:07 PM
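To make the uv and Pydantic bullet above concrete: Pydantic v2 runs validation in a Rust core, and uv is a Rust-based package and project manager, so a typed guard on untrusted input stays fast to write and run. A minimal sketch, assuming a hypothetical ReleaseNote schema (the field names are illustrative, not from the post):

# Environment setup with uv:
#   uv init demo && cd demo && uv add pydantic && uv run python check.py
from pydantic import BaseModel, Field, ValidationError

class ReleaseNote(BaseModel):
    version: str = Field(pattern=r"^\d+\.\d+\.\d+$")
    summary: str
    breaking_changes: list[str] = []

raw = '{"version": "2.1.0", "summary": "Faster parser", "breaking_changes": []}'

try:
    note = ReleaseNote.model_validate_json(raw)  # Rust-backed parsing + validation
    print(note.version, note.summary)
except ValidationError as err:
    print("Rejected:", err)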
Modern development is increasingly shaped by tools like GitHub Copilot, Cursor, and Codeium, and by platforms like Replit that promise full-stack code generation. But these gains come with trade-offs. When the environment becomes opaque or fails unpredictably, developer autonomy is at risk.
March 31, 2025 at 7:07 PM
🎯 This is why we’re having the Thoughtful & AI-Native Coding track at AI By the Bay.

We’re not chasing trends. We’re focused on the foundational question:
🧠 Can you still understand, debug, and take responsibility for the systems you build — even with AI in the loop?
March 31, 2025 at 7:07 PM
What happens when your test tab disappears right before launch?
That’s not hypothetical. In a recent demo from a major platform, the developer experience collapsed when key interface elements failed to appear.
❌ No tests.
❌ No control.
❌ No clarity.
March 31, 2025 at 7:07 PM
We’re lucky to have him and even luckier to call him a regular. ❤️
March 26, 2025 at 4:11 PM
We invite everyone to join us in welcoming Julien Le Dem! We're thrilled to have him back in our program for the second time in a row. 👏
March 20, 2025 at 6:04 PM
Mary is a proven marker of conference quality: if she’s involved, you know it’s an event worth attending! ❤️
March 18, 2025 at 9:13 PM
🌺 Leah McGuire, ML engineer at Faros AI and keynote speaker at our latest edition, explored operationalizing ML models and building trust in AI systems, challenging us to rethink our approach to LLMs.

…and many more incredible women shaping the future of AI! ❤️
March 11, 2025 at 5:46 PM
🌺 Holden Karau, an open-source and distributed computing champion, explored Apache Spark optimizations and scalable data engineering.
🎉 Special congratulations on Holden’s new role as co-founder of Fight Health Insurance, leveraging AI & LLMs to drive positive impact!
March 11, 2025 at 5:46 PM
🌺 Shreya Rajpal, founder of Guardrails AI, brought deep insights into AI safety and reliability, showing how to build LLM applications that are not just powerful but also trustworthy.
March 11, 2025 at 5:46 PM