Adrian Brudaru
datateam.bsky.social
Data engineer & Cofounder @dlthub. Building out the tooling I wish I had.
February 16, 2026 at 12:03 PM
From APIs to Warehouses 📦

On Feb 17 (16:30 CET), together with DataTalks.Club, Aashish Nair will walk through building end-to-end ingestion pipelines with dlt, from raw APIs to production-ready warehouse loads.

Register here 👇

From APIs to Warehouses: AI-Assisted Data Ingestion with dlt · Luma
This hands-on workshop focuses on building reliable data ingestion pipelines to data warehouses (for example, Snowflake) using dlt (data load tool), enhanced…
luma.com
February 12, 2026 at 6:05 PM
What if dimensional modeling didn’t mean hours of boilerplate SQL?

We built an AI workflow that turns raw data into semantic models in minutes, powered by 20 questions.

Rethinking data transformation 👇
The Last Mile is Solved by Slop
I didn't vibe-build a product. I wrote a messy scaffold that runs a pipeline, grabs the schema, and forces an agent to build a star schema. It works shockingly well.
dlthub.com
February 12, 2026 at 4:40 PM
Berlin, it’s meetup time!

Join us for the dltHub Community Meetup, an evening of real-world demos, lessons learned, and conversations with builders.

📍 Rosebud, Berlin
📆 Feb 17 | 18:00 – 21:00

Curious about what we’re building at dltHub? Come by 👋

dltHub Community Meetup in Berlin with Cognee, Untitled Data Company, Gemma Analytics & Babbel · Luma
Join us for the dltHub Community Meetup in Berlin. This evening is for curious minds who want to learn more about what we’re building at dltHub. We’ll share a…
luma.com
February 11, 2026 at 9:04 PM
Production pipelines don’t fail loudly; they drift.

Feb 12 · 16:00 CET - Online
Hands-on workshop on operating pipelines in production:
• schema changes
• backfills
• CI/CD
• long-term reliability

Register → https://community.dlthub.com/workshop-maintaining-servicing-production-data-pipelines
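Of the topics above, backfills usually come down to splitting history into small, idempotent windows that can be replayed independently. A library-agnostic sketch (not material from the workshop; window size and dates are made up for illustration):

```python
from datetime import date, timedelta

def backfill_windows(start: date, end: date, days: int = 7):
    """Split [start, end) into fixed-size windows for idempotent backfill runs.

    Each window can be loaded (and safely re-run) on its own, which is the
    usual way to replay history without one giant, fragile job.
    """
    cursor = start
    while cursor < end:
        window_end = min(cursor + timedelta(days=days), end)
        yield cursor, window_end
        cursor = window_end

# Example: backfill January 2026 in weekly chunks.
windows = list(backfill_windows(date(2026, 1, 1), date(2026, 2, 1), days=7))
```

Each `(start, end)` pair can then drive one pipeline run, with the last window clipped to the overall end date.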
February 10, 2026 at 5:27 PM
💘 Data Valentine Challenge started today.

5 days. 5 live data sessions with: 
@datarecce.bsky.social, Greybeam, @databasetycoon.bsky.social, @bauplan.bsky.social

Our slot: Wednesday → Pipelines That Don’t Ghost You

Feb 9–13 | 9am PT | Online

https://reccehq.com/data-valentine-week-challenge/
February 9, 2026 at 10:16 PM
January’s Rising Stars in the dlt ecosystem 👇

Builders are vibe coding pipelines around real-time markets, AI dev platforms, macro data, and more.

What’s trending right now:
February 9, 2026 at 10:52 AM
Arrow + ADBC + dlt just broke the EL speed limit.

5M rows DuckDB→MySQL:
SQLAlchemy 344s
Arrow + ADBC 92s (3.7× faster)

One line of code. Columnar end-to-end.

Benchmarks:
3.7x Faster Pipelines: Benchmarking Arrow & ADBC vs. SQLAlchemy for EL
Moved 5M rows from DuckDB to MySQL 3.7x faster, reducing time from 344s to 92s by switching from SQLAlchemy’s row-by-row path to Arrow + ADBC’s columnar pipeline.
dlthub.com
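The "one line" is dlt's pluggable extraction backend. A configuration sketch, assuming dlt's documented sql_database source and sqlalchemy destination — the connection strings and pipeline name are placeholders, and the ADBC-backed write path is as described in the post:

```python
# Configuration sketch only (placeholders throughout), assuming the
# sql_database source from the dlt docs.
import dlt
from dlt.sources.sql_database import sql_database

# The one-line switch: backend="pyarrow" replaces SQLAlchemy's
# row-by-row cursor with Arrow record batches, columnar end-to-end.
source = sql_database(
    "duckdb:///local.duckdb",  # placeholder source connection string
    backend="pyarrow",
)

pipeline = dlt.pipeline(
    pipeline_name="el_benchmark",  # hypothetical name
    destination=dlt.destinations.sqlalchemy(
        "mysql+pymysql://user:pass@localhost/db"  # placeholder target
    ),
    dataset_name="benchmark",
)
# pipeline.run(source)  # load step; see the post for the ADBC details
```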
February 3, 2026 at 6:00 PM
The Modern Data Stack™ was a comfy lie that turned data engineers into passive consumers. Now the bill's due: the market is splitting into vendor-locked hell vs builder freedom.

Read the Builder's Manifesto:
The Builder: Outliving the Modern Data Stack
We were told that democratization meant 'safety,' but all we got were expensive cages. The era of the SaaS hostage is ending; the era of the sovereign Builder has begun.
dlthub.com
January 28, 2026 at 3:45 PM
Want to influence the tools you use every day?

We’re hosting a Builder’s Data Stack meetup focused on developer flow, fast iteration, and shaping the roadmap with the community.

Pull up a chair:
dltHub ❤️ Marimo ❤️ MotherDuck · Luma
dltHub and Marimo and MotherDuck are having a child. What are we looking at? dltHub provides the ELT, runtime, and execution layer, turning production data…
luma.com
January 24, 2026 at 1:08 PM
An AI agent ignored a code freeze, wiped a prod DB, then hallucinated data to cover it up.

Data quality in the LLM era isn’t optional; it’s a safety problem.

We call it data as plutonium: powerful and dangerous without containment.

The Plutonium Protocol: Engineering Safety for the LLM Intern Era
The “data is oil” era is over. With LLMs, data is plutonium: powerful, toxic. Shift left and secure the reactor with 5 quality pillars.
dlthub.com
January 21, 2026 at 6:55 PM
🇫🇷 Paris data folks 💛

We’re hosting a dlt Community Meetup in Paris on Feb 4th (6–9 PM) together with Polycea.

A community meetup focused on practical takeaways, shared learnings, and conversations with people using dlt hands-on.

Join us here:
dlt Paris Community Meetup #2 with dltHub & Polycea · Luma
Join us for an evening of community and conversation! Co-hosted by dltHub and Polycea, this meetup brings people together for short talks and networking with…
luma.com
January 15, 2026 at 3:45 PM
Data quality is the vegetables of data engineering: everyone agrees it's important, but nobody wants to implement it.

To increase your 𝚟̶𝚎̶𝚐̶𝚎̶𝚝̶𝚊̶𝚋̶𝚕̶𝚎̶ ̶𝚒̶𝚗̶𝚝̶𝚊̶𝚔̶𝚎̶ test coverage, check out these 11 delicious recipes.

https://dlthub.com/blog/practical-data-quality-recipes-with-dlt
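In the spirit of those recipes (not one of the 11 verbatim), a minimal, library-agnostic check: reconcile source row counts against what actually landed. Table names and the tolerance are hypothetical:

```python
# Hypothetical mini-recipe: reconcile row counts between source and
# destination. Tables and thresholds below are made up for illustration.
def reconcile_counts(source_counts: dict, dest_counts: dict,
                     tolerance: float = 0.0):
    """Return (table, source_count, dest_count) tuples that fail the check."""
    failures = []
    for table, src in source_counts.items():
        dst = dest_counts.get(table, 0)
        allowed = src * tolerance  # absolute drift allowed for this table
        if abs(src - dst) > allowed:
            failures.append((table, src, dst))
    return failures

failures = reconcile_counts(
    {"orders": 1000, "customers": 250},
    {"orders": 990, "customers": 250},
    tolerance=0.005,  # allow 0.5% drift
)
```

Run a check like this after each load and fail the job (or page someone) when `failures` is non-empty.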
January 11, 2026 at 5:07 PM
Semantic layers are the most important "boring" part of data. 

Building them manually is a bottleneck for Chat-BI. dlt is changing the game by "autofilling" the metadata gap, turning months of modeling into minutes of automation. 

Autofilling the Boring Semantic Layer: From Sakila to Chat-BI with dltHub
Build one semantic model and reuse it across APIs, chatbots, and apps. Let LLMs handle the tedious mapping so you can ship data products that quietly just work.
dlthub.com
January 7, 2026 at 9:31 PM
Most data quality failures happen because checks come too late.

dlt + dltHub treat quality as a lifecycle: in-flight checks, safe staging, and production monitoring.

Catch issues earlier, fix less and trust your data more.

docs: https://dlthub.com/docs/general-usage/data-quality-lifecycle
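An in-flight check is just a plain function attached to a resource — dlt's documented `add_map`/`add_filter` hooks run it per row during extraction. A minimal sketch; the rule below (non-empty email, non-negative amount) is a hypothetical example:

```python
# Sketch of an in-flight quality check: a plain function that could be
# attached to a dlt resource via resource.add_map(check_row), per the
# dlt docs. The specific rules here are hypothetical.
def check_row(row: dict) -> dict:
    problems = []
    if not row.get("email"):
        problems.append("missing email")
    if row.get("amount", 0) < 0:
        problems.append("negative amount")
    # Flag instead of dropping, so bad rows land in staging for inspection.
    row["_quality_issues"] = "; ".join(problems) or None
    return row

rows = [
    {"email": "a@example.com", "amount": 10},
    {"email": "", "amount": -5},
]
checked = [check_row(r) for r in rows]
```

Flagging rather than filtering keeps the safe-staging step meaningful: nothing silently disappears, and monitoring can alert on the flag rate.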
January 4, 2026 at 7:28 PM
@ssp.sh drops another great deep-dive: a declarative data stack (dlt + @clickhouse.com + @rilldata.com) that simplifies tracking cloud spend across multiple platforms.

A complete cloud-native FinOps setup in minutes.

🔗 Explore the full walkthrough
Dlt+ClickHouse+Rill: Multi-Cloud Cost Analytics, Cloud-Ready
FinOps Made Easy: A Starter Repo to Oversee Cloud Costs from Different Hyperscalers.
www.ssp.sh
December 5, 2025 at 7:03 PM
We're thrilled to take the stage at the Snowflake for Startups Pitch Night!

Join us at the Silicon Valley AI Hub to see how dlt's code & LLM-first infra-native data ingestion library is the fastest way to get compliant data into Snowflake.

Snowflake for Startups Pitch Night · Luma
Join Snowflake for Startups and Proving Ground for an evening of innovation and networking at the SVAI Hub. This pitch night is a chance to see some of the Bay…
luma.com
December 5, 2025 at 10:29 AM
AI workflows break on LLM updates?

Our Anti-Entropy pattern fixes this with declarative scaffolds, error loops, and dashboards for antifragile convergence.

Saves time, acts like an invisible senior engineer.

Ditch crutches, skip the ‘Deer in Headlights’ panic!
Convergence: The Anti-Entropy Engine
Most LLM runs don’t fail. They converge fast, and the secret isn’t smarter models but better scaffolds that guide the work instead of forcing it.
dlthub.com
November 28, 2025 at 7:29 PM
To overcome complex cloud cost analysis, @ssp.sh showed how dlt can ingest and normalize AWS, GCP, and Stripe data into a unified cost dashboard.

The result is a single view for ROI analysis powered by a simple ELT.

Check out the full solution! 👇
Multi-Cloud Cost Analytics: From Cost-Export to Parquet to Rill
Learn how to unify AWS and GCP costs with revenue data in a single dashboard. Step-by-step guide using dlt, Parquet, and Rill. Clone and run immediately.
www.ssp.sh
November 25, 2025 at 4:00 PM
European data teams can enjoy lightning-fast analytics & production-ready pipelines with the @motherduck EU region now fully available.

Choose loading via MotherDuck or dlt's native DuckLake destination → support for Postgres, DuckDB, SQLite, MySQL.
Motherduck Europe & dlt DuckLake support
MotherDuck lands in Europe with serverless DuckDB warehousing. dlt adds DuckLake support, giving EU teams a fast, modern data stack.
dlthub.com
November 13, 2025 at 3:23 PM
Ever launched a data pipeline & wondered what’s happening under the hood? The dlt Workspace Dashboard gives real-time visibility into pipeline state, schemas, live dataset queries, run traces → all in one web app. Built with @marimo.io
Try it now: 👉 https://dlthub.com/docs/general-usage/dashboard
November 12, 2025 at 3:39 PM
After pushing LLMs to their limits, we found a better way.

A hybrid model that grounds AI in verified facts → fewer hallucinations, faster onboarding, and data pipelines that just work.

Read more here 👉
The feature we were afraid to talk about
This is the story of how we made our LLM generation workflow superior to starting from raw docs.
dlthub.com
October 23, 2025 at 9:42 AM
The real AI win isn't superhuman agents; it's scaled mediocrity.
Doing less with less at massive scale unlocks tasks that were once uneconomical.
The magic is in aggregate value, not perfect outputs. Empower teams with practical AI tools. 
🔗 https://dlthub.com/blog/the-real-ai-win-scaled-mediocrity
October 17, 2025 at 12:39 PM
For years, the most celebrated person on the data team was the one who could write the heroic, last-minute query.
We celebrated firefighters. Craftsmen.
We built a culture around reacting, not architecting. A leverage trap of our own making.
October 14, 2025 at 12:50 PM
1/5 Perfect Friday reading: Erfan Hesami's guide to dlt makes data pipelines actually simple.

Our co-founder spent a decade rebuilding the same pipelines. One question changed everything: "What if there was a way to reuse code?"

That became dlt.

🧵👇
October 10, 2025 at 11:50 AM