typedef
@typedef.ai
typedef.ai
We are here to eat bamba and revolutionize the world of query engines. The Spark is gone; let's rethink data processing with a pinch of AI.
Note: auto-routing is being explored; today you keep full control.
Check the repo for more: github.com/typedef-ai/f...
GitHub - typedef-ai/fenic: Build reliable AI and agentic applications with DataFrames
October 21, 2025 at 11:07 PM
Mix providers (OpenAI, Anthropic) with simple aliases

Use defaults for simple ops; override model_alias for complex ones

Balance cost/latency/quality without extra orchestration
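The alias idea can be sketched in plain Python. This is not fenic's actual API (fenic registers models in its session config); the names `ModelSpec`, `MODEL_ALIASES`, and `resolve`, and the model strings, are illustrative placeholders.

```python
# Illustrative sketch of alias-based model routing -- NOT fenic's API.
# Register models once under aliases, then select per call.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSpec:
    provider: str  # e.g. "openai" or "anthropic"
    model: str     # placeholder model name

# Cheap default for simple ops, stronger model for complex ones.
MODEL_ALIASES = {
    "fast": ModelSpec("openai", "gpt-4o-mini"),
    "strong": ModelSpec("anthropic", "claude-sonnet"),
}

def resolve(alias: str = "fast") -> ModelSpec:
    """Look up a registered model; the default keeps simple ops cheap."""
    return MODEL_ALIASES[alias]

default_spec = resolve()          # simple op: take the default
complex_spec = resolve("strong")  # complex op: override the alias
```

The point of the indirection: cost/latency/quality trade-offs live in one registry instead of being scattered across call sites.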
October 21, 2025 at 11:07 PM
Teams often wire up a single model and pay for it in either cost or quality.

With Fenic, you register multiple models once and select them per call.
October 21, 2025 at 11:07 PM
Thanks to @danielvanstrien.bsky.social and @lhoestq.hf.co for the collaboration and feedback that made this possible, and to David Youngworth, who built and maintains the integration!
October 21, 2025 at 6:57 PM
A few things you can do with this new integration.

1. Rehydrate the same agent context anywhere (local → prod)
2. Versioned, auditable datasets for experiments & benchmarks
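"Rehydrate anywhere" boils down to serializing context into a versioned artifact and loading it back byte-for-byte in another environment. The fenic integration does this with Hub-hosted datasets; the file-based round-trip below only illustrates the idea, and `save_context`/`load_context` are made-up names, not part of any library.

```python
# Illustrative sketch: snapshot agent context to a versioned artifact,
# then rehydrate it identically elsewhere (local -> prod).
import json
import pathlib
import tempfile

def save_context(context: dict, path: pathlib.Path) -> None:
    # sort_keys makes snapshots deterministic, hence diffable/auditable
    path.write_text(json.dumps(context, sort_keys=True))

def load_context(path: pathlib.Path) -> dict:
    return json.loads(path.read_text())

with tempfile.TemporaryDirectory() as d:
    snapshot = pathlib.Path(d) / "context-v1.json"
    save_context({"messages": ["hi"], "run_id": 7}, snapshot)
    restored = load_context(snapshot)  # identical context, any machine
```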
fenic
huggingface.co
October 21, 2025 at 6:57 PM
Common patterns: multi-step enrichment, RAG prep, nightly jobs with partial recomputes.

For more, check the GitHub repo: github.com/typedef-ai/f...
September 24, 2025 at 1:38 AM
With fenic, it’s explicit and simple: call .cache() where it matters.

Protect pricey semantic ops (classify/extract) from re-execution

Reuse cached results across multiple downstream analyses

Recover from mid-pipeline failures without starting over
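The behavior those bullets describe is memoization of an expensive step. In fenic it is a single `.cache()` call on a DataFrame; the hand-rolled memo below (with made-up names `classify` and `_cache`) just shows why retries and downstream reuse stop re-paying tokens.

```python
# Illustrative sketch of what caching a pricey semantic op buys you.
calls = {"classify": 0}
_cache: dict = {}

def classify(docs: tuple) -> list:
    """Stand-in for an expensive LLM op; runs at most once per input."""
    key = "|".join(docs)
    if key not in _cache:
        calls["classify"] += 1            # tokens/time spent only here
        _cache[key] = ["label:" + d for d in docs]
    return _cache[key]

docs = ("flaky", "retry")
first = classify(docs)   # pays for the LLM call
second = classify(docs)  # served from cache: downstream analyses and
                         # mid-pipeline retries restart from here for free
```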
September 24, 2025 at 1:38 AM
Think of it as checkpointing for LLM workloads: cache after costly ops, restart from there if something fails.

Without caching, teams re-pay tokens and time on retries: flaky APIs, disk hiccups, long recomputes.
September 24, 2025 at 1:38 AM
Mix providers (OpenAI, Anthropic) with simple aliases

Use defaults for simple ops; override model_alias for complex ones

Balance cost/latency/quality without extra orchestration
September 22, 2025 at 11:07 PM
Teams often wire up a single model and pay for it in either cost or quality.

With Fenic, you register multiple models once and select them per call.
September 22, 2025 at 11:07 PM
Common patterns: review mining, invoice parsing, lead enrichment, spec extraction.

For more, check the GitHub repo: github.com/typedef-ai/f...
September 20, 2025 at 1:38 AM
Define a Pydantic schema; get type-checked structs (ints, bools, lists, Optionals)

Auto-prompting via function calling / structured outputs (OpenAI, Anthropic)

Use unnest() and explode() to work with the data—no manual JSON wrangling
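The schema-first idea: declare the shape once and get typed fields back instead of hand-rolled JSON parsing. fenic takes a Pydantic model for this; the dataclass sketch below stands in so the example has no third-party dependency, and `Invoice`/`parse` are illustrative names, not fenic's API.

```python
# Illustrative sketch of schema-driven extraction with type checks.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    vendor: str
    total_cents: int
    paid: bool
    line_items: list
    po_number: Optional[str] = None  # Optional fields allowed

def parse(raw: dict) -> Invoice:
    """Validate field types the way a structured-output schema would."""
    inv = Invoice(**raw)
    # bool is a subclass of int in Python, so reject it explicitly
    if isinstance(inv.total_cents, bool) or not isinstance(inv.total_cents, int):
        raise TypeError("total_cents must be an int")
    if not isinstance(inv.paid, bool):
        raise TypeError("paid must be a bool")
    return inv

invoice = parse({"vendor": "Acme", "total_cents": 1299,
                 "paid": False, "line_items": ["widget"]})
```

With the typed struct in hand, list-valued fields like `line_items` are what `unnest()`/`explode()` then flatten into rows.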
September 20, 2025 at 1:38 AM
Most teams hand-roll JSON parsing, brittle regex, and post-hoc validators. That’s slow and error-prone.

With fenic, you keep it declarative.
September 20, 2025 at 1:38 AM
Common patterns: doc mining, content ingestion, RAG prep, taxonomy extraction.

For more, including examples and documentation, check: github.com/typedef-ai/f...
September 18, 2025 at 1:38 AM