Matei Zaharia
matei-zaharia.bsky.social
CTO at Databricks and CS professor at UC Berkeley. https://people.eecs.berkeley.edu/~matei/
This is a joint effort across our engineering and research teams, based on new tuning methods we developed like TAO and ALHF. I think this type of declarative development is the future of AI -- help users build evals, then auto-optimize against them. Try it today!
June 11, 2025 at 5:08 PM
Moreover, to steer your agents in Agent Bricks, you can use natural language feedback; the system optimizes all components of the agent (e.g. retrievers, guardrails, etc.) based on it -- something we call Agent Learning from Human Feedback (ALHF). More feedback = better agent.
June 11, 2025 at 5:08 PM
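A minimal, hypothetical sketch of the ALHF idea described above: one piece of natural-language feedback is routed to the agent components it affects (retriever, guardrails, prompt) and turned into a targeted update. All names and the keyword-based routing here are illustrative stand-ins -- this is not the Agent Bricks API, and a real system would use an LLM to interpret the feedback.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Toy agent with the kinds of components ALHF would tune."""
    retriever_params: dict = field(default_factory=lambda: {"top_k": 5})
    guardrails: list = field(default_factory=list)
    prompt: str = "Answer using the retrieved context."

def apply_feedback(config: AgentConfig, feedback: str) -> AgentConfig:
    """Route one natural-language comment to the components it affects.
    (Illustrative keyword matching; a real system would use an LLM.)"""
    fb = feedback.lower()
    if "more context" in fb:
        config.retriever_params["top_k"] += 5   # widen retrieval
    if "refuse" in fb or "policy" in fb:
        config.guardrails.append(feedback)       # add a guardrail rule
    if "concise" in fb:
        config.prompt += " Be concise."          # adjust the prompt
    return config

config = AgentConfig()
config = apply_feedback(config, "Please retrieve more context for legal queries")
config = apply_feedback(config, "Answers should be concise")
print(config.retriever_params["top_k"])  # 10
```

The point of the sketch is the shape of the loop: feedback arrives as plain language, and the optimizer decides *which* component to change, rather than the user editing each component by hand.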
Agent Bricks automatically searches over and combines the latest AI development techniques to give you a high-quality agent. It gets great results quickly compared to DIY agents, e.g. state-of-the-art performance on information extraction and question answering out of the box.
June 11, 2025 at 5:08 PM
Congrats Justine!
May 31, 2025 at 8:54 PM
Key to TAO is a search and scoring process that leverages test-time compute only during training, plus new RL methods and models from our team. More details in our blog: www.databricks.com/blog/tao-usi...
March 25, 2025 at 5:47 PM
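A minimal, hypothetical sketch of the TAO recipe as the post describes it: spend extra compute at training time by sampling many candidate responses per unlabeled input, scoring them, and keeping the best ones as tuning data -- so the deployed model pays no extra inference cost. Every function here is an illustrative stand-in (the toy scorer just prefers medium-length strings), not the Databricks implementation.

```python
def sample_candidates(base_model: str, prompt: str, n: int = 8) -> list:
    """Stand-in for drawing n candidate responses from a base model."""
    return [f"{base_model}: draft {i} for '{prompt}'" for i in range(n)]

def score(prompt: str, response: str) -> float:
    """Stand-in for a learned scoring/reward model."""
    return -abs(len(response) - 40)  # toy heuristic, not a real scorer

def build_tao_dataset(base_model: str, prompts: list, n: int = 8) -> list:
    """Search + score over candidates; keep the best response per input.
    The resulting (prompt, best) pairs would be used to fine-tune the
    base model, so all the extra compute happens before deployment."""
    dataset = []
    for p in prompts:
        candidates = sample_candidates(base_model, p, n)
        best = max(candidates, key=lambda r: score(p, r))
        dataset.append((p, best))
    return dataset

data = build_tao_dataset("llama", ["summarize doc A", "extract fields"])
print(len(data))  # 2
```

Note how this matches the claim in the posts: only raw task inputs are needed (no human labels), and quality scales with how many candidates you search and score per input, not with labeling effort.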
TAO's trained model quality also scales with compute spent during training, not with human labeling effort, and the resulting models always have low inference cost.
March 25, 2025 at 5:47 PM
Our new method, Test-time Adaptive Optimization (TAO), only needs input examples of a task and can outperform supervised fine-tuning on thousands of human-labeled examples. It brings efficient OSS models like Llama to the quality of expensive larger models.
March 25, 2025 at 5:47 PM