Blob indexer + 3 custom skills for document extraction, figure processing, and text processing.
https://github.com/Azure-Samples/azure-search-openai-demo/releases/tag/2025-11-12
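For context, an Azure AI Search custom skill is a web API that receives records and must echo results under the same `recordId`s. This is a minimal sketch of that request/response contract; the demo's real skills do document extraction, figure processing, and text processing, so the `cleanedText` step here is a hypothetical stand-in.

```python
# Sketch of the Azure AI Search custom Web API skill payload contract:
# input is {"values": [{"recordId": ..., "data": {...}}, ...]} and the
# response must return the same recordIds. The whitespace cleanup below
# is a made-up example of "text processing".

def run_skill(payload: dict) -> dict:
    """Process each record and return results keyed by the same recordId."""
    results = []
    for record in payload.get("values", []):
        text = record.get("data", {}).get("text", "")
        results.append({
            "recordId": record["recordId"],  # must echo the incoming recordId
            "data": {"cleanedText": " ".join(text.split())},  # hypothetical processing
            "errors": [],
            "warnings": [],
        })
    return {"values": results}
```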
A Pydantic AI agent connected to a FastMCP server (deployed on FastMCP Cloud), using different LLM models from the new Pydantic AI Gateway, called via a Vercel AI React frontend, with both the agent and the MCP server sending OTel logs to Logfire.
Why? Agents are bad at polling: they over- or under-check.
SEP-1686 moves orchestration to MCP itself:
github.com/modelcontext...
For Python devs, the MCPClient in FastMCP will implement it.
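A toy illustration of the polling problem that moving orchestration into MCP avoids (this does not model SEP-1686 itself; the numbers are illustrative):

```python
# With a fixed polling interval, an agent either checks too often
# (wasted round trips) or too rarely (added latency before it notices
# the task finished). Server-side task orchestration removes the tradeoff.

def polls_until_done(task_duration: float, interval: float) -> int:
    """Count how many status checks a fixed-interval poller makes."""
    checks, elapsed = 0, 0.0
    while elapsed < task_duration:
        elapsed += interval
        checks += 1
    return checks
```

A 60-second task polled every 2 seconds costs 30 round trips; polled every 30 seconds, the agent can notice completion up to 29 seconds late.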
I absolutely adore seeing behind the curtain of web-powered tools - so many Monaco references!
Group executions by pending/approved/denied, with all state stored in a PostgreSQL database.
https://github.com/dbos-inc/dbos-demo-apps/tree/main/python/agent-inbox
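The grouping step itself is simple; a sketch assuming each execution row carries a `status` field (in the demo that state lives in PostgreSQL, managed by DBOS, but a plain dict shows the shape):

```python
# Group workflow executions by their approval status, as an inbox UI
# would before rendering pending/approved/denied sections.
from collections import defaultdict

def group_by_status(executions: list[dict]) -> dict[str, list[dict]]:
    groups: dict[str, list[dict]] = defaultdict(list)
    for execution in executions:
        groups[execution["status"]].append(execution)
    return dict(groups)
```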
Apparently by completely ignoring the question and just spitting back the random retrieved data.
If your app does this, you need:
1) a re-ranking model with a discard threshold
2) a prompt addition to refuse off-topic questions
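Fix (1) can be sketched in a few lines: drop retrieved chunks whose re-ranker score falls below a threshold, so off-topic queries retrieve nothing instead of random data. The scores would come from a re-ranking model (e.g. a cross-encoder); the 0.5 threshold is illustrative.

```python
# Discard-threshold step after re-ranking: only chunks the re-ranker
# considers relevant enough make it into the prompt.

def filter_chunks(scored_chunks: list[tuple[str, float]],
                  threshold: float = 0.5) -> list[str]:
    """Keep only chunks whose re-ranker score meets the threshold."""
    return [chunk for chunk, score in scored_chunks if score >= threshold]
```

If nothing survives the threshold, the app should refuse to answer (fix 2) rather than stuff irrelevant chunks into the prompt.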
Ideally I'd have a CI that runs on each PR that suggests AGENTS.md updates (that I can accept/edit/reject).
Anyone doing that already?
About to blast my app with thousands of ASCII art attacks.
https://azure.github.io/PyRIT/code/converters/0_converters.html
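The idea behind these converters: re-render the attack string so keyword filters miss it while an LLM can still read it. This is NOT PyRIT's API, and spacing out characters is just a toy stand-in for the real ASCII-art rendering its converters do:

```python
# Hypothetical prompt "converter": spaces out characters so a naive
# substring filter no longer matches, illustrating (not implementing)
# what PyRIT's ASCII-art converters do at scale.

def spaced_ascii(prompt: str) -> str:
    return " ".join(prompt)
```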
That means there are actual engineers working on Blogger still! 😱
Maybe I can stay on it forever and never have to write my own blogging engine.
click.convertkit-mail2.com/gkumlz753lc5...
I put demos here that show multiple tools and structured outputs:
https://github.com/Azure-Samples/nim-on-azure-serverless-gpus-demos?tab=readme-ov-file#pydanticai
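The core of the structured-outputs pattern is validating the model's JSON reply against a Pydantic model; this sketch shows just that validation step (the `CityInfo` model and the JSON string are made up, standing in for an LLM response):

```python
# Structured output validation: the LLM's JSON reply is parsed into a
# typed Pydantic model, raising if the reply is malformed.
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

# Imagine this JSON came back from the LLM.
info = CityInfo.model_validate_json(
    '{"name": "Oslo", "country": "Norway", "population": 717710}'
)
```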
https://rich.readthedocs.io/en/stable/markdown.html
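Per the linked docs, rendering Markdown in the terminal with Rich takes two classes; `record=True` lets you capture the rendered output as plain text:

```python
# Render Markdown to the terminal with Rich; record=True captures the
# output so it can be exported as text.
from rich.console import Console
from rich.markdown import Markdown

console = Console(record=True)
console.print(Markdown("# Heading\n\nSome *emphasized* text."))
rendered = console.export_text()
```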
We set oids and groups fields on the chunks in the index, and then AI Search filters chunks based on the access token of the logged-in user.
Learn more in release notes github.com/Azure-Sample...
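A sketch of the kind of OData security filter this produces, assuming `oids` and `groups` are string-collection fields on each chunk; the exact filter the demo generates may differ, but the `search.in` pattern is the standard one:

```python
# Build an OData filter so Azure AI Search only returns chunks whose
# oids field contains the user's object id, or whose groups field
# overlaps the group ids from the user's access token.

def build_security_filter(oid: str, group_ids: list[str]) -> str:
    groups_csv = ", ".join(group_ids)
    return (
        f"oids/any(g: search.in(g, '{oid}')) "
        f"or groups/any(g: search.in(g, '{groups_csv}'))"
    )
```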
Optional[X] → X | None
Union[X, Y] → X | Y
Thanks to the "UP" option in ruff for finding everything that can be upgraded!
Come say hi!
During ingestion, it uses an LLM to extract structured data (entities/topics/verbs) and stores it in a standard database, then retrieves by structuring the user query the same way.
Try it out at:
github.com/microsoft/ty...
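A sketch of the structured-retrieval idea under loud assumptions: here the LLM extraction step is replaced by a keyword stand-in and the database by an in-memory list, purely to show the ingest-then-match shape:

```python
# Structure both the stored documents and the query, then match on the
# extracted structure instead of raw text similarity.

def extract_structure(text: str) -> set[str]:
    """Stand-in for the LLM extraction step: a lowercase word set."""
    return {w.strip(".,?").lower() for w in text.split()}

def retrieve(query: str, corpus: list[str]) -> list[str]:
    q = extract_structure(query)
    # Rank stored items by overlap between their structure and the query's.
    scored = [(len(q & extract_structure(doc)), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]
```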