Wrapped @simonwillison.net’s llm library + had it ingest a config file specifying the models you want. It'll run them async (if the plugin supports it) and write the results out to JSON so you can review/eval/process.
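A minimal sketch of that config-driven runner (the function names and config shape here are my own, not the actual tool's): a JSON config lists the llm model IDs, each one is invoked concurrently, and the results are written out as JSON. The real llm call is stubbed so the sketch stands alone.

```python
# Sketch of a config-driven multi-model runner (hypothetical names,
# not the actual wrapper's API).
import asyncio
import json

async def run_model(model_id: str, prompt: str) -> str:
    # Placeholder for the real call. With the llm library installed,
    # this would be roughly:
    #   model = llm.get_async_model(model_id)
    #   response = await model.prompt(prompt)
    #   return await response.text()
    return f"<response from {model_id}>"

async def run_all(config: dict, prompt: str) -> dict:
    # Fan out one async call per model listed in the config,
    # then pair each model ID with its response text.
    model_ids = config["models"]
    texts = await asyncio.gather(*(run_model(m, prompt) for m in model_ids))
    return dict(zip(model_ids, texts))

config = {"models": ["gpt-4o-mini", "@cf/meta/llama-3.3-70b-instruct-fp8-fast"]}
results = asyncio.run(run_all(config, "Say hi"))

# Write the results to JSON for later review/eval/processing.
with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```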
Do other libraries stand a chance? Do we get to try out new ideas?
I’m hoping the next big advance in LLMs enables more real-time knowledge (that isn’t just function calling).
$ llm install llm-cloudflare
$ llm -m "@cf/meta/llama-3.3-70b-instruct-fp8-fast" <PROMPT>
I find it especially useful when writing @cloudflare.social Workflows: each invocation of the mapper creates a unique step on the fly:
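The mapper idea above can be illustrated generically (this is a language-agnostic sketch with made-up names, not the Cloudflare Workflows API, which runs in the Workers runtime): give each invocation a distinct, deterministic step name so the workflow engine can checkpoint and retry each item independently.

```python
# Sketch: a mapper that registers one uniquely named step per item
# (hypothetical names; illustrates the pattern, not a real Workflows SDK).
from typing import Callable, Iterable

def map_as_steps(step_do: Callable, name: str,
                 fn: Callable, items: Iterable) -> list:
    # Derive a distinct, deterministic step name per item, so the
    # engine can durably track each invocation on its own.
    return [step_do(f"{name}-{i}", lambda item=item: fn(item))
            for i, item in enumerate(items)]

# Toy "engine": records each step name it sees, then runs the callback.
executed = []
def step_do(step_name, callback):
    executed.append(step_name)
    return callback()

results = map_as_steps(step_do, "square", lambda x: x * x, [1, 2, 3])
# Each item ran under its own step name: square-0, square-1, square-2.
```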
Also keen to see how the AI assistant integration evolves: I think that's the winning part but the space moves *fast*.