They run entirely on-device, they're fast, and they're surprisingly capable.
Here's one of them transcribing a letter: no API calls, just local inference on a Mac M2.
Watch that GPU usage!
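The thread doesn't show the code behind the demo, but here's a minimal sketch of this kind of local transcription, assuming an Ollama-style setup with a vision-capable model already pulled; the model name and file path are placeholders, not the model or letter from the video:

```python
# Minimal sketch: transcribe an image with a locally served vision model.
# Assumes the Ollama Python client (`pip install ollama`) and a vision-capable
# model pulled locally; "llama3.2-vision" and "letter.jpg" are placeholders.
from ollama import chat

response = chat(
    model="llama3.2-vision",  # any locally pulled vision model
    messages=[
        {
            "role": "user",
            "content": "Transcribe the handwritten letter in this image, line by line.",
            "images": ["letter.jpg"],  # local path; inference never leaves the machine
        }
    ],
)
print(response.message.content)
```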
• Canvas context
• Tool calling
The first user message in the video demonstrates both of these.
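The demo's actual code isn't shown in the thread, but as a rough sketch, a single first message could exercise both features by passing a local snapshot of the canvas as image context alongside a tool definition the model may call. The tool name, schema, model, and file path below are all illustrative assumptions:

```python
# Hypothetical sketch: one user message combining canvas context
# (a local snapshot of the canvas) with a tool the model can call.
# Model name, tool schema, and paths are assumptions, not the demo's code.
from ollama import chat

set_title_tool = {
    "type": "function",
    "function": {
        "name": "set_document_title",  # hypothetical app-side tool
        "description": "Set the title of the current document.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string", "description": "The new title."}
            },
            "required": ["title"],
        },
    },
}

response = chat(
    model="llama3.2-vision",  # placeholder; needs a model with image + tool support
    messages=[
        {
            "role": "user",
            "content": "Look at the canvas and give this document a fitting title.",
            "images": ["canvas_snapshot.png"],  # canvas context, exported locally
        }
    ],
    tools=[set_title_tool],
)

# If the model decided to call the tool, the app dispatches it here.
for call in response.message.tool_calls or []:
    if call.function.name == "set_document_title":
        print("Model requested title:", call.function.arguments["title"])
```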
Below is a Latin manuscript from the CA State Library via @digitalscriptorium.bsky.social. The transcription isn't perfect, but for manuscripts that may never otherwise be transcribed, these models can do a lot with little effort.