synesthesiam.bsky.social
@synesthesiam.bsky.social
An N100 would be a big step up from a Pi 4, yes (from the tiny to the base model). A much bigger step would be a machine with a GPU to run the large models.
December 26, 2024 at 11:54 PM
I've heard the tiny model is pretty snappy on the Pi 5 (1-2 s). The transcription accuracy is still low, though. There are two plans to address this: my Rhasspy speech add-on, and a modification to Whisper to bias it towards our commands (rough sketch below).
FYI with HA Cloud and OpenAI, people get a great experience on the Pi 4.
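For illustration only (not the actual add-on or the planned Whisper modification, which may work quite differently), one simple way to bias decoding toward a fixed command set is to pass the phrases as an initial prompt to faster-whisper. The model size, phrases, and file name below are placeholders:

# Rough sketch: nudge Whisper toward a known command set by feeding the
# phrases as an initial prompt (faster-whisper's initial_prompt parameter).
# Placeholder phrases, model size, and audio file; not the real add-on code.
from faster_whisper import WhisperModel

COMMANDS = [
    "turn on the kitchen lights",
    "turn off the living room lights",
    "set the thermostat to 20 degrees",
]

model = WhisperModel("tiny", device="cpu", compute_type="int8")

segments, _info = model.transcribe(
    "command.wav",                      # placeholder audio file
    language="en",
    beam_size=5,
    initial_prompt=" ".join(COMMANDS),  # bias decoding toward these phrases
)

print(" ".join(segment.text.strip() for segment in segments))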
December 24, 2024 at 11:14 PM
I'm the main developer on Assist. This is the out-of-the-box experience locally with hardware below the recommended specs (N100).
I'm working on an add-on that will improve the response times on the Pi 4 for a limited set of commands (in beta).
The AI HAT is optimized for vision models, not voice.
December 24, 2024 at 8:23 PM