Home Assistant Voice PE looks great and responds fast, but I can only half-use it because it won't accept my custom TLS CA. Let's see if this RFE gets traction.
#HomeAssistant
https://github.com/esphome/home-assistant-voice-pe/issues/258
Mauricio Teixeira 🇧🇷🇺🇲 (badnetmask@hachyderm.io)'s status on Wednesday, 01-Jan-2025 04:11:43 JST
Mauricio Teixeira 🇧🇷🇺🇲 (badnetmask@hachyderm.io)'s status on Thursday, 02-Jan-2025 08:17:05 JST
Ha! I got the Home Assistant Voice PE working just fine(*), but still with limited capability. I tried Ollama as an AI conversation agent, and the only model that can run on my Raspberry Pi 4 is not very "smart" (tinyllama). Looks like I need more hardware to make it work, or a subscription to a cloud AI service. Will think about it for a while.
* Except for the fact that I had to manually build the image to make it understand my custom CA.
Mauricio Teixeira 🇧🇷🇺🇲 (badnetmask@hachyderm.io)'s status on Thursday, 02-Jan-2025 08:30:40 JST
@rachel
I'm just running Ollama in a container.
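(For reference, a minimal sketch of what "Ollama in a container" typically looks like with Docker; the image name, volume, and port below are the upstream defaults, not necessarily this exact setup:)

```shell
# Start the Ollama server in the background, persisting downloaded
# models in a named volume so they survive container restarts
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull and chat with a small model (tinyllama fits on low-memory hosts)
docker exec -it ollama ollama run tinyllama
```

Home Assistant's Ollama integration can then be pointed at the host on port 11434.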
Rachel (rachel@transitory.social)'s status on Thursday, 02-Jan-2025 08:30:45 JST
@badnetmask@hachyderm.io you're running the HA stack on the RPi4? How are your STT times / what STT model are you using?
Mauricio Teixeira 🇧🇷🇺🇲 (badnetmask@hachyderm.io)'s status on Thursday, 02-Jan-2025 08:38:58 JST
@rachel
I'm using Home Assistant Cloud as the TTS.
Rachel (rachel@transitory.social)'s status on Thursday, 02-Jan-2025 08:38:59 JST
@badnetmask@hachyderm.io oh, I was wondering how the Whisper STT performance was. Are you using HA plugins or running those in containers as well?
idk if I had a bad config, but I had terrible response times before I ran that on the GPU. Also, the docs were... not great.
I have not looked at any conversation agents.
Mauricio Teixeira 🇧🇷🇺🇲 (badnetmask@hachyderm.io)'s status on Thursday, 02-Jan-2025 12:31:10 JST
Gave up on running Ollama on the Raspberry Pi, and moved it to a mini PC with an i7 CPU and plenty of RAM, so I can run bigger models. Seems to be working well and fast so far. I just need to figure out how to make it give me the information I want, the way I want; it looks like I need to tweak some knobs in Home Assistant for that.