I built a private voice assistant for my smart home using Home Assistant and a local LLM


SOURCE: XDA-DEVELOPERS.COM
AUG 10, 2025

By Samir Makwana

While I enjoy tinkering with Home Assistant to make my smart devices work, I make sure all of them are manageable locally. After axing most cloud-dependent services, the one thing I missed was voice control. Recently, I hooked up a $13 ATOM Echo speaker to Home Assistant, and it has been a game-changer for my smart home. Tumbling down that rabbit hole, I ended up building a private voice assistant that works entirely locally.

Building a private voice assistant requires extra hardware muscle to run both Home Assistant and a local LLM. In short, I wanted to self-host an LLM to power a private, local voice assistant for controlling my smart home. To get there, I combined a local LLM, Home Assistant, and an inexpensive smart speaker.

Setting up a local LLM to work with Home Assistant

Picking an apt generative AI model

I ditched the supervised method of running Home Assistant and flashed HA OS onto a Raspberry Pi 4, making it a dedicated machine for that purpose. That freed up my mini PC (HP ProDesk 600 G6) to shoulder the self-hosting duties, and I ran an Ollama instance on it. Since it doesn't have a dedicated GPU, I settled for running a local LLM on the CPU.
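Before wiring anything into Home Assistant, it helps to confirm the Ollama instance on the mini PC is reachable over the LAN. Here is a minimal sketch using Ollama's `/api/tags` endpoint (which lists pulled models); the LAN address is my own placeholder, though 11434 is Ollama's default API port:

```python
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.50:11434"  # hypothetical LAN address of the mini PC

def parse_models(tags_json: str) -> list[str]:
    """Extract model names from the JSON body returned by /api/tags."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def list_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Ask the Ollama server which models it has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return parse_models(resp.read().decode())

# Usage (on the home network):
#   print(list_models())
```

If this call fails from another machine, Ollama is likely only listening on localhost and needs to be exposed to the network before Home Assistant can talk to it.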

Ollama makes it easy to pull and run different generative AI models on a dedicated machine. Next, I installed the Ollama integration in Home Assistant and added conversation agents backed by those models. I also installed the Whisper add-on to handle the speech-to-text (STT) duties and the Piper add-on for text-to-speech (TTS). Installing those add-ons triggered discovery of the Wyoming Protocol integration in Home Assistant, which was also easy to set up.
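Conceptually, every voice interaction flows through a three-stage pipeline: Whisper turns audio into a transcript, the conversation agent (the LLM) produces a reply, and Piper renders that reply back into speech. The stubs below are purely illustrative of that flow, not Home Assistant's actual internals:

```python
from typing import Callable

def run_pipeline(audio: bytes,
                 stt: Callable[[bytes], str],
                 agent: Callable[[str], str],
                 tts: Callable[[str], bytes]) -> bytes:
    """Chain the three Assist stages: speech-to-text, LLM, text-to-speech."""
    text = stt(audio)      # Whisper: audio in, transcript out
    reply = agent(text)    # Ollama conversation agent: transcript in, reply out
    return tts(reply)      # Piper: reply text in, audio out

# Stub stages standing in for the real Wyoming services
fake_stt = lambda audio: "turn on the kitchen light"
fake_agent = lambda text: f"OK, {text.replace('turn on', 'turning on')}"
fake_tts = lambda text: text.encode()

print(run_pipeline(b"...", fake_stt, fake_agent, fake_tts))
# b'OK, turning on the kitchen light'
```

Swapping any stage is just a matter of pointing the pipeline at a different Wyoming service, which is exactly why the protocol-based design is so flexible.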

When configuring generative AI models with the Ollama integration, select the Assist checkbox to allow the LLM to control smart devices via Home Assistant.

Initially, I downloaded the Llama 3 and Gemma 3 models using the command line. To make them work properly with Home Assistant, I had to choose models that support tool calling, so I downloaded the DeepSeek-R1 and Qwen3 models to experiment with the voice pipelines.
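Tool calling is what lets the model actually act on devices instead of just chatting. Ollama's `/api/chat` endpoint accepts a `tools` array of JSON-schema function specs, and a tool-capable model answers with structured tool calls rather than plain text. A hedged sketch of what such a request body looks like; the `set_light` tool and its fields are my own invention for illustration, not Home Assistant's real tool names:

```python
def light_tool() -> dict:
    """A hypothetical tool spec in Ollama's /api/chat tools format."""
    return {
        "type": "function",
        "function": {
            "name": "set_light",
            "description": "Turn a light on or off",
            "parameters": {
                "type": "object",
                "properties": {
                    "entity_id": {"type": "string",
                                  "description": "Light entity, e.g. light.kitchen"},
                    "state": {"type": "string", "enum": ["on", "off"]},
                },
                "required": ["entity_id", "state"],
            },
        },
    }

def chat_request(model: str, prompt: str) -> dict:
    """Build a non-streaming chat request body with the tool attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [light_tool()],
        "stream": False,
    }

# POST this body to http://<mini-pc>:11434/api/chat; a tool-capable model
# replies with message.tool_calls describing which function to invoke.
```

Models without tool-calling support simply ignore the `tools` array, which is why the first models I tried could chat but never flipped a switch.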