I built a private voice assistant for my smart home using Home Assistant and a local LLM
SOURCE: XDA-DEVELOPERS.COM
AUG 10, 2025
I enjoy tinkering with Home Assistant to make my smart devices work, and I make sure all of them are manageable locally. Having axed most cloud-dependent services, the one thing I missed was voice control. Recently, I hooked up a $13 ATOM Echo speaker to Home Assistant, and it has been a game-changer for my smart home. Tumbling down the rabbit hole, I managed to build a private voice assistant that works entirely locally.
Building a private voice assistant requires extra hardware muscle to run Home Assistant alongside a local LLM. In short, I wanted to self-host an LLM to power a private, local voice assistant for my smart home, built from Home Assistant, a local model, and an inexpensive smart speaker.
I ditched the supervised method of running Home Assistant and flashed HA OS onto a Raspberry Pi 4, making it a dedicated machine for that purpose. That worked out because I have a mini PC (HP ProDesk 600 G6) to shoulder the self-hosting duties, and I ran an Ollama instance on it. Since it doesn't have a dedicated GPU, I settled for running the local LLM on the CPU.
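As a sketch of that setup (assuming Ollama's standard Linux install script and its `OLLAMA_HOST` environment variable; the IP address below is just a placeholder for your mini PC), getting the mini PC serving models over the LAN looks roughly like:

```shell
# Install Ollama on the mini PC (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# By default Ollama listens only on localhost; bind to all interfaces
# so Home Assistant on the Raspberry Pi can reach it over the LAN.
# (If the installer set up a systemd service, put this in a service
# override instead of running it in the foreground.)
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From the Raspberry Pi, verify the API is reachable
# (replace 192.168.1.50 with the mini PC's actual address):
curl http://192.168.1.50:11434/api/version
```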
Ollama makes it easy to pull and run different generative AI models on a dedicated machine. Next, I installed the Ollama integration in Home Assistant and added conversation agents backed by these models. I also installed the Whisper add-on to handle the speech-to-text (STT) engine and the Piper add-on to manage the text-to-speech (TTS) duties. Installing those triggered the Wyoming protocol integration in Home Assistant, which was also easy to set up.
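If you're not on Home Assistant OS and can't use the add-on store, the same Whisper and Piper services can run on the mini PC via the rhasspy Wyoming containers instead. A sketch, assuming Docker is installed; ports 10300 and 10200 are the usual Wyoming defaults, and the model and voice names are just examples:

```shell
# Wyoming Whisper (STT) on port 10300; tiny-int8 is a light CPU model
docker run -d -p 10300:10300 \
  rhasspy/wyoming-whisper --model tiny-int8 --language en

# Wyoming Piper (TTS) on port 10200 with an example English voice
docker run -d -p 10200:10200 \
  rhasspy/wyoming-piper --voice en_US-lessac-medium
```

Either way, each service is then added in Home Assistant through the Wyoming protocol integration, pointing at the host's IP and the matching port.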
When configuring a generative AI model with the Ollama integration, select the Assist checkbox to let the LLM control smart devices via Home Assistant.
Initially, I downloaded the Llama 3 and Gemma 3 models using the command line. To make them work properly with Home Assistant, I had to choose models that support tool calling, so I downloaded the DeepSeek-R1 and Qwen3 models to experiment with the voice pipelines.
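Pulling those models is a one-liner each, and newer Ollama builds let you inspect a model's metadata to see whether it advertises tool support before wiring it into a pipeline. A sketch; the exact output of `ollama show` varies by version:

```shell
# Pull models with tool-calling support
ollama pull qwen3
ollama pull deepseek-r1

# Inspect a model's metadata; tool-capable models list "tools"
# under Capabilities in recent Ollama versions
ollama show qwen3
```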