Using Tailscale to access Home Assistant from everywhere
When you start using Home Assistant, you’ll quickly find yourself wanting to access it from everywhere, not just from your couch while connected to your home Wi-Fi.
There are many ways to achieve this, but the easiest is probably the Home Assistant Cloud integration. It’s a subscription-based service that also lets you easily integrate Amazon Alexa and Google Assistant.
But the great thing about Home Assistant is that there is more than one way to do things, and there are even more ways to open your Home Assistant instance to the internet. I’ll just briefly mention that you can open a port on your router and use a service like DuckDNS to get a URL for Home Assistant, but this is hard to get right security-wise and is sometimes even impossible, for example if your ISP only issues IPv6 addresses or uses a double NAT setup. If you’ve never heard of these things, be prepared for a long journey - or use Home Assistant Cloud or the Tailscale method this post is about.
If you choose this method and don’t need the other features Home Assistant Cloud offers, consider subscribing anyway! By subscribing, you help fund the development of Home Assistant.
What is Tailscale?
Their headline says “Tailscale makes networking easy”. And that’s no exaggeration. If you’ve worked with router configurations, VPNs, secret keys, SSL certificates and all those things before, you’ll appreciate how easy and fast it is to connect devices and services using Tailscale.
The core feature of Tailscale is a VPN, where you can connect devices without actually having to do all the networking normally required. If you’ve never heard of Tailscale before, I recommend reading What is Tailscale? from their documentation. If, after reading this article, you decide that you want to use Tailscale, you can read their Quickstart guide which leads you through creating an account and adding a machine to it.
Here is a screenshot of my tailnet:
From now on, I’ll assume that you have a Tailscale account with your computer connected to your tailnet (their name for the VPN all your devices are in).
A newer feature of Tailscale is Tailscale Funnel, which allows a service on your tailnet to be reached from the public internet. This is what we’ll use to make Home Assistant accessible from the internet.
What to expect
We’ll set up Home Assistant to join your tailnet, configure Tailscale, and then make Home Assistant available at a URL that looks like this: https://homeassistant.yak-bebop.ts.net.
Configuring Tailscale
We’ll need to change some things in the Tailscale admin console. First, you’ll have to enable HTTPS support, which you can do by following their guide Enabling HTTPS. Next, we will enable Tailscale Funnel. For this, follow their guide Tailscale Funnel on how to edit the tailnet policy file. For me, it was as simple as pressing a button on the right side of the code editor there, but editing the file by hand is not much harder.
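For reference, the relevant part of the tailnet policy file is a node attribute that grants Funnel to your devices. This is a minimal sketch based on the Tailscale documentation; your policy file will contain other sections as well:

"nodeAttrs": [
    {
        "target": ["autogroup:member"],
        "attr": ["funnel"]
    }
],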
That’s it already for this part.
Adding Home Assistant to your tailnet
Start by opening the addon store and installing the Tailscale addon.
After installing, go to the configuration tab, click the three dots in the top right corner and select edit as YAML. There, you can paste this config:
funnel: true                 # make Home Assistant reachable from the public internet via Tailscale Funnel
proxy: true                  # serve Home Assistant over HTTPS on your tailnet (Funnel builds on this)
userspace_networking: false  # use kernel networking instead of userspace networking mode
Click save, then start the addon. Open the web UI of the Tailscale addon and authenticate.
Congratulations, your Home Assistant instance is now part of your tailnet and available from the internet.
Check the logs for an entry that looks like this to find out the address:
[20:51:32] INFO: Tailscale Funnel is enabled:
[20:51:32] INFO: Your Home Assistant instance is publicly available on the internet at
[20:51:32] INFO: https://homeassistant.yak-bebop.ts.net
You can now connect to Home Assistant from everywhere:
Wrapping up
We now have an easy and secure way to connect to a Home Assistant instance via the internet “without fiddling with router settings or SSL certificates” (although it still takes more steps than Home Assistant Cloud).
You can also configure the Tailscale addon to act as a subnet router or exit node, which lets you reach other devices on your home network that are not part of the tailnet from any machine connected to the tailnet. Check out the addon documentation for more info on that.
From Image to JSON: How to Automatically Convert Meal Plans with Python, OpenCV, and Tesseract
My vision was to have the meal of the day on my Apple Watch, but the meal plan is only available as a picture and not as structured data.
Starting point
Each week, an image of the meal plan gets posted on a Confluence page. I download it with a Python script so that I can process it using OCR.
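The download script itself isn’t very exciting; here is a minimal sketch of the idea using the requests library, with a placeholder URL, page id and credentials (the exact Confluence endpoint depends on your setup):

import requests

BASE_URL = "https://confluence.example.com"  # placeholder
PAGE_ID = "123456"                           # placeholder
AUTH = ("user", "api-token")                 # placeholder credentials

# list the attachments of the meal plan page
attachments = requests.get(
    f"{BASE_URL}/rest/api/content/{PAGE_ID}/child/attachment",
    auth=AUTH,
    timeout=10,
).json()["results"]

# take the first attachment here; the real script picks the image for the current week
image = requests.get(
    BASE_URL + attachments[0]["_links"]["download"], auth=AUTH, timeout=10
)
with open("meal_plan.png", "wb") as f:
    f.write(image.content)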
Images of the meal plan look like this (including the strange white borders!):
Each week the image is slightly different:
- sometimes it has white borders, sometimes it doesn’t
- the resolution is slightly different
- the font is not always the same
- rows and columns are not always the same size; their size is determined by the text within
Glossary
I’ll use the following terms throughout the article:
- OCR: Optical Character Recognition, the process of converting images of text to text
- Tesseract: A popular open-source OCR engine
- OpenCV: Open Source Computer Vision Library, a library for image processing
- Python: A programming language
Preprocessing
To account for the image variances, we need to do some preprocessing.
- upscale the image
- find contours
- find rectangles
- use the biggest rectangle as the new image bounds
We can do that with this script:
import cv2

# load the downloaded meal plan image (filename as an example)
img = cv2.imread("meal_plan.png")

# upscale, the factor upscales the image to ~300 dpi
scale = 4.2
scaled = cv2.resize(img, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_CUBIC)
# convert image to grayscale
grayscale = cv2.cvtColor(scaled, cv2.COLOR_BGR2GRAY)
# convert to binary (inverted, so dark content becomes white)
(_, tableCellContrast) = cv2.threshold(
    ~grayscale, 5, 255, cv2.THRESH_BINARY)
# find edges
edges = cv2.Canny(tableCellContrast, 5, 10)
The edges image looks like this:
We can use the biggest rectangle as the bounds to crop the image:
# get the contours; [-2:] keeps this working across OpenCV versions
contours, _ = cv2.findContours(
    edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE
)[-2:]

# find the contour with the largest area
biggest_contour = None
max_area = 0
for contour in contours:
    area = cv2.contourArea(contour)
    if area > max_area:
        max_area = area
        biggest_contour = contour

# crop the upscaled image to the bounding rectangle of that contour
x, y, w, h = cv2.boundingRect(biggest_contour)
cropped = scaled[y:y+h, x:x+w]
After these steps, the result is this:
Extracting cells
Using the same tricks as above, we can use contours and lines to find the cells in the table. I created a visualization to show you the bounds of the cells the algorithm found:
The code for this one is a little bit longer and contains some quirks to get the cells reliably. If you still want to see it, it’s shown in the next section:
Code block: Extracting cells
# in addition to cv2, this part needs numpy (imported as np) and the Cell dataclass shown below;
# img is the cropped image from the previous step
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(_, tableCellContrast) = cv2.threshold(
    ~grayscale, 5, 255, cv2.THRESH_BINARY)

# start lines
imgLines = cv2.cvtColor(np.zeros_like(
    tableCellContrast), cv2.COLOR_GRAY2RGB)
imgwidth, imgheight = img.shape[1], img.shape[0]
minLineLength = imgwidth // 10
lines = cv2.HoughLinesP(
    image=~tableCellContrast,
    rho=0.02,
    theta=np.pi / 500,
    threshold=10,
    lines=np.array([]),
    minLineLength=minLineLength,
    maxLineGap=10,
)
a, _, _ = lines.shape
for i in range(a):
    if abs(lines[i][0][0] - lines[i][0][2]) > abs(lines[i][0][1] - lines[i][0][3]):
        # horizontal line
        cv2.line(
            imgLines,
            (0, lines[i][0][1]),
            (imgwidth, lines[i][0][3]),
            (0, 0, 255),
            1,
            cv2.LINE_AA,
        )
    else:
        # vertical line
        cv2.line(
            imgLines,
            (lines[i][0][0], 0),
            (lines[i][0][2], imgheight),
            (0, 0, 255),
            1,
            cv2.LINE_AA,
        )
cv2.rectangle(imgLines, (0, 0), (imgwidth, imgheight), (0, 255, 0), 2)
(thresh, table) = cv2.threshold(
    cv2.cvtColor(imgLines, cv2.COLOR_BGR2GRAY),
    128,
    255,
    cv2.THRESH_BINARY | cv2.THRESH_OTSU,
)
# end lines

# start contours
img_boxes = cv2.cvtColor(grayscale, cv2.COLOR_GRAY2RGB)
# create new cv image with same dimensions as cropped image
contours, hierarchy = cv2.findContours(
    table, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE
)[-2:]
idx = 0
cells = []
for cnt in contours:
    idx += 1
    x, y, w, h = cv2.boundingRect(cnt)
    area = w * h
    # skip contours that span (almost) the whole image or are too small to be cells
    if w > imgwidth * 0.9 or h > imgheight * 0.9 or w < 10 or h < 10:
        continue
    roi = grayscale[y: y + h, x: x + w]
    cells.append(Cell(roi, x, y, w, h, None))
    color = list(np.random.random(size=3) * 256)
    cv2.rectangle(img_boxes, (x, y), (x + w, y + h),
                  color, thickness=cv2.FILLED)
# end contours

# map pixel coordinates to row/column indices
cols = list(sorted(set([cell.x for cell in cells])))
rows = list(sorted(set([cell.y for cell in cells])))
for cell in cells:
    x, y, w, h = cell.x, cell.y, cell.w, cell.h
    cell.y = rows.index(y)
    cell.x = cols.index(x)
Cell is a small dataclass that looks like this:
from dataclasses import dataclass

import numpy as np


@dataclass
class Cell:
    image: np.ndarray
    x: int
    y: int
    w: int
    h: int
    text: str
Preparing for OCR
OCR works best on high-contrast images that might look strange to humans but are easy for computers to work with.
To create such an image, we’ll use dilation and erosion to remove artifacts from the letters:
# work on a grayscale copy of the cropped image
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = grayscale
# dilate and erode with a tiny kernel to clean up the letters
kernel = np.ones((1, 1), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)
img = cv2.erode(img, kernel, iterations=1)
# blur and binarize with Otsu's method for a clean black-and-white image
img = cv2.threshold(cv2.medianBlur(img, 3), 0, 255,
                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
This is what the image looks like now:
OCR and data transformation
Now we have an image to work with. Since we already have the cell information, we can run OCR on each cell instead of the whole image, so we know exactly which part of the image contains which data. Otherwise, it would be extremely painful to assign the OCR output to the information we need.
import pytesseract

custom_config = r"--oem 1 --psm 11 -l deu -c tessedit_write_images=true "

for cell in cells:
    # forceExtract is a flag from the surrounding script that re-runs OCR on already processed cells
    if not cell.text or forceExtract:
        text = pytesseract.image_to_string(
            cell.image, config=custom_config, lang="frak"
        )
        # remove line breaks and form feed chars
        cell.text = text.replace("\n", " ").replace("\f", "").strip()

# sort cells by their coordinates
cells.sort(key=lambda cell: (cell.y, cell.x))
With the cells in the right order, we can transform the data to a dictionary:
table = {}
# map German weekday names to a day index (Monday = 0)
germanWeekdayToEnglish = {
    "Montag": 0,
    "Dienstag": 1,
    "Mittwoch": 2,
    "Donnerstag": 3,
    "Freitag": 4,
    "Samstag": 5,
    "Sonntag": 6,
}

# build one row of text per cell in the first column
for cell in [c for c in cells if c.x == 0]:
    table[cell.y] = [c.text for c in cells if c.y == cell.y]

categories = list(table[0])[1:]
days = {}
# meal rows and ingredient rows alternate, so step through them in pairs
for y in range(1, len(table), 2):
    weekday = germanWeekdayToEnglish[table[y][0]]
    meals = [
        {
            # trim is a small text-cleanup helper defined elsewhere in the script
            "category": trim(categories[i]),
            "meal": trim(meal),
            "ingredients": [trim(x) for x in table[y + 1][i + 1].split(",")],
        }
        for i, meal in enumerate(table[y][1:])
    ]
    days[weekday] = meals
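To end up with one file per weekday, the days dictionary can then be dumped to disk. Here is a small sketch of that step; the real script’s date handling may differ:

import json
from datetime import date, timedelta

# name each file after the date it belongs to, starting from this week's Monday
monday = date.today() - timedelta(days=date.today().weekday())
for weekday, meals in days.items():
    day = monday + timedelta(days=weekday)
    with open(f"{day.isoformat()}.json", "w", encoding="utf-8") as f:
        json.dump(meals, f, ensure_ascii=False, indent=2)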
The JSON file for each weekday looks like this:
[
  {
    "category": "Hauptgericht 1",
    "meal": "Gebratene Streifen vom Rind mit Gemüse, abgerundet mit Sojasauce und Honig, dazu Risi-Bisi",
    "ingredients": ["F", "G", "I"]
  },
  {
    "category": "Veganes Hauptgericht",
    "meal": "Spinatlasagne mit getrockneten Tomaten und veganem Käse überbacken und Tomatensauce extra",
    "ingredients": ["A (Weizen)"]
  }
]
REST API
My REST API is extremely simple. I just copy the JSON files over to a web server that hosts them as static files.
Usage works like this: do a GET request for a date, for example GET /2022-05-03.json.
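Consuming the API is just as simple. For example, a few lines of Python fetch today’s menu (the base URL here is a placeholder):

from datetime import date

import requests

BASE_URL = "https://example.com/mealplan"  # placeholder

response = requests.get(f"{BASE_URL}/{date.today().isoformat()}.json", timeout=10)
response.raise_for_status()
for meal in response.json():
    print(f'{meal["category"]}: {meal["meal"]}')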
Using the data
I’m using the API to show the current meal on my Apple Watch. After 1 p.m., it shows the meal for the next day.
Cover that only opens if there's nothing in its way in Home Assistant
I have a screen that goes above my balcony door. If the door has been left open and I accidentally said “Alexa, turn on cinema mode” from downstairs, my screen probably wouldn’t be as smooth anymore. I control my screen via Home Assistant; you can read more about that in another article here. It’s a cover, which can be opened, closed or stopped.
My plan is to create a template_cover that only opens the real cover if the balcony door is closed.
The Template Cover
In the configuration.yaml file, I first of all created a template cover:
cover:
  - platform: template
    covers:
      my_screen_safe:
        friendly_name: "My Screen"
        value_template: "{{ states('cover.my_screen') }}"
        open_cover:
          service: script.open_my_screen_if_balcony_door_closed
        close_cover:
          service: cover.close_cover
          data:
            entity_id: cover.my_screen
        stop_cover:
          service: cover.stop_cover
          data:
            entity_id: cover.my_screen
The value_template is needed to show the state of the cover, which we can just copy from our real cover.
When we open the cover, a script is called, which I will go into in more detail in the next step. If we close or stop the cover, I simply want the real cover to do exactly the same (the door can’t be open if the screen is already down).
Preventing Screen Damage
As you’ve seen above, when we open the cover, a script is called:
The script actions are called one by one. The first action is a condition, and the script stops if a condition isn’t true. In my example, the script should stop if the balcony door is open, and otherwise open the cover.
# the same script, but as yaml
script:
  open_my_screen_if_balcony_door_closed:
    alias: open cover.my_screen only if balcony door is closed
    sequence:
      - condition: state
        entity_id: binary_sensor.balcony_door
        state: 'off'
      - service: cover.open_cover
        data:
          entity_id: cover.my_screen
Wrapping Up
Now the cover is finished! Make sure to only use cover.my_screen_safe in your frontend, automations and cloud/emulated_hue components. That way, the cover really only opens if the door is closed.
Maybe your cover should only close when there is nothing in its way, like a garage door with a light barrier checking the space below? That works exactly the same way: just move the script call to the close_cover action, as sketched below.
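Here is a minimal sketch of that garage door variant, reusing the pattern from above; the entity and script names are made up for illustration:

cover:
  - platform: template
    covers:
      my_garage_door_safe:
        friendly_name: "My Garage Door"
        value_template: "{{ states('cover.my_garage_door') }}"
        open_cover:
          service: cover.open_cover
          data:
            entity_id: cover.my_garage_door
        close_cover:
          # hypothetical script that checks the light barrier before closing
          service: script.close_garage_door_if_light_barrier_clear
        stop_cover:
          service: cover.stop_cover
          data:
            entity_id: cover.my_garage_door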
Are there any other covers you can think of that could make use of this? Share your ideas in the comment section below.
TL;DR
- Create a template_cover that mirrors all actions and the state from the original cover
- Replace the open_cover action with a script that only opens the cover if the door is closed