
Tuesday, January 21, 2020

Trying to make the most out of Home Assistant + rooted Xiaomi vacuum cleaner + Valetudo


After some time using HA (Hass.io), and even longer using the Xiaomi Roborock (first generation) vacuum cleaner, I started to feel a certain discomfort in keeping this robot in a stock setup. Sticking to the original firmware meant having a device that is permanently connected to a Chinese cloud service and, without the user being aware of it, sends many megabytes of data to that service every day. It also meant that integration and features via a platform such as Home Assistant are limited.


One feature that I had fancied for some time was the ability to define zones, and instruct the robot to go to these predefined zones on demand.


Context

Using just the stock firmware and calling the robot's original API from HA, applying predefined zones is a challenge, given that starting a new cleanup overrides the previously built map. Only a zone cleanup performed right after a complete cleanup would actually target the intended area, which rather defeats the purpose of doing zone cleanups in the first place.
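
For reference, this is roughly how a zone cleanup is driven through the stock API from Home Assistant (a sketch assuming the xiaomi_miio integration; the entity name and the coordinates, which are expressed in millimetres on the robot's internal map, are purely illustrative):

  # Hypothetical automation action: clean one rectangle, once
  - service: vacuum.send_command
    data:
      entity_id: vacuum.xiaomi_vacuum_cleaner
      command: app_zoned_clean
      params: [[25500, 25500, 28500, 28500, 1]]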

So I decided to venture into rooting the device: replacing the original firmware with a rooted image and running a package such as Valetudo on top of it.

As initially found by the team from the Dustcloud project (https://github.com/dgiese/dustcloud), Xiaomi (or Roborock, the company that actually manufactures these robots) adopted the robust approach of creating separate partitions for the running firmware and for a recovery image. As such, even if you screw up the device after rooting, applying the factory reset key sequence (holding the Home button pressed, briefly pressing the Reset button, and releasing the Home button just after hearing a Chinese voice announcing that the device will be factory reset) puts the device back into the state it was in when it was first taken out of the box.

This kind of allowed me to proceed with some confidence. Essentially I followed the decently put together steps from the dustcloud team:

https://github.com/dgiese/dustcloud/wiki/VacuumRobots-manual-update-root-Howto

I adventured a bit further than the guidelines by doing the rooting on top of the firmware image immediately preceding the one I had on my robot: the unrooted robot was up to date with version 3.5.4_004004 (the latest provided by Xiaomi as of the writing of this post). For rooting, instead of selecting the version the guide recommends (version 3.3.9_003468), I decided to download the version just before the latest, that is, version 3.5.0_003532. The reasoning behind it was the announcement on the Valetudo GitHub page saying: "PSA: DO NOT UPDATE YOUR GEN2 TO 1910 & GEN1 TO 4004". Given that they only point the warning at this version in particular, it seems likely that the versions between 3.3.9_003468 and 3.5.4_004004 were already being used by the community without anyone reporting adverse effects.

In case you are wondering, the firmware history can be checked here:

http://htmlpreview.github.io/?https://raw.githubusercontent.com/zvldz/vacuum/master/history.html

Rooting procedure

To follow the rooting instructions, I chose to use a Raspberry Pi 3 that I had prepared with the software dependencies. Obviously this could also be done on a PC featuring a WiFi card, a Linux distro, Python, and so on.

Instead of preparing an image with Valetudo already baked in, I preferred to install a clean but rooted image first, and then install Valetudo on top of the running firmware.

Basically I have followed the steps in this guide:

https://github.com/dgiese/dustcloud/wiki/VacuumRobots-manual-update-root-Howto#instructions-on-linux

In my case, as I wanted to use the penultimate version, I have done the procedure using package v11_003532.pkg.

In a nutshell the main things the image builder shell script does in terms of modifying the firmware image are:
  • decrypts the firmware image;
  • extracts the files from the image (the decrypted image file is a .tar.gz file);
  • generates new SSH host keys and replaces the existing ones;
  • imports your SSH public key (so that you can later have a root SSH shell);
  • changes iptables (firewall) rules so that the robot cannot connect to the Chinese cloud;
  • adds entries in the /etc/hosts file with several Xiaomi cloud hostnames, in order to resolve to a local address instead of the true cloud public IP addresses;
  • a few more bits and bobs, also depending on the command line options specified by the user (e.g. disabling the tons of logs the robot produces - useful for letting the flash memory live longer).
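
As the key import step implies, you need an SSH keypair at hand before building the image. If you do not have one yet, generating a dedicated pair is a one-liner (the file path and comment below are just examples); the resulting .pub file is the one you point the image builder at, as described in the dustcloud wiki linked above:

# Generate a dedicated keypair for the robot (illustrative path and comment)
ssh-keygen -t rsa -b 4096 -C "vacuum" -f ~/.ssh/vacuum_key
# ~/.ssh/vacuum_key.pub is then embedded into the image by the builder;
# check the wiki above for the exact imagebuilder.sh options for your version.
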
Once the image is generated, we move on to the flashing step. One thing I found was that with firmware 3.5.4_004004 running (device not rooted yet), just resetting the WiFi settings was not enough: the robot would not accept the flashing commands.

As such I had to perform the factory reset (where the robot replaces the current firmware with an image stored in a recovery partition). In order to do it, it is necessary to keep the Home button pressed and, without releasing it, momentarily press the Reset button. The Home button should be held until the Chinese voice is heard (announcing that the factory reset is about to start).


Then it is necessary to wait a few minutes until the factory firmware installation is complete.

With the factory image running, the device will accept the commands from the Python script used for flashing (flasher.py), and the flashing (which takes a few minutes) should complete successfully.
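
With the machine running the dustcloud tools connected to the robot's own access point, as per the guide, the flashing itself boils down to pointing the script at the patched package (the path and filename below are examples matching the version I used; adjust them to wherever the image builder left your image):

./flasher.py -f output/v11_003532.pkg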

Once it is finished, you will need to provision the robot using the regular procedure, i.e. via the Mi Home app. Select your WiFi network, input the credentials, and you are ready to go. Up until this point, your vacuum will still be a regular rockrobo vacuum.

Installing Valetudo

The next step is to use the root SSH access we have just obtained to install Valetudo (and, potentially, do a lot of other things).

If you followed the guidelines from the dustcloud project, you should have your SSH keypair at hand.

Use your private key to configure your SSH session (e.g. via PuTTY) with the robot. Grab the IP address that was assigned to the robot (you can usually get that information from your router), and use it to configure the SSH session. For the port number, select 22.


The private key file should be in PuTTY's .ppk format (PuTTYgen can convert an OpenSSH key). Specify it by going to the Connection > SSH > Auth section of PuTTY:
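
If you would rather skip PuTTY, a plain OpenSSH client does the job just as well (the key path and IP address below are examples):

ssh -i ~/.ssh/vacuum_key root@192.168.1.50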



Regarding the Valetudo package, I have chosen to go with the fork from rand256, as it provides some features on top of the original Valetudo project and has frequent releases:

https://github.com/rand256/valetudo

Basically there are three ways the package can be installed: having it baked into the firmware image, installing a .deb package (available in the project releases), or installing from the tarball which is also available in the releases section of the project.

I have chosen the .deb package as it is slightly more straightforward. Just run the command:

root@rockrobo:~# dpkg -i filename.deb

and you're done.
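
Naturally, the .deb first needs to make its way onto the robot; copying it over the same SSH access with scp works fine (the release filename and IP address below are placeholders, use the ones matching your download and your robot):

scp -i ~/.ssh/vacuum_key valetudo_0.9.9-1_armhf.deb root@192.168.1.50:/tmp/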

After rebooting the robot, Valetudo should be up and running:

root@rockrobo:~# reboot

Integrating with Hass.io

One of my motivations for rooting the robot was to have a slightly more sophisticated integration with Home Assistant, by taking advantage of predefined zones and using these in cleanup automations.

Initially I considered the approach that some have been following, which basically consists of having Valetudo send map data via MQTT, and a piece of JavaScript on the Hass.io side (lovelace) that interprets the data and renders it in the browser. To limit the load on the robot, it is advised to run a separate Node.js companion service, called valetudo-mapper, on a different machine, so that the periodic generation of the .png map data is offloaded from the vacuum cleaner itself.

However, I found the entire approach complex to set up and a bit cumbersome. On the other hand, the Valetudo web UI seemed reasonably well curated and functional:


So my first thought was the obvious one: why not try to use this in Hass.io? The web has this concept called an "iframe": why not embed the original web UI instead of building a lovelace card from scratch, with all the effort and limitations, only to end up with something functionally inferior?

As such, this was the route I pursued: lovelace has an iframe card, which allows you to embed a website inside the lovelace web UI. By editing the lovelace configuration and adding an entry of type iframe:

      - aspect_ratio: 150%
        title: Aspirador
        type: iframe
        url: https://example.com

you can then specify the URL where Valetudo is running.

It seems simple, right? Well, it's almost that simple.

In fact, there is a caveat: if your HA is accessible via HTTPS from the Internet, you will also need to set up another URL, accessible from the Internet, behind which the vacuum web UI will sit.

In theory this could be done using the same public hostname for both Hass.io and Valetudo. But as both applications have resources in the root context and in other paths, it becomes complicated to route requests based on rules if we were to set up a proxy in front of these applications (let's assume NGINX, as it is available as an add-on in HA).

To avoid this complexity, the cleaner and simpler approach is to create separate hostnames for the HA web UI (lovelace) and for the Valetudo web UI. Because it is easy to add new names, for example in Duck DNS, there is no real downside to this approach.

Assuming we now have separate names for each of these endpoints, we need to have something to route the requests to the correct application depending on which hostname the request points to.

Before we dive deeper into configuration settings, let's take a peek at what our network will look like:

With this setup we are assigning NGINX a twofold responsibility: a) terminate/decrypt the inbound HTTPS traffic; b) route traffic to either HA or Valetudo, depending on which hostname the original HTTPS request points to.

Given that in practice we will be exposing the Valetudo web UI to the Internet, we will need to set up authentication, just as with HA, but we will get to that later.

Assuming that at least some of us will be accessing Home Assistant from the Internet, and have it running on our premises (e.g. Hass.io on a Raspberry Pi), the typical setup involves accessing the HA web UI over HTTPS on a hostname in a subdomain of some dynamic DNS provider (e.g. Duck DNS). When we use an iframe, the browser will point to the URL where the content is. Since the main page is served over HTTPS, the iframe content must also be served over HTTPS with a certificate the browser trusts, otherwise the browser will refuse to load it. As such, it helps to keep both names under the same domain (not a big problem, if both are under duckdns.org), and (more importantly) to make sure that the x.509 certificate covers both subdomains. This can be achieved either by covering both names (e.g. with a wildcard) in the Common Name field, or by adding both entries to the Subject Alternative Name field of the certificate.

Regarding the DNS and the certificates, these two aspects are easy to put in place if you are using the Duck DNS addon in Home Assistant:


Registering a DNS name for Valetudo

First we will need to register the two names (or add one, if we already have one for HA) on the Duck DNS page. In order to do so, navigate to https://www.duckdns.org/ and sign in with whatever authentication method you chose when you first registered with Duck DNS (GitHub, Twitter, etc.).
Next, enter the subdomain of your choice. This subdomain will be used for the Valetudo web UI:




In the Hass.io UI, under Hass.io > Dashboard > Add-ons, select the Duck DNS add-on. Scroll down to the end of the page, and edit the configuration section:
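
For reference, the add-on configuration ends up looking roughly like the snippet below (the token and domains are placeholders, and the exact field names may vary slightly between add-on versions); the important part is having both subdomains listed under domains:

{
  "lets_encrypt": {
    "accept_terms": true,
    "certfile": "fullchain.pem",
    "keyfile": "privkey.pem"
  },
  "token": "your-duckdns-token",
  "domains": [
    "site1.duckdns.org",
    "site2.duckdns.org"
  ],
  "seconds": 300
}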


Save the settings and click the Restart button at the top of the Duck DNS add-on page. You may want to scroll down again to check the log and make sure the change was successful.

This add-on automatically recreates the certificate, including all the listed subdomains, every time this configuration is changed, which makes the user experience very simple.
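
If you want to double check that the regenerated certificate really covers both names, you can inspect its Subject Alternative Name entries from any shell with access to the /ssl folder on the Hass.io host (this is the same fullchain.pem that the NGINX configuration below points to, and assumes openssl is available in that shell):

openssl x509 -in /ssl/fullchain.pem -noout -text | grep -A1 "Subject Alternative Name"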

Configuring NGINX

As mentioned above, the proxy needs to route the requests to the correct application. To do this we need to configure NGINX.

The first thing to do is to change the configuration in the NGINX add-on, so that the proxy can accept external configuration files:

{
  "domain": "site1.duckdns.org",
  "certfile": "fullchain.pem",
  "keyfile": "privkey.pem",
  "hsts": "max-age=31536000; includeSubDomains",
  "cloudflare": false,
  "customize": {
    "active": true,
    "default": "nginx_proxy_default*.conf",
    "servers": "nginx_proxy/*.conf"
  }
}

Next, with the help of the Configurator add-on, we need to create a file under /share/nginx_proxy/.

You can give the file any name with the .conf extension; let's assume vacuum-site2.conf. This file will contain the configuration that handles requests pointing to site2, decrypts them, and routes the plain HTTP request to the application node (the vacuum cleaner itself):

server {
  server_name site2.duckdns.org;

  ssl_certificate /ssl/fullchain.pem;
  ssl_certificate_key /ssl/privkey.pem;

  # dhparams file
  ssl_dhparam /data/dhparams.pem;

  listen 443 ssl http2;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  proxy_buffering off;

  location / {
    proxy_pass http://site2.local:80;
    proxy_set_header Host $host;
    proxy_redirect http:// https://;
    
    set $auth_value "";
    
    if ($arg_authorization) {
        set $auth_value "$arg_authorization";
    }
    if ($cookie_authorization) {
       set $auth_value "$cookie_authorization";
    }

    proxy_set_header  Authorization "Basic $auth_value";

    proxy_http_version  1.1;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  Upgrade $http_upgrade;
    proxy_set_header  Connection "upgrade";
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_set_header  Accept-Encoding "";
    proxy_read_timeout  90;
    if ($cookie_authorization = '') {
      add_header  Set-Cookie "authorization=$auth_value";
    }
  }
}

Besides the routing, there is a second aspect this configuration has to take care of: as mentioned above, the Valetudo web UI needs authentication, as it is going to be exposed to the Internet. On the other hand, it is not great UX for the user to have to enter credentials twice, once for entering Hass.io and again for seeing the iframe populated with the Valetudo page, so ideally we automate this step once the user is authenticated in Hass.io.

As such, what we do is: 
  • on the iframe card, we are going to define the URL, and add an authorization parameter to the request:
      - aspect_ratio: 150%
        title: Aspirador
        type: iframe
        url: https://site2.duckdns.org?authorization=dXNlcm5hbWU6cGFzc3dvcmQ=#map.html

this authorization parameter contains the base64-encoded username and password that you have configured in Valetudo, separated by ':' (as specified for HTTP Basic Authentication).
  • as we are only going to be able to call this URL once we are logged into Home Assistant (we are assuming it will be treated as a secret), once NGINX receives a request for site2.duckdns.org, it will parse the authorization query parameter, and map it to the Authorization header that Valetudo expects for a valid login. 
  • also, not every request to site2.duckdns.org will carry this authorization query parameter, because Valetudo's pages and JavaScript link to other resources without passing it along (and we have no control over what the browser does). To cope with that, we also set a cookie with the contents of the authorization parameter, so that every other request in the scope of the same session gets its Authorization header set from that cookie: if the cookie already exists in the request, we map it to the Authorization header; otherwise we assume it is the first request and set the cookie with the same value as the authorization query parameter.
This is essentially what we are doing in this NGINX configuration file. Restart the NGINX server after doing these configuration changes.

Enabling authentication for the Valetudo web UI

On the Valetudo side, to make sure that authentication is enabled, you will need to enter the web UI and navigate to Settings > Access Control:



and in the "HTTP Authentication Settings", check the Enabled box, and define a username and password:


To calculate the base64 string that you need to place in the lovelace iframe card URL (the value for the Authorization header), you can use the base64 Linux command, for example on the vacuum cleaner itself or in any other Linux shell (note the -n flag, so that a trailing newline does not end up encoded into the credentials):

root@rockrobo:~# echo -n "username:password"|base64
dXNlcm5hbWU6cGFzc3dvcmQ=
root@rockrobo:~#
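
With the string computed, you can sanity check the whole chain from outside your network before touching lovelace; a 200 response means NGINX routed the request to Valetudo and the credentials were accepted (the hostname and base64 value below are the placeholders used throughout this post):

curl -s -o /dev/null -w "%{http_code}\n" "https://site2.duckdns.org/?authorization=dXNlcm5hbWU6cGFzc3dvcmQ="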

Configuring lovelace

In order to add the iframe card to lovelace, just click the top right button:


And click on Configure UI:


Go to the "Raw config editor" (also in the top right button), and add the entry for the iframe, as previously shown:

      - aspect_ratio: 150%
        title: Aspirador
        type: iframe
        url: https://site2.duckdns.org?authorization=dXNlcm5hbWU6cGFzc3dvcmQ=#map.html

Once the changes are saved, the iframe should be added to lovelace as a new card, and the Valetudo web UI should appear inside it:


With this setup we centralize the different tools, and the UX is reasonably smooth. So far I have not noticed any degradation in the vacuum cleaner's performance when cleaning the house, even with the iframe being refreshed with position and map updates. Monitoring the load using the top command, whether or not the web UI is open, the load hovers around 1.2-1.3 during a cleanup.


The "player" process is by far the one which takes the most load on the robot CPU (it is the process responsible for all the planning, navigation, SLAM, etc).

3 comments:

zorgino said...

Hello.

Thanks for amazing tutorial.
Unfortunately, after thoroughly completing it, I get "Cannot GET /" on valetudo UI.
I've tried to remove "?authorization=***" by rewriting url, but then it prompts auth window.

Do you have any idea why?

zorgino said...
This comment has been removed by the author.
Lukas Lindner said...

Same over here. Did you find a solution?