Detection is getting worse

Kevin, thanks for the links. Understanding machine learning is above my pay grade, but to boil it way down, your point is that as machines learn, they change. That is logical, of course, but it may miss the point when applied to Sense's device detection. Here is why.

The complaint was about losing devices that had already been identified. Machine learning is about finding new devices in the noise. You might expect that once the artificial intelligence in the cloud has tagged a device and passed its parameters to the local Sense box, the learning part would be finished.

However, as Sense is currently built, the AI in the cloud still occasionally intervenes and passes new parameters to the local Sense box. In principle, this fine-tunes the device definitions and improves performance. The complaint is that in practice, those tweaks sometimes make things worse.

I have seen this myself. My water heater is on a Kasa plug and also has a native detection. The link below provides a graph of the relative performance of those two. Blue in that graph is where the two agree, orange is one kind of disagreement, and yellow is another. Something changed around week 20 that decreased orange yet increased yellow, leaving blue mostly unchanged. Another change around week 44 decreased yellow but sent orange through the roof while driving blue toward nothing.
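
For anyone who wants to build that kind of chart for their own devices, something like this Python sketch captures the bucketing. For illustration I'm treating orange as the plug seeing a run that the native detection misses and yellow as the reverse; the actual chart just distinguishes two kinds of disagreement, so take those labels loosely.

```python
# Illustrative sketch only: classify each measurement interval into the
# three chart colors, treating the Kasa plug as the trusted reference.
# The orange/yellow meanings below are assumptions for this example.

def bucket(kasa_on: bool, native_on: bool) -> str:
    if kasa_on == native_on:
        return "blue"      # both sources agree (on or off)
    if kasa_on and not native_on:
        return "orange"    # plug sees a run the native detection misses
    return "yellow"        # native detection reports a run the plug doesn't

def weekly_tally(samples):
    """samples: iterable of (kasa_on, native_on) pairs for one week."""
    counts = {"blue": 0, "orange": 0, "yellow": 0}
    for kasa_on, native_on in samples:
        counts[bucket(kasa_on, native_on)] += 1
    return counts

# Ten intervals: the two sources agree on eight, native misses two runs.
week = [(True, True)] * 6 + [(False, False)] * 2 + [(True, False)] * 2
print(weekly_tally(week))  # {'blue': 8, 'orange': 2, 'yellow': 0}
```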

It would seem logical that before the AI in the cloud passes a new definition to the local Sense unit, it should first compare the performance of the tweaked version against the original, and load the new version only if it is actually better. Such a comparison would be its own layer, and therefore not itself subject to machine-learning drift. That comparison step is apparently not happening, which I take to be the point of the opening post.
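
To make the idea concrete, here is a minimal sketch of that gating layer, with a trusted reference such as a smart plug as the yardstick. Every name in it is hypothetical; this illustrates the principle, not Sense's actual code.

```python
# Hypothetical "champion vs. challenger" gate: score the current device
# definition and the tweaked one against a trusted reference, and only
# deploy the tweaked one if it actually wins by a margin.

from typing import Callable, Sequence

# A detector maps a power waveform to per-interval on/off predictions.
Detector = Callable[[Sequence[float]], Sequence[bool]]

def agreement(predicted: Sequence[bool], reference: Sequence[bool]) -> float:
    """Fraction of intervals where a detector matches the reference."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

def should_deploy(champion: Detector, challenger: Detector,
                  waveform: Sequence[float], reference: Sequence[bool],
                  margin: float = 0.02) -> bool:
    """Push the new definition only if it beats the old one by `margin`."""
    old_score = agreement(champion(waveform), reference)
    new_score = agreement(challenger(waveform), reference)
    return new_score >= old_score + margin
```

The margin is a design choice: without it, two nearly identical definitions could trade places on noise alone every retraining cycle, which looks a lot like the flip-flopping in my chart.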

Out of curiosity, I brainstormed one implementation of such a comparison two years ago in this post:
