Human-dictated usage pattern ML vs detection ML

This may be a “Technical” or even “Meta” discussion, but it’s fundamentally about detection, so here goes:

I understand that the Sense device detection (algorithm) is happening at the monitor (the in-house Sense) based upon the cloud Brain and the ML therein. What I don’t understand is how the usage patterns of learned (or pre-learned) device signatures come into play in the detection.

Sharper English-speaking minds might call this the Q-U problem (in English text, a Q is almost always followed by a U)?

Meaning:

Suppose an AC unit’s blower & compressor are detected because they have distinct electrical signatures in and of themselves … the Sense algorithm will eventually learn that the two are related, if only because users (programmers) teach the algorithm by merging the two detected devices into one.

Meanwhile, without being taught, is it not the case that the electrical signatures are inextricably bound (Q-U), so that in teaching it the algorithm is essentially corrupted, i.e. dumbed down?

Along with that, and flipping the perspective somewhat, suppose in my house the garage door opening always precedes the garage lights being switched on (manually). Q-U. Unlike device merging, this is not something Sense can be told. It’s lost knowledge, BUT of course Sense could see these patterns itself, so perhaps it’s better not to be told!

So, question: Does Sense see and use these types of Q-U patterns?

And by extension, Q-mostlyU patterns?


I’ll attempt an answer… In ML for image recognition, coders employ a layered hierarchy of recognition to find complex things like faces and animals. Early layers detect simple features like edges and lines, subsequent layers look for combinations of simple features (mouths, eyes, etc.). Later layers combine those recognitions into full face recognition and generate bounding boxes.
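A minimal sketch of that layered idea, assuming PyTorch; the layer sizes and the face/not-face task are purely illustrative, not anything Sense has published:

```python
# A toy convolutional hierarchy: early conv layers respond to simple
# features (edges, lines), deeper layers to combinations of them,
# and the final linear layer maps those combinations to classes.
import torch
import torch.nn as nn

face_detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, lines
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: corners, curves
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # layer 3: parts (eyes, mouths)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),                             # final layer: face / not-face
)

scores = face_detector(torch.randn(1, 3, 64, 64))  # one fake 64x64 RGB image
print(scores.shape)  # torch.Size([1, 2])
```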

Sense has to do something similar, though the time-serial nature of waveforms requires a different type of neural network structure. To wit, Sense first has to find simple patterns before learning to combine them into more complex patterns. To put it differently, my furnace blower’s on/off is a basic pattern that shows up in association with my AC compressor, or heating igniter, or just standalone. That blower component needs to be detected before any associations with AC or heat can be made.
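A hedged time-series counterpart, again in PyTorch; the hidden size, window length, and event classes are all invented for illustration, since Sense hasn’t published its actual network:

```python
# A toy recurrent classifier over a power waveform: the LSTM carries
# state across time steps, so a blower turn-on can be recognized in
# the context of what came before it (e.g., a compressor start).
import torch
import torch.nn as nn

class DeviceEventClassifier(nn.Module):
    def __init__(self, n_event_types: int):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_event_types)

    def forward(self, watts):  # watts: (batch, time, 1)
        out, _ = self.lstm(watts)
        return self.head(out[:, -1, :])  # classify from the final state

model = DeviceEventClassifier(n_event_types=4)  # e.g., blower, compressor, igniter, other
window = torch.randn(1, 200, 1)                 # 200 samples of (normalized) power
print(model(window).shape)                      # torch.Size([1, 4])
```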

Your Q-U example is a little tricky, since machine learning isn’t really required to detect the Q or U, those being direct-coded in ASCII. It would be a better analogy if you first had to use image recognition to detect the letter patterns.

FYI - I have posted this before, but here’s a great article on how “serial” ML works.


Thanks, that’s a well-thought response.

The RNN article is great.

So, following on, consider the letters as the quanta of the data input, not something that needs to be recognized as such (which is what I was trying to convey).

In the Sense environment the quanta at the lowest analysis scale are the smallest-resolution electrical signals, beyond which nothing is “recognizable”. On another (larger) scale, device detection and on/off delineation could be the quanta. And at a yet larger scale the quanta might be the daily overall energy usage, and so on. At each scale beyond the base quanta (= letters, or electrical measurements) there is different information that can be extracted from the data:

  • A power spike likely related to a motor.
  • An AC blower switched on.
  • An AC compressor switched on/off.
  • An AC unit switched on and off.
  • The AC unit was off for most of May but was used for all of July.
  • 2019 was hotter than 2018.
  • It’s getting hotter.
  • Expecting an AC upgrade.

That kind of thing. At each scale of observation there are different quanta.
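To make the scale idea concrete, here’s a hedged sketch using pandas; the device, thresholds, and readings are all made up:

```python
# The same raw watt readings, summarized at three scales:
# raw samples -> on/off events -> daily energy. Each scale has its
# own "quanta" and answers a different kind of question.
import pandas as pd

# Fake 1-second power readings for one device over two days.
idx = pd.date_range("2019-07-01", periods=2 * 24 * 3600, freq="s")
watts = pd.Series(0.0, index=idx)
watts["2019-07-01 14:00":"2019-07-01 16:00"] = 1200.0  # an afternoon AC run
watts["2019-07-02 13:00":"2019-07-02 17:00"] = 1200.0  # a longer run next day

# Scale 1: raw samples (the base quanta).
# Scale 2: on/off events, found from threshold crossings.
is_on = watts > 100.0
events = is_on.astype(int).diff().fillna(0)
print("turn-ons:", list(events[events == 1].index))

# Scale 3: daily energy in kWh (power in W, 1-second samples).
daily_kwh = watts.resample("D").sum() / 3600.0 / 1000.0
print(daily_kwh)
```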

The text examples used in the RNN article all fall back on assembled letters (e.g. “in all the texts of Shakespeare”) as the base quanta, just as Sense (in theory) falls back on the smallest-resolution electrical signals it receives. However, there is additional information (training) being given to Sense in the form of device merging and so on that leads to determinations like the above.

What I question is how the ML is being given that additional data. If an end-user merges devices, in theory there could be no feedback into the algorithm, and none needed, because the merge happens post-detection. I would assume, in fact, that this information doesn’t get fed back into the algorithm. Right?

BUT, if there is additional data like a Q-U instance where, say, most users always switch the garage lights on after the garage door opens, AND if Sense has only learnt to detect the garage door, THEN is Sense in fact able to exploit that knowledge and look for a light switch … tweak its own algorithm to bias the detection?

Sorry if this is a clunky description and the answer is already self-evident. I need training!

I think I understand what you are asking. These are just my suspicions about the answers along with why I think so.

  • I think some kinds of device “merges” are used for training, but I’m not so sure about others, since “merges” can be undone. “Merges” of already detected devices with smartplugs need to be linked to training, so that the smartplug activity is associated with the pre-existing detections to avoid double counting, and to match up “ground truth” (from the smartplug) with the already detected device (see the sketch after this list). But I’m guessing that the case where you link several patterns into a single device might not be used for reinforcement training, mostly because it’s such a definitive input (these are connected each and every time, but can be turned off as well) that it doesn’t really fit an ML feature-input process based on neural networks, weights, and probabilities.
  • The human error feedback “Device is not on” is used for reinforcement training. That feedback is different from a merge, because you are tagging only a single instance of identification (instead of always).
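Purely as a guess at the bookkeeping behind that smartplug linkage (the interval data and the 5-minute tolerance here are invented), it might look something like:

```python
# Toy matching of smartplug "ground truth" on-intervals against
# detected-device on-intervals, so the two aren't double counted
# and the detection can be scored against reality.
from datetime import datetime, timedelta

plug_runs = [(datetime(2019, 7, 1, 14, 0), datetime(2019, 7, 1, 16, 0))]
detected_runs = [(datetime(2019, 7, 1, 14, 1), datetime(2019, 7, 1, 15, 58))]

def overlaps(a, b, tolerance=timedelta(minutes=5)):
    """True if interval a and interval b overlap, with some slack."""
    return a[0] - tolerance < b[1] and b[0] - tolerance < a[1]

matched = [(p, d) for p in plug_runs for d in detected_runs if overlaps(p, d)]
print(f"{len(matched)} detection(s) confirmed by the smartplug")
```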

The article I linked to gives some insight into how LSTM neural networks use feedback, and how weights become “sensitive” to patterns, and patterns of patterns, eventually allowing the network to approximate human grammar. So an ML system like Sense should learn to look for other time-associated device patterns surrounding the identified opening of a garage door, for two reasons:

  • Because it’s likely - many devices have multiple components.
  • So Sense can automatically merge the components if the hit ratio is extremely high.

ML is built to do this, as long as the timing window under analysis (LSTMs have a finite time/sample window) and the number of abstraction layers are large enough to “see” the whole pattern.
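Outside the neural network itself, the Q-mostlyU idea reduces to a conditional frequency. A hedged sketch with invented event times:

```python
# How often is the garage light switched on within 60 seconds of the
# garage door opening? A high ratio is the "Q-mostlyU" signal that
# could bias detection toward looking for the light.
door_opens = [100.0, 5000.0, 9000.0, 13000.0]  # seconds since midnight
light_ons = [130.0, 5020.0, 13055.0]           # one open with no light

WINDOW = 60.0
followed = sum(
    any(0 <= light - door <= WINDOW for light in light_ons)
    for door in door_opens
)
print(f"P(light within {WINDOW:.0f}s | door opened) = {followed / len(door_opens):.2f}")
# -> 0.75: a Q-mostlyU pattern, not a hard rule
```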


Thanks @kevin1, this goes a long way toward filling my knowledge gaps.

Part of the reason for posting the question was a thinly veiled attempt at prompting Sense to do another video along the lines of the “What Is Machine Learning?” orchestra analogy (or maybe I missed that one?), but going deeper into what training Sense actually means and how feedback works, and doesn’t.

My knowledge is sufficient at this point to understand the Sense process, including the decision to install one (debate efficacy; buy; install; use; understand the limits; wait; be amazed by the power of ML, and sometimes frustrated), but I’m guessing many new users are frustrated from the get-go by the lack of what seems like a simple ability Sense should have.

This is my way of feeding back into the system …

