This may be a “Technical” or even “Meta” discussion, but it’s fundamentally about detection, so here goes:
I understand that the Sense device detection (the algorithm) happens at the monitor (the in-house Sense) based upon the cloud Brain and the ML therein. What I don’t understand is how the usage patterns of learned (or pre-learned) device signatures come into play in the detection.
Sharper English-speaking minds might call this the Q-U problem (as in, the letter Q is almost always followed by U).
Suppose an AC unit’s blower and compressor are each detected because they have distinct electrical signatures in and of themselves. The Sense algorithm will eventually learn that the two are related, if only because users (acting as programmers) teach the algorithm by merging the two detected devices into one.
Meanwhile, even without being taught, isn’t it the case that the two electrical signatures are already inextricably bound (Q-U)? If so, by teaching it, the algorithm is essentially corrupted, i.e. dumbed down.
Along with that, and flipping the perspective somewhat, suppose that in my house the garage door opening always precedes the garage lights being switched on (manually). Q-U. Unlike device merging, this is not something that Sense can be told. It’s lost knowledge, BUT of course Sense could see these patterns itself, so perhaps it’s better not to be told!
So, question: Does Sense see and use these types of Q-U patterns?
And by extension, Q-mostlyU patterns?
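To make the question concrete, here’s a minimal sketch of what spotting such a pattern could look like. This is purely hypothetical and not anything Sense has documented: the function name, the 60-second window, and the toy timestamps are all my own invention. Given the on-event times of two devices, it just counts how often one follows the other within a short window.

```python
# Hypothetical sketch (NOT Sense's actual algorithm): estimate how often
# event "U" (e.g. garage lights on) follows event "Q" (e.g. garage door
# opener running) within a short window, from two lists of on-event
# timestamps in seconds.

from bisect import bisect_left

def q_u_follow_rate(q_times, u_times, window_s=60.0):
    """Fraction of Q events that are followed by a U event within window_s."""
    u_sorted = sorted(u_times)
    followed = 0
    for t in sorted(q_times):
        i = bisect_left(u_sorted, t)  # first U event at or after this Q event
        if i < len(u_sorted) and u_sorted[i] - t <= window_s:
            followed += 1
    return followed / len(q_times) if q_times else 0.0

# Toy data: the opener runs four times; the lights usually follow.
garage_door = [100.0, 5000.0, 9000.0, 13000.0]
garage_lights = [130.0, 5020.0, 13040.0]  # no lights after the 9000 s opening

rate = q_u_follow_rate(garage_door, garage_lights)
print(f"U follows Q {rate:.0%} of the time")  # 75% -> a "Q-mostlyU" pattern
```

A rate near 100% would be the strict Q-U case; something like the 75% above is what I mean by Q-mostlyU. Whether Sense actually mines its event streams for correlations like this is exactly my question.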