Losing device detections

I’m going to give you a probably unsatisfying answer. Sense uses machine learning, which is one of the best automated approaches for learning and detecting patterns, but it comes with a challenge - it really can’t diagnose why it failed to detect something it presumably once “learned”. Still, here are some possible reasons, using human vision and recognition as an analogy. I probably should have used hearing as the analogy, since that’s closer to the type of analysis Sense uses, but we’re more self-aware about why we failed to spot something than about why we misheard something.

Some examples of why people miss seeing a person they are looking for (and the Sense comparable):

  • He/she was wearing a hat - sometimes the Sense on-pattern comes adorned with an additional flourish. For instance, motor patterns look different depending on whether they start under a light load vs. a heavy load.
  • I only met him/her once or twice, so my picture of him/her was slightly off - machine learning needs lots of repetition. Sometimes, early in the recognition cycle, Sense might not have had enough training examples to recognize a pattern solidly.
  • It’s been a long, long time, so my memory of him/her has faded - machine learning has a challenge known as “concept drift”: over time, as Sense learns to remember more and more things, some of its “memory” for little-used patterns erodes. I think that’s partially to blame for why detection of my 2 AC units got conflated after a long idle period in the winter, and concept drift might be why your AC units don’t reliably get identified from season to season.
  • I just got a new pair of glasses (this example is a bit of a stretch) - Sense occasionally updates its models based on new training. This typically results in some new detections, but it can also make some of the old, previously reliable patterns “look” a little different to Sense.
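To make the “concept drift” idea above a little more concrete, here’s a toy sketch (this is not Sense’s actual algorithm - the nearest-centroid model, the rate, and the made-up signature numbers are all my own assumptions for illustration): an online learner whose stored “memory” of a pattern keeps sliding toward whatever it has seen most recently, so a signature learned last summer no longer matches after a winter of adapting to other loads.

```python
# Toy illustration of concept drift in an online learner (NOT Sense's real
# algorithm): a stored pattern "centroid" that keeps moving toward recent data.

def update(centroid, sample, rate=0.3):
    """Pull the stored pattern toward a new sample (exponential moving average)."""
    return [c + rate * (s - c) for c, s in zip(centroid, sample)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Learn a made-up "AC" signature one summer, e.g. (startup spike kW, steady kW).
ac_signature = [5.0, 1.0]
centroid = list(ac_signature)

# All winter the AC is idle, but the model keeps adapting to a similar-looking
# load (a hypothetical space heater), dragging the stored pattern with it.
heater = [3.0, 2.0]
for _ in range(10):
    centroid = update(centroid, heater)

# Come summer, the original AC pattern no longer matches the stored "memory".
drift = distance(centroid, ac_signature)
print(f"stored pattern drifted {drift:.2f} units from the original AC signature")
```

After enough winter updates, the stored pattern sits almost on top of the heater’s signature and far from the AC’s - which is roughly the behavior I suspect when a seasonal device comes back and isn’t recognized.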