This is just speculation based on my experiences plus a view of what’s exposed via Sense. I think there are really two levels of ML inference going on with Sense: one that relies integrally on the Sense monitor and another that relies solely on the mothership.
One foundational thought: due to limited home upstream data rates, the Sense monitor can’t send all of the 4M samples/sec it is capable of capturing back to the Sense mothership. Therefore the Sense monitor has to do some very intelligent processing locally. I suspect that Sense sends a few forms of data back to the mothership:
- A half-second stream of power meter data that makes its way to our app.
- A stream of data every 2 seconds from each of the smart plugs.
- A deep view of every short, fast transition that meets certain thresholds set by Sense. The transitions are tagged in the Power Meter (examples below). When I say deep, I mean that the Sense monitor sends a set of parameters / features related to that transition (current, voltage, plus phase and timing/transition data for both mains). Those features are fed into a set of models for detection. If there is a “match” (really, the model triggers), then Sense names and logs that detection. A rough sketch of this kind of edge detection follows this list.
- But Sense also has to pay attention to slower transitions that don’t get tagged by the monitor. One example of that is the charging of an EV, which ramps up over several minutes. In that case, Sense needs to use the half-second stream of power data to do the detection, rather than relying on transitions coming from the monitor (second sketch below). In some cases I have seen these detections show up as a nearly immediate bubble, but in others I have seen the detection appear well after the fact, in the form of “Backfilled” detections.
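To make the fast-transition idea concrete, here’s a minimal Python sketch of the kind of on-monitor edge detection I’m imagining. To be clear, this is pure illustration: the threshold, window size, feature names, and function are all my own invented stand-ins, not anything Sense has published.

```python
import numpy as np

# All numbers here are invented stand-ins, not Sense's real values.
STEP_THRESHOLD_W = 40.0   # minimum power step worth reporting upstream
WINDOW = 8                # samples averaged on each side of a candidate edge

def find_fast_transitions(power, sample_rate_hz):
    """Scan a high-rate power trace for abrupt steps and return a small
    feature record per transition -- a stand-in for the 'deep view' the
    monitor might send to the mothership instead of raw samples."""
    events = []
    deltas = np.diff(power)
    for i in np.flatnonzero(np.abs(deltas) > STEP_THRESHOLD_W):
        if i < WINDOW or i + 1 + WINDOW > len(power):
            continue
        before = power[i - WINDOW:i].mean()
        after = power[i + 1:i + 1 + WINDOW].mean()
        events.append({
            "t_sec": i / sample_rate_hz,   # when the edge happened
            "delta_w": after - before,     # size of the step
            "pre_w": before,               # steady level before the edge
            "post_w": after,               # steady level after the edge
        })
    return events

# Toy trace: a ~500 W load switching on partway through.
trace = np.concatenate([np.full(100, 60.0), np.full(100, 560.0)])
print(find_fast_transitions(trace, sample_rate_hz=1_000_000))
```

In the real device the feature record would presumably be much richer (per-leg current, voltage, and phase, as described above), but the shape of the idea is the same: ship a handful of numbers per event instead of millions of raw samples.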
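And here’s a companion sketch for the slower case: scanning the half-second stream for a sustained, multi-minute rise of the sort an EV charger produces. Again, the window length and wattage thresholds are made-up illustrative numbers, and the real detection is presumably a learned model rather than a hand-tuned rule like this.

```python
import numpy as np

# Again, invented illustrative numbers -- not Sense's actual parameters.
RAMP_MIN_W = 1500.0      # total rise suggesting an EV-class load
RAMP_WINDOW_S = 180      # look for the rise over ~3 minutes
SAMPLE_PERIOD_S = 0.5    # the half-second stream from the monitor

def detect_slow_ramp(power_stream):
    """Flag a sustained multi-minute rise in the half-second power
    stream -- the kind of signature a fast-transition trigger would
    miss, so it has to be found in the stream itself (possibly well
    after the fact, hence 'Backfilled' detections)."""
    n = int(RAMP_WINDOW_S / SAMPLE_PERIOD_S)
    for i in range(len(power_stream) - n):
        window = power_stream[i:i + n]
        rise = window[-1] - window[0]
        # Require a mostly monotonic climb, not just noisy endpoints.
        if rise > RAMP_MIN_W and np.all(np.diff(window) > -50.0):
            return i * SAMPLE_PERIOD_S  # seconds into the stream
    return None

# Toy stream: 2 minutes of idle load, then a 2 kW ramp over 3 minutes.
idle = np.full(240, 300.0)
ramp = np.linspace(300.0, 2300.0, 360)
print(detect_slow_ramp(np.concatenate([idle, ramp])))
```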