Other - Usage Percentage Totals

During “noisy” times, I’ve noticed usage from detected devices showing up within the “Other” category. Is there any method that can clearly identify the “true” percentage of Other usage (omitting detected devices)? This category currently represents about 25% of my total usage.

I don’t think it’s a correct representation of non-detected usage. My assumption is the percentage is much lower. I’d greatly appreciate the community’s insight and proposed actions that I should take.

I don’t think the bubbles can be 100% trusted for this. I’ll often do something like turn on the stove (which is usually detected reliably), and if I watch the bubbles, I might never see the stove show up; instead I’ll see the Other bubble grow and shrink as the burner cycles on and off. If I check the Devices tab at the time, the stove won’t show as on. But if I go back to the Devices tab several hours later and look at the stove, I’ll see usage on its power meter and it’ll tell me “stove has been off for 4h 22m.” I’ve seen this happen with other devices too. So I think that sometimes, even though the power shows up as Other at the time, it still gets categorized later.


I’m curious about the various states that exist in Sense’s device detection mechanism. The bubbles are a live, real-time feature, whereas the per-device usage and power meter data can, I imagine, be updated after the fact. Consider the following:

  1. Refrigerator turns on. The device detection works for the "on" signature, a bubble appears for the refrigerator, and the device power meter shows the power Sense thinks the refrigerator is using (including the unique "on" signature). Half an hour later, the refrigerator turns off, but Sense doesn't immediately notice this "off" event, so the refrigerator bubble still exists and the refrigerator power meter still shows power being consumed. Maybe an hour later, Sense "detects" that the refrigerator is "off", even though it's been off for an hour, and the bubble goes away. Perhaps now, or at a later time, Sense realizes after analyzing the historical power data that the refrigerator actually turned off an hour earlier, and updates the refrigerator's usage and power meter to correctly show only half an hour of consumption.
  2. Refrigerator turns on. The device detection does not work immediately, and the 'Other' bubble appears or grows to account for the refrigerator's uncategorized power. Later, Sense detects the device, perhaps after the refrigerator turns off, or maybe while it is still on. The bubbles update accordingly, and the refrigerator's usage and historical power meter data are retroactively updated.
Is Sense's device detection looking primarily at 'live' power data signatures, but also looking at 'past' power data samples to update/change the device detection? If so, can both 'live' and 'past' data be used to detect both device 'on' and 'off' events?
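
To make example 1 concrete, here's a toy Python sketch of what a retroactive correction could look like, assuming Sense keeps a downsampled per-device power history it can re-scan later. The data layout, thresholds, and function names are all hypothetical, not anything Sense has documented.

```python
from dataclasses import dataclass

@dataclass
class DeviceInterval:
    """A span of time Sense has attributed to one device (times in seconds)."""
    device: str
    on_time: float
    off_time: float     # initially the (late) live-detected off time
    watts: float        # plateau wattage shown for the device

def correct_off_time(interval, history, idle_watts=5.0):
    """Re-scan the downsampled history attributed to this device and pull
    off_time back to the first sample where the load appears to vanish.
    `history` is a list of (timestamp, watts) pairs -- a stand-in for
    whatever Sense actually stores."""
    for t, w in history:
        if interval.on_time <= t <= interval.off_time and w < idle_watts:
            interval.off_time = t   # the device really shut off here
            break
    return interval

# Example 1 in miniature: live detection thought the fridge ran for two
# hours, but the stored history shows its load disappearing after 45 minutes.
fridge = DeviceInterval("refrigerator", on_time=0, off_time=7200, watts=120)
history = [(0, 120), (900, 118), (1800, 121), (2700, 2), (3600, 1), (7200, 0)]
print(correct_off_time(fridge, history).off_time)   # -> 2700
```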

Although I’ve been a Sense customer for some time, the underlying detection process remains outside my understanding… I’ve become accustomed to relying on “trust” and “patience” that the machine-learning algorithms are functioning as intended! My real and tangible outcomes are where I focus my attention…

Your replies at this point really helped me to understand why my monthly Sense Reports are always notably higher than my actual energy bill.

To get some perspective: AFAIK, the “live” graph you see (the Mains Power Meter, not a Device meter) is a combination of historical data pulled from the cloud (clearly) and “realtime” data streamed directly from the local Sense device for the most recent samples (I’m not sure how big that local cache is). The graph is downsampled (to 1/2 s) to keep the rendering (processing) manageable, I believe.
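
Just to illustrate the downsampling idea: the 1/2 s bucket size matches what I see in the graph, but the averaging method below is my guess, not Sense’s actual pipeline.

```python
def downsample(samples, bucket_seconds=0.5):
    """Average (timestamp, watts) samples into fixed-width buckets.
    A toy stand-in for whatever resampling Sense really applies before
    the data reaches the app's Power Meter graph."""
    buckets = {}
    for t, w in samples:
        key = int(t // bucket_seconds)
        buckets.setdefault(key, []).append(w)
    return [(key * bucket_seconds, sum(ws) / len(ws))
            for key, ws in sorted(buckets.items())]

# Four raw samples spanning one second collapse to two half-second points.
print(downsample([(0.0, 100), (0.2, 110), (0.5, 400), (0.9, 420)]))
# -> [(0.0, 105.0), (0.5, 410.0)]
```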

Device detection, meanwhile, is based on matching ML models generated from the historical data. Model generation and revision happen on a much slower timescale (“on the cloud”) vs the very high sample-rate model matching (against current/phase/voltage) that, out of necessity, happens at the edge, i.e. is initiated by the Sense device itself. I assume models are stored locally and matched accordingly.
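
If the locally stored models really do boil down to something like feature templates plus a match threshold, edge matching could conceptually look like this toy Python sketch. The device names, features, and thresholds are invented for illustration; the real matcher is surely far more sophisticated.

```python
import math

# Hypothetical locally stored models: one feature template per known signature.
MODELS = {
    "refrigerator_on": {"delta_watts": 130.0, "phase": 0.15, "rise_ms": 40.0},
    "stove_burner_on": {"delta_watts": 1500.0, "phase": 0.02, "rise_ms": 15.0},
}

def match_transition(features, threshold=1.0):
    """Return the best-matching model name for an observed transition, or None.
    Uses a crude normalized distance over the template's features."""
    best_name, best_dist = None, float("inf")
    for name, template in MODELS.items():
        dist = math.sqrt(sum(
            ((features[k] - template[k]) / (abs(template[k]) + 1e-9)) ** 2
            for k in template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

print(match_transition({"delta_watts": 1480.0, "phase": 0.03, "rise_ms": 16.0}))
# -> stove_burner_on
```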

So essentially both “live” and “past” data are used in device disaggregation (“detection”) on different timescales. Witness that you can actually get retroactive detection to a certain extent. “Live” detection has more potential accuracy because the datastream stored in the cloud is heavily downsampled.

I’m referring to the inference process for a device, not the training process…so after a device detection model has been created.

I assume the little orange Sense box in the panel is doing neither the training nor the inference; rather, I assume it is just sensing the voltage/current (and getting samples from any smart plugs on the LAN) and sending the raw data to the cloud, where the training and inference are performed, likely with some mechanism to compress the high sample-rate data. I have no idea how long Sense would keep the high sample-rate data.

I assume the power meter data that the Sense app (iOS, web, etc.) gets comes from the cloud, and only as a downsampled version of the data.

So you’ve witnessed “retroactive detection to a certain extent”; I think I’ve witnessed my example 1, where a device had actually already turned off but Sense hadn’t yet realized it at the time, and later, when I checked the app again for the same time range, Sense correctly showed the device as pulling no power. I don’t think I’ve witnessed my example 2 before.

“Retroactive detection” manifests itself as data backfill. You can imagine this happening if the historical data is of sufficient resolution to go back and search for model matches … mileage will vary depending upon the nature of the model.
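
In code terms, backfill could be as simple as re-running a coarser matcher over the stored downsampled history whenever a model is created or revised. Again, a conceptual sketch only, with invented thresholds:

```python
def backfill_detections(history, step_watts, tolerance=0.15):
    """Search a downsampled (timestamp, watts) history for step changes that
    look like a newly learned device signature of roughly `step_watts`.
    Returns timestamps of candidate 'on' events to backfill."""
    hits = []
    for (t0, w0), (t1, w1) in zip(history, history[1:]):
        delta = w1 - w0
        if abs(delta - step_watts) <= tolerance * step_watts:
            hits.append(t1)
    return hits

# A ~1500 W step at t=2700 in otherwise flat data gets flagged for backfill.
history = [(0, 300), (900, 310), (1800, 305), (2700, 1820), (3600, 1815)]
print(backfill_detections(history, step_watts=1500))   # -> [2700]
```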

The local Sense box, from what I understand, is actually doing what you call inferencing – let’s call it detection. The model, built on cloud data, is stored locally in the Sense monitor’s memory, I’m guessing, so the high-frequency sample data can be compared and matched. Without that, the dependency on the cloud would likely add latency beyond what’s workable.

@james_reilley,

This is just speculation based on my experiences plus a view of what’s exposed via Sense. I think there are really two levels of ML inference going on with Sense, one that relies integrally on the Sense monitor and another that relies solely on the mothership.

One foundational thought: due to limited home upstream data rates, the Sense monitor can’t send all of the 4M samples/sec it is capable of capturing back to the Sense mothership. Therefore the Sense monitor has to do some very intelligent processing. I suspect that Sense sends a few forms of data back to the mothership:

  • A half-second stream of power meter data that makes its way to our app.
  • A stream of data every 2 seconds from each of the smart plugs.
  • A deep view of every short, fast transition that meets certain thresholds set by Sense. The transitions are tagged in the Power Meter (examples below). When I say deep, I mean that the Sense monitor sends a bunch of parameters/features related to that transition (current, voltage, phase, and timing/transition data for both mains). Those features are fed into a set of models for detection. If there is a “match” (really, the model triggers), then Sense names it and logs that detection.

  • But Sense also has to pay attention to slower transitions that don’t get tagged by the monitor. One example of that is the charging of an EV, which ramps up over several minutes. In that case, Sense needs to use the 1/2 second stream of power data to do detection, rather than relying on transitions coming from the monitor (see the toy sketch after this list). In some cases I have seen these detections show up as a nearly immediate bubble, but in others I have seen the detection appear well after the fact, in the form of “Backfilled” detections.
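
For the slow-ramp case in the last bullet, detection presumably has to work on the half-second stream rather than on tagged transitions. Here is a toy version of what an EV-style ramp rule might look like; the window and wattage thresholds are made up.

```python
def detect_slow_ramp(watts, samples_per_sec=2, window_s=240,
                     min_rise_w=5000, max_step_w=200):
    """Flag an EV-charger-like ramp in a half-second power stream: a large
    total rise over `window_s` seconds made up of individually small steps,
    so it never triggers the fast-transition tagging. Thresholds are
    illustrative only."""
    n = int(window_s * samples_per_sec)
    for i in range(len(watts) - n):
        rise = watts[i + n] - watts[i]
        small_steps = all(abs(watts[j + 1] - watts[j]) <= max_step_w
                          for j in range(i, i + n))
        if rise >= min_rise_w and small_steps:
            return i / samples_per_sec          # ramp start time, in seconds
    return None

# A synthetic ramp that climbs ~15 W per half-second sample (300 W -> ~9 kW).
ramp = [300 + 15 * k for k in range(600)]
print(detect_slow_ramp(ramp))                   # -> 0.0
```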

@kevin1, Your speculation is as good as mine. Your first 3 assumption bullets provide more details of what I meant by “sending the raw data to the cloud where the training and inference is performed, likely with some mechanism to compress the high sample rate data”.

I would assume that many of Sense’s customers are techie nerds who would appreciate more details on the implementation, and who could help generate more interest: blogs/articles, integrations, hacking… more customers. Anyway, I understand the desire/need to keep technology secrets secret.

You might want to watch this video with excerpts from an NDA webinar Sense gave last year. There are some clips and screenshots in there on how they do clustering when it comes to categorization of devices, and one screenshot reveals a couple of features (phase0 and power). About 3 min in.
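
If the phase0/power features in that screenshot are representative, the clustering step could look roughly like k-means over per-transition feature vectors. This is a generic sketch, not what the webinar actually showed; the data points and cluster count are invented.

```python
# Cluster (phase0, power) transition features into candidate device groups.
from sklearn.cluster import KMeans

transitions = [
    # (phase0, power_watts) for a handful of observed "on" transitions
    (0.02, 1510), (0.03, 1495), (0.02, 1502),    # looks like one device
    (0.14, 128),  (0.16, 131),  (0.15, 125),     # looks like another
]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(transitions)
print(labels)   # e.g. [0 0 0 1 1 1] -- two candidate device clusters
```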


Indeed. Of course it isn’t impossible that Sense could modify the downsampling rates for the “juicy bits”, and that the rates could also change according to available processor cycles.

Something to ponder is the artificial plateauing of device signatures (as seen in the graphs) and what that means: the Sense device model has determined that an averaged wattage is adequate to display … it would also imply that, until there is an “off” recognition, the network (and processor) load could be minimized for that device.
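
The plateau idea in miniature: once a device is considered “on”, the app could just render a single averaged wattage until an “off” event arrives, instead of streaming per-device detail. That’s my reading of the graphs, not documented behavior; the helper below is hypothetical.

```python
def render_device_trace(events, avg_watts, end_time):
    """Turn a sparse list of ('on'|'off', timestamp) events into the flat
    plateau trace a device power meter seems to show: avg_watts between an
    'on' and the next 'off', zero otherwise. Times are in seconds."""
    trace, on_since = [], None
    for kind, t in sorted(events, key=lambda e: e[1]):
        if kind == "on" and on_since is None:
            on_since = t
        elif kind == "off" and on_since is not None:
            trace.append((on_since, t, avg_watts))
            on_since = None
    if on_since is not None:                     # still on: plateau runs to 'now'
        trace.append((on_since, end_time, avg_watts))
    return trace

print(render_device_trace([("on", 100), ("off", 1900)], avg_watts=120, end_time=3600))
# -> [(100, 1900, 120)]
```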

