(This is just a curiosity of mine, and can be considered low priority)
Say the Sense team makes a change to an algorithm that's believed to improve how the Sense monitor discerns individual devices' power usage from the main power signature. Does it then rerun some portion (say, 3 months back) or all of the previously recorded main power signature history through this improved algorithm, and update the Stats, Usage, and Power Meter information linked to each device accordingly?
It feels like there are reasons this would be the case…
- Usage and power meter information for a device wouldn’t be consistent over time if there were differences in the detection and/or calculation method used for certain periods of time.
- I’ve also noticed that once a device is detected, the power meter data available for that device usually predates when the detection was reported by at least a few months. This seems to be evidence of past data being re-reviewed through what is now believed to be a more intelligent ‘lens’ (or maybe the start of this past data indicates when Sense first believed it had found a discrete device, but the detection wasn’t reported to the user until a certain confidence level was reached months later).
- In general, this just feels like the proper approach… if you believe your algorithm is smarter now than it was when a dataset was initially run through it, why not reprocess that dataset through the improved algorithm?
There are also obvious reasons why changes to Sense algorithms would only be applied to data collected going forward…
- Practical limitations on processing power, especially as the Sense user base grows and current users’ data histories grow longer.
- Depending on how significant and frequent algorithm changes are, reprocessing may make the device-level Stats, Usage, and Power Meter information erratic, with historical values jumping around after each update. If the changes to the algorithm were in fact an improvement, those erratic shifts would technically mean the metrics are becoming more accurate. Even so, some users who monitor these metrics may not like the changes - especially if any action is driven by them.
- Avoiding the need to support/troubleshoot unexpected issues that arise from reapplying updated algorithms to past data.
So what do we know about this? Maybe the answer is a combination of both; i.e., whether the algorithm reexamines past data, which subset(s) of past data are reexamined if not all (e.g., it may only look at A/C devices if the algorithm update is specific to A/C detection), and how far back it looks may all depend on the specifics of the algorithm change(s) being made.
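To make the "combination" idea concrete, here's a purely hypothetical sketch of what scope-dependent reprocessing could look like. Nothing here reflects how Sense actually works internally - the `AlgorithmChange` fields, device types, and `reprocess_plan` helper are all assumptions invented for illustration:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional, Set, List, Dict

@dataclass
class AlgorithmChange:
    """Hypothetical description of a detection-algorithm update."""
    affected_device_types: Set[str]   # e.g. {"ac"}; empty set = all device types
    lookback: Optional[timedelta]     # None = reprocess the entire history

def reprocess_plan(change: AlgorithmChange, devices: List[Dict]) -> List[str]:
    """Return the device IDs whose history would be re-run.

    Scope depends on the specific change: a targeted update (e.g. A/C
    detection only) touches only matching devices, while a general update
    touches everything.
    """
    if not change.affected_device_types:
        return [d["id"] for d in devices]
    return [d["id"] for d in devices
            if d["type"] in change.affected_device_types]

devices = [
    {"id": "ac-1", "type": "ac"},
    {"id": "fridge-1", "type": "refrigerator"},
]

# A targeted A/C-detection update that only re-runs the last ~3 months
change = AlgorithmChange(affected_device_types={"ac"},
                         lookback=timedelta(days=90))
print(reprocess_plan(change, devices))  # → ['ac-1']
```

Under this framing, the `lookback` window would bound the processing cost per update, which lines up with the practical-limitations concern above.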