I’ve just pulled the data again for all of Jan 2017 (the balance beyond the data I pulled on Jan 22) and wanted to share what device learning performance looks like in my home. Note: Since Sense calculates its percentages based on the mix of devices used each day, this metric reflects the total percentage of daily electrical load that is learned, always on, or unknown. It does not represent the number of devices that have been learned, are always on, or are unknown.
In January, load from learned devices dropped below 10% for the first time since late Oct 2016.
This, I’m told, is due to high-load devices disappearing after quality thresholds in the ML models changed, and to high-load devices no longer having their expected load detected. In terms of trends over time, load from learned devices has been trending down since Nov 2016.
At the same time, I’ve seen rolling averages for Always On growing through this month, from a low of 9.5% to a high of 14.8%.
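For anyone who wants to reproduce this kind of rolling average once the daily percentages are transcribed, here’s a minimal Python sketch. The numbers and the 7-day window are illustrative assumptions on my part, not my actual data or any window Sense uses:

```python
# Illustrative daily "Always On" percentages, transcribed by hand.
# These values are made up for the example.
always_on_pct = [9.5, 10.1, 10.8, 11.2, 12.0, 12.6, 13.1, 13.9, 14.4, 14.8]

def rolling_average(values, window):
    """Trailing rolling mean; the window is shorter at the start of the series."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(round(sum(chunk) / len(chunk), 2))
    return out

print(rolling_average(always_on_pct, 7))
```

Excel’s AVERAGE over a sliding range does the same thing; this is just handy if you ever script the transcription.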
Talking about learned device load is just part of the picture. Drilling into the devices that Sense considers “learned” for Jan 2017, and into which of those are correctly learned and trusted: of the 13.3% of total device load that Sense has identified, 4.4% is trusted and 8.9% is untrusted.
The biggest impact is the loss of the Hot Water Heater, which was nuked by an ML model quality threshold change, and the Kitchen Floor, whose load stopped being detected on Jan 8th even though it’s been on for the whole month.
For my top outcome of identifying power hogs, it’s hard to see this data in much of a positive light. I am less able to take informed action to reduce my electricity bill than I have been since the end of October.
Very interesting! If you don’t mind me asking, how are you pulling this data, and what are you using to analyze it? What determines whether a device status is trusted or untrusted?
It’s disappointing to see the detection capabilities regressing some, but maybe they’re trying out different algorithms on different customers to get a sense for what works best and what doesn’t? I’d love to hear from the Sense team why previously learned devices are no longer “learned”.
I’m pulling the data from the Sense app using Usage > Trends > Day and I’m using Excel 2016 to record and chart it.
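If anyone wants to do the same without Excel, the manual transcription also works with a few lines of Python. A sketch, assuming one hand-typed row per day with the three percentages from Usage &gt; Trends &gt; Day (the column names are my own invention, not anything Sense exports):

```python
import csv
import io

# Hypothetical hand-transcribed data; values are illustrative only.
raw = """date,learned_pct,always_on_pct,unknown_pct
2017-01-06,12.1,9.5,78.4
2017-01-15,10.4,12.2,77.4
2017-01-29,9.8,14.8,75.4
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Sanity check while transcribing: the three buckets should cover ~100%
# of each day's load, so a typo shows up immediately.
for row in rows:
    total = sum(float(row[k]) for k in ("learned_pct", "always_on_pct", "unknown_pct"))
    assert abs(total - 100.0) < 0.5, f"suspicious row: {row}"

avg_learned = sum(float(r["learned_pct"]) for r in rows) / len(rows)
print(f"Average learned load: {avg_learned:.1f}%")
```

The assert is the useful part: when you’re typing dozens of daily rows by hand, a bucket that doesn’t sum to roughly 100% is almost always a transcription error.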
Ah, gotcha. I’m assuming you’re just manually pulling it then? My app doesn’t have a data export option as far as I can tell.
What about “trusted” vs “untrusted”. Is that just your own subjective evaluation?
Originally, I was just tracking the metrics, but I quickly realized that Sense recognizing a device is just the first step in it becoming useful. Do we know what the device is? Is it all of the device? Is it combined with other devices? Does the load it reports for the device match its actual usage? Etc. 70% learned sounds great until you realize how much of the load comes from devices that we don’t trust are correct.
So yes, it required a subjective evaluation on my part to determine the overall quality and trustworthiness of the learned devices.
And a download option of either the raw data or the chart/devices would be really useful, rather than manually transcribing it! It can’t be that hard.
The download option is the fallback, a consolation prize. What we really need is device learning reporting across time in the Sense app itself. I’ve already filed a product wishlist item for it. If this is a feature you want, please go to the link and press the heart icon on the post.
Hey @brianmur - take a look at my two replies on this thread: Long Running Devices - #3 by HilarioAtSense. I think those will help answer some of your questions and observations. Your hot water heater being on almost all the time now sounds similar to the hot blower fan question in that thread. Any concerns about specific models degrading are something you can raise with our support team.
How did you get the raw data?
See my reply above from Feb 6th. I’m transcribing it by hand, and there is a wishlist request to make this easier.