5+ Years with Sense

Doing a little New Year’s cleanup and archiving here and decided to do a retrospective of the past 5+ years of data.

How Accurate Has Sense Been vs. my Utility - Mostly right on!

Here’s a view of hourly Net Usage data from Sense vs. my Utility (PG&E) Net Meter (NEMS 2.0). Perfect accuracy would put all readings right on top of the 45 degree line. Points below the 45 degree line represent hours where Sense sees less usage than my utility; points above, hours where Sense sees more.
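The above/below-the-line reading of the scatter plot can be sketched in a few lines of Python. The pairs here are made-up hourly values, not my actual export; only the comparison logic matters:

```python
# Made-up (utility, sense) hourly net-usage pairs in kWh - illustrative only.
pairs = [(1.50, 1.52), (2.00, 1.95), (0.80, 0.80), (-2.40, -2.45)]

# Below the 45 degree line: Sense sees less usage than the utility meter.
below = sum(1 for utility, sense in pairs if sense < utility)
# Above the 45 degree line: Sense sees more usage than the utility meter.
above = sum(1 for utility, sense in pairs if sense > utility)

print(below, above)  # prints "2 1" for these sample pairs
```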

When/Why was Sense grossly inaccurate?
There are about 4 regions of note in this graph that highlight different kinds of issues.

  1. Not Visible in graph - Sense outages that cover a whole hour. There’s no way to plot these because there is no data from Sense for these 159 hours. That means 0.26% of the data is missing.

  2. Loss of Sense Solar configuration - Sense started producing negative Total Usage.

  3. Sense partial hour dropouts - Since the data drops lasted less than an hour, Sense produced some data, but not a summation for the complete hour. Most of this occurred in 2018 and early 2019.

  4. Sense metering more - I can’t remember the origin of this issue. I’ll need to dig it up.

There’s also one more interesting thing buried in this chart - infrequent peak usage. The circled region shows that I only use more than 25kW a few times per year.

Error Distribution
If I assume my PG&E meter is the golden measurement and Sense is a secondary measurement, I can treat the difference (Sense measurement minus PG&E measurement) as the “error”. A histogram of the errors shows that almost all are very close to 0, with 99.9% being within +/- 100Wh of zero.
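As a minimal sketch of that error calculation, with made-up hourly readings standing in for the real Sense and PG&E exports:

```python
# Hypothetical hourly net-usage readings in Wh (illustrative values only).
sense = [1520, 980, -2450, 310, 7020]   # Sense net usage
pge   = [1500, 1000, -2400, 300, 7000]  # PG&E net meter (golden measurement)

# Signed error per hour: Sense minus PG&E.
errors = [s - p for s, p in zip(sense, pge)]

# Fraction of hours whose error falls within +/- 100 Wh of zero.
within_100 = sum(abs(e) <= 100 for e in errors) / len(errors)

print(errors)      # [20, -20, -50, 10, 20]
print(within_100)  # 1.0 - all of these sample hours are within the band
```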

A box plot is also a nice histogram-like way to view the distribution of errors. It simplifies the histogram of a Gaussian distribution (a pattern these errors seem to follow) into a simpler bar/box that is easier to digest and understand.

A box plot of the error distribution for each year shows that the distribution was wider and shifted toward the negative side of zero for 2017 and 2018, then got narrower and closer to median-centered on zero (better) after that - Sense improvements in data handling, mostly in reducing dropouts for me.
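The quartiles that define each year’s box can be computed with the standard library. The per-year samples below are invented to mimic the pattern described (wider/negative early, tighter/zero-centered later); real data would be the hourly differences grouped by year:

```python
import statistics

# Hypothetical per-year error samples in Wh - illustrative only.
errors_by_year = {
    2018: [-120, -80, -60, -40, -10, 0, 5, 10],   # wider, shifted negative
    2022: [-15, -10, -5, 0, 0, 5, 10, 15],        # tighter, centered on zero
}

for year, errs in sorted(errors_by_year.items()):
    # quantiles(n=4) returns the three cut points: Q1, median, Q3 -
    # the bottom, middle line, and top of the box in a box plot.
    q1, med, q3 = statistics.quantiles(errs, n=4)
    print(year, q1, med, q3)
```

For the 2022 sample the median lands exactly on zero, matching the “median-centered on zero” shape of the later years.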

If I look at the error distribution by time of day, there is also a definite trend. Errors are larger and positive during the daytime near midday, then taper off in variation and go negative through the night.
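Grouping errors by hour of day is a simple bucketing exercise. The sample pairs here are invented to echo the trend above (positive midday, negative overnight):

```python
from collections import defaultdict

# Hypothetical (hour_of_day, error_Wh) pairs - illustrative only.
samples = [(0, -20), (0, -10), (12, 60), (12, 80), (18, 10), (18, -5)]

# Bucket the errors by hour of day.
by_hour = defaultdict(list)
for hour, err in samples:
    by_hour[hour].append(err)

# Mean error per hour; a real analysis might also track the spread per bucket.
mean_by_hour = {h: sum(v) / len(v) for h, v in sorted(by_hour.items())}
print(mean_by_hour)  # {0: -15.0, 12: 70.0, 18: 2.5}
```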


D*mn, that was an impressive analysis of your last five years! Thanks for sharing.

Any analysis or commentary on device detection, and what you ultimately ended up doing for device level stats? Did you end up going with individual smart plugs for the appliances that mattered to you?


Thanks @MikeekiM,
Between a little luck with the devices in my house and judicious use of smart plugs and other integrations, I have moved from an Other that was about 55% of my usage in 2018 to about 20% nowadays. Part of the improvement was thanks to Sense learning enhancements, and part was attributable to a good strategy for deploying smart plugs (learned from making mistakes).

The 2023 device breakdown gives you a view of the biggest device users, including 8 of the biggest, which are all native detections except for 1. The top 10, including the 21% Other, account for over 80% of my total usage.

I’m thinking that the Ecobee integration might have helped learn the AC units. Sense’s Tesla detection has improved over time.

You can kind of see the progressively lower Other in this time series. I replaced a faulty Sense monitor in April 2019, so my 2017-early 2019 data isn’t visible on my account anymore. But my 2018 Other was 55%.




