The first 3 months

I have been a Sense user for 3 months now. Overall, I am extremely satisfied! Technical support is superb and the user interface / app is beautiful. In the interest of sharing my Sense story with others, the only thing that really needs discussion is detection efficiency. As we all know, this technology works best when it can identify individual devices.

Sense has natively detected devices 18 times. Of those, 9 are working well. What about the others? Two were never heard from again, so they must represent a confused algorithm. One started getting conflated with something else, so technical support suggested that I delete it and wait for re-detection (which did occur, and the new detection is better than the original). One detected device was only picking up 30% of its usage, so I put that device on an HS110 to get a better view. I also retired several of the physical devices that had been detected, which makes the statistics look worse but has nothing to do with Sense technology.

Those 9 native devices pick up about 40% of my total usage. Sense is intentional about detecting those devices that have the most impact, and they have achieved their goal in my case. Since I wanted a more complete picture, I also invested in several TP-Link smart plugs with energy monitoring capability. Those devices are also located to provide as much impact as possible. The graph below shows how much of each day's total usage is accounted for by each detection method over these first three months. The blue color represents native devices, whose history is applied retroactively in certain cases. Uncolored (white) areas represent usage not attributed to any device.
[Image: Sense-1st3 — daily usage accounted for by each detection method]

I categorize my devices by their function in my house. Air conditioning is the largest usage by far, as my stove and hot water both run off natural gas. The graph below shows weekly averages over this same period.
[Image: Usage-1st3 — weekly average usage by category]

I feel fortunate to know where 90% of my electricity is being used, especially compared to other users for whom Sense has not been so informative. My intention in sharing is not to boast, but rather to congratulate Sense on a job well done. This post may also help new users wait patiently for results, which do not happen instantly. Finally, this post shows how important integrations are to my Sense experience. One could call it a weakness that Sense needed this much help from integrations to reach 90% accounting, but I consider it a strength that they have made a flexible product that can work with other technology to fill in the gaps. Teams are stronger than the sum of their individual players.


Hey there @jefflayman! We love hearing feedback like this. If you’re a fan of Sense, there are a few ways you can help us share your story and experience.

  • Sense Saves: Have you saved money with Sense? Posting your story to Sense Saves lets folks outside the Community see how you saved with Sense!

  • Review Sense on Amazon: If you’re a fan of Sense, this is one of the biggest ways you can help more people find us!

I forgot to share another graphic from my first 3 months. The image below shows how frequently native detections occurred for me. It is plotted on a logarithmic scale, since most detections occurred near the beginning, with longer and longer gaps between new ones. Some detections came in clusters of two or three at a time. The vertical value in the plot is the number of days since the previous detection divided by the number of detections in that cluster.
[Image: detection — days between native detections, log scale, with power trend line]
A trend line is plotted with the data. I found that what Excel calls a power trend line fits best, and its equation is displayed in the graphic. That equation implies far more precision than is warranted, since the coefficient of determination (R-squared, a measure of fit quality) is not very close to one. Even if it were a perfect fit for my data, each Sense experience is unique.
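For anyone who wants to reproduce this outside Excel: a "power" trend line y = a·x^b is simply a linear least-squares fit in log-log space. Here is a minimal sketch; the data points are invented for illustration (an exact power law, so the recovered coefficients are easy to check), not my actual detection history.

```python
import math

# Illustrative data only (not my real detection history): x is the
# detection number; y is the days-since-previous-detection metric.
# y follows an exact power law y = 2 * x**1.5, so the fit should
# recover a = 2 and b = 1.5.
xs = list(range(1, 10))
ys = [2 * x ** 1.5 for x in xs]

# Excel's "power" trend line y = a * x**b is a straight line in
# log-log space: ln(y) = ln(a) + b * ln(x). Fit it with ordinary
# least squares on the logged data.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(xs)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum(
    (u - mx) ** 2 for u in lx)
a = math.exp(my - b * mx)

print(f"fit: y = {a:.2f} * x^{b:.2f}")  # prints: fit: y = 2.00 * x^1.50
```

With real, noisy data the same code works unchanged; the fit just stops being exact, which is what R-squared then measures.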

I know the following will be wrong for a lot of people, but I offer it anyway since some idea is better than none. Hopefully it is like the weatherman, whom we still heed even though his forecasts are often wrong. The trend line equation above can be integrated and rounded; integrating converts the fitted days-between-detections curve into a cumulative detection count. The result is an equation that expresses the number of native detections, D, expected after n days: D = 10n^0.2 - 10

For example, after 100 days this equation predicts 15 native detections (10 × 100^0.2 − 10 ≈ 15). After 8 days it predicts 5 (10 × 8^0.2 − 10 ≈ 5).
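Those two examples are easy to check directly. A tiny sketch (the function name is mine, purely for illustration):

```python
def expected_detections(days: float) -> int:
    """Guideline only: D = 10 * n**0.2 - 10, rounded to the nearest
    whole detection. Every Sense installation will differ."""
    return round(10 * days ** 0.2 - 10)

print(expected_detections(100))  # -> 15
print(expected_detections(8))    # -> 5
```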

Let me repeat that this is only a guideline. The above equation doesn’t even fit my own experience exactly. Please expect that your mileage will vary.


To continue the story started above, here is the report after 1 year. In general, the trends have continued. The prediction equation given above for device detections said I would get 8 new devices, which is what happened. As before, half of those provide useful data. And I still know where 90% of my electricity goes, thanks to Sense and its integrations.
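For the curious, that figure of 8 comes straight from the guideline equation: the expected total at day 365 minus the expected total at day 90 (roughly the 3-month mark; the day counts here are my approximations) is about 8. A quick check:

```python
def expected_detections(days: float) -> float:
    # Guideline from the 3-month post: D = 10 * n**0.2 - 10
    return 10 * days ** 0.2 - 10

# New detections expected between roughly day 90 (the 3-month mark)
# and day 365 (one year).
new_detections = expected_detections(365) - expected_detections(90)
print(round(new_detections))  # -> 8
```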


I eliminated the Devices category and added 4 new ones: Lights, Laundry, TV and Oxygen. The Oxygen category consists of a single device, which my family needs due to COVID. Although it uses a lot of electricity, we are happy to have a home oxygen concentrator since it allowed my mother-in-law to leave the hospital. She was there for two months, and was quite weak upon her release. She is much better now, but still uses oxygen at night.

Week 44 is marked NO DATA. The server at Sense had a "monitor data pipeline performance issue" during the last week of October, with multiple customers affected. Although Sense reported partial data to me that week, some of it proved incorrect on examination. Therefore, I discarded even the partial data and left the whole week blank.

The TV category has three devices, all integrations. At week 46, I got around to putting the living room TV on an HS110. I was surprised by how much energy usage it recorded, as I did not expect the TV to be so inefficient. At week 18 of 2021, I replaced it with a new model with the same diagonal measurement, and saw energy usage for that device drop about 75%! Based on 24 weeks of TV usage data, I estimated energy usage from the old TV during the period prior to connecting the HS110. Those estimates are included in the above chart as if they were measured values. The chart below does distinguish between measured and estimated values.
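That back-fill was nothing fancy: average the measured weeks, then repeat the average for the unmeasured ones. A sketch with invented numbers (my real HS110 readings are not reproduced here):

```python
# Hypothetical HS110 readings for the old TV, in kWh per week.
# These numbers are made up for illustration; my real readings differ.
measured_weeks = [11.8, 12.4, 10.9, 12.1, 11.6]
avg_kwh = sum(measured_weeks) / len(measured_weeks)

# The plug went on at week 46, so weeks 1-45 get the average as an
# estimate in place of a measurement.
estimated = [avg_kwh] * 45

print(f"{avg_kwh:.2f} kWh/week back-filled for {len(estimated)} weeks")
```

One could weight the average by season or viewing habits, but for a single TV the flat average was close enough for a chart.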

The HVAC category accounts for 33% of total usage during this first year. During the summer (weeks 27 through 38), it accounted for over 50% of usage! Since the 3 month summary above was written near the end of summer, keeping Other to 10% of total usage was easier then. As winter came on (I have gas heat), I realized there were things still to learn. I purchased 3 of the new KP115s to help track them down. I now have a total of 18 TP-Link devices, which is almost at the limit of 20 supported smart plugs. I also embarked upon an experiment to measure lighting using Philips smart LED bulbs. You may read details of that experiment at Hue indicator bulb. That experiment was successful in identifying much of the remaining Other usage.


The above picture illustrates how important integrations are in reducing Other. They have measured just as much energy as has Sense itself! Without Sense, of course, none of that energy usage information from Philips and TP-Link would have been recorded, so I am still very happy with Sense. @NoHoGuy reported in his 1 year Observations that Sense alone was able to get Other down to 30%, which is different from my experience. The final key for reducing Other in my case is manual estimates. They are based on past measurements as described for the TV, or based on a traveling smart plug as frequently suggested by @kevin1.
