I finally had a chance to plot my AC cooling runtimes vs. Cooling Degree Days. As I suspected, outside temperature is only one factor in the equation. The length of sunshine during the day also plays a strong role - months with more sun show bigger slopes (runtime / degree day). I might try plotting against CDDs and my daily solar generation to get a better picture. Thanks for your encouragement.
The same charts, plotted with solar production as the color. Solar energy production should be a great proxy for solar heating of the stucco on the sides of my house! Looks about right, with the slope increasing with increasing solar energy (except in a few cases).
I would be interested to see your data presented in the same form that @shanefinn03 and @dcdyer presented ours: ‘Total kWh for all A/Cs’ on the y-axis vs. ‘CDD index’ on the x-axis (one graph, one slope). Could you also provide the R² correlation and slope values? That would allow us to compare our analysis with yours. My thermostats do not provide runtimes. I am still impressed at the amount of data you have collected and the detailed data analysis you provide. Thanks, Don
I held back on using Sense identified energy because I saw so much conflation of Sense devices depending on the season - I have 5 different identified Sense AC devices that all correlate to my Ecobee runtimes at different points in the year:
Since Ecobee runtimes during cooling are far more accurate than “batched together” Sense energy numbers, and since I have simple 1-stage AC units, what I’ll do is multiply runtime by what I think to be the actual power usage of each unit (based on rated numbers and Sense numbers). I think that will give the most accurate view in energy vs. CDD comparisons. Then I’ll do the regression graphs and linear model fitting.
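For anyone who wants to replicate this, here’s a minimal pure-Python sketch of the runtime-to-energy conversion and the least-squares fit. The day records are hypothetical; the per-unit kW figures echo the 4.7kW / 3.5kW estimates discussed in this thread:

```python
# Sketch: convert daily 1-stage AC runtimes to energy, then fit energy vs. CDD.
# Day records below are made up; per-unit kW values follow the estimates
# in the thread (lower unit ~4.7 kW, upper unit ~3.5 kW).

def fit_line(xs, ys):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

UNIT_KW = {"lower": 4.7, "upper": 3.5}   # estimated draw per unit

# hypothetical records: (CDD, lower runtime hours, upper runtime hours)
days = [(2.0, 0.5, 1.1), (5.0, 1.8, 2.9), (8.0, 3.1, 4.6), (11.0, 4.4, 6.2)]

cdd = [d[0] for d in days]
kwh = [d[1] * UNIT_KW["lower"] + d[2] * UNIT_KW["upper"] for d in days]

slope, intercept, r2 = fit_line(cdd, kwh)
print(f"slope = {slope:.2f} kWh/CDD, intercept = {intercept:.2f}, R^2 = {r2:.3f}")
```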
ps: My Ecobee’s are more accurate for cooling, but I have a furnace issue that renders my downstairs Ecobee far less accurate during the heating season. My cold air returns are too small, leading to the cutout switch turning the furnace off after about 5 minutes of runtime, cutting power to the Ecobee, and leading to a subsequent reboot.
Here’s my assessment based on the methodology I outlined. I estimated my lower unit at 4.7kW and upper unit at 3.5kW. The slope numbers are in the same ballpark. As I expected, the upper number has better correlation since we cool the bedrooms more consistently than the downstairs, which has better airflow and stays cool much longer into the day. Most of our AC usage upstairs is from 1pm onward. Maybe 3pm or later for the downstairs unit.
@kevin1 I was hoping that you would post one graph with one line that added both your “downstairs” and “upstairs” total daily kWh vs. CDD index. (A whole house consumption comparison). I forced my linear calculation thru ‘zero’. From your graphs, it appears that your home is more energy efficient than mine. I see a lot of daily values that are 0 kWh usage so I am guessing that you turn off your A/C units when you are not home.
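For reference, forcing the fit through zero has a simple closed form: the slope is Σxy / Σx². A sketch with hypothetical (CDD, whole-house kWh) pairs:

```python
# Sketch: least-squares line forced through the origin (zero intercept).
# For y = m*x with no intercept term, m = sum(x*y) / sum(x*x).
# Data points below are hypothetical.

def fit_through_origin(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

cdd = [2.0, 5.0, 8.0, 11.0]
kwh = [7.4, 17.9, 29.1, 39.8]

m = fit_through_origin(cdd, kwh)
print(f"forced-zero slope = {m:.2f} kWh/CDD")
```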
Thanks for straightening me out. Here’s the whole house plot. Our slope numbers are actually quite close - both 3.6 kWh / CDD. And my R²/correlation is surprisingly better as well. But a few additional thoughts / questions.
I’m going to need to redo this if I want to match up against your results. I used CDD65 while you used CDD75, because that was the default for the website I used (partially because that’s the typical US baseline standard). That would change my intercept, but not the slope. Sorry I didn’t read your explanation more carefully earlier - I should have noticed the difference when you were showing a smaller number of cooling degree days in Texas than I saw for the CA Bay Area microclimate I’m in.
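For anyone redoing the CDD numbers, here’s a sketch of the standard mean-temperature method with both base temperatures (the daily highs/lows are hypothetical):

```python
# Sketch: daily cooling degree days from high/low temperatures, computed at
# two base temperatures (65 F and 75 F). Temperatures below are hypothetical.

def daily_cdd(t_high, t_low, base):
    """Standard mean-temperature method: CDD = max(0, (Thigh+Tlow)/2 - base)."""
    return max(0.0, (t_high + t_low) / 2 - base)

days = [(88, 62), (95, 70), (78, 58), (101, 74)]   # (high F, low F)

for base in (65, 75):
    total = sum(daily_cdd(hi, lo, base) for hi, lo in days)
    print(f"base {base}F: {total:.1f} CDD")
```

Note that mild days that count toward CDD65 can contribute zero CDD75, so the two bases are not simply offset copies of each other.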
My AC units are not that efficient - 20 year old SEER 11.5 units. But the Ecobee thermostats do turn up AC threshold when sensors tell it that we’ve been out of that part of the house for 15 min or longer.
House is about 4000 sq feet, with reasonable insulation, but no attic fan. We keep the heating / cooling numbers at 68 degrees and 74 degrees when we are home, but we have someone home nearly all the time.
Our biggest cooling challenge is solar heating of the roof and stucco, not air temp or humidity. That’s why we need to cool even when the outside temperature might be below the indoor air temperature.
I’m up on the Wemo now. It was pretty easy. The granularity is also 1 sec, just like the HS-110.
It uses UPnP, but I didn’t bother as I already know its IP and port. Looks like the port is always 49153.
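For reference, here’s a sketch of reading power that way. The control URL, SOAPACTION, and the InsightParams field order come from community reverse-engineering of the Wemo Insight firmware, not official Belkin docs, and DEVICE_IP is a placeholder:

```python
# Sketch: read instantaneous power from a Wemo Insight over its (unofficial)
# SOAP interface on port 49153. Field layout of InsightParams follows
# community documentation; DEVICE_IP is an assumed placeholder address.
import urllib.request

DEVICE_IP = "192.168.1.50"   # assumption: your plug's LAN address

BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetInsightParams xmlns:u="urn:Belkin:service:insight:1"/>
  </s:Body>
</s:Envelope>"""

def parse_insight_params(params):
    """InsightParams is pipe-delimited; per community docs, field index 7
    is instantaneous power in milliwatts."""
    return int(params.split("|")[7]) / 1000.0   # watts

def read_watts(ip=DEVICE_IP, port=49153):
    req = urllib.request.Request(
        f"http://{ip}:{port}/upnp/control/insight1",
        data=BODY.encode(),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:insight:1#GetInsightParams"',
        },
    )
    xml = urllib.request.urlopen(req, timeout=5).read().decode()
    start = xml.index("<InsightParams>") + len("<InsightParams>")
    return parse_insight_params(xml[start:xml.index("</InsightParams>")])
```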
One really bad feature is that the Wemo always disconnects power and leaves it disconnected after an outage. Not good for sump pumps or anything critical. At least it comes back up on wifi.
Shouldn’t take too much to program a message to turn it on periodically, but we shouldn’t have to do that.
Still can’t create a topic…
I’m betting that the TP-Link HS-110 and Wemo Insight use the same power monitoring chip, although I know they use different WiFi / host chips. Will be interesting to see if they both give the same data results at 1 sec polling.
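For anyone who wants to run the same side-by-side comparison, here’s a sketch of polling the HS-110’s energy meter over its well-known port-9999 “autokey” XOR protocol (community-documented, not official TP-Link API; DEVICE_IP is a placeholder):

```python
# Sketch: query an HS-110's emeter over TCP port 9999 using the widely
# documented TP-Link "autokey" XOR obfuscation. DEVICE_IP is an assumed
# placeholder; the JSON command is the standard emeter get_realtime query.
import json
import socket
import struct

DEVICE_IP = "192.168.1.51"   # assumption: your HS-110's LAN address

def encrypt(plain):
    key, out = 171, bytearray()
    for b in plain.encode():
        key = key ^ b          # each ciphertext byte becomes the next key
        out.append(key)
    return bytes(out)

def decrypt(cipher):
    key, out = 171, bytearray()
    for b in cipher:
        out.append(key ^ b)
        key = b
    return out.decode()

def query_emeter(ip=DEVICE_IP, port=9999):
    cmd = json.dumps({"emeter": {"get_realtime": {}}})
    with socket.create_connection((ip, port), timeout=5) as s:
        s.sendall(struct.pack(">I", len(cmd)) + encrypt(cmd))
        length = struct.unpack(">I", s.recv(4))[0]
        data = b""
        while len(data) < length:
            data += s.recv(length - len(data))
    return json.loads(decrypt(data))
```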
HS-110 vs Sense vs Wemo
I got another HS-110, but it was a V1 instead of the V2. I should be getting the V2 soon. The V2 can do 220V, but the chipset is the same. That’s helpful.
With the firmware upgrade to 1.2.5 the HS-110 matches Sense much better, but it still isn’t 100%.
I put the 2 HS-110s on the sump pump and the dehumidifier and switched the Wemo back and forth, piggybacked on the HS-110s.
In the first plot I notice that:
1a. Wemo almost exactly matches Sense for the dehumidifier.
The HS-110 is now down about 25 watts from the Sense/Wemo numbers.
1b. Sense missed when the dehumidifier stopped once, but picked it up when it stopped the next time. In other instances (not shown) Sense comes back down between dehumidifier activities.
1c. It doesn’t notice the drop step where I guess just the fan is running.
1d. It missed all the sump pump activity.
On the second plot with the Wemo on the sump pump, I notice:
2a. The HS-110 is still spot on timing-wise with the sump pump even though I’m collecting on a 0.5s interval. I’m going to drop the issue of late reporting by the HS-110 for now. I no longer have any evidence of it.
2b. This time HS-110 and Wemo agree on the wattage while Sense is about 90 watts higher.
2c. Wemo takes a while to ramp up and down. The HS-110 and Sense both nail it.
Pulling the Wemo off the sump pump now before I forget to check it and flood the basement.
Finally figured out a way to locate most Sense monitor errors without comparing with power company results by using hourly Always On here:
@Kevin1 I did a data review on my “Always On” values for 2018.
In the first graph, you can see that my “Always On” increased over the year. I attribute this to more Smart plugs and switches having been added to the house.
In June we took a vacation and powered off all the laptops and devices (external disk drives) that are normally running and you can see a drop in the “Always On” values. There was even another small drop in the June time period because an electronics cooling fan controller failed and I lost a power supply to a security camera at the same time.
It appeared that there were more values that were outside the base-line during the first 6-months of the year, so I created 2 separate histograms.
There were 538 outliers in the 1st-half of the year versus 348 outliers in the 2nd-half of the year. It appears that SENSE is doing a better job of calculating the “Always On” values.
I do not think that I made any major changes to the devices that were plugged in (and Always On) that would have created this difference. I do see a cycling pattern in the data. It is possible that the SENSE programmers changed the “Always On” calculation. I expected to see a single bell curve in the histograms, but both charts show a double curve. I did not try to equate the ‘outlier data’ with missing downloads (or reject any data points).
You asked for some interesting data charts and I had not seen anyone publish their data in this type of presentation. Comments?
Thanks for publishing. A few thoughts from my perspective:
Take a look in the Power Meter at your outlying hours on the low end. I guarantee that many of them will have at least one data dropout in that hour.
I see similar behavior in my chart - lower baseline before summer 2018, higher after summer 2018 - but the similarities stop there.
- I have spikes for Halloween '17 (inflatable black cat and pumpkins on 24/7), plus winter break, spring break and summer break when my son was home from college
- Also have a weird period from Mid Aug through Mid Dec where I think Sense was playing with the Always On calculation. Starting in the beginning of Nov’18, Always On with smart plugs was added into the equation.
- Around Dec 20th, things went crazy and I started seeing many data outages which affected Always On.
- I’ll try to do histograms for my data, to see if my distributions are also bimodal, but I’m going to have to think about where to draw the cut lines. Any thoughts?
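One crude way to place a cut line between two modes, sketched below with hypothetical Always On readings: histogram the data and take the emptiest bin between the two tallest, well-separated peaks:

```python
# Sketch: suggest a cut line for a bimodal sample by finding the valley
# between the two biggest histogram peaks. Sample values are hypothetical
# Always On watt readings clustered near ~180 W and ~300 W.

def histogram(values, bins, lo, hi):
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts, width

def valley_cut(values, bins=20):
    lo, hi = min(values), max(values)
    counts, width = histogram(values, bins, lo, hi)
    peak1 = counts.index(max(counts))
    # second peak: best bin at least 3 bins away from the first
    peak2 = max((c, i) for i, c in enumerate(counts) if abs(i - peak1) >= 3)[1]
    a, b = sorted((peak1, peak2))
    valley = min(range(a + 1, b), key=lambda i: counts[i])
    return lo + (valley + 0.5) * width   # center of the emptiest bin

sample = [178, 181, 183, 179, 185, 180, 182, 298, 302, 305, 299, 301, 303]
print(f"suggested cut line ~ {valley_cut(sample):.0f} W")
```

This is only a rough heuristic; a proper two-component mixture fit would be the more rigorous route.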
My data below:
- Red dots are aggregated daily Always On totals
- Blue dots are hourly Always On x 24 (to normalize with daily)
- Green dots on the bottom indicate that there was some good data for that clock hour (still could be a data dropout for part of that hour).
- Orange dots at the top indicate data was missing for an entire clock hour.
What do we see?
- A long stable period from Aug 17 until June 18, where Always On mostly stays in a stable zone, though the number of hourly dips in Always On increases after March 18.
- A wilder period of swings between June 18 and mid-Aug 18.
- A very flat, low period between mid-Aug and Mid-Sep 18
- A resumption of higher Always On with wider swings from mid-Sep to the end of Oct 18.
- The start of the smartplug beta at the start of Nov 18, with a falling Always On as smartplug always on data gets pulled out of Always On. More dropouts and outlier low Always On hours.
- The start of a new Always On calculation around Dec. 20. Lots of hourly dropouts and shorter dropouts as well.
- Starting Jan 1, 2019, I did several things to reduce dropouts and they seem to have partially worked.
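The green/orange dot classification above can be sketched as a simple pass over the export. The record format and the one-reading-per-second assumption are mine, not Sense’s:

```python
# Sketch: classify each clock hour the way the green/orange dots do.
# An hour absent from the export is a full dropout ("missing"); an hour
# present with fewer samples than expected is a partial dropout.
# Record format here is hypothetical: {hour_index: sample_count}.

EXPECTED_SAMPLES = 3600          # assumption: one reading per second

def classify_hours(samples_by_hour, first_hour, last_hour):
    status = {}
    for h in range(first_hour, last_hour + 1):
        n = samples_by_hour.get(h, 0)
        if n == 0:
            status[h] = "missing"            # orange dot
        elif n < EXPECTED_SAMPLES:
            status[h] = "partial dropout"
        else:
            status[h] = "good"               # green dot
    return status

data = {0: 3600, 1: 3600, 3: 2900, 4: 3600}   # hour 2 absent entirely
print(classify_hours(data, 0, 4))
```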
BTW - I’m still going to do the histograms, but I just put my nose back in my statistics books to figure out whether the central limit theorem is applicable to Always On. I’m also trying to remember (or find) tools that would let me separate the data into the part that should fit a normal distribution and the part that shouldn’t, or that would fit a different normal distribution. My current belief is that for time periods where the Sense Always On calculation remains stable, the data points without data dropout should fit a normal distribution.
My histogram of all Always On hours is a mess…
Since I have been on a bender removing hourly datapoints that have some dropout in them, I’ve been able to dig a little deeper into my weird Always On distributions.
Here’s the original data covering 17 months… Remember: blue is hourly data here, red is aggregated daily. The orange dots on the top represent hours where no data was available (a big data dropout).
Same chart with good hourly data points in green, data points subject to data dropout in orange:
Here’s the histogram of all data, including dropouts:
Here’s the histogram of all data, with the dropouts removed. Not much difference except at the low end.
Next I broke up the timeline into the 5 domains I talked about earlier.
- Original - The original Always On calculation
- Crazy - A period when the Always On calculation appeared to go crazy
- Better - the subsequent period when things appeared to get better
- Smartplug1 - Sense’s first round of adding smart plug data into the Always On calculation. Started with the beta of SmartPlugs
- Smartplug2 - The approximate time Sense updated the Always On calculation with smart plugs
Each period seems to have its own distribution. Some might even have more than one!
And here are the histograms for each of those time periods:
Now I have to think about whether any of this has any meaning.
ps: Looks like I may have entered a new domain after Smartplug2, Smartplug3 ! Starting to see smartplug results center around 209W.
@kevin1 That is a lot of analytical information on your “Always On” values!
- After you excluded the ‘dropouts’, your data is fairly constant until it went “crazy” (as you described it). Hopefully your values are returning to a more normal daily pattern.
- What method did you use to determine when a ‘data dropout’ occurred? My Power company’s meter (which I used earlier to determine SENSE had bad data) only supplies me with daily information, so I don’t think I can determine bad data on the hourly basis that you are doing. You mentioned that you are using a secondary energy monitoring system to check your home usage.
I was not certain what my charts would reveal about my “Always On” values (hourly data).
- I think that data dropouts early in 2018 were creating some bad data points for me. In March, I installed a delay-timer relay (set at 4-minutes) to assist with the SENSE reboot after a power outage. I lose 6 minutes of data (4-minutes for the intentional delay, 2-minutes for the SENSE unit to reboot/reconnect) after a power outage, but that is better than hours of lost data. (Or having to manually reset the breaker.)
- There is a slight trend downward in the first half of 2018 on my data because SENSE was still identifying new devices. (My guess?) After 7/2018, I have not had many new devices identified.
- Turning off unneeded equipment while I was away from home in June showed an expected drop. It wasn’t much, but every little bit helps.
- I am still trying to decide if this graph actually reveals any trends or problems.
- I did develop an EXCEL spreadsheet for myself where I listed every item in my house that is always plugged in and using power, then assigned an estimated wattage. Some data I gathered using a ‘Kill-A-Watt’ device, some data was taken by looking up the manufacturer equipment specs and some was just an educated guess.
My estimated “Always On” value is 330 watts.
My SENSE “Always On” value is 348 watts.
I think that my estimate is a close ‘ball park’ guess to what SENSE is calculating.
- Special Note: I have 93 separate devices that are always pulling power. I counted every item that pulled even the smallest amount of power (even the ‘lighted doorbell buttons’).
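That spreadsheet tally boils down to a few lines of code; the device wattages below are hypothetical stand-ins for the 93-item inventory described above:

```python
# Sketch of the spreadsheet tally: sum per-device standby estimates and
# compare with the Sense "Always On" figure. The device list and wattages
# here are made-up placeholders, not the actual inventory.

inventory = {
    "modem": 9.0,
    "router": 12.0,
    "security cameras x4": 20.0,
    "external drives": 15.0,
    "lighted doorbell buttons": 0.5,
    # ... remaining items from the Kill-A-Watt / spec-sheet survey
}

estimated = sum(inventory.values())
sense_always_on = 348.0   # the Sense figure quoted above

print(f"estimated {estimated:.0f} W vs. Sense {sense_always_on:.0f} W "
      f"({estimated - sense_always_on:+.0f} W)")
```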
The only reason I can think of for the fluctuating “Always On” pattern is there are days when we tend to use our computers more and access the external drives. Maybe the modem and routers consume more power when Web searches (or Netflix) are being used. I’m just guessing at reasons. I haven’t run any tests.
We do know that the “Always On” value is an averaging calculation so you would expect to see the data points being ‘smoothed’.
Thanks for providing us with a glimpse at your data and working to interpret what occurred during those times.
Thanks as well for sharing yours. Your Always On appears much better behaved and predictable than mine during 2018, especially since you can correlate what looks to be a downward slope at the beginning of the year with your active Always On reductions. I’m guessing the bimodal pattern indicates hidden changes in the Always On algorithm, and that your low-end outliers during the first 6 months (and second 6 months) are really data dropouts.
As for your questions about mine:
I’ll do another export in a couple of days - what I see from the bubbles is that my new normal is around 210W, but sometimes the export reveals stuff the bubble does not.
As for finding dropouts, I cooked up a web app data scraper that looks at the Power Meter for every day, and counts the number of dropout events and negative usage events visible on the screen. Finally have a way to automatically find virtually all the dropouts. More here, including code:
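The scraper itself is linked above; the same idea applied to an hourly export (rather than the on-screen Power Meter) can be sketched like this, with hypothetical rows:

```python
# Sketch: find dropouts in an hourly export by flagging missing hour
# indices (gaps) and hours with negative usage. Rows are hypothetical
# (hour_index, kwh) tuples, assumed sorted by hour.

rows = [(0, 1.2), (1, 1.1), (3, -0.4), (4, 1.3)]   # hour 2 is absent

def find_dropouts(rows):
    gaps, negatives = [], []
    seen = dict(rows)
    for h in range(rows[0][0], rows[-1][0] + 1):
        if h not in seen:
            gaps.append(h)          # full-hour gap in the export
        elif seen[h] < 0:
            negatives.append(h)     # negative Total Usage error
    return gaps, negatives

gaps, negatives = find_dropouts(rows)
print(f"gap hours: {gaps}, negative-usage hours: {negatives}")
```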
Now that I have been able to exclude virtually all data dropouts from 2018 into 2019, I’m going to chart my utility vs. Sense data, one more time, by month.
Here’s the kWh difference between my billed PG&E data vs. Sense data, before dropout removal. Note that some of the hourly differences come close to 20kWh and even go over (those either involved dropouts while our cars were charging, or negative Total Usage errors). Also realize that each monthly distribution cluster consists of over 600 points, so even if you see many outliers, most points overlap in that tight band around 0.
Pull out the data dropouts (gaps) and things look better. The biggest hourly errors drop down to 7kWh. Still high, but all the distributions get much closer to 0. Also note that almost all the error falls to the positive side (Sense comes in lower than PG&E).
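The comparison boils down to per-hour differences with dropout hours excluded. A sketch with hypothetical readings (the dropout set would come from a dropout finder like the one described earlier in the thread):

```python
# Sketch: hourly utility-minus-Sense comparison with dropout hours excluded.
# All readings below are hypothetical.

pge   = {0: 2.0, 1: 1.8, 2: 9.5, 3: 2.1}   # billed kWh per hour
sense = {0: 1.9, 1: 1.8, 2: 0.4, 3: 2.0}   # Sense kWh per hour
dropout_hours = {2}                         # e.g. an EV-charging hour with a gap

def hourly_errors(pge, sense, exclude=()):
    out = {}
    for h in pge:
        if h in exclude or h not in sense:
            continue
        diff = pge[h] - sense[h]
        out[h] = (diff, 100.0 * diff / pge[h])   # (kWh diff, percent error)
    return out

for h, (kwh, pct) in hourly_errors(pge, sense, dropout_hours).items():
    print(f"hour {h}: {kwh:+.2f} kWh ({pct:+.1f}%)")
```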
Why are the big differences still occurring? When I view these big differences in the Sense Power Meter, I don’t visually see any dropouts, so there must be another error mechanism at work.
Looking at the hourly percentage error, another trend becomes visible. It looks like both the percentage error envelope and number of points with big differences grew from Jan’18 - Aug’18, and even into Sep’18, then suddenly became better.
Aha! That makes sense! At the end of Aug’18, Sense contacted me to tell me that, based on my data, my very early model Sense might be experiencing ‘pin corrosion’ on the pins that connect the CTs to the monitor. They provided extension cables with redone pins, plus special grease to better seal the connections. I had my electrician install the extensions in mid-Sept. And guess what? Much tighter difference distributions close to 0 since then. Compare the distribution of percentage error between 1-18 and 1-19. Vastly improved.
Mind you, Sense detected this issue without having access to my PG&E consumption data. Nice job, Sense.
Are Dropout inconsistencies across devices normal?
Custom Date Range in Web App Power Meter Tab
- What type of special grease / lube did they supply for your extension cable connectors?
- Is your unit mounted outdoors?
- Are you near the coast? Was the corrosion due to salt water?
Here’s the info that came with the cables:
Compliant to UL 2646
Rated for 300V and 80C
Non-toxic silicone grease applied to ends
MG Chemicals #8462
Unit is in a closed, dry, locked service closet, but it is not fully sealed (no weather stripping on door) or climate conditioned (no connection to heating / cooling).
Within a mile of San Francisco Bay (brackish water - not full salt water), but I don’t believe that salt water air was to blame. We don’t see the kind of metal corrosion here that occurs in true ocean beachside towns like Santa Cruz or Monterey.