How Accurate is Sense vs. Utility Metering?

One of the first questions people ask after installing their Sense is “How accurate is Sense vs. my utility monitoring?” We can now begin to answer that, at least in the case where one's utility allows usage data to be downloaded. I'm going to walk through my accuracy analysis process for all the existing data I have for 2018.

In my case, I have solar, so I have two utility data sources:

  • An hourly net meter energy reading from my electric utility, PG&E, via their “Green Button”. Unlike some solar installs, I don’t have a second meter looking at my total house usage - I’ll only be able to compare Sense net usage vs. the net usage from my PG&E meter.
  • An hourly solar energy reading from my SolarEdge inverter. The data is downloaded via my SolarCity / Tesla portal. Unlike PG&E and Sense, the SolarCity download capability only allows hourly-resolution downloads one day at a time. I had to write a little script to download all 224 days of 2018 I was interested in, then merge them all together.
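For anyone who wants to reproduce the merge step, here's a minimal sketch in R. It assumes the daily files have already been downloaded and saved locally as CSVs, one per day; the folder, file names, and column names (time, energy_wh) are hypothetical placeholders, not the actual SolarCity export format.

    # Merge one-CSV-per-day hourly solar files into a single data frame.
    # File and column names below are assumptions - adjust to the real export.
    files <- list.files("solar_daily", pattern = "\\.csv$", full.names = TRUE)
    solar <- do.call(rbind, lapply(files, read.csv, stringsAsFactors = FALSE))
    solar$time <- as.POSIXct(solar$time, format = "%Y-%m-%d %H:%M:%S",
                             tz = "America/Los_Angeles")
    solar <- solar[order(solar$time), ]          # keep the 224 days in order
    write.csv(solar, "solar_2018_merged.csv", row.names = FALSE)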

Once the data was in, I did a little processing in R:

  • Converting the vertical narrow Sense data format into a wide time vs. device format and extracting only the Total Usage and Solar Production columns
  • Calculating my Sense net usage (Sense Total Usage + Solar Production, since Solar Production is negative)
  • Calculating the difference (Diff) between PG&E net usage and Sense net usage
  • Calculating the percentage difference (PerDiff) between PG&E net usage and Sense net usage
  • Aggregating the data into daily data so I could also compare that.
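For reference, the processing above boils down to a few lines of R. This is a sketch, not my exact script; the column names (DateTime, Device, Energy, NetUsage) are assumptions about the export layout, not the exact Sense or PG&E field names.

    # Long-to-wide pivot of the Sense export, then hourly and daily comparisons.
    library(dplyr)
    library(tidyr)

    wide <- sense_long %>%
      pivot_wider(id_cols = DateTime, names_from = Device, values_from = Energy) %>%
      select(DateTime, `Total Usage`, `Solar Production`)

    hourly <- wide %>%
      inner_join(pge, by = "DateTime") %>%                   # pge: hourly PG&E net usage
      mutate(SenseNet = `Total Usage` + `Solar Production`,  # solar is negative
             Diff     = NetUsage - SenseNet,                 # PG&E net minus Sense net
             PerDiff  = 100 * Diff / NetUsage)

    daily <- hourly %>%
      group_by(Date = as.Date(DateTime)) %>%
      summarise(PGE = sum(NetUsage), Sense = sum(SenseNet), Diff = sum(Diff))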

An example of the wide Sense data is below. Note all the NAs (not available) for hours where there was no energy value available in the Sense export for those devices.

Here are the initial results for Sense net usage vs. PG&E net usage in a scatter plot. A high percentage of the data correlates quite well. The slope of the line is almost exactly 1 at both the hourly and daily level. But there are a significant number of outlier data points, especially at the hourly level, where the Sense power values are well below the PG&E values. And based on color, the furthest outliers occur in March and June. There also seems to be a peculiar upward-curving tail at the negative end of the hourly curve. I'll investigate both of those in depth a little later.


One more look at the accuracy: if I look at a histogram of the difference between PG&E net usage and Sense net usage, two things become apparent.

  • There’s a long, but very thin “error tail” on the positive side, where the PG&E net is much greater than the Sense net.

  • But the huge majority of the differences are within a very narrow band around zero. Based on my histograms, I'm only going to look in detail at the 217 hourly mismatches (out of 5,421 total) greater than 200W to start with.

Hourly difference is less than 500W

Hourly difference is less than 200W


So now it's time to figure out the origins of the 217 biggest hourly mismatches between Sense net energy (I'm correcting this from previous entries - I'm really comparing energy, not power) and my PG&E hourly net energy, for the 5,421 exported hours. But how does one dig through that much data to analyze and categorize root causes? Fortunately there are a couple of easy clues about what to look at more carefully:

  • First, the comparison between solar hourly and daily mismatches hinted that multiple mismatches often occur in the same day. Therefore it makes sense to apply a measure to each mismatch that looks at how many other mismatches occurred in the surrounding 24 hours or so. That measure of locality would probably be helpful in identifying systemic issues that last multiple hours.

  • Second, looking at the 5 largest mismatches (largest Diff) in the sorted list, it's clear that there is something wrong with Sense's measurement during these hours - Total Usage is negative! So we should look at the magnitude of mismatches vs. Sense Usage and Sense Solar to see if there is any pattern. What's more, most of the largest mismatches take place during only 3 different days (table below).

It’s a simple matter to add a calculation to sum the number of mismatches in the surrounding 24 hours. We’ll call that value the number of problem neighbors (ProbNeigh) and plot the results over time.
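Here's roughly what that calculation looks like in R, assuming the hourly data frame has a logical Mismatch column flagging the >200W differences (the names are mine, not an official schema):

    # For each hour, count the other mismatch hours within +/- 12 hours.
    mismatch_times <- hourly$DateTime[hourly$Mismatch]
    hourly$ProbNeigh <- sapply(seq_len(nrow(hourly)), function(i) {
      nearby <- abs(difftime(mismatch_times, hourly$DateTime[i], units = "hours")) <= 12
      sum(nearby) - hourly$Mismatch[i]          # don't count the hour itself
    })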

The time locality of many of the mismatches jumps out from this chart, though there also appears to be a random dispersion of other mismatches over time. Let's zoom in on a time region with plenty of grouped issues.

Clearly we’re going to need to look into what happened on April 29th, June 7th, June 10th and June 21st.

But before we look too closely, we should also look for patterns between size of mismatches, Sense Usage and Sense Solar. Here’s that plot:

It should be immediately apparent that there are some problematic Sense measurements here. Sense Solar data should always be zero or negative (or some very small positive value). Sense Total Usage should always be positive. We have two quadrants of measurements that violate those rules, with the points in the negative Sense Total Usage / positive Sense Solar quadrant containing the mismatches of the largest magnitude. What's happening here?

Now it's time for the painstakingly manual part of the process: looking at the detailed waveforms for each of the mismatches to see what's actually amiss, starting with the obvious Sense measurement errors I identified earlier.

The good news is that the Sense web app makes this much easier. Once you are logged in, the Sense web app lets you call up a time specific window in the Power Meter via a start and end parameter in the URL. Using this feature, I was able to quickly generate URLs for every 2 hour window surrounding all 217 mismatches. Here’s an example URL that displays the 2 hours surrounding 9pm on the 2nd of Jan in the Power Meter:

https://home.sense.com/meter?end=2018-01-02T20:30:00&start=2018-01-02T18:30:00
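Generating a URL like that for every mismatch is easy to script. A small sketch, assuming mismatches is a data frame with a POSIXct DateTime column marking the center of each mismatch hour:

    # Build a Power Meter URL for the 2-hour window around each mismatch.
    fmt  <- "%Y-%m-%dT%H:%M:%S"
    urls <- sprintf("https://home.sense.com/meter?end=%s&start=%s",
                    format(mismatches$DateTime + 3600, fmt),   # one hour after
                    format(mismatches$DateTime - 3600, fmt))   # one hour before
    writeLines(urls, "mismatch_urls.txt")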

So now comes the fun part - actually seeing what mismatches look like. For my Sense environment, I discovered two flavors of measurement errors. The first was the negative Sense Total Usage situation, shown below.

The start of a negative Total Usage hour:

The end of a negative Total Usage hour after a reboot:

For three multi-hour periods on March 7th, April 28-29th and June 7th, my Sense monitor went totally wonky and produced these kinds of waveforms and data, eventually correcting itself or being fixed with a reboot. As the earlier graph suggested, this Sense monitor issue was the root cause of the 5 largest errors, as well as a bunch of smaller ones. The good news is that Sense has updated the monitor firmware to alleviate this bug.

The second flavor of Sense measurement errors first made itself known as I was investigating the cluster of mismatches detected on June 10th. More pedestrian than the negative Sense Total Usage phenomenon, this mismatch stems from simple loss of data, either due to monitor down time or a networking problem extensive enough to prevent full backfill of the data from the Sense monitor buffer.

It turns out that the next batch of mismatches by magnitude was caused by this kind of data gap occurring late at night while our EVs were charging, like the situation below.

Eventually, after looking carefully at the first 20 or so waveforms, I decided to just muscle my way through all 217 mismatches, categorizing the 2 monitor measurement issues along the way. In the process, I discovered a third flavor of mismatch waveforms - one that looked completely normal. The waveform below is associated with the largest gap that can’t be directly traced to an obvious Sense measurement problem or data gap.

So where did the mismatch come from? We'll have to delve a little deeper and look at these more analytically, plus examine the PG&E side of the equation as well.

BTW - Quick statistics on the 217 largest hourly mismatches after viewing them all: 36 stemmed from negative Total Usage, 56 tied back to data gaps from the Sense monitor, and 125 look, for all intents and purposes, completely normal on the Sense side.

Whew! Now that I'm done labeling all the Sense measurement errors and gaps that result in mismatches, I can do some fun stuff. I'm going to try to categorize the 217 mismatches by all the known variables, including the number of problem neighbors and whether they are Sense data gap issues.

I decided to use a simple K-Means clustering with 4-6 possible clusters, and tried to adjust the weighting to do two things (a rough sketch of the clustering follows this list):

  • Clearly separate mismatches that were Sense monitor related vs. others of unknown origin, so I could focus on the ‘non-gap’ mismatches
  • Create distinct groupings in the “non-gap” part of the mismatches
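The skeleton of that clustering in R looks roughly like this. The feature names and weights are purely illustrative (the real weighting took some experimentation), and DataGap is assumed to be coded as 0/1.

    # Hand-weight the features, then run k-means.
    features <- mismatches[, c("Diff", "Usage", "Solar", "ProbNeigh", "DataGap")]
    weighted <- sweep(scale(features), 2, c(1, 1, 1, 2, 4), `*`)   # emphasize neighbors and gaps

    set.seed(42)                                   # k-means is sensitive to initialization
    km <- kmeans(weighted, centers = 4, nstart = 25)
    mismatches$Cluster <- factor(km$cluster)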

Without going into all the details and weighting experiments that I did, here are some of the results given my strategy.

My clustering cleanly separates mismatches due to Sense monitor problems vs. other errors, and also stratifies by the number of problem neighbors. Note that most Sense monitor errors occurred standalone, or nearly standalone, with no other nearby mismatches.

The clustering approach also stratifies the ‘non-gap’ mismatches by the magnitude of Sense Usage. We’ll see if that is helpful in identifying the root cause. These first and second scatter charts also explain the labeling of the clusters.

One cluster, in cyan, consists solely of both types of Sense monitor issues. The reddish orange are the mismatches that have, by far, the most problematic neighbors. And the olive and lavender have fewer or no neighbors, but are distinguished by whether they are in the high or low end of the usage curve.

Now let’s apply the clusters to more familiar scatter charts.

We saw this one earlier in black and white and wondered about the positive Sense Solar Production and negative Total Usage. It turns out those were indeed all monitor-related data gap errors, along with some others.

Here’s the same type of scatter chart we used to initially look at correlation, except now I’m using it to look at only the mismatches. Two things stand out to me from this chart:

  • I made my dots way too small to really reflect the true size of the mismatches.
  • ‘non-gap’ mismatches are heavily skewed toward the lower end of usage. Probably worthwhile looking at a distribution.

And speaking of distributions, here’s a view of the “density of mismatches” by size.

  • Very few large ones - mismatches rapidly fall off.
  • Virtually all the really large ones are attributable to monitor issues.
  • Probably worthwhile pushing into the next set by size, which are mostly ones with few neighbors but high Total Usage.

Here's a chart to get everyone thinking. I went back to my graph of the number of problem neighbors (mismatches in the surrounding 24 hours) vs. date for the most mismatch-prone period of time, and overlaid both the cluster coloring and my best guess at when various firmware upgrades happened. It does look like firmware upgrades may have caused 10 or so of the data gaps in 3-4 different time periods. Other than that, I'm looking for comments on what I should try next.

One more plot looking closely at what mismatches really look like. Hourly Sense and PG&E data overlaid with markers for the different types of mismatches for some of the most mismatch-prone days in June. What's fascinating is that the accuracy is typically so good that you can't even see the Sense data line in pink - it's completely covered by the PG&E data, except in the mismatch zones. Blue dots indicate Sense monitor data gaps, while the rest of the markers indicate the type of cluster categorizing that mismatch.

BTW - these monitor issues aren’t all on Sense. I’ve been tinkering with my network for various reasons and have created a few outages on my own. If my Fing network monitor had a deeper event log, I could have overlaid my network tinkering events as well.

Now that I’ve been able to identify some of the real measurements errors, I’m going to strip those hours off and take a look at the earlier plots again, with the known problematic data removed.

Here are the hourly and daily scatter plots, looking much better…

And the same histograms of the difference between Sense and PG&E are predictably much tighter …

Plots between my solar data sources also look better - no visually dramatic outliers.

And a tighter histogram spread:

If I do a linear fitting on both usage and solar, I get adjusted R2s that are very close to 1, meaning a very close linear fit.
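For anyone following along in R, the fits themselves are one-liners (the data frame and column names below are my own shorthand, not anything standard):

    # Hourly net usage fit and daily solar fit, after dropping the known-bad hours.
    usage_fit <- lm(SenseNet ~ PGENet, data = hourly_clean)
    solar_fit <- lm(SenseSolar ~ SolarCity, data = daily_clean)
    summary(usage_fit)$adj.r.squared     # adjusted R2, very close to 1
    coef(usage_fit)                      # slope shows the ~1% difference vs. the revenue meter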

The hourly usage comparison has a slope that indicates that Sense is slightly more than 1% optimistic vs. my revenue meter. From this perspective it looks like there are still significant mismatches in the center of the line and a curve in the tail in the negative zone that bear investigating.

The daily solar comparison has a slope that indicates my SolarEdge inverter is slightly more than 3% optimistic vs. my Sense data. Probably worth taking a closer look at SolarCity vs. Sense vs. PG&E during solar production. Unfortunately, there’s not a direct way to compare all three, since I only have net readings from my PG&E meter and SolarCity only offers solar data.

@kevin1- You inspired me to analyze my home data. It’s been a while since I have used EXCEL charting…

I compared my daily electrical consumption with my local power company meter and the internal personal energy monitor called SENSE. I now have almost a year of data stored. My electrical provider is CenterPoint Energy of Texas.

Setup Steps:

  1. I extracted the daily readings from the ‘SENSE V4.0 web version’ for each month in 2018.
  2. I downloaded my power company’s (CenterPoint Energy) meter readings from WWW.SMARTMETERTEXAS.COM
  3. I combined all of the .CSV files into a single Excel file for 2018. (and developed another file for 2017)
  4. I setup my calculations in Excel:
  • kWh (daily) difference = SENSE kWh – Power Company kWh
  • I removed any bad data, keeping only points where -1.0 < 'kWh diff' < 1.0 (between -1 and 1). If SENSE did not download all of the results for that day, then I excluded that data point. Generally the problem was a loss of my SENSE WiFi connection.
  • % error = kWh (daily) difference / kWh Power Company (daily) * 100
  • And an ‘offset value’ from the average baseline defined later.

Plotting Charts in Excel (Windows 10).

First, I plotted the SENSE values against the Power Company values using a simple graph. This allowed me to determine which days might be bad data. Then I plotted a linear regression. If the slope is 1.00 and the R2 factor is 1.00, then the data points would be a perfect correlation.

Second, I plotted the ‘kWh difference’ values as a Histogram. I was looking for a perfect single bell curve. What I observed was two distinct bell curves indicating two separate trends. My first plot showed a nice bell curve.

I kept reducing the range and discovered two bell curves.

Third, I plotted the ‘kWh difference’ values as a scatter point chart and looked for any trends. Two distinct baselines were observed. (It is hard to see the two dashed lines that show the average baselines on this chart at -0.137 and +0.011)

Fourth, I calculated the average for each baseline, then calculated the 'offset' from the baseline: 'offset' = kWh difference - baseline average. I plotted those values in a histogram, then summed all the outliers (values that were not close to the bell curve).
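For anyone who would rather script this than build it in Excel, the same baseline/offset calculation looks roughly like this in R (the Date and Diff column names are assumptions about the combined file, not the actual layout):

    # Split at the 4/23/2018 shift, average each baseline, then compute offsets.
    shift_date   <- as.Date("2018-04-23")
    daily$Period <- ifelse(daily$Date < shift_date, "pre", "post")
    baselines    <- tapply(daily$Diff, daily$Period, mean)   # ~ -0.137 and +0.011 kWh
    daily$Offset <- daily$Diff - baselines[daily$Period]
    hist(daily$Offset, breaks = 50)                          # should collapse to one bell curve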

Observations:
On April 23rd, 2018 there was a 0.148 kWh shift in the offset baseline, and it has stayed consistently at the new baseline since. I explored several possible reasons for the baseline shift.

  1. New monitor software update from SENSE during this time period. (That had not happened.)
  2. Change in power consumption. Warmer weather, more A/C usage. (2017 data did not show a baseline shift when the weather changed.)
  3. Temperature change inside the house. My thought was maybe the SENSE monitor (located inside the house in a climate controlled room) was operating at a different condition. (We keep the same setpoint on our thermostat year-round).
  4. Maybe my home power company meter had been changed out or their meter was at a different operating condition. (No changes were noted.)
  5. On 4/23/2018 I added a TD-69 delay timer relay to my SENSE monitor. At the same time I reworked my CT clamp installation. I used the foam from ½-inch pipe insulation as a spacer to center my clamps and to position them so that they are 90-degrees perpendicular to the service cable in the breaker panel. When I initially installed the CT clamps (Sept. 2017) they were not at a true perpendicular angle or perfectly centered around the cable. I also taped the CT clamps in the closed position, making certain they were completely closed, because the new foam insulation 'spacers' were not allowing the clamps to stay fully closed on their own.
  6. My charts indicated that, prior to re-positioning my CT clamps, I could expect a higher number of outlier data points.
    Pre- 4/23/2018: 111 data points with 37 outliers
    Post-4/23/2018: 124 data points with 30 outliers

Recommendations:

  1. Verify your CT clamps are completely closed and at a 90-degree angle to the power (or service) cable when doing your initial installation. Changing the clamp positions may give you a change in the readings.
  2. My SENSE monitor now tracks the same values as the Power Company meter more frequently after making this very minor change. The SENSE unit currently reads an average of +0.011 kWh/day more consumption than my power company meter where previously it read -0.137 kWh/day or less consumption.
  3. SENSE currently tracks the power company meter with an average 0.024 % daily error between the two readings.
  4. It is also possible that the new TD-69 relay is helping to automatically reboot the SENSE unit (and reconnect to the WiFi sooner) after a power outage. Prior to 4/23, I was manually rebooting the SENSE monitor 1.5+ hours after the unit failed to communicate. SENSE text and message “off-line” alarms do not send until the device has been off-line for over an hour.

@Dcdyer,
Totally impressive stuff. You have regained your Excel mastery :slight_smile:
Good push into the bi-modal error distribution and associated analysis. Looks like you are doing great with the squared-up CTs! 0.24% is tight, especially when I think most revenue-grade power meters are only guaranteed to be within 2% of actual.

@dcdyer, I just wanted to say, your analysis brings back my statistics studies in college. But your post makes my statistics work look like Kindergarten sandbox stuff.

Impressive!

just saying…


Accuracy Update

Now that I have been able to exclude virtually all data dropouts from 2018 into 2019, I’m going to chart my utility vs. Sense data, one more time, by month.

Here's the kWh difference between my billed PG&E data vs. Sense data, before dropout removal. Note that some of the hourly differences are close to 20kWh and even over (those either involved dropouts while our cars were charging, or negative Total Usage errors). Also realize that each monthly distribution cluster consists of over 600 points, so even if you see many outliers, most are overlapping in that tight band around 0.

Pull out the data dropouts (gaps) and things look better. The biggest hourly errors drop down to 7kWh. Still high, but all the distributions get much closer to 0. Also note that almost all the error falls to the positive side (Sense comes in lower than PG&E).

Why are the big differences still occurring? When I view these big differences in the Sense Power Meter, I don't visually see any dropouts, so there must be another error mechanism at work.

Looking at the hourly percentage error, another trend becomes visible. It looks like both the percentage error envelope and number of points with big differences grew from Jan’18 - Aug’18, and even into Sep’18, then suddenly became better.

Aha! That makes sense! At the end of Aug'18, Sense contacted me to tell me that, based on my data, my very early model Sense might be experiencing 'pin corrosion' on the pins that connect the CTs to the monitor. They provided extension cables with redone pins, plus special grease to better seal the connections. I had my electrician install the extensions in mid-Sept. And guess what? Much tighter difference distributions close to 0 since then. Compare the distribution of percentage error between Jan '18 and Jan '19. Vastly improved.

Mind you, Sense detected this issue without having access to my PG&E consumption data. Nice job, Sense.

  1. What type of special grease / lube did they supply for your extension cable connectors?
  2. Is your unit mounted outdoors?
  3. Are you near the coast? Was the corrosion due to salt water?

Just curious.

  1. Here’s the info that came with the cables :
    Cable:
    Compliant to UL 2646
    Rated for 300V and 80C
    Contains Grease:
    Non-toxic silicone grease applied to ends
    MG Chemicals #8462

  2. Unit is in a closed, dry, locked service closet, but it is not fully sealed (no weather stripping on door) or climate conditioned (no connection to heating / cooling).

  3. Within a mile of San Francisco Bay (brackish water - not full salt water), but I don’t believe that salt water air was to blame. We don’t see the kind of metal corrosion here that occurs in true ocean beachside towns like Santa Cruz or Monterey.

The next thing to do is to compare Sense against the other data source, our hourly feed from SolarCity, showing production from our 4.2kW solar system that uses a 2013 vintage SolarEdge inverter. All the data is going to be negative (energy consumption is positive, production is negative), so this might also give a little more insight into the negative tail of the net usage correlation scatter plots earlier.

My first scatter plot of the hourly solar data, with coloring set to the month, was a little confusing. The "line" was more of an ellipse. And the coloring really didn't offer any clues as to why.

But shifting to coloring based on time of day showed a discernible pattern. Sense gave higher readings than SolarCity in the morning, SolarCity gave higher readings in the afternoon - a systemic error. But why?

Looking at the aggregated daily data, the ellipse goes away, indicating that the error cancels itself out on a daily basis. Note that I have gone back to coloring by month for the daily plot.

A few things we can see from the daily plot.

  • There is good correlation between Sense solar data and SolarCity, though the slope of the correlation line is slightly less than one. I'm not going to calculate it by regression until I remove erroneous outliers for which I can find a measurement issue, but the slope differential bears out the roughly 4% SolarCity/SolarEdge inverter over-optimism I saw with earlier measurements.
  • The hourly measurement ellipse is gone. That indicates there were offsetting measurement differences between Sense and SolarCity that cancel out when aggregated on a daily basis. I suspect there is an interval assignment difference of 15-30 min between the two measurements. Perhaps the SolarCity measurement is for the hour centered around the timestamp, while the Sense measurement is for the hour following the timestamp. That would explain the ellipse (the toy simulation after this list illustrates the effect).
  • We have a number of big hourly differences between SolarCity and Sense, but only a few big relative differences between SolarCity and Sense at a daily level. That means that the big daily mismatches are likely caused by the accumulation of sequential hourly errors.
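A quick way to convince yourself that a 30-minute interval misalignment produces exactly this behavior is to simulate it. This is a toy example in R, not my actual data:

    # Toy solar day: a half-sine of production from 7am to 7pm, minute resolution.
    minutes <- 0:(24 * 60 - 1)
    power   <- ifelse(minutes >= 7 * 60 & minutes <= 19 * 60,
                      sin(pi * (minutes - 7 * 60) / (12 * 60)), 0)          # kW

    hour_following <- tapply(power, minutes %/% 60, mean)                   # hour after the timestamp
    hour_centered  <- tapply(power, ((minutes + 30) %% 1440) %/% 60, mean)  # hour centered on it

    # Hourly values disagree (one reads high in the morning, the other in the
    # afternoon), but the daily totals are identical - the "ellipse" cancels.
    cbind(hour_following, hour_centered)
    c(sum(hour_following), sum(hour_centered))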

A quick look at the hourly differences in a histogram shows a different view of the ellipse: a typical distribution about the width of the ellipse, with a small number of outliers worth investigating.

The daily histogram tells us that we really only need to look closely at a small number of mismatches. The histogram once again highlights the systemic optimism of the SolarCity data - the mode of the distribution is offset by about 800W from 0.

Now we have looked at direct comparisons between Sense and both “utility” sources. Next, it is time to do some detailed mismatch analysis between Sense and PG&E data.


OK - I decided to evaluate my old historic Sense vs. PG&E data one last time using a slightly different methodology. My biggest issue was figuring out how to remove the bad data, which primarily resulted from Sense data gaps. This time around my goal is to remove data using a linear model (regression), via a purely statistical approach. I'm going to:

  • Compare my Sense hourly net data (consumption - solar) vs. my utility's hourly net usage data
  • Use the data to create a linear model via regression
  • Remove the 50 data points with largest error residuals
  • Use the improved data to create a linear model via regression
  • Remove the next 50 data points with largest error residuals with the new model
  • Use the improved data to create a linear model via regression
  • Remove the next 50 data points with largest error residuals with the new model
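In R, that iterative pruning is just a short loop. A sketch, assuming net is the merged hourly data frame with PGENet and SenseNet columns:

    # Fit, drop the 50 largest residuals, refit - three rounds.
    keepers <- rep(TRUE, nrow(net))
    for (round in 1:3) {
      fit   <- lm(SenseNet ~ PGENet, data = net[keepers, ])
      worst <- order(abs(resid(fit)), decreasing = TRUE)[1:50]
      keepers[which(keepers)[worst]] <- FALSE     # map back to original row indices
    }
    summary(lm(SenseNet ~ PGENet, data = net[keepers, ]))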

Doing that gives a nice graph like this showing the removal of the worst points…

If I look for the causes of the 150 biggest residuals, I can compare against some data I put aside about Sense and SolarCity connectivity and see that most of the very largest residuals stem from data dropouts from one of those two sources.


Out of almost 6100 hourly datapoints (254 days), I removed 150 of them, all of which had true error causes.

  • 61 Data dropouts from Sense
  • 63 Cases where the Sense Solar data went crazy - started matching my Total Usage
  • 26 Cases where SolarCity data dropped for some reason.

I can also see how the removal of the worst points changes the distribution of the residuals, starting from the original distribution:


To this after 50 removals:

To this after 100 removals:

To this after 150 removals:

You can see how the residual error drops and centers nicely around zero with the removal of the problematic datapoints.


Decided to do the same analysis for my newly reset Sense. I pulled a roughly proportional number of the poorest-fitting points (about 2%). And saw an even tighter fit.

Tighter error margins

Closer fit to the unity line.

Coefficients:
(Intercept) Sense3$PGENet[keepers3]
-0.001715 0.999934

A new accuracy update. Once I got past issues related to too many smart plugs affecting connectivity of the Sense monitor, Sense data has been spot-on accurate. If I get rid of the 60 (1.3%) or so worst points caused by the monitor issues, the unity line looks great.


Better yet, the error/residual histogram shows virtually all hours to be very close, with over 97% of the net metered hourly energy within +/- 50Wh.


Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-0.37026  -0.00436   0.00158   0.00800   0.35992  

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)             -0.0015896  0.0005094   -3.12  0.00182 ** 
Sense3$PGENet[keepers3]  1.0001519  0.0001215 8234.16  < 2e-16 ***

And if I look at the time period after the issue was resolved, I see even better results - a couple of minor power outages affected the data, but everything else is linear.


And the error/residual term tightens up tremendously.

Deviance Residuals: 
      Min         1Q     Median         3Q        Max  
-0.060682  -0.006075  -0.000292   0.005738   0.065980  

Coefficients:
                          Estimate Std. Error   t value Pr(>|t|)    
(Intercept)             -7.596e-04  3.021e-04    -2.514    0.012 *  
Sense3$PGENet[keepers3]  1.001e+00  6.778e-05 14770.513   <2e-16 ***

I have been trying to come up with a better way to track the accuracy of my Sense vs. my utility data over time, to try to identify perturbations and drift. I've been experimenting with looking at the rolling (1 week) error distribution over time, where error = hourly PG&E net meter - Sense net. Since I can't display the whole distribution, I'm charting the median error/difference (blue) and standard deviation (red) over time. Notice that the standard deviation scale is 100x that of the median.
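The rolling statistics are easy to compute with the zoo package. A sketch, assuming an hourly data frame with DateTime and Error columns (one week of hourly data = 168 points):

    library(zoo)
    # One-week rolling median and standard deviation of the hourly error.
    roll_med <- rollapply(hourly$Error, width = 168, FUN = median, fill = NA, align = "center")
    roll_sd  <- rollapply(hourly$Error, width = 168, FUN = sd,     fill = NA, align = "center")

    plot(hourly$DateTime, roll_sd, type = "l", col = "red", xlab = "", ylab = "kWh")
    lines(hourly$DateTime, roll_med, col = "blue")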

Pretty much all the spikes in my standard deviation coincide with data dropouts from my Sense either due to Sense monitor issues, networking issues or my electrician working inside my breaker box (the most recent spike is from the power being turned on and off on my mains over a couple of hours). So the bottom line is that regions with standard deviation spikes are mainly caused by data issues.

Trying to look for drift over time is trickier. I have to discard regions with red spikes. If I look closely, I may be seeing a slight upward drift in my median difference since about the start of April, though the work on my breaker box disrupted the data in early May. I'll have to watch carefully to see what happens. The drift isn't much - the median moves by about 3Wh / hour over a month or so. Thoughts?

Here's the same plot with both median and standard deviation on the same scale, and the big standard deviations removed by limiting the y axis to 0.04 (40Wh).


Kudos to @kevin1 for his work in comparing SENSE to the utility. I wish I could do that type of analysis; I also wish I could understand it all. Some of it yes, some of it is over my head.
I used to be a meter tech a long time ago and still like to keep up on as much as I can, but time marches on. I wrote a little information on comparing SENSE to the utility meter. I hope it helps anyone who wants to know.

This is a picture of my electric meter. Notice the black bar beneath the numbers. This bar moves across the dial. The speed is in proportion to the watts the meter is reading at that instant. If you want to compare your meter to your Sense reading, all you need is a stopwatch and a calculator. Your smartphone has both.

Every meter has a watt-hour constant (kh). The constant on most of the digital meters is 1, and it will show on the face of the meter. The definition of kh for a solid-state meter is the number of watt-hours represented by one increment (pulse period) of serial data. For an electromechanical meter, the kh is called the disk constant, and it is the number of watt-hours represented by one revolution of the disk. The disk constant will be a different number, like 3.6 or 7.2.

For the digital meter this means that one pulse = 1 watt-hour. That means it takes 1000 pulses to equal one kWh.

To measure the load, you are going to time the pulses. This is what metermen call a stopwatch demand. It is an instantaneous reading of what the meter is reading in watts.
The formula is:

Watts = (3600 x kh x revolutions) / seconds

Kh is a meter constant that is shown on the dial

Revolutions is the number of pulses shown on the dial (the moving bar); each time it moves is one pulse (or revolution).

Seconds is the elapsed time you measure while counting the pulses.

Example:
At my house, my pool pumps were running and Sense was showing around 3688 watts. I timed 6 pulses over 5.69 seconds: 3600 x 1 x 6 = 21600, and 21600 / 5.69 = 3796 watts. Pretty close. The trick is to have a constant load.
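If you'd rather not punch that into a calculator each time, the formula is a one-liner (shown here in R only because that's the tool used elsewhere in this thread):

    # Stopwatch demand: watts = 3600 * kh * pulses / seconds
    stopwatch_watts <- function(kh, pulses, seconds) 3600 * kh * pulses / seconds
    stopwatch_watts(kh = 1, pulses = 6, seconds = 5.69)   # ~3796 W, matching the example above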

If you have a heavy load, AC running, pool pumps running, car charging, etc. the bar will fly by. My meter has 8 bars. With a heavy load I count the last in line and multiply by 8. Some meters may just have a black square that blinks off and on, but there should be some indication you can time. If you have a small and steady load, you will be pretty exact. With a large load it may vary a bit. The key is to have a constant load and be exact in your start and stop timing.

There are many different types of meters and yours may be different. I am in Southern Nevada. Some meters may even give you an instantaneous or current reading. It also depends on the Utility providing the service.

This also works with meters that have a revolving disk; I don't know if there are any of them still around. If you have a disk, time the revolutions of the disk; there will be a black mark to start and stop the time. Do at least 2 revolutions of the disk; no more than 10 are necessary.

You can also Google “kh meter constant” for more information.

I hope this helps those who are trying to verify the accuracy of the SENSE reading. Remember, the more constant the load, the more accurate your measurement will be.


Thanks @jkish! Thanks for presenting the simplest and quickest way to validate your Sense readings. I'm betting many new users install their Sense, see the first readings in the Power Meter, then ask that important question - How accurate is this?

I’m thinking of writing up a primer on how to check, for newbie users, from simple to sophisticated. Would love to reference your explanation.

  • Immediate spot check against your meter. This works well if you have a relatively constant load in your house at the time, which would mean that your Total Usage in the lower left corner of the Power Meter is fairly stable (and the same for Solar Production in the right lower corner if you have a Sense solar install). You can do this the second Sense is done calibrating after setup.

Hello Kevin

Feel free to reference the explanation in your primer. I will consider that as a compliment.


My utility (Tampa Electric) isn't as advanced as some other utilities, so my only means of monitoring usage outside of Sense is to (A) take daily pictures of my meter for usage and export, and ensure I take those photos at the same time every day, or (B) wait for my bill to be generated each month and compare that to Sense. Because option A is just too tedious and I am bad at being at my meter consistently at the same time each day, I decided to just compare usage vs. my monthly electric bill data.

I received my first full month of Net metering on Tuesday. I am happy to report that Sense calculated a net surplus of 673 kWh on this bill. My bill reports 676 kWh net. That's more than close enough to be within the margin of error between the accuracy of the Sense calibration for the primary and solar CTs, as well as any differences in the exact timing of the meter readings vs. the Sense monitor!

I am happy to share any raw data showing how I came up with this. Because Sense reporting uses a fixed "start date of each billing cycle," while each bill from the utility varies from a low of 26 days to a high of 34 days based on previous bills, I had to export a full daily report for the year from Sense, pivot-table out the days I needed, and then calculate the net energy for each day.
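A sketch of that calculation, assuming the Sense daily export is loaded into a data frame with Date, Consumption, and Production columns in kWh (the names and the example billing dates are illustrative only):

    # Net energy for one billing cycle from the Sense daily export.
    cycle   <- subset(daily_export,
                      Date >= as.Date("2019-06-01") & Date <= as.Date("2019-06-30"))  # example window
    net_kwh <- sum(cycle$Production) - sum(cycle$Consumption)   # positive = net surplus under this convention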

All in all, I am extremely happy with my ability to monitor usage and the accuracy I have with my currently limited datasets.
