So You Just Installed Your Sense and You Want to See If It's Really Working

You just installed your Sense and you are raring to see it start detecting devices in your house. The first question that many users ask while waiting for that first detection is “How do I convince myself that the Sense numbers are accurate (or figure out that I have a cheating utility company)?” This is a short primer on ways to double-check the basic numbers coming out of your Sense.

What can you do?

  • Just believe - Lots of other people have validated Sense’s accuracy in their particular installs via a variety of methods. There are plenty of examples on this forum.

  • Immediate spot check against your meter - This works well if you have a relatively constant load in your house at the time, which would mean that your Total Usage in the upper right corner of the Power Meter is fairly stable (and the same for Solar Production in the upper left corner if you have a Sense solar install). You can do this the second Sense is done calibrating after setup… One nice example of how to do this with a standard electric meter from @jkish here. I’ll share the same for how to do the spot check if you have solar and a net-meter as well, in a little bit.

  • If you can wait a little, like a month or so, you can check against your first full utility bill. If you are lucky, your bill / meter readings are on the same day every month, or even better, at the end of the last day of every month. That allows you to use the Sense built-in trend intervals. If not, you can still approximate by doing a little calculator work and summing individual daily usage together from the Sense daily Trends data. An example of how to do this will follow.

  • Do a detailed analysis - If you can’t do any of the above, or if you try one of the above and the two measurements aren’t correlating, then you’ll need to do a more detailed investigation. Fortunately, Sense plus most utilities give you the tools to do a deeper analysis and figure out where any differences might be coming from. I’ll spend a couple of posts on how to do this kind of analysis with Sense exported data and a spreadsheet or R.

More details on these different methods in the next few posts.


Immediate spot check against your meter

The nice thing about this approach is that you can do this pretty much any time, including seconds after your Sense has gotten past the setup steps, once you can see the Power Meter. @jkish nicely covers the case for a simple (non-net) meter here. I have a net-meter and solar, so I’m going to touch on how to do the same comparison below.

First thing to do for a solar install is to pop up your Sense Power Meter and capture the Solar Production and Total Consumption numbers, on the upper left and upper right. In my case, that’s 438W of Solar and 1242W of Consumption, or 804W of net usage. My Usage is fairly flat, but my Solar is drifting down so I have to catch my meter pretty quickly after this if the numbers are going to be close.

Next look at your net-meter. If it is anything like mine, it’s a smart meter that cycles through every couple of seconds showing different numbers, including a step where it shows the current power being delivered to your house or being returned to the grid. In this case, my meter is showing a net delivery of 809W.

Hmmm… 809W vs. 804W is pretty close, within 1% for that spot check.

If you can wait a little, like a month or so, you can check against your first full utility bill.

OK, this one requires some patience and it may not totally work out for you for reasons I’ll touch on at the end. Take a look at your first full utility bill after you have installed Sense. That could mean a worst case wait of almost two months. @samwooly1 has posted a few comparisons on the forum showing his good results. But there’s an extra step for us solar users.

Here’s my most recent NEMS (net metering) billing summary. I’ve been shaving energy usage since January, partially thanks to greater solar production, and netted 1420kWh of usage in the period that started on 3/23 and ended on 4/21.

That’s going to make it tricky for me to do an exact comparison with Sense, since the built-in intervals (Month or Bill) don’t exactly match, but I’ll do the best I can. First, I’m going to set the Billing Cycle Start in Settings to the 21st, so that I’m at least matching up on one end of my most recent billing cycle (4/21).

Then I’ll go to the Trends > Usage > Bill screen. If I touch on Usage, I can see my Usage for that Sense billing period, which doesn’t quite line up with my own.

Then I touch on Solar to get the solar production for the billing period.

That’s 2000.3kWh - 501.2kWh = 1499.1kWh. But that’s a little too big! Sense included two extra days, so I’m going to need to subtract them off. I can put my finger on each of those days and the “mouse-over” will give me the Usage and Solar for each day, like below.

So I can subtract off 3/21-3/22 as (55.0-22.3) + (59.7-23.1) = 69.3kWh. 1499.1kWh - 69.3kWh = 1429.8kWh vs. 1420kWh billed by my utility.

Once again, my total error/difference with Sense is within 1%, though this time Sense gives a slightly bigger number, in comparison with my earlier spot check.


This is beautiful, @kevin1. It pairs really nicely with the 10 Steps on Getting Started with Sense blog we wrote, which walks new users through their first couple of weeks and provides a rough estimate on devices detected.


Do a detailed analysis

Sometimes you don’t have any choice on this if you want to get to the bottom of significant differences after using one or both of the simpler checking techniques above. Detailed analysis presumes that you can pull daily, or better yet hourly, data from your utility, something most larger utilities enable.

The Ingredients and Tools
Here’s what you’ll need to investigate:

  • Sense Export - The web app in Sense lets you pull out hourly or daily data for a week, month or year for your own analysis. Export is hidden in the upper right hand corner of the Trends page, and works on the data period that you have selected in Trends. You can pull data for the entire calendar year to date with an hourly interval using two steps in the Sense web app (sorry, no iOS or Android app here).

  • Utility data download - My utility, PG&E, like most utilities, offers “Green Button Data” access, which allows you to download your energy usage information. Generally you will have the option to download your data for some period of time in whatever time interval your utility uses for billing via Green Button Data. My utility offers data for up to a 1 year period with a 15 minute sampling interval. And even if you don’t trust your utility, you have to treat their data as the golden reference for energy usage, since they are measuring via a revenue-certified device.

  • A spreadsheet, or better yet a programming language like R or Python, to manipulate the data from these two sources so that you can do direct hourly or daily comparisons. I tend to prefer R since it offers an incredible amount of automation and charting, but Excel will work fine as long as you are experienced with some of the more complex functions like merge and Pivot Tables.

The Process:
To do a detailed comparison, you’ll need to do four steps:

  1. Download the data from both sources and convert it to a common interval. My utility puts out the data in 15 minute intervals, but Sense’s minimum interval is an hour so I have to aggregate/sum my data into hourly data. In Excel there are nice ways to do this in a Pivot Table.

  2. Align and filter the Sense data - Sense puts out all the data for the house including all devices in the export. For comparison with utility data, we need to filter out everything except Total Usage, or in the case of solar, Usage minus Solar (net usage). You might also need to fill data at this stage if your utility or Sense has left out data for a specific hour or day. If you don’t fill in missing intervals, you might either encounter errors during the next step, or you might actually miss discrepancies where Sense or your utility did not measure data. I do a completeness check and fill with either 0 for missing hours or NA (Not Available) depending on what I am trying to do.

  3. Merge the comparison data by interval - I tend to merge hourly data since it gives more detail into where any mismatches occur, but there are also some advantages from merging at the daily level. Be careful not to ignore merge error messages, since you do want to double check for days/hours where either of the two power usage sources has missing data.

  4. Analyze and compare the two measurement sources on a daily or hourly basis - Time to have some fun. You now (hopefully) have data side-by-side for a range of time intervals, so you can use various techniques to figure out where the differences between the two are coming from. Here’s where we can use lots of esoteric statistical functions and a little sleuthing to find layers of error sources.

Download the data from both sources and convert it to a common interval

Step 1 - Export the data from Sense using the Export icon in the upper right corner of the Trends > Usage page. You can select any interval, but I like to compare at a yearly level (or more).

Step 2 - Once you initiate the download, you’ll be prompted for the data interval. I typically use hourly intervals for comparison, since that allows you to more precisely isolate the biggest mismatches.

Once you hit export, you’ll get a CSV (comma separated value) file that you can read into a spreadsheet or an analysis program in R or Python. Here’s a snippet of my downloaded CSV in Excel. There is usually an entry every hour for every device that is detected and using energy during that hour. In the case of a solar install, the only two that matter for the purposes of this exercise are Total Usage and Solar Production.

Step 3 - Do the same for your utility. My utility company has a Green Button on the web page that reports energy usage. See it down in the lower right.

Step 4 - Pick your time period and format on the page that comes up after hitting the Green Button. Here are the settings I use. I don’t have a choice of reporting interval.

Here’s what my utility’s CSV ends up looking like in Excel. Notice that it breaks power usage out into 15 minute intervals.

So the next couple steps are all about aggregating the utility data to hourly data.

Step 5 - Create a calculated field for every row that spells out the hour in text format, so that I have an hour value to aggregate over. Note from the formula that I’m using text rather than an actual date, since Excel treats dates differently and slightly annoyingly when aggregating.

Once I have added the hourly markers per row, I can create a Pivot Table to sum every hour.

Step 6 - Create a Pivot Table that summarizes the hourly data. Here’s how to set it up.
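
If you’d rather skip the spreadsheet gymnastics, here’s a minimal R sketch of the same 15-minute-to-hourly rollup. The file name, column names and timestamp format are assumptions - check them against your utility’s Green Button CSV header.

```r
library(dplyr)
library(lubridate)

# File and column names (DATE, START.TIME, USAGE) are assumptions --
# match them to your utility's Green Button CSV.
util <- read.csv("greenbutton_usage.csv", stringsAsFactors = FALSE)

util_hourly <- util %>%
  mutate(DateTime = parse_date_time(paste(DATE, START.TIME),
                                    orders = c("ymd HM", "mdy HM")),
         Hour     = floor_date(DateTime, "hour")) %>%   # truncate to the hour
  group_by(Hour) %>%
  summarize(UtilitykWh = sum(USAGE), .groups = "drop")  # four 15-minute rows per hour
```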

Align and filter the Sense data
Now it’s time to reformat the Sense data, using a Pivot Table as well, but mainly to filter and reshape it. But first I have to create another calculated field that is the text version of the DateTime again, for the same reason as earlier.

Step 7 - Add a “DateTime Text” column using the formula shown.

Once I have that extra column propagated down the entire spreadsheet, I can create and tune the Pivot Table. The goal this time is to end up with the hourly DateTime along with 3 other columns, Usage, Solar and Net Usage.

Step 8 - Create the Sense Pivot Table using the specification below. Please note that I’m filtering out all but two of the Names, Total Usage and Solar Production. The Grand Total on the right is magically the Net Usage.
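
And here’s the R equivalent of this filter-and-reshape step, again with assumed file and column names, and with the sign of Solar hedged since exports can differ:

```r
library(dplyr)
library(tidyr)
library(lubridate)

# File and column names are assumptions -- verify against your Sense export's header.
sense <- read.csv("sense_export.csv", stringsAsFactors = FALSE)

sense_hourly <- sense %>%
  filter(Name %in% c("Total Usage", "Solar Production")) %>%
  mutate(DateTime = ymd_hms(DateTime)) %>%   # timestamp format is an assumption
  pivot_wider(id_cols = DateTime, names_from = Name, values_from = kWh) %>%
  rename(Usage = `Total Usage`, Solar = `Solar Production`) %>%
  mutate(NetkWh = Usage + Solar)  # if Solar exports as negative kWh; use Usage - Solar if positive
```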

Merge the comparison data by interval

So now I have two spreadsheets with Pivot Tables that have all the formatted and calculated data I need for comparison, and it’s time to merge them. Unfortunately, Microsoft has let me down and pulled a feature that was in Excel 2016. They’re adding the merge/join (GET/TRANSFORM) back into Excel, but it’s not there yet for me, so I’m going to do a manual merge by copying and pasting the two together into a single spreadsheet.

Step 9 - Copy the Sense Pivot Table and paste it alongside the Pivot Table for my utility. Make sure the DateTimes match up. Fortunately, there were no missing hours in either list, so the manual merge was easy.
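
In R, the merge is a one-liner, assuming both tables carry timestamps parsed to the same type (as in my sketches above):

```r
# Outer join so hours missing from either source surface as NA rather than vanishing.
merged <- merge(sense_hourly, util_hourly,
                by.x = "DateTime", by.y = "Hour", all = TRUE)

# Rows with NA are hours one source measured and the other didn't -- inspect before dropping.
merged[!complete.cases(merged), ]
```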

Analyze and compare the two measurement sources on a daily or hourly basis.

Step 10 - The final step is to plot the data. It really takes two steps: copying the final table one more time to a spot right next to the aligned Pivot Tables, because you can’t chart a Pivot Table directly, then converting to an X-Y chart and tweaking. Here’s the result! A straight line with only a few scattered points. This means Sense was very accurate for a high percentage of the time. But how accurate? Sorry, but for that one I’m going to switch from Excel to R.

The x axis is my “golden” utility hourly measurement and the y axis is the Sense Net Usage measurement. They both go negative in the hours when my solar is outproducing my house usage.


Detailed Error Analysis

So now comes the detective work. There are really two questions here:

  1. How big / small a difference is really an “error”? When do we stop, given that there is always going to be some level of difference when we are using two different measuring devices?

  2. How do we track down the causes of the differences we deem errors?

Instead of answering these questions directly, I’m going to take a cue from how physics treats measurement errors.

  • Random errors - are errors that come from random factors during measurement. Generally these errors will have a normal bell curve distribution and cancel each other out over many measurements. We just have to accept these errors and characterize them statistically, putting a bound on them.

  • Systematic errors - are measurement errors that occur due to some factor that is related to a consistent difference in the measuring system, outside events (power outages, networking issues), or something that wears or changes over time or with temperature. These errors will show up as a bias or consistent difference that correlates with the outside conditions. Here we can relate the difference back to its cause and fix it, or at least compensate for it.

Our Analysis Tools

Key tools in our holster are below. You can do all these things in Excel, but I’m going to do the next bit in R because it’s easier and offers more flexibility.

  • Unity Plot - This is a great way to visualize the accuracy of your Sense. It’s basically just an x-y plot of your utility usage for a given hour/day vs. the Sense usage. If your Sense is working right, it should be a 45-degree line with very little divergence. Any point not on the 45-degree line needs to be explored to understand the source of error.

  • Linear regression / linear model - a mathematical fitting of the two values for the same time interval. Linear regression will produce a line that best fits all the points, including any bad data points that can skew the line a little. A linear regression will produce the slope of the line (hopefully close to 1.0), the y-intercept, plus one or two R^2 values that indicate the goodness of fit, where 1.0 is a perfect line with no outliers. You should have a 0.95+ R^2 value if Sense is doing its job right. The nice thing about most linear regression tools vs. a unity plot is that regression will also produce a residual/error measurement for each point, representing its deviation from the fitted line. Residuals give us the handle to find the biggest outlying data pairs that need investigation.

  • Error / Residual histograms - a difference histogram gives a better view of how much the difference between your utility and Sense varies, or between your fitted line and the Sense data. You can see whether the difference is centered around zero or offset, which would indicate that one of the two measuring devices consistently reads higher than the other. You’ll also be able to see whether you have a typical normal distribution that is balanced, or one that is skewed. In one interesting case, @Dcdyer saw a split distribution that highlighted a change in his CT setup over time that made it more accurate. A histogram will also reveal the extent of outliers, and allow you to adjust analysis accordingly.

  • Difference timeline - If you want to look for changes in accuracy over time, it’s useful to chart the difference between the two measurements over time. @Dcdyer did a good job of it here and spotted the same difference he saw in his histogram. If you have a long time history (more than two weeks), I would recommend charting daily differences rather than hourly for a variety of reasons.

  • Statistical difference analysis - If you really want to get to the bottom of the causes of errors / discrepancies between your utility and Sense, you’re going to need to investigate various relationships and look for patterns or links. And you’ll likely need to peel the onion - when you find one error source that affects certain data points, you’ll need to flag the errors, then fix or remove those points to find other errors that might have been masked by the first set of errors. So we’ll be looking at changes in error distributions over time and even temperature (if I had been measuring temperature in my service closet).


The Unity Plot

A Unity plot is a great way to get an initial read on accuracy - it’s fast, intuitive and doesn’t require a lot of math to sort out. And the big mismatches are easy to see. Here’s the same plot as I did in Excel done in R/ggplot, with one minor enhancement. Each month is in a different color:
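
For anyone who wants to reproduce it, here’s a minimal ggplot2 sketch, assuming the merged table and column names from my earlier data-prep sketches:

```r
library(ggplot2)

ggplot(merged, aes(x = UtilitykWh, y = NetkWh,
                   color = format(DateTime, "%Y-%m"))) +
  geom_point(alpha = 0.5) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed") +  # the 45-degree unity line
  labs(x = "Utility net kWh (hourly)", y = "Sense net kWh (hourly)", color = "Month")
```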

First off, it’s clear that the data line is very close to 45 degrees. That means that comparable hourly measurements between my utility meter and Sense are very close to 1:1. Second, most of the points are very close to that line, making it quite easy to spot the two biggest data errors that occurred in May (magenta). In the point on the left, my utility says I was putting energy out on the grid for that hour, but Sense is reporting that I was consuming from the grid. In the second point, on the right of the unity line, Sense says that I am using less energy from the grid for that hour than my utility had metered. So anything below the line is a place where Sense is under-reporting, and anything above the line is a place where Sense is over-reporting. So lots of good news in this plot - Sense and my utility look very close most of the time. The challenge is that the unity plot doesn’t help diagnose issues beyond a few obvious bad points (what’s the next worst point?), and it doesn’t let me quantify the error beyond those two points.

Above is an embellished unity plot. Using linear regression (I jumped ahead to the next section), I have colored the 30 points with the largest residuals (divergences from the fitted line), as well as adding the fitted line itself (in blue). Adding them to the unity plot doesn’t directly help quantify anything, but it does show, quite intuitively, the value of linear regression.

The Benefits of Linear Regression, plus the detailed ins and outs of a linear model

So I have now just run linear regression on my Sense Net Usage vs utility Net data. I’m going to highlight a few key items that come out of the summary of the model.
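
For reference, the model is a single call in R. A minimal sketch, with column names following my earlier data-prep sketches:

```r
# na.exclude keeps the residual vector aligned with the rows of 'merged'.
fit <- lm(NetkWh ~ UtilitykWh, data = merged, na.action = na.exclude)
summary(fit)                    # slope, intercept, significance codes, R-squared, residual spread
merged$Residual <- resid(fit)   # per-hour residuals for the outlier hunt below
```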

Most important, in red, are the slope and intercept of the fitted line. This tells me that my hourly Sense Net reading is roughly my Utility Reading * 1.0012641 - 0.0017405kWh. There’s no way we could have picked up that small slope variance from 1.000 from looking at a graph. Let me parse that equation in more detail:

  • roughly, means subject to some variation spread that we’ll talk about in a minute
  • the slope of 1.0012641 means that there is a small built-in systematic error of 1.2641Wh for every kWh measured on my utility’s meter. When I’m charging my car at 20kW, Sense is accumulating an extra 25W. But that’s the best fit slope with a couple of really bad measurement points included. It might get better once we account for those. BTW - the 3 asterisks on the right at the end of the “slope” row mean that the slope estimate is VERY statistically significant. More on that later.
  • The intercept means that there is also a small fixed error of about -1.7Wh on the Sense reading. But that estimate is not showing as statistically significant yet.

The second thing to pay attention to is the R-squared measures in blue. I don’t want to go into mathematical details, but an R-squared of 1 is essentially a perfect fit of the data to a line. So 0.9998 is very good.

Finally, take a look at the spread of Residuals, or the deviations from the fitted line, in green. This is a statistical view of the “errors” in kWh (OK, not really errors since they represent deviation from the fitted line, not from my actual utility data, but close enough). We can see the Min and the Max, which correspond to the two worst points we saw in the unity plots. The Max comes from the point on the left of the unity line and the Min comes from the point on the right. If you are familiar with statistics, you’ll notice that the vast majority of the “errors” are tiny, with a few huge outliers. For those who can’t picture it, here’s a view of the spread of the “errors”.

You can see how big those two outliers are since they stretch the graph way to the left and right! If I zoom in on just the space between -0.2 and 0.2, I can get a view of the real distribution of “errors” without those outliers.

There are precisely 2 datapoints with “errors” greater than +/- 0.2 (or 200Wh) and 13 with “errors” greater than +/- 0.1 (or 100Wh). I’m guessing I should look closely at all the data points with residuals greater than +/- 0.1 to see what’s making them outliers.
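
Here’s a minimal sketch of that zoomed histogram and the outlier counts, reusing the residuals saved earlier:

```r
# Zoom to the central band; the two huge outliers would otherwise stretch the x axis.
hist(merged$Residual, breaks = 500, xlim = c(-0.2, 0.2),
     main = "Sense vs. utility hourly residuals", xlab = "Residual (kWh)")

sum(abs(merged$Residual) > 0.2, na.rm = TRUE)   # the 2 biggest outliers
sum(abs(merged$Residual) > 0.1, na.rm = TRUE)   # the 13 points worth investigating
```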

I’m going to show you two more plots that come automatically out of the linear regression analysis in R, and that look at the residuals/“errors” in two more useful ways. The first compares each residual vs. the corresponding fitted-line value, to see if residuals scale with the size of the fitted value. The second essentially looks at the number of standard deviations each residual is away from the distribution vs. the fitted value. Both tell us that there are 2 big outliers and maybe another 10 smaller ones. The graphs automatically label the 3 largest ones with their index numbers. Two, 3012 and 3013, represent hours adjacent to each other in time.
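
Both plots come straight from base R’s plot method for lm objects, which labels the 3 most extreme points with their row indices by default:

```r
plot(fit, which = 1)  # residuals vs. fitted values
plot(fit, which = 3)  # scale-location: sqrt(|standardized residuals|) vs. fitted values
```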

Note that the residuals/“errors” do not vary based on the size of the measurement.

Next step is to look at the 13 biggest residuals/“errors” and see if I can track the root causes.


Tracking Down the Biggest Errors (Residuals)

Here’s the table of the 13 hours where the residuals (errors vs. the fitted line) were greater than +/- 100Wh. I’m viewing them in date order since a couple of them seem to have correlation in time. You’ll also notice that I have included the AbsDiff (Absolute Difference) between my hourly utility reading and my Sense reading, and that it closely approximates the corresponding residual, except for the sign (I did the subtraction in the opposite order). If I regard my utility reading as golden, this AbsDiff is really the true error, so it’s good to see that they are close.
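
A sketch of how that table falls out of the merged data and residuals from earlier:

```r
worst <- subset(merged, abs(Residual) > 0.1)      # residual magnitude above 100Wh
worst$AbsDiff <- worst$UtilitykWh - worst$NetkWh  # subtraction in the opposite order of the residual
worst[order(worst$DateTime), ]                    # view in date order
```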

I’m first going to look at all three of the hours on May 5th, since I actually know what caused those errors - my electrician was working in my main panel and turned off the power several times. Here is the Sense Power Meter for those 3 hours - definitely the cause of error, with big dropouts and inversions everywhere! But the Sense monitor started functioning normally once he was done.

If I look at the waveforms, two more of the biggest residuals in the list also have gaps that I can’t explain. Those are cases where my network or the Sense monitor might have had a problem. You’ll also notice something else about the second waveform. After a few months, Sense reduces the resolution of the saved data available from the web app, hence the less detailed lines, presumably to save storage and AWS costs.

The remaining 8 worst residuals seem to stem from another cause. The common thread between all of those hours is that they are all during the charging of our Model S, thus during very big changes in power usage and high currents (20kW @ 80A at max). Intriguingly, none of these hours are ones where the car is charging for the entire hour, but rather hours where charging is either starting or completing. The residuals are positive (Sense greater than the fitted line) when the EV charge is ramping up and negative (Sense less than the fitted line) when the EV charging ramps down.

You’ll also notice that Feb 19th is the breakpoint for the reduction in webapp time resolution. Feb 20th has half second resolution while Feb 19th is at 1 minute resolution, so Sense must keep around hi-res waveforms for about 3 months.

More Regression - Improving Accuracy
Now that I know the causes of the largest hourly errors, I can do a few more regressions with the biggest error points discarded. Just a reminder - here’s the summary of the original model:

If I remove just the three hours when my electrician was working on my main breaker box, most of the results improve.
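
A sketch of the refit, where electrician_hours is a hypothetical vector holding the three affected timestamps:

```r
# 'electrician_hours' is a hypothetical vector of the three affected DateTimes.
clean <- subset(merged, !(DateTime %in% electrician_hours))
fit2  <- lm(NetkWh ~ UtilitykWh, data = clean, na.action = na.exclude)
summary(fit2)   # slope, intercept and R-squared with the known-bad hours removed
```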

The R-squared essentially becomes 1, indicating a perfect fit! The spread of the residuals drops (no surprise, since we removed the two points with the largest residuals). And the intercept value is now statistically significant, even though it has increased in magnitude. So I can now definitively say that my Sense reading = (1.0015774 * utility reading) - 0.0030047 kWh, within a tight error band.

And the unity plot looks great, as expected.

Now time to look at what is happening with EV charging.

Analysis of the next bunch of errors

Now it’s time to get a little bit anal-retentive. I’m going to see what I can do to classify the next bunch of errors/residuals - all 43 of the situations where the magnitude is greater than 50Wh (0.05kWh). To classify, I’m looking at the Sense waveforms for that hour and trying to capture what’s happening in a couple of ways. Thanks to a little automation, I can capture the max/peak power for that hour, plus I can qualitatively characterize what’s happening. Beyond the Electrician, the other 4 “causes” are below:

  • Gap - A data gap, likely caused by the Sense monitor or my network. Here’s the example of a gap that caused the largest residual. If I look in greater detail at that 6 second gap at 8010W, it looks like it trims 13-14Wh off of the hourly result.

  • Ramp-Down - A large ramp-down from high power usage to much lower power usage, most of them related to the finish of an EV charging cycle. Strangely, I’m not seeing any high-residual errors when either EV is steadily charging - just when the power is changing rapidly. Here’s the biggest residual ramp-down. Notice that it’s definitely an EV charge given the max power level. There’s a chance that there might be a ramp-up in this one as well, making it an “up-down”.

  • Ramp-up - Opposite of a ramp-down, a situation where usage ramps up massively, typically at the start of an EV charge cycle. Here’s the largest residual case of a ramp-up. Once again, the max power tells us that it is the start of an EV charge cycle.

  • Up-down - There are some high-residual situations that seem to occur when there is a combination of significant up and down power cycles over an hour, some related to EV charging, others potentially not. Here’s the largest residual up-down. I’m thinking that these cycles might be more related to my floor heating elements, but I’m not sure.

After looking at them all and characterizing them by “Hourly Max” and “Cause”, I now have a table that I can chart, in this case arranged from smallest residual to largest.

Charting the residuals against max hourly power and “cause”, I get a better picture of what might be going on with the next 40 errors.

Pretty much all hourly maxes above 15,000W involve EV charging. But as I mentioned earlier, not a single large residual involves a full hour of steady EV charging - they always involve ramping up or down. So the whole rightmost region is the EV ramp zone. That still leaves some more mysterious smaller residuals that seem to stem from up/down power swings during an hour.

This isn’t to scale, but it’s a reminder of how few points exist outside of the “OK Residuals” zone (which is slightly larger than 3 std. deviations: 0.5 vs. 0.42).

Now the interesting question - why would significant up-ramps or down-ramps cause greater differences between Sense and my utility meter than long-term steady high power usage?


Error Biases in Each Measurement
One more thing I can look at. Since my Sense net power measurement is really the result of 5 separate measurements, 1 voltage and 4 currents (2 legs x mains plus 2 legs x solar), I can try to look for variability in residuals with respect to (nearly) individual measurements. The voltage is the same for all, so I can ignore that. I don’t have separate data for each leg, so I’m going to have to look at the combined legs for both solar and net usage. Solar is easy - I just look at the Residuals vs. SenseSolar where Solar is > 0. I want to isolate net usage (vs. Total Usage), because net usage is what my actual CTs are measuring. I’m going to remove any data where the net CT data is mixed with solar data by only looking at data where solar > -0.005 (my inverter eats 2-3W of power at night).
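
In R terms, the two subsets look roughly like the sketch below; the Solar column name, the threshold, and the sign conventions are assumptions, so flip the comparisons to match your export:

```r
# Minimal sketch of the two subsets; thresholds and signs are assumptions.
solar_hours <- subset(merged, abs(Solar) > 0.005)   # real solar production: solar CTs active
mains_hours <- subset(merged, abs(Solar) <= 0.005)  # inverter idle (2-3Wh/hour): mains CTs only
```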

Here’s how the residual changes with varying solar measurements. It has a slightly negative slope when treating solar generation as negative power. Or a slightly positive slope when we consider solar generation as positive.

And if I only look at Sense usage measurements where the solar is effectively off, and I’m only getting data from the mains CTs, I can see a slightly positive slope as well.

Now that I have the measurements separated out into only-solar and only-net CTs, I can try to look for any drift over time. Here’s a view of the solar-only residuals over time - no measurable drift, though there seem to be sporadic bursts of higher residuals over time.

Here’s the same for only-net CT measurements. Same situation - no drift over time, just sporadic bursts of higher residuals.

I’m going to throw in one more way to look at accuracy over time, a jitter plot of “errors”/residuals for each month. Looking at it today, my Sense seems to have been getting better and better at reducing the outliers since the start of the year. I’ll have to do another plot like this when I have the full data for the month of May.
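
A minimal ggplot2 sketch of that jitter plot, reusing the merged table and residuals from earlier:

```r
library(ggplot2)

ggplot(merged, aes(x = format(DateTime, "%Y-%m"), y = Residual)) +
  geom_jitter(width = 0.2, alpha = 0.4) +
  labs(x = "Month", y = "Residual (kWh)")
```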


Inside My SmartMeter (Almost)

Still sleuthing where discrepancies between Sense and my utility meter might come from, I hit upon a new tack this weekend. I used to be able to pull raw data from my meter directly via my Rainforest Automation Eagle, but Rainforest has essentially abandoned downloads and high-quality waveform viewing on the “Legacy (4 LED)” Eagle model, which convinced me to look for alternatives other than buying the new model, which has improved software. Instead, I discovered I could “pipe” the raw data flowing to my Eagle through to a service called WattVision. It only keeps around a week’s worth of raw data for free, but that’s all I really need right now.

Looking at the raw data from the meter via WattVision, I’m seeing between 397 and 445 data samples per hour. At the max, that’s a sample from the meter every 8 seconds. I’m not sure why there is so much variability. I’m going to do a deep dive into one hour of the past 32 that has one of the largest discrepancies between what I think my utility will sum the data to, vs. what Sense observes. I say “will sum” because my utility download data lags by 1-2 days, while my Eagle/WattVision combo delivers the data almost immediately. Here’s the hour (incomplete) - June 6th at 1PM.

You can see the meter keeps about an 8 second cadence, though there’s already a gap 10-11 secs. in, and there are probably another 17 gaps because this hour only has 427 samples. If I chart the raw data it looks like this:

WattVision abstracts the hourly data somewhat, when zooming in on the daily plot.

And here’s the same in Sense, though Sense is showing Total Usage rather than net usage (thankfully the solar is very consistent, so we can do a reasonable waveform comparison).

How does the raw data from the meter get rolled up into hourly data? If the data is uniformly sampled, then one can just sum all the power samples and divide by the number of samples. But if there are gaps and the sampling is non-uniform, the calculations are harder, plus one has to make some assumptions about the cause of each gap. Was the gap a missing point, or one that didn’t need to be transmitted? And if a data point was really lost, how does one compensate?
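
One defensible rollup for the non-uniform case is a time-weighted (trapezoidal) average. A minimal sketch, assuming vectors of sample times and power readings for the hour:

```r
# Time-weighted (trapezoidal) rollup of non-uniform power samples into one hourly kWh value.
# ts: POSIXct sample times within the hour; watts: the power reading at each sample.
hourly_kwh <- function(ts, watts) {
  dt  <- as.numeric(diff(ts), units = "secs")     # seconds between adjacent samples
  avg <- (head(watts, -1) + tail(watts, -1)) / 2  # average power across each interval
  sum(avg * dt) / 3.6e6                           # watt-seconds -> kWh
}
```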

