Some Data Science of “On”, “Off” and “Idle”

Numerous visitors to this forum ask why it is so hard for Sense to “see” an on or off pattern for a device that is clearly visible to them in the Power Meter. Or why Sense has trouble figuring out when a smartplug device is in an “Off” state, vs. an “Idle” state, vs. an “On/Active” state. The gist of the issue is that detecting patterns in the midst of noise and other patterns is a hard problem, and the human brain and vision system are a very sophisticated piece of biologic hardware built for flexible recognition. Your eyes and brain use a lot of learning history and incredible processing to come up with a detection.

Ever since the smartplug integration was released, I’ve been trying to dial in on just the simple problem of what ‘Off’, ‘Idle’, and ‘On/Active’ really mean, by looking at my smartplug data. How hard could it be to classify the power usage into a few simple states? Let’s look at a few examples, starting with the simplest first. Along the way I’ll highlight a few key points that make even this simple classification problem harder than it seems.

Here’s my hot water recirculation pump via an HS110, which has a sampling resolution of about one second (vs. Sense’s 1 microsecond sample time). Most of the time it’s running at around 45W. I turn it off via a timer from midnight to 6AM, but the timer continues to use about 1W. Plus, occasionally there are dropouts where the chart goes to zero even though the pump is really still running. I don’t know exactly how Sense treats those dropouts internally, as zeros or as not available (NA). I do know that if a dropout lasts over an entire clock hour, Sense export will leave that hour out of the exported .csv, at least in the current web app. From a one second resolution chart, there seems to be a very clear on/off or idle/on behavior (is 1W idle or on?).

Here’s a short view of some of the data from the Sense hourly energy export. I use the term energy because Sense outputs the energy consumed during each hour. The NAs mean that Sense export did not provide data for that hour. Plus it looks like Sense zeroed out some of the data on either side of the NAs where dropout also occurred, since we see hourly values between 1Wh and 45Wh. So a few energy datapoints will lie between off/idle and on, even though the operation of the pump is digital: either Idle (1Wh) or On (~45Wh).
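For anyone who wants to poke at their own export, here’s a minimal R sketch of that first look, assuming the export was saved as sense_export.csv with a DateTime column and one kWh column per device (the file and column names are hypothetical - adjust to match your own export):

```r
# Read the hourly Sense export (hypothetical file and column names)
recirc <- read.csv("sense_export.csv", stringsAsFactors = FALSE)
recirc$DateTime <- as.POSIXct(recirc$DateTime, format = "%Y-%m-%d %H:%M:%S")

# How many hours are missing (NA) for the recirc pump column?
sum(is.na(recirc$Recirc.Pump))

# How many hours fall strictly between the Idle (~0.001 kWh) and On (~0.045 kWh) levels?
sum(recirc$Recirc.Pump > 0.001 & recirc$Recirc.Pump < 0.045, na.rm = TRUE)
```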

Just from this you can see two tricky things about analyzing the data:

1) Selecting the best sampling time resolution is critical to “seeing” the best results. Data values are crisp and clean here between hours because my timer on/off cycle is a multiple of the sampling resolution, 1 hour. But if my timer switched on the half hour, we would see more frequent in-between values. And as we’ll see later, time samples should be somewhat smaller than the runtimes of the different power modes to get the best results, but microseconds is probably too small.

2) Whatever analysis we do, it has to be robust enough to deal with missing NA data. Some types of time series analysis also require “complete” data, a value for every hour. In that case, we would need to pad the missing data, hopefully with good representative values.
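Here’s one simple way to do that padding in R, using the zoo package, assuming df is the hourly data frame and kWh is the device column (both names are placeholders). Carrying the last good value forward, or substituting the column median, are both rough approximations rather than ground truth:

```r
library(zoo)

# Repeat the last observed value forward across NA gaps
df$kWh_filled <- na.locf(df$kWh, na.rm = FALSE)

# Any leading NAs left over get a representative value (the column median)
df$kWh_filled[is.na(df$kWh_filled)] <- median(df$kWh, na.rm = TRUE)
```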

To get a different, more useful view on the power usage behavior, I created an energy histogram, with default bin sizes for the different power levels to see if there are discernible clusters of power usage that I could call Off, On or Idle. I could see two immediate issues with the histogram. 1) The gaps in the histogram around the biggest spike indicate a bin size problem. And 2) The histogram needs some distribution-oriented smoothing to make the clustering easier to work with.

Fortunately, histograms have a companion density analysis function/plot that smooths out the distribution based on a selected algorithm. Here I have chosen a density plot using the default gaussian kernel (blue line). I then annotated the plot with the 2 largest local maximums (green triangles - peaks) plus their 2 adjacent local minimums (red triangles - dips), to hopefully come up with a numeric way to find the energy/power thresholds for ‘Off’ vs. ‘Idle’ vs. ‘On/Active’ for each device.
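For the curious, here’s a rough R sketch of that peak/dip annotation - my own approximation of the approach described above, not Sense’s internals - where x is the vector of hourly kWh values for one device:

```r
# Smooth the hourly energy values with the default gaussian kernel
d <- density(x, na.rm = TRUE)

# Local maxima: points higher than both neighbors; local minima: lower than both
peaks <- which(diff(sign(diff(d$y))) == -2) + 1
dips  <- which(diff(sign(diff(d$y))) ==  2) + 1

plot(d, main = "Hourly energy density")
points(d$x[peaks], d$y[peaks], pch = 24, col = "darkgreen", bg = "green")  # peaks
points(d$x[dips],  d$y[dips],  pch = 25, col = "darkred",   bg = "red")    # dips
```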

We still have an issue with the histogram gaps. It doesn’t look like much of a problem for this data set, since the clusters appear obvious and the density curve looks like it has done an OK job. But I encountered a bigger problem with the data sets for some of my other devices, like the outlet strip in my master bedroom closet that includes an access point, a backup NAS, an AppleTV, a Tivo and a Zigbee bridge. Here, the bin size and the associated histogram gaps wreaked havoc with the density smoothing, overfitting the poorly-binned data and leading to false local minimums and maximums.

The fix is to adjust the size of the bins to a multiple of the minimum energy resolution of the Sense export data, 0.001kWh, or 1Wh. Sense may actually save additional accuracy back in the cloud, but the point is that looking at the data with the right power/energy resolution “lens” is critical to seeing clear results.
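The re-binning itself is a one-liner in R; this sketch assumes x is the hourly kWh vector and simply forces the bin boundaries onto 0.001kWh (1Wh) multiples:

```r
binsize <- 0.001   # 1 Wh, the minimum energy resolution of the hourly export
breaks  <- seq(0, max(x, na.rm = TRUE) + binsize, by = binsize)
hist(x, breaks = breaks, main = "Hourly energy histogram", xlab = "kWh")
```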

Here’s the recirc pump with resized bins. Note the lack of gaps and the crisper density curve.
Verdict - For the recirc pump, there is a clear 20-30W breakpoint that separates “Idle” from “On”, with a very digital behavior. I choose to say “idle” because there is measurable power going to the timer, and in my mind, “Off” really should correlate with 0W, and/or the smart plug actually being turned off.

Revisiting the master bedroom closet power strip data, even better news. The histogram is no longer gap-toothed and the density curve is close to continuous, with reasonable local minimums and maximums. Looks like the binning adjustment worked.

3) Just like with time resolution, data analysis also requires tuning based on the amplitude (power/energy) resolution of the data for best results. And a corollary - mixing data of different resolutions can be more treacherous for data analysis than one might think.

According to my numerical analysis approach (above), for the master bedroom closet power strip, there’s one energy mode at around 37Wh, with a much smaller one possibly at 12Wh. But is that 12Wh a real mode, or is it an artifact? To determine that, let’s take a peek at the Power Meter view of the same data at 1 sec resolution. Perusing the waveforms, I don’t see any 12Wh datapoints at all.

When I go back to my hourly data to find the values around 12Wh, I see this:

Looking at the Power Meter for 2018-11-21 06:00:00, I find a partial-hour data dropout. Nosing around the in-between values a little more, I find that all of them are a result of hours that include partial data dropout.
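A quick way to pull up those suspect hours for inspection, again assuming hypothetical DateTime and kWh column names in the hourly data frame df:

```r
# List every hour whose energy falls in the in-between range around 12 Wh,
# well below the ~37 Wh primary mode and above the near-zero dropout hours
subset(df, !is.na(kWh) & kWh > 0.005 & kWh < 0.020, select = c(DateTime, kWh))
```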

So, no, the second local maximum is not a true power mode.
Verdict - this power strip only has one power mode, “On”.

It looks like I will need to strip out local maximums that don’t rise to some threshold level of density. I’ll figure out that threshold, and deal with other challenges in my next installment.


You had me at

and the human brain and vision system are a very sophisticated piece of biologic hardware built for flexible recognition. Your eyes and brain use a lot of learning history and incredible processing to come up with a detection.

I worked in the metrology field programming coordinate measuring machines. I used a similar analysis of a parent, a toddler, and a bouncing puppy positioned in a moving and changing triangle: the parent pointing in the direction of the moving puppy and giving triangulated instructions to the child. Try to code that. Or, from the co-discoverer of calculus: “Music is the pleasure the human soul experiences from counting without being aware it is counting.” - Gottfried Wilhelm von Leibniz. The eyes, ears, and brain are really sophisticated.

This post should be required reading for all newcomers. Thank You Kevin for a well articulated explanation. Unfortunately, it will be glossed over by many as we grow to an 800-word informational society, but hey, it is at least presented as the first installment.


Agree, thanks for taking the time to put this in-depth report together. Great work.

I think this data analysis issue would be made easier, and less net work would be done overall, if the input handler for the smartplug data collector were cleaned up and could do better error detection/correction. As these are all wifi devices, they will be subject to some amount of packet loss and unavailability as a normal part of their operation. I assume these smartplugs are sending a stream of UDP packets to the monitor using a fixed packetization interval, avoiding the overhead of TCP, but removing the guaranteed transmission feature. Dealing with this architecture in a realtime environment is a problem that’s been solved (for the most part) in VoIP systems, where adaptive jitter buffers, packet loss concealment algorithms, reordering mechanisms, etc. handle the effects of a lossy network environment.

It could be as simple as what many personal weather stations do when some realtime data is lost: just repeat the last value for a while.


This is great, Kevin. I’ll pass along to the team.

I meant to tell you the other day how amazing metrology is … I learned about it in my first job in college, working on semiconductor manufacturing processes. I used amazing machines every day that would measure thin film thicknesses, line widths and alignment between features, with nary a thought, at the time, about what went on behind the scenes (until a measurement didn’t come up as expected). Incredible applied physics, made easy by the metrologists.

Just a caveat before I start “Installment 2” on this topic. I don’t profess to know how Sense actually does “Idle” vs. “On” detection, nor do I have access to the fine-grained data they have to work with. But I want to share how one might do an assessment with the export data available.

Installment 2 - Data Science of “On”, “Off” and “Idle”

As I mentioned in the previous installment, the next important step is to figure out a density level for the second biggest maximum, or secondary mode if one exists, that indicates a true data mode, while filtering out peaks caused by data dropouts. Two observations here to help narrow the possibilities:

First, one only needs to worry about filtering out dropout noise that exists between 0.000kWh (Off) and the primary mode (the biggest one), because data dropout noise is subtractive - it lowers data values that should be attributed to the primary mode, but it cannot increase data values. So for now, I’ll look at data points above the main mode as legitimate data, but filter secondary modes that lie between 0.000kWh and the primary mode using some chosen threshold amplitude.

Second, with the current export resolution, I can’t do a census of the number of data dropouts. I would need a resolution of a second or smaller to detect all of them, so I need to find a proxy for how frequently they occur. Fortunately, the nicely digital nature of the recirc pump energy usage gives me a good proxy via the frequency of in-between hours vs. the total number of hours. Summing the numbers, I see 49 in-between values out of 618 samples, giving about an 8% incidence of dropout noise errors for the recirc pump. But the recirc pump is only on 3/4 of the time, so I have to multiply by 4/3 to get the worst case, or about 10.5%. Assuming that all HS110s see about the same % dropout, and that there are no other huge causes of bad data, a 12% threshold for the second mode density should be enough to filter out dropout-related modes, though we may accidentally lose a small real secondary mode or two in the process.
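Spelled out as a little R arithmetic, using the recirc pump numbers above:

```r
in_between <- 49                   # hours with values between Idle and On
total      <- 618                  # total exported hours for the pump
raw_rate   <- in_between / total   # ~0.079, about an 8% dropout incidence
worst_case <- raw_rate * 4/3       # pump is only On ~3/4 of the time -> ~10.5%
threshold  <- 0.12                 # chosen density threshold for a "real" secondary mode
```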

So with this new filtering criteria, let me look at a few more devices on smartplugs.

Sonos Amplifiers
Here’s what the 1 sec resolution power waveform for our two Sonos Connect:AMPs looks like. Combined, they draw a steady 13W most of the time, with spikes when the music is turned on in the rooms they supply.

In the histogram, the primary mode is as expected, around 13Wh, with a small secondary mode between zero and the main mode, which again is likely attributable to dropout noise.

If I look in the hourly data, I see 3 datapoints past the local minimum to the right of the primary mode (the red triangle just left of 0.020kWh). Those datapoints correspond to the 3 most recent periods I had the speakers turned on, mostly to see if I could force Sense’s algorithm to see both Idle and On states. The three datapoints I “detected” just barely met my criteria for being beyond the primary mode. Sense still hasn’t identified a separate Idle vs. On for the Sonos yet, possibly because I haven’t left it on long enough with the music cranked.

Washing Machine
Here’s the waveform for our washing machine with a time resolution of 1 second. It’s mostly Idle, at around 2W, 98% of the time, but with a very broad range of power usage when it is running.

And here is the associated energy histogram. My algorithm suggests that 20Wh might be a good breakpoint between Idle and On. Please note that the wash cycle runs about 1 1/2 hours, so the energy used during a cycle will be spread over 2-3 exported hours. The 20Wh breakpoint probably represents 5 minutes or so of a wash cycle falling into an exported hour.

If I look at the hourly data, 20Wh seems like a good breakpoint for the Idle to On transition. I see 78 hours greater than 20Wh out of a total of 969 hours sampled, or On about 8% of the time. That jibes with a quick visual scan of the Washing Machine Power Meter over the time period the smartplug has been installed.
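The same check in R, assuming the washing machine hours live in a hypothetical df$Washer column of kWh values:

```r
on_hours    <- sum(df$Washer > 0.020, na.rm = TRUE)   # hours above the 20 Wh breakpoint (78 in my data)
total_hours <- sum(!is.na(df$Washer))                 # hours with data (969 in my data)
on_hours / total_hours                                # ~0.08, On about 8% of the time
```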

In the next installment I’m going to look at my furnaces, but they will require a little additional background because the furnace fans for each were both previously identified by Sense.

Installment 3 - On and Off History for Furnaces

When I started looking at the energy histograms for my furnaces, especially my upstairs furnace, I noticed something a little funky. There seemed to be a few spikes on the low end of the graph, indicating multiple energy modes that all deserved attention.

[FurnaceUp energy histogram]

But before I took the histogram at face value, I realized I would have to reckon with another challenge in defining energy modes - resolution mismatch. I remembered that my furnace run cycles were short, never longer than 10 minutes, most only about 5. And they come in two flavors, 1) blower-only, for AC and fan modes, and 2) full furnace mode. I have my Ecobees programmed to run the blower for at least 5 minutes every hour to mix air in the house, unless a heating or AC cycle has already done 5 minutes worth of circulation. Here’s a view of typical operation through both heating and hourly blowing cycles.

So in reality, I have a complex mix of roughly 5 minute cycles being energy-summed on an hourly basis. Plus I’m missing the 6-7 Wh (0.007kWh) idle level that both furnaces use in between blower cycles, because I’m sampling at far too large a time interval.

Once again, selecting the best sampling time resolution is critical to “seeing” the best results. And I’m not able to see the real story of what’s actually happening with my furnaces at the one hour export resolution.

I initially surmised that one spike might be the minimum, a single blower cycle per hour, while the next spike represented maybe a double. But I was wrong.

When I tried to validate this hypothesis in the hourly data for the upstairs furnace, I encountered a different pattern. The baseline hourly energy usage (one 5-min fan cycle per hour) for the furnace ran at 0.021kWh until mid-day on Nov 12th, then jumped to 0.038kWh. This seemed very strange until I was reminded that I only installed the smartplug on that furnace on the 12th. I had thought export was only producing the smartplug data, but in reality, export was presenting all the pre-existing data for previously identified devices, in addition to the subsequent smartplug data.

Plotting the number of “smartplug” devices exporting data each hour, over time, gave me a better appreciation for what I was seeing. This graph shows my current situation with 14 HS110s installed, one of which is turned off at the smartplug most of the time. 4 of the smartplugs are installed on previously identified devices, including both of the furnaces, which were discovered via the signature of the blower fan alone. Those 4 previously identified devices show up in the timeline before any of the smartplugs were installed (pre-existing data). They also contribute to the uncertainty in this graph, since identified devices only deliver data during hours when Sense sees them as active; Sense cannot see the continuous, little-changing “idle” power of these devices. The graph only settles (barring a few dropouts) when the last smartplug gets installed in late Dec., and all the devices provide a continuous data stream via the smartplugs for every hour.
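The device-count plot is easy to reproduce from the export; this sketch assumes df is the hourly export data frame with a DateTime column plus one kWh column per device (again, names are placeholders):

```r
# Count how many device columns report a (non-NA) value for each hour
device_cols  <- setdiff(names(df), "DateTime")
df$reporting <- rowSums(!is.na(df[, device_cols]))

plot(df$DateTime, df$reporting, type = "s",
     xlab = "Hour", ylab = "Devices reporting",
     main = "Devices exporting data per hour")
```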

I also spotted a little surprise on Nov 4th - my smartplug count doubled for exactly one hour. Turns out that is the extra hour from Daylight Saving Time, when we fall back. Real data, but under corner-case conditions.

4) Another analysis pointer - know your data and the changing conditions under which it was collected. More specifically, mixing data collected under two different conditions in a histogram gives misleading results.

So it appears that I should strip off all the data collected prior to the smartplug install for all four of the devices that were detected earlier. If I were really methodical, I might briefly “unmerge” all four smartplug devices from their pre-existing devices, then export to separate them. But instead, I’m going to take the quick and dirty route and strip the pre-existing furnace data based on time.
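The quick and dirty time filter looks something like this in R, with a hypothetical FurnaceUp column and an illustrative install timestamp - substitute your actual install time:

```r
# Keep only the hours after the upstairs furnace smartplug went in (mid-day Nov 12)
install_time  <- as.POSIXct("2018-11-12 12:00:00")
furnace_up_sp <- subset(df, DateTime >= install_time, select = c(DateTime, FurnaceUp))
```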

Here’s the original upstairs furnace histogram (again), including legacy data. And below it, the new, smartplug-only, histogram. Notice the disappearing spike at 0.021kWh and the movement of the red triangle local minimums.

[FurnaceUp histogram - including legacy data]

[FurnaceUp1 histogram - smartplug data only]

And the same for the downstairs furnace. But not much of a difference since there was actually very little legacy data: Sense had lost track of my downstairs furnace blower as we entered the heating season.

[FurnaceDown histogram - including legacy data]

[FurnaceDown histogram - smartplug data only]

So in the final analysis, if I were only looking at the export data, without some careful filtering and examination of the higher resolution waveforms, I would have added a “false mode” to my upstairs furnace, and missed the true 7W “idle” state that remained hidden below my hourly blower usage, for both furnaces.


Installment 4 - Revisiting ON/OFF with another month’s worth of smartplug data

I wanted to do one more round of analysis with:

  • Another month of data - my smartplugs first entered service at the start of November
  • Removal of most of my data dropout hours during the smart plug period - 382 partial hours removed.

Always On vs. power consumed by smartplug devices. Maybe not the smartest thing to look at hour-by-hour, since Always On adjusts over a far longer period, but I thought this was interesting. Dropouts likely led to big Always On variances, even with dropout hours removed. But the good news is that Always On seems to be stabilizing again for me (and at a nice low value - 200W or so).

Playroom Cluster - access point and gaming PC. You can see the “ON” hours where the graphics card is engaged!

Washing Machine - our washing machine seems very efficient, seldom using more than 100W, despite pumps and motors.

Family Room Cluster - home AV system. When it’s off, it still uses 25W, 125W when we’re watching (LCD TV not included)

Office Cluster - laptop, monitor, 2 additional drives. 4 modes, from lowest to highest

  • Laptop traveling but monitor and drives still in standby
  • Laptop docked, running background stuff
  • Laptop docked, running simple foreground stuff
  • Laptop docked running intensive calculations / graphics (BUT NOT GAMING)

Sonos Units (2x) - High standby but low incremental power usage when playing. Really takes cranking things to go outside of 25W

Hot Water Recirculation Pump - Should be digital (1W-off or 45W-on) due to timer, but that little hump in between is real. Happens every midnight. Must be because the clock on my timer is off by a little bit.

Master Bedroom Cluster - access point, main Tivo, AppleTV and server. Who knows what the lower hump is? Maybe the TiVo sleeping every once in a while when it can’t find anything to record.

Furnace Up - seems to make sense. Main peak is 5min/hr minimum fan set on my thermostat. Further to the right are longer duty cycles. I need to investigate the lower peak.

Furnace Down - Similar behavior to upstairs furnace, but more frequent duty cycles. Runs for a much longer time daily during the winter.

Service Closet Cluster - cable modem, router, cable amps, main switch, a couple bridges. Each individual component never varies by more than a watt. Pretty much 38-39W all the time.

HP LaserJet - very impressive. 2W most of the time. Spikes during printing, but never more than 12W in an hour.


Thanks, @kevin1, for sharing your data and explanations. I had never heard of metrology, but could get into it with a teacher like you. Very nice post!

What software do you use to make your histograms? I use Excel for my data analysis needs, but I don’t think I could reproduce your work using that technology.

Thanks @jefflayman!
Yes, metrology is becoming increasingly important to make sense of all the data the exponentially expanding numbers of sensors are feeding us.

As for the graphs, the histogram and density plots are straight out of the core R language. The triangles and associated computations are also done in R, run under RStudio. Much more capable and faster than Excel, as well as free.

https://www.r-project.org/
