Sense wattage range for specific devices

I have 3 devices that Sense has detected, but I’m wondering how.
On my device’s stats page, it shows the average wattage the device actually uses in the top box.
But down in the description box it says a washer will use 400-1300 watts.
My washer uses 45 watts on average. How did Sense detect and correctly identify this device when it is so far outside the stated range?
Screenshot

  1. Washing machine: 45 W actual, stated range 400-1300 W
  2. Dryer: 1223 W actual, stated range 1800-5000 W (only one leg detected)
  3. Dishwasher: 295 W actual, stated range 1800 W

Dan,
Two answers:

  • Machine learning - because Sense uses machine learning, it doesn’t use those “ranges” as part of the detection. It uses raw data and feedback from users to do the detection.

  • Average usage is different from range - at least when you are talking about dynamic power usage waveforms. A waveform can range between 400-1300W and still have a usage average of 45W if the usage window includes periods of inactivity (see the sketch below). I’m always a little skeptical of the averages without knowing the sampling period and the time window.
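A quick, made-up illustration of that second point (the numbers below are invented, not taken from your washer):

```python
# Illustrative numbers only: a draw that swings between 400 W and 1300 W while
# running can still average ~45 W over an hour that is mostly idle.

idle = [0] * 57 * 60                          # 57 minutes of inactivity, 1 sample per second
active = [400, 900, 1300, 1250, 650] * 36     # 3 minutes of varying draw while running
window = idle + active

avg_w = sum(window) / len(window)
running = [s for s in window if s > 0]
print(f"average over the hour: {avg_w:.0f} W")                   # ~45 W
print(f"range while running: {min(running)}-{max(running)} W")   # 400-1300 W
```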

1 Like

That makes sense and lines up with what George said about “signatures” of on and off events for detection. My washer will show on/off 30 or more times during a single cycle because of the DC motor switching directions. The only thing I don’t like is that the number of times it shows it ran in a month does not line up with actual cycles. Maybe in the future they can incorporate these brief “off” times so the cycle as a whole will be accurate, like a delayed notification where Sense would wait to see if the device comes back on within a set number of seconds.
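Something along these lines is what I’m picturing; the event format and the 90-second gap below are made up just to show the idea:

```python
# Sketch of the "wait before declaring off" idea: merge on/off events separated
# by short gaps into a single cycle. The threshold and event format are invented.

def merge_cycles(events, max_gap_seconds=90):
    """events: list of (on_time, off_time) tuples in seconds, sorted by on_time."""
    if not events:
        return []
    merged = [events[0]]
    for on_t, off_t in events[1:]:
        last_on, last_off = merged[-1]
        if on_t - last_off <= max_gap_seconds:
            # Brief pause (e.g. the DC motor reversing direction): same cycle.
            merged[-1] = (last_on, off_t)
        else:
            merged.append((on_t, off_t))
    return merged

# A wash cycle that might otherwise be counted as several short runs:
raw = [(0, 120), (150, 300), (330, 500), (2400, 2500)]
print(merge_cycles(raw))   # [(0, 500), (2400, 2500)] -> two cycles, not four
```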

1 Like

@kevin1, when you mention that Sense uses “raw” data for detections, what does that look like?
What we see on our graph/timeline has probably been changed from what Sense originally pulled and sent to the servers. I’d like to see exactly how it looks to Sense at the level of detection.
The reason I say that is that all of us can look at our timeline and instantly make out the signatures of devices that have NOT been detected. Is Sense relying too heavily on the raw data and not looking at it from the same perspective it shows us? Would Sense detect more devices if it also looked from this perspective?

Two things to think about:

  • In the raw data window, you would see maybe 30-60 cycles of a voltage waveform, 2 current waveforms (L1 and L2), plus phase angles for L1 and L2. Good for spotting many device types, not so good for others. Sense is expanding the range of their detectors, but if they are using LSTMs (a form of recurrent neural network), you have to be selective about the number of inputs, especially when the data is streaming in at 4 x 2M bytes per second (the LSTMs have to be sized, from a window/input perspective, to the feature/signature sizes of the devices they want to recognize; see the sketch below).
  • Your human ability to detect is based on some incredibly sophisticated neural networks - your eyes and brain. The challenge is being able to “bottle that” into completely automatic algorithms. Machine learning via neural nets has been one of the best tools for tackling human-class recognition challenges, but it remains far more limited (a billion times less complex than the visual part of your brain).

So the moral of the story, in my mind, is that Sense is moving in the right direction, and it is hard to take a “pattern” you can see and turn it into a robust, automated detection for all situations.
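To make the windowing point in the first bullet a little more concrete, here is a rough, hypothetical sketch of an LSTM classifier sized to a fixed input window. This is not Sense’s actual model; the feature choice and all of the sizes are assumptions for illustration only.

```python
# Hypothetical sketch, NOT Sense's real architecture: an LSTM whose input window
# has to be chosen to match the length of the device signatures it should spot.
import torch
import torch.nn as nn

WINDOW = 60          # e.g. 60 steps of summarized waveform features (assumed)
N_FEATURES = 5       # voltage, I_L1, I_L2, phase_L1, phase_L2 (assumed)
N_DEVICES = 10       # number of device classes this toy detector distinguishes

class DeviceDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_FEATURES, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, N_DEVICES)

    def forward(self, x):            # x: (batch, WINDOW, N_FEATURES)
        _, (h_n, _) = self.lstm(x)   # h_n holds the final hidden state
        return self.head(h_n[-1])    # (batch, N_DEVICES) class scores

detector = DeviceDetector()
scores = detector(torch.randn(8, WINDOW, N_FEATURES))
print(scores.shape)                  # torch.Size([8, 10])
```

A signature longer than WINDOW steps simply doesn’t fit in one input, which is why the window has to be matched to the devices being targeted.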

2 Likes

As always, @kevin1, great explanation from your experience and perspective.
Will we ever see the day that machine learning or computers match the human brain? I seriously doubt it, but I agree that Sense is definitely on to something and headed in the right direction.
Does it make sense that I was thinking device detection is happening way before the data we see on the timeline, and that Sense should take a second look? Instead of just using the raw data it sees, it could also look at the graphical timeline data we see, to check whether it might spot something there.
I’m a “why” person and I’m always digging deeper. I don’t know exactly how the whole Sense process works, and I really won’t be satisfied until I know every tiny aspect.
I’m like the annoying 5-year-old asking why, just older.

Let me make one more comment before I answer your question. One of the big limits of Sense is feedback. When you are learning, you get tons of feedback on what you are seeing, feeling, tasting, etc. Sense only gets limited feedback on what it is seeing. That’s why I was very happy to see the Sense integration with smart plugs. The best learning with machine learning is what is called “supervised learning,” where every step in the training data has associated “ground truth” about exactly what is happening with every device (on/off and how much power it is really using). Without ground truth, training has to resort to adversarial/generative training, where one “machine” essentially tries to impersonate a device and the other “machine” tries to either identify the device or tell the learning process that it thinks the device is fake, with some feedback on why.

Now that Sense is getting much more ground-truth data via smart plugs tied to individual devices, I think we’re going to see increasingly better detections. Training uses the difference between what Sense predicts and what it actually sees to teach the neural networks.
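Roughly, that supervised loop looks something like the sketch below; the model, sizes, and data here are placeholders rather than Sense’s actual pipeline.

```python
# Minimal sketch of supervised training with smart-plug ground truth: the loss is
# the gap between what the model predicts from whole-home data and what the plug
# actually measured for one device. All data and sizes below are fake placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(60, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

whole_home = torch.randn(256, 60)          # 60-sample windows of total power (fake)
plug_truth = torch.randn(256, 1)           # device watts reported by the smart plug (fake)

for epoch in range(10):
    predicted = model(whole_home)          # the model's guess at the device's draw
    loss = loss_fn(predicted, plug_truth)  # prediction vs. ground truth
    optimizer.zero_grad()
    loss.backward()                        # this error signal is what teaches the network
    optimizer.step()
```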

2 Likes

As always, @kevin1 comes through with great replies. I don’t have much to add, other than to emphasize this:

It’s very easy to say “if I can see, Sense should be able to!”, but your brain has been built for this over thousands and thousands of years. We’ve only been working on Sense since 2013 :grinning:

Still, @samwooly1, you have a point about looking at the data from a different perspective. We’ve had to do a bit of that perspective shifting for EVs, given that their timescales are significantly longer than those of the average home device. We’re planning to implement more learning along this front in the future.
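As a rough illustration of what a longer timescale buys (purely a sketch, not our actual implementation), resampling second-level data into minute-level averages makes a slow, hours-long EV charging plateau stand out from fast-cycling appliances:

```python
# Toy example: at a 1-minute timescale, a long 7.2 kW EV charging plateau
# dominates, while a fast-cycling appliance blurs into a small offset.
import numpy as np

seconds = np.arange(3 * 3600)                          # three hours at 1 Hz (fake data)
fast_appliance = 800 * (np.sin(seconds / 30) > 0)      # cycles on/off every minute or two
ev_charge = np.where((seconds > 1800) & (seconds < 9000), 7200, 0)   # long charging plateau
total = fast_appliance + ev_charge

per_minute = total.reshape(-1, 60).mean(axis=1)        # 1-minute averages
print(per_minute.max())                                # the EV plateau dominates here
```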

2 Likes

Thank you both.
I’m not complaining or thinking anything is wrong; it’s just my curiosity, and I would like to understand as much as possible about Sense.
I’m sure there are others that look at the timeline like I do and think, how does Sense miss that for a detection?

1 Like

You’re in luck. You’re not the only one curious, and I’m doing my best to get out more content (video/blog/interviews/etc.) that goes into the finer details of how Sense works. Certainly, it won’t interest all of our users, but I still think it’s worthwhile.

4 Likes

The only disagreement I have with “ground truth” and smart plugs is that we still have a human factor. The smart plug communicates with Sense, but it is using the name and description supplied by the user. If someone names their smart plug “HS110” but says a toaster is plugged in when it’s actually a rice cooker, what does that do to “ground truth”?
I’m sure I’m misunderstanding something.

There’s still a human factor on the naming and categorization part. But that’s the second part of identification. Using the facial recognition analogy, there are really two pieces to the process.

  1. Identifying a pattern as a face (identifying a waveform pattern as a device on or off signature)

  2. Associating the face with a name, given labeled faces (associating an on-signature or off-signature with a specific device, given labeled devices)

Ground truth helps the most with the first part. Good crowd-sourcing with good human info helps the most with the second.
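To put those two steps side by side, here is a deliberately crude sketch (not Sense’s code) that keeps detection and labeling separate; the jump threshold and device wattages are made up.

```python
# Step 1 finds on-signatures in the power stream; step 2 matches each signature
# against user-labeled devices. Both steps are intentionally oversimplified.

def detect_signatures(power):
    """Step 1: flag indexes where power jumps sharply (a crude 'on' detector)."""
    return [i for i in range(1, len(power)) if power[i] - power[i - 1] > 300]

def label_signature(jump_watts, labeled_devices):
    """Step 2: associate the detected jump with the closest user-labeled device."""
    return min(labeled_devices, key=lambda name: abs(labeled_devices[name] - jump_watts))

power = [100, 110, 1310, 1300, 1290, 100]           # fake whole-home watts, 1 per second
labeled = {"toaster": 900, "dishwasher": 1200}      # names supplied by users

for i in detect_signatures(power):
    jump = power[i] - power[i - 1]
    print(i, jump, "->", label_signature(jump, labeled))   # 2 1200 -> dishwasher
```

A mislabeled plug mostly hurts step 2 (the name that comes back), while step 1 still benefits from knowing exactly when something turned on and how much it drew.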

1 Like

Yes, I agree. The information has to be “good” or it throws a wrench in the process.
If there are enough misidentified devices in the second part from human error, it will have an effect everywhere, the way I see it.
What I would like to do may provide answers or it may not. My guess is it will add to the long list of questions.
I feel like I’m being talked out of it like it’s a bad idea. That just makes me more curious.
I’m thinking that if the two have very different results and they look at the data, they still won’t know why. But there are probably very specific reasons why everyone has different results, caused by so many variables. Being software driven means the process follows the rules and steps of the code; it’s not like the human brain, which can deviate from the path it follows. It’s probably incredibly difficult to find the answers, but that does not mean they are not there.
For all I know this has already been done. If it has, I hope the results will be shared.
I would like to add that if the results were very different, I would not see that as a flaw or problem.

I don’t know how to answer that question/comment, but one of the strengths and weaknesses of machine learning is that it isn’t as predictable as basic algorithms. Data science uses prediction techniques like “random forest” that rely on a foundation of somewhat random analysis of the dataset. So I do believe that if you expect deterministic answers, you will be surprised and disappointed.
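As a small demonstration of that unpredictability (using scikit-learn’s random forest purely as an illustration), two forests trained on identical data but with different random seeds will generally not agree exactly:

```python
# Identical inputs, identical algorithm, different internal randomness:
# the predicted probabilities differ, so the answers are not deterministic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

probs_a = RandomForestClassifier(n_estimators=20, random_state=1).fit(X, y).predict_proba(X)
probs_b = RandomForestClassifier(n_estimators=20, random_state=2).fit(X, y).predict_proba(X)

print(abs(probs_a - probs_b).max())   # almost always non-zero
```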

Sorry @kevin1, that last paragraph was impossible to answer because it was an edit that I somehow placed in the wrong post. It belongs in the Sense monitor thread.
