Device detection is one of the most discussed topics on the forum, so I’m really excited that we were able to hear from Ghinwa (Data Science Team Lead) and George (VP of Technology) as they discuss improvements in 2020 and what’s to come in 2021.
Have questions about something discussed in the video? Share them in this thread below, and we’ll answer them if we can!
“Progressive device detection” sounds like it can potentially solve a lot of problems if it means what I think it means. Is Sense adding detection beyond the current on/off transition detection?
I like instant device detection, but I would be very happy if I could look at a Power Meter with past device activity indicated, even if it took Sense a day to analyze the data and figure that out.
I think actively researching would be a more accurate description at this stage. As we have more information on what this looks like, we’ll be sharing more details. Definitely exciting, though.
I think something that would help Sense detect our specific devices is, again, leaning on the home inventory. Let’s use EV detection, since it was mentioned in the video. Instead of Sense seeing a waveform and needing to check it against the database of EVERY device Sense has ever detected, it could start by asking “what devices do you own?” and then compare against those. Even more specifically, if Sense thinks it found an EV, and the homeowner has a Model 3, Sense can say “Here are the 6 Model 3 signatures over the past 3 years. Is yours one of these?” Now, instead of comparing against hundreds or thousands of devices, it’s comparing against six.

If Sense does all that and still can’t find what it’s looking for, then it can fall back to the overall Sense device database. Maybe a homeowner didn’t put down their model, typed it wrong, replaced it, etc., but for those of us who have actively and completely filled the inventory out, there seems to be zero use of that data currently within Sense. Let us help you: the home inventory seems like a super simple way to narrow down and speed up detection. Heck, even give us a “Detected” checkbox in the inventory. When a fridge is detected, check the box and Sense can stop looking for the fridge.
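Just to make the idea concrete, here’s a toy sketch of inventory-first matching. To be clear, this has nothing to do with Sense’s actual internals (which aren’t public); every function and field name here is made up for illustration:

```python
import math

def distance(a, b):
    """Euclidean distance between two signature feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(signature, candidates, threshold=1.0):
    """Return the closest known signature within threshold, or None."""
    scored = [(distance(signature, c["features"]), c) for c in candidates]
    if not scored:
        return None
    dist, match = min(scored, key=lambda t: t[0])
    return match if dist <= threshold else None

def identify(signature, global_db, inventory):
    """Match against the homeowner's inventory first, then fall back."""
    # First pass: only signatures for models the homeowner says they own.
    owned = [c for c in global_db if c["model"] in inventory]
    match = best_match(signature, owned)
    if match:
        return match
    # Fallback: search the full database of every known device.
    return best_match(signature, global_db)
```

The payoff is in the first pass: a household inventory of a handful of models cuts the candidate set from thousands of signatures down to a few, and the full database is only consulted when that fails.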
@brian5,
I’m not sure your suggestion would help. I certainly want Sense to exploit the inventory as much as possible, and I bet it is helpful to Sense in segregating out special large devices that don’t fit the traditional detection models - things like EV chargers, mini-splits, and other variable speed HVAC stuff that don’t have clean 1/2 second on and off transitions / signatures. So for those, the inventory / naming can probably help Sense assign or at least try the special models.
But for the traditional Sense 1/2 second on and off transitions, you saw the on-signature “database” at the start of the video. It’s a 17-dimensional (or more) dataset that looks like this:
If you look closely you can even identify a few of the “features” Sense uses to convert from on-transition parameters to a detection.
Feature #:
0 - Other Channel - who knows what that is?
1 - Power - but is it single-leg power or combined? And is it peak power or an average over a window of time?
17 - P0 - presumably the phase angle on leg 0, but who knows?
But the point is that there really isn’t a database lookup per se. An on-transition has 17-20 parameters that position that particular transition in a 17-20 dimensional space. That location might be in the middle of a single existing cluster of points (a detection), on the hairy edge between two or more existing clusters (a missed detection, a correct detection, a wrong detection, or even an intermittently correct, conflating detection), or in the middle of nowhere (who knows how Sense classifies those). I’m guessing a lot of my smaller on-transitions fall on the hairy edge, not unique and repeatable enough to fit into any one specific cluster.
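The three outcomes above can be sketched as a toy nearest-centroid classifier. This is purely illustrative, assuming cluster centroids in the feature space; Sense’s real models are certainly more sophisticated, and all the names and thresholds here are invented:

```python
import math

def classify_transition(features, clusters, accept=1.0, margin=0.3):
    """
    Toy nearest-centroid classifier for an on-transition.

    clusters: {device_name: centroid feature vector}
    Returns a device name, "ambiguous" when two clusters are nearly
    equidistant (the "hairy edge"), or "unknown" when the point is
    far from every cluster (the middle of nowhere).
    """
    dists = sorted(
        (math.dist(features, centroid), name)
        for name, centroid in clusters.items()
    )
    best_d, best_name = dists[0]
    if best_d > accept:
        return "unknown"      # not close to any existing cluster
    if len(dists) > 1 and dists[1][0] - best_d < margin:
        return "ambiguous"    # hairy edge between two clusters
    return best_name
```

In 17-20 dimensions the same logic applies; the trouble is that small, noisy transitions tend to land in the ambiguous zone, which matches the experience of small loads going undetected or being conflated.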
Nice video, tempting ideas, but nothing on heat pumps. The thing is far and away the highest energy user in the house, and Sense is basically useless for it. Add “heat pump” and “other” together, and you get fairly close to actual heat pump usage. I don’t see why this is so hard, and it’s frustrating to not hear of any progress.
Hi Ruth. Although we didn’t mention heat pumps specifically, we do anticipate that some of the things we mentioned should help with heat pump detection (specifically, Progressive Device Detection).
There is an interesting Sense blog below that explains why detection of mini-splits is hard. I would suspect the same is true of virtually any modern heat pump.
I can see why real-time device detection can’t work for devices like these and why progressive device detection is required.
Thanks. I think I ignored the linked blog post at first because I have a heat pump, not mini-splits, but now I see the problem a bit better. Our fans are completely separate from the compressor, but associated with the three pumps for the 3 house zones. I told Sense the pumps are fans, for simplicity. It mostly gets the fans, but it’s lousy at the compressor. And the graded turn-on/off is certainly a thing. What about the relays in the thermostats? They’re loud enough that I’d think their electrical signal would be detectable.
I’m not sure this is the right place for this comment, but here goes. @ruthdouglas notes above that a certain device detection is “lousy” and another is “mostly” accurate. Many users have made similar comments in other posts. I’ll tell my own story next. Does anyone know if it would help Sense to get feedback from users in a more systematic way? If so, Sense could implement a field on each device page where users can [optionally] provide a rating, right in the app!
In my own experience, I have seen a wide range of quality. I’ll give two examples. My clothes dryer definition is currently great. It picks up the heating elements on both legs of the 220 device and it picks up the motor/fan which is only on one leg and this is all in a single definition. This did not come easily, however. I’ve had at least three previous device definitions for the clothes dryer, and none of them checked all these boxes. I could not be happier now with this device and would give it 5 out of 5 stars.
The second example is the clothes washer. Sense currently has native definitions for three different aspects of the machine. I call the first Washer, the second Agitator, and the third Spin. Since this is a machine, it washes every load of laundry the same way (unless I change settings, which I rarely do). I would therefore expect to see a pattern such as one load = 0.014 kWh Washer + 0.047 kWh Agitator + 0.144 kWh Spin. I have looked for such a pattern, and it isn’t there. Instead, I see usage showing up under Other and a huge scatter in results from these three devices. I would give each of these three devices 2 out of 5 stars, since at least they are trying.
What might Sense do with such ratings? For starters, if it is 5 of 5, do nothing. I know their engine is constantly tweaking behind the scenes, but I would hate for some tweak to break a good thing. If the rating is 1 or 2 stars, they could perform a “what if” analysis. They could ask their server, “If the poorly-rated device were deleted by the user, would you have a suggested re-detection?” If the answer is yes, they could offer to swap in the better definition. This would help in the case of my clothes washer, which I have been reluctant to delete because having some detection is better than nothing.
Justin, I’m thrilled that you like my idea and have shared it with your team! If it helps the idea gain traction, here is another way the idea might help Sense. Per-device ratings could be used as a proxy for quality control analysis.
Here is how that might work. Not all users will go to the trouble of rating devices, but some will. Especially if that rating results in improved detection definitions! Since you have a lot of users, you should soon have enough ratings to calculate mean, mode, deviation, and so forth. Those values might inform the detection engine directly, but they could also be used for marketing. You can tell potential clients that 90% (made up number) of rated device definitions are considered by their owners to be either reliable or very reliable. You can tell them that 15% (made up number) of definitions that are marked as unreliable are later improved by the engine. Stuff like that.
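To show what I mean by using ratings as a quality-control proxy, here’s a tiny sketch of the kind of summary Sense could compute. The rating values and the 4-star “reliable” cutoff are made up for illustration:

```python
from statistics import mean, mode, stdev

def rating_summary(ratings):
    """Summarize a list of 1-5 star device ratings (made-up data)."""
    # Treat 4 stars and up as "reliable" (an arbitrary cutoff).
    reliable = sum(1 for r in ratings if r >= 4) / len(ratings)
    return {
        "mean": round(mean(ratings), 2),
        "mode": mode(ratings),
        "stdev": round(stdev(ratings), 2),
        "pct_reliable": round(100 * reliable),
    }
```

With enough users rating devices, numbers like `pct_reliable` could feed both the detection engine (which devices to re-examine) and the marketing claims I described above.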
Many thanks to the Sense Data Science team for releasing this update! I loved hearing about everything planned for the year and how you plan on moving forward. It would be great if these updates become commonplace for the various Sense engineering teams. Thanks for making such a helpful product!
Thanks for checking in.
Any idea if this is still an ‘active’ project, and if so, has Data Science made much progress?
I believe this video was the last time we heard anything from the Data Science Team. So, needless to say, we’re a little anxious to see something come to fruition.
Happen to know if any other big projects are in the works?