I want to know if there is a way to label devices that I know but Sense hasn’t detected yet.
In short, no. You need to wait for Sense to detect a device on its own before you can work with the detected device: renaming it, etc.
If you’re new to Sense, this thread would be a good read to better understand why you just need to wait patiently for native detections.
Sense needs to give the consumer the opportunity to name a device the consumer has discovered before Sense has. This allows the consumer to be more active in the process and feel a sense of importance and involvement in his or her product.
I’m not quite sure what you mean by “discovered” before Sense has discovered it.
If you’re referring to trying to “teach” Sense, here’s another thread to read for a better understanding of why that isn’t an option.
I understand the dilemma. However, I can provide some insight for a particular device that should help if introduced into your monitoring program. Sense has identified my gas furnace’s combustion blower as a furnace. It has not identified the main blower at all. It seems odd that the algorithm sees enough current lag to find the very small combustion blower but cannot identify the much larger load that always turns on a set time after the combustion blower. The furnace (combustion blower) is the only device Sense has identified in my system, and it’s been installed for over 40 hours.

If you could provide a graph with the voltage and current relationships shown, it would be very useful. You could also provide a tool for users to input data when a given load is sensed, which could be added to your database for future use. It also seems to me that a self-defrosting refrigerator would be easy to identify, with its multiple motor loads and an intermittent resistive load (the defrost heater) that occur on a regular basis.

Bottom line: it seems Sense is not identifying different loads as fast as I would like. It seems that with some educated input the algorithm could be drastically improved. How can I help?
If you are familiar with face recognition software in photo programs (Apple Photos, Google Photos), what you are asking for is the ability to mark and label faces before the software recognizes the faces. While it might feel good to actually label the face (or waveform, in the case of Sense), that really isn’t very helpful. The software can’t really learn from you labeling stuff that it hasn’t recognized yet…
I’m wondering, @kevin1, why it would be any different from how, when I import pictures onto my laptop, Photoshop draws a square around the faces.
Photoshop recognizes that a face is there; it just doesn’t know who the face belongs to.
The way you just explained it, it sounds to me like Sense not only doesn’t recognize a device, it also doesn’t recognize that anything pulled wattage at all.
I don’t know how Photoshop works, but it must save unknown faces, or something to that effect, because when I do put a name with a face, it tags all the pictures with that face without me going back and doing anything.
It’s not much different, Dan. There are two steps to the process in both cases.
- First recognizing abstract things in photographs that have a high likelihood of being human faces or waveform patterns that have a high likelihood of being a single device (or component of a device).
- Then, classifying the face by name, or the device by specific type, if similar faces/patterns have been encountered before.
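As a toy sketch of those two steps (every name, number, and threshold below is invented for illustration; this is not Sense’s actual algorithm):

```python
def detect_candidates(signal, threshold=100):
    """Step 1: flag indices where power jumps sharply (candidate on/off events)."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

def classify(delta_watts, known_devices):
    """Step 2: match a candidate's power step against previously labeled devices."""
    for name, typical in known_devices.items():
        if abs(delta_watts - typical) < 0.1 * typical:
            return name
    return "unknown"

signal = [0, 0, 1200, 1200, 1250, 50, 50]   # whole-home watts, made up
known = {"fridge compressor": 150, "space heater": 1190}

events = detect_candidates(signal)           # indices 2 and 5
labels = [classify(abs(signal[i] - signal[i - 1]), known) for i in events]
```

The hard part is step 1: deciding which blips in the whole-home signal are real device events in the first place.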
But there are a few differences as well:
- Technological progress - facial recognition has had many more people working on it for many more years than the area Sense is working in, energy disaggregation or NILM (non-intrusive load monitoring). You treat that “face box” showing up in Photoshop as a simple feature, but it took 100x the number of people Sense has, plus 8 years of competitions, to come up with that capability - here’s an interesting history on it.
Sense has to deal with three other challenges. Faces are most often nicely framed in photos, fairly similar (eyes, nose, mouth), and unobstructed. Detectable device patterns can vary widely in length, from milliseconds (motors/lightbulbs/heaters turning on) to minutes (a car-charging ramp), aren’t similar at all, and can occur on top of one another. Most photo programs can’t even deal with partially obstructed faces, especially from odd angles.
There’s one more complication too - Sense has to detect on patterns, off patterns and sometimes what happens in between.
But once Sense finds the patterns, it works very similarly to the photo programs. It rolls any pattern similar to a device you have already identified and named into that same device. And when it encounters a new device pattern, it guesses what it is based on crowd-sourced data.
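A rough way to picture that roll-up-or-guess behavior (purely illustrative: the waveforms, distance metric, and threshold are all invented, and Sense’s real matching is far more sophisticated):

```python
import math

def distance(a, b):
    """Euclidean distance between two short power waveforms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign(pattern, my_devices, crowd_devices, max_dist=50.0):
    """Roll the pattern into an existing named device if it's close enough;
    otherwise guess from crowd-sourced reference patterns."""
    best = min(my_devices, key=lambda n: distance(pattern, my_devices[n]))
    if distance(pattern, my_devices[best]) <= max_dist:
        return best
    guess = min(crowd_devices, key=lambda n: distance(pattern, crowd_devices[n]))
    return "possible " + guess

mine  = {"well pump": [0, 800, 800, 0]}
crowd = {"dryer": [0, 5000, 5000, 0], "microwave": [0, 1100, 1100, 0]}
```

Here `assign([0, 790, 805, 0], mine, crowd)` merges into the existing “well pump”, while a pattern near 1100 W falls back to a crowd-sourced “possible microwave” guess.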
I think you are taking for granted how tough that first step is, just because humans are very well programmed for recognizing faces (our brains are wired for it, plus we get many, many years of training data).
Good explanation, @kevin1. I didn’t even think about the fact that with Sense, what it’s seeing is many things overlapping. That doesn’t generally happen with facial recognition, and when it does, the software doesn’t recognize the faces.
I believe there is another more significant difference: the lack of “ground truth” for training device detection algorithms.
When using AI to detect – for example – tumors in X-ray images, AFAIK the training set is well understood: which X-rays include a tumor and which do not. For Sense, this knowledge of exactly which device is turning on (and later turning off) is not available. If it gets either of these wrong, the energy use attributed to the device will be wrong.
The recent integration of smart plug data will help a lot with this problem, but I fear they may need lots of users with smart plugs on lots of different devices in all possible configurations/settings. Clearly a good use of crowd sourcing to get the data… it would be nice if they could share it with others working on the same problem.
Great point! I believe that Sense bootstrapped their data set using ground truth from a few test houses, but you are right - they don’t have a vast labeled dataset, like ImageNet, to work with, though smartplugs certainly should help. Plus there’s no way they can use humans to add labels the way ImageNet did. It might be possible for some kinds of waveforms (car chargers), but not for the vast menagerie of patterns out there.
But will the smartplug data, with a resolution of 2 Hz, provide high-enough-resolution data for Sense’s neural network to train on?
I just hope that all this smart plug integration doesn’t slow down true detection.
I can’t afford all the gear, and if these plugs already have the ability to monitor, why have Sense?
It’s definitely better for training than:
- No data
- Hourly categorical data (only ON/OFF info)
- 2 Hz categorical data (ON/OFF)
Most of the research work in this area seems to have been done with a big sampling difference between the central monitoring system and the end device monitoring.
My thought is that the ground-truth data the smartplugs provide could be used in conjunction with the Sense monitor itself to construct the bigger picture - not necessarily relying 100% on the smartplug itself, but with the two working together.
I.e., the smartplug sees a change in consumption, and even though it’s only measuring at 2 Hz, it yells at the Sense hardware: “Hey, uh, something big just happened here, take a look at 1 second ago and right now”… and then the Sense hardware actually does the heavy lifting.
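That triggering idea could be sketched like this (this is purely the commenter’s hypothetical scheme, not a documented Sense feature; the rates and the 200 W jump threshold are illustrative):

```python
def plug_events(samples_2hz, jump_watts=200):
    """Flag the 2 Hz plug samples where a large power step occurred."""
    return [i for i in range(1, len(samples_2hz))
            if abs(samples_2hz[i] - samples_2hz[i - 1]) >= jump_watts]

def windows_for_monitor(event_indices, monitor_hz=1_000_000, plug_hz=2):
    """Map each flagged plug sample to a high-rate sample window covering
    roughly 'one second ago to right now'."""
    per = monitor_hz // plug_hz          # monitor samples per plug sample
    return [((i - 2) * per, i * per) for i in event_indices]

plug = [60, 62, 61, 900, 905]            # watts at 2 Hz, made up
events = plug_events(plug)               # the 61 -> 900 W step
windows = windows_for_monitor(events)    # window the monitor should inspect
```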
Or I could be totally off base, but that’s the way I envision it.
I’m going to give a stronger answer after giving this a little more thought. 2 Hz data should work well for training, for two reasons.
Sense ultimately uses their detection neural networks/LSTMs to create an estimated 2 Hz view of the detected device waveforms. That’s the waveform we see when we view the Power Meter for a specific device that has already been identified. Training is all about minimizing the variance between the predicted output (in the Power Meter view) and the ground-truth data, so it’s actually quite serendipitous that both are 2 Hz.
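As a minimal illustration of that training objective - mean squared error is one common way to measure the “variance” between the predicted waveform and the ground truth, and the wattage numbers here are invented:

```python
def mse(predicted, ground_truth):
    """Mean squared error between two 2 Hz waveforms of equal length."""
    return sum((p - g) ** 2
               for p, g in zip(predicted, ground_truth)) / len(predicted)

pred  = [0, 150, 148, 0]   # watts the model attributes to the device
truth = [0, 152, 150, 0]   # watts the smartplug actually measured
loss = mse(pred, truth)    # training would adjust the network to shrink this
```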
That 2 Hz comparison would be problematic if the smartplug were actually sampling power at 2 Hz while Sense samples at its native 1 MHz. The smartplug would completely miss spikes and other rapid changes in current at the device. But the chip inside the HS110 has a native sample rate of 2520 Hz, or 42 samples per 60 Hz AC cycle. It reports the accumulated results twice a second, but those results are based on far greater sampling accuracy.
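A quick sanity check on those numbers, plus a sketch of how fast sampling can feed slow reporting via RMS accumulation (the 2520 Hz, 60 Hz, and 2 Hz rates come from the post; the accumulation code is an illustration, not the HS110’s actual firmware):

```python
import math

SAMPLE_HZ, LINE_HZ, REPORT_HZ = 2520, 60, 2
samples_per_cycle = SAMPLE_HZ // LINE_HZ        # 42 samples per AC cycle
samples_per_report = SAMPLE_HZ // REPORT_HZ     # 1260 samples behind each report

def rms(samples):
    """Root-mean-square of a list of instantaneous samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def reports(raw):
    """Fold a 2520 Hz sample stream into 2 Hz RMS reports."""
    return [rms(raw[i:i + samples_per_report])
            for i in range(0, len(raw), samples_per_report)]
```

So each twice-a-second report summarizes 1260 underlying measurements rather than being a single coarse sample.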