We should be able to TRAIN this!

Thanks very much. I have a long history, since around 1980, in technology development and beta service/testing around the PC, and starting in 1996 with JDS’ Stargate implementation. I have served a variety of companies in both software and hardware, such as US Robotics’ modems and Citrix’ serial protocols for thin-client development (well known nowadays, but once nearly swallowed up by MS, probably with the intent to kill it). So I would be willing to contribute to your beta program if you think I might be of use. I’m searchable to a small degree as “TexARC” and “ACOfTexas”, and my site arcarmichael.com provides some background.

Mine’s been running a few days and has finally picked up two devices; I thought more would have been detected by now. I’m not into electronics, but after some reading I understand the challenges. Having done systems design and some IE, we might need to take a different view of the issue: Sense is trying to collect enough info to identify a device, and we want to see a device bubble. I suggest an “Identify” mode for the monitor panel that could be turned on/off. The monitor already shows +/- watts as it detects changes. I could go to devices one at a time: I turn a device on, Sense detects and records the info, and now a popup asks what was turned on, keeping the answer as, say, tagABC through the analysis. I let the device run a bit, then turn it off, and a popup asks “did you turn off tagABC?” (or “what did you turn off?”); that answer stays with the info. Do a few devices, then set Identify off. That doesn’t mean tagABC will appear in a bubble every time it’s on, but it would eventually surface. I could repeat the identify process just to tag additional records.
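That tag-and-confirm flow could be sketched in code. Everything below is hypothetical (the class and field names are mine; Sense has published no such API); it just shows what an Identify-mode session would record: a timestamped power delta per user-confirmed tag, which later analysis could group into candidate signatures.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class IdentifySession:
    """Hypothetical sketch of the proposed 'Identify' mode: the user
    toggles one device at a time and tags the resulting power delta
    with a label like 'tagABC' via the confirmation popup."""
    events: list = field(default_factory=list)

    def record_delta(self, watts_delta, tag):
        # Positive delta = device turned on; negative = turned off.
        self.events.append({"t": time(), "delta_w": watts_delta, "tag": tag})

    def tagged_signatures(self, tag):
        # All deltas recorded under one tag; repeated sessions just
        # add more examples for the same label.
        return [e["delta_w"] for e in self.events if e["tag"] == tag]

session = IdentifySession()
session.record_delta(+1450, "tagABC")   # popup: "what did you turn on?"
session.record_delta(-1440, "tagABC")   # popup: "did you turn off tagABC?"
print(session.tagged_signatures("tagABC"))
```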

Just to add, I wouldn’t have an issue running several “Identify” sessions to tag the recorded signatures, presuming that over time Sense would analyze and confirm the device. Maybe this needs to be another post, but related is identifying duplicates: two fridges, two A/Cs, multiple heaters and TVs?

I’ve got to believe that it would be beneficial for me to be able to manually tell Sense about loads that I already know, versus waiting for the ML algorithms to identify them. We should also be able to enter a training mode where I can manually switch loads on and off to manually train Sense. I see a couple other requests for the same thing, and agree with them 100%.

Along the same lines, I have a pump installed, and I know how many watts it draws when it runs. That seems like obvious information to enter. If I fill out enough such devices, it should be useful in figuring out what to look for.
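As a sketch of how those user-entered wattages might help, here is a minimal nearest-wattage matcher against a table of known loads. The device names, wattages, and tolerance are made up for illustration, and real disaggregation uses far more than a single steady-state number, but it shows the idea of priming detection with what the user already knows.

```python
# Hypothetical user-entered table: device name -> rated draw in watts.
KNOWN_LOADS_W = {
    "well pump": 750,
    "fridge": 120,
    "space heater": 1500,
}

def guess_load(observed_delta_w, tolerance=0.15):
    """Return the known load closest to an observed wattage step,
    if it falls within a relative tolerance; otherwise None."""
    name, watts = min(KNOWN_LOADS_W.items(),
                      key=lambda kv: abs(kv[1] - abs(observed_delta_w)))
    if abs(watts - abs(observed_delta_w)) <= tolerance * watts:
        return name
    return None

print(guess_load(740))   # close to the pump's rated draw -> "well pump"
print(guess_load(300))   # no known load within tolerance -> None
```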


Echoing this suggestion! This is marketed as a crowdsourcing product, right?

Please read this thread. It discusses why it is not a feasible inclusion.


If we felt that manual training would be a good use of your time, we would work on implementing it

Wow, that’s a tad bit condescending to say, especially when customers are declaring it to be worth their time.

Don’t take it out of context.
There are posts here complaining about just the opposite: how they “have to scurry around” to I.D. discovered devices. There are a thousand different personalities here, which means a thousand different levels of expectations.
This beats everything on the market. Even if it discovers nothing, it’s still as good as, but cheaper than, TED, The Energy Detective.


I have both and couldn’t agree more. I actually have five different methods of energy monitoring and far prefer Sense.


@robertalangerhart, you are commenting on an ancient post. Ryan and the Sense team have done an excellent job explaining why “human training” is not technically feasible, despite all the obvious enthusiasm and imagined viability.


First, if I’d wanted your opinion, I would have asked for it. Second, cell phones can be trained to recognize a face, Google Assistant can be trained to distinguish an individual by voice based on three samples, and an electronic doorknob can be trained to recognize a fingerprint, yet somehow it is not “technically feasible” for Sense to be trained to recognize a refrigerator or a garbage disposal. Whatever.

Robert, maybe you should spend some more time reading the forum before initiating personal attacks on other forum members. Cherry-picking a single developer’s quote doesn’t explain the work or thought process going into this product. This topic has been beaten to death and Sense has addressed it in multiple posts, so please put in some effort to read up on it.


@robertalangerhart, the kind of face “training” or the voice “training” you describe, is done in Sense as well, when you fill in the type, make and model of a new device it has identified. You can see the results when it offers up community-sourced options.

But there are 15 years of widespread university and private research behind the precursor step in that training process: finding generic faces in photographs, or identifying a string of words spoken in any voice. For the reasons described earlier, untrained humans aren’t really even technically competent to tag the power waveforms with ground-truth information accurately enough to provide the annotated dataset this kind of training requires.
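To make the ground-truth point concrete, here is a toy illustration (not Sense’s actual pipeline; the numbers are invented) of why tag timing matters: if the real on-transition happens at one instant and a human’s tag lands a few seconds later, the labeled window no longer contains the edge a model would need to learn from.

```python
# Toy illustration: the real ON transition occurs at t = 5.000 s.
TRANSITION_T = 5.000

def label_window(tag_time_s, window_s=0.5):
    """The slice of waveform a trainer's tag would mark as
    'the turn-on event': +/- window_s around the tag."""
    return (tag_time_s - window_s, tag_time_s + window_s)

def covers_transition(window):
    """Does the labeled window actually contain the real edge?"""
    lo, hi = window
    return lo <= TRANSITION_T <= hi

print(covers_transition(label_window(5.1)))  # tag ~100 ms off: edge captured -> True
print(covers_transition(label_window(8.0)))  # tag ~3 s late: edge missed -> False
```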


Cut some slack. Google’s resources are huge; the sub-team that focuses on voice recognition for Google Assistant alone is larger than the entire Sense team, and Nuance built the foundation of computer speech recognition more than two decades ago.
The industries you mentioned have standards required to accommodate those technologies. Fridge and garbage-disposal companies don’t even know Sense is a thing.


Then explain how there are 1,000 different OCR vendors out there recognizing text that wasn’t printed based on any “standard”. I’ve spent 20+ years in AI, and this is entirely doable. I can train a missile to fly through unknown terrain and take out an enemy tank versus a friendly, yet you defend how it is too hard to identify a table lamp. #whatever

Common text that we all use every day, and that OCRs are trained to read, comes from different alphabets: Latin, Greek, Arabic, Hebrew, etc. The list is quite long.
These alphabets are standards on which all scripts, fonts, and representations are based. When someone creates a new font, script, or style, it is based on the standard; if it weren’t, we as humans, let alone machines, would not be able to read it.

Now let’s take that standard alphabet and put a bunch of letters on top of each other.
Could those 1,000 OCR vendors recognize the text in the image below?
I doubt it. As a human, with some context clues, you may be able to figure it out because you see more than the computer does, but just because you can see and detect it doesn’t mean the computer can, or should be able to.

I don’t understand your hostility toward a team of people who are actually building the product, a team that has given countless well-thought-out explanations of how their product works and their process for how it is supposed to work. Maybe there is a method for training an electrical-detecting device, but it’s not this device. And I choose to believe that if training it were possible, and as simple as many people like to say it is, then either a) Sense would be doing it already, since why wouldn’t they want the best product they can have, or b) there would be a competing product on the market that allows training and has a reliable track record.

If you don’t like the product, return it. If you have actual examples, drawn from your 20 years of AI knowledge, of how they could program this, complete with code and reference ML algorithms, then by all means share. If you don’t feel like sharing that knowledge, then you might as well keep your résumé to yourself, as it benefits no one and just makes you look like a troll.


Why don’t you just buy a BeagleBoard with some A/D converters, hook up some CTs, rent some machine-learning server capacity from AWS, and develop your own LSTMs for identification? Seems quite trivial for an intellect like yours…

Seriously, dude, you seem not to know the history of ML-based image recognition, given your allusions that its evolution was simple and fast. It took six years, plus a highly evolved and tagged dataset, plus fierce competition to get where we are in image recognition today.

I’m not defending Sense; we goad them enough for progress. But your comments are just arrogant, yet ignorant enough about ML, to trigger annoyance. I seriously want to see you tag some on- and off-events with millisecond accuracy.


Guys, this thread is getting locked. Civil discussion about machine learning and what you wish we were doing differently is fine. This has dissolved into a mess of name-calling and is not what this forum is about. Feel free to continue these discussions in private messages. I encourage everyone to review our Community Guidelines: Sense Community Guidelines

Our feelings on this at the moment are here: Why can't you train Sense?

We’re constantly working on ways to get better user input and do not feel that a training mode is the best way to do that.


Wouldn’t it be nice to turn on a device (e.g. the oven) with the power-meter view open and be able to tap the graph label associated with the meter spike and enter the device name/model…