Discussion: Why can't I train my sense?


Totally agree with Mike. As a retired electronics technician who still owns plenty of test equipment which can see all the different power signatures, there has to be a better way of identifying devices.

My Sense has been installed for over a week now. What has it detected? One lousy device: my fridge. Cut me a break!

I can clearly see the signature of every device turning on and off in the Sense app. I can see when my condensing boiler turns on, when the fan ramps up before ignition, when the igniter fires and turns off, and when the fan ramps back down. Sense is too stupid to identify it. Condensing boilers are common today, and their current waveforms are unmistakable, unlike any other device in your home.

It has not identified any LED bulbs, not even a toaster or television, nothing but the fridge. I have plenty of devices in my home (413 watts always on), and Sense has not identified any of them and most likely never will.

I believe you would make more customers happy (the technical ones at least) by giving them the ability to identify and label their own devices.

Another gripe I have is that my net metering period zeroes out at the end of this month, and I see no way to zero out my consumption and solar production. This was the main reason I purchased Sense: to keep track of solar generation vs. usage so I do not go over my net credit.

I truly hope you consider these gripes; many forum users have the same ones.


@Louie, @mike_gessner,
Since you are electronics types, I’m going to challenge you to show me that you can tag a specific device’s waveform from your home for training. But first, I’ll impose an additional constraint that the Sense folks have to deal with: because they want the bubble to appear immediately when a device fires up and disappear right when a device turns off, they have to identify signatures within less than a second of devices coming on or going off. More info here:


Please show me a snapshot at most 1 second long (1/2 second is even better) of a home device that you would use to train Sense on that device. You can’t, for two reasons:

  1. Time resolution - The Power Meter only shows things from a 2Hz (1/2-second resolution) perspective. You never see the actual AC waveforms. Bottom line: the world you are doing your visual “identifications” in is very different from the world Sense is actually seeing (a 1-microsecond-resolution view of actual AC waveforms, with a sub-second identification window).

  2. Even if you could view into that 1-microsecond, 1-second-long world, I think you would be stymied by the range and huge volume of spikes and ridges. You might be able to look at the macro waveforms you are showing me first, then go back to the micro world and “box” the 1 second of an edge that matters, but that would be highly tedious…
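To make the resolution gap concrete, here is a rough, self-contained sketch (not Sense's actual pipeline; the 1 MHz sample rate, 5 A load, and 120 V line are all illustrative assumptions) of how a million microsecond-scale current samples collapse into just two Power Meter points per second:

```python
import numpy as np

# Illustrative only: simulate one second of 60 Hz load current sampled
# at 1 MHz (the microsecond-scale world described above), then reduce it
# to the 2 Hz RMS power view that the Power Meter shows us.
fs = 1_000_000                       # 1 sample per microsecond (assumed)
t = np.arange(fs) / fs               # one second of timestamps
volts_rms = 120.0                    # nominal line voltage (assumed)

# A 5 A (RMS) resistive load switches on halfway through the second.
amps = 5.0 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t) * (t >= 0.5)

# Raw view: 1,000,000 instantaneous samples.
# Power Meter view: one RMS power number per half second.
half = fs // 2
power_2hz = [
    volts_rms * np.sqrt(np.mean(amps[i:i + half] ** 2))
    for i in (0, half)
]
print(len(amps), [round(p) for p in power_2hz])   # 1000000 [0, 600]
```

Everything interesting about the turn-on edge (inrush shape, phase behavior, sub-cycle spikes) lives in the million raw samples; the two numbers we get to see only say "0 W, then 600 W."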

BTW - I’m not trying to pooh-pooh your desire to help. I really would like to be able to assist with identification as well, but I understand these limits and my limits. Plus, Sense has expanded recognition/identification to include post-waveform identification (where the identification is done 30 seconds to several minutes into the waveform), so there might be a place for human training there as that capability expands. Until then, using smart plugs on 120V devices that don’t seem to be discoverable is the best way for you to help with training.


This is probably not what you’re asking to see, but it will give you a specific example.

My home has a 1,000 amp hour commercial battery bank that powers all my Ham Radio equipment and many 12 Volt items around the home. One of the items it powers is a 440 MHz Ham Radio Repeater. This repeater has to transmit every ten minutes to identify itself per FCC regulations.

The 1,000 amp hour battery bank is kept charged with a Statpower battery charger (switching type power supply).

The current plot shown by the Sense app is the charger ramping up to keep up with the current drawn from the batteries by the Ham Radio repeater transmitting its ID. After the repeater stops transmitting, you can see the current taper off as the battery voltage comes back up to the float voltage.

Since I doubt anyone else in the world has this exact battery bank, charger, and Ham Radio repeater, my Sense will be the only one sending this power signature. As such, Sense will never be able to “crowd source” an identification for my particular situation, because no one else has it.


Thanks for sharing…

So given the current less than one second primary paradigm, you would have to train Sense based on identifying these two little boxed features.

But you captured this snapshot with everything else pretty quiet in the house. You would also have to show instances of the same exact region when there is a lot of additional activity going on in the house.


We have zero reason to lock out this feature. If it were both practical to use and feasible to implement, it would be in the app. This has been discussed ad nauseam in this forum. I recommend searching for those previous threads to see all of the prior discussion. While we’ll continue to assess the feasibility of such a training/learning mode, ultimately, this is our response for the time being: Why can't you train Sense?.

Since we have no issue with users discussing this, I’ll leave this thread open as the primary discussion thread. I have moved it to Technical Questions and renamed it accordingly.


That is awesome to see how it tracks your solar as well. I would love to have solar on my house, but unfortunately I live in Virginia, and the HOA in my neighborhood forbids solar on houses if it can be seen from any of the neighbors’ houses or the street.


Just an aside… Would your HOA allow solar roof tiles as a roofing replacement? Expensive, but designed for these aesthetic situations…



Thanks for leaving this open. I just posted this question again to get people talking about the idea on here. For those who just want to say this isn’t possible, or complain about me asking the question, my only response is that it isn’t possible yet, and I would ask them to simply go look up the word “forum” in the dictionary. I just want to discuss this with others, and maybe that will trigger the ah-ha moment for someone and make this available. BTW, I love my new Sense monitor. I think it’s a great little piece of test equipment and worth every penny I spent on it, so thank you for this device.


I’m pretty sure they would allow those, but they’re way out of my price range. I would also prefer traditional panels that sit off the roof, because my house isn’t shaded, and those would create shade on my roof and in return keep my attic cooler in the summers. It is what it is, and hopefully one day I’ll have a house with solar.


Absolutely. I totally support that. Just know that we’ve thought very deeply about this (and even had a training mode in some prior app versions). It’s been researched and tested extensively by our engineering and data science teams. If we felt it was doable, we’d do it. That, of course, could change in the future, especially as we get more data. Energy disaggregation, in the broad way that we’re doing it, is still pretty new. Facial recognition, as @kevin1 noted above, had a bit of a head start and a lot more resources thrown at it. The good news is, we’re making steady progress and Sense will only get better.



I accept your challenge. First off, what I am saying is: if they implemented a training mode within the Sense app that you could turn on and off, Sense would know to start watching for devices turning on and off. Once training mode is on and an event/device turns on, Sense would ask you what just turned on and give you the option to label it (just like you label it when Sense finds a new device), or to label it as a guess. Once you’ve labeled it, you can test it by turning the device on and off several times to make sure you labeled it correctly. When you are done with training mode, your Sense app can send this information back for crowd sourcing, to add to the database they are using.


Sense is already taking snapshots of my power usage and has a record of my devices turning on and off. All I am asking is that we, the end users, be given the opportunity to help speed up the device discovery process through a learning or teaching feature.

My best guess as to why we end users aren’t given the option to train Sense is that the data being collected is sent to a data collection computer and analyzed there. Once the data is analyzed, they send you a notification that a new device has been detected, and that’s when you confirm whether that’s what it discovered, or select a different device from its list. We the end users are shown only a very basic representation (slower sample rate) of the power usage in real time on our computers and in the apps on our phones and tablets.


I’d also like to say that after three days my Sense has detected 5 devices, 2 of which I have identified. The first one I identified was my daughter’s aquarium heater, and I just figured out the second: the iron my wife uses in her craft room, which I originally thought was her tea kettle. I have “heat 3,” “dryer,” and “stove” on my list, and I have no idea what these are yet. I have turned on the stove again to see if it will trigger Sense to tell me it’s on, but it doesn’t, so it must be something else. My wife has been in her craft room sewing and watching TV over the past few hours, and the dishwasher has been running for a while as well (it gets used daily), but neither has been detected yet.



You’re right with your description. A highly compressed and specially processed version of the microsecond data is sent back to the Sense mothership for analysis and training. Think about it: they get voltage and current feeds for both legs of electricity in your house, plus additional processed info like the phase angle between the voltage and current on each leg, in the full splendor of 60Hz AC. We only get to see the finished product, a half-second-updated view of just the total RMS power, without all the complexity of the raw data.
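For anyone curious what the "phase angle between the voltage and current" buys you, here is a toy calculation (not Sense's actual processing; the sample rate, voltage, current, and 30-degree lag are all made up) showing that RMS values, real power, and the V/I phase angle all fall out of synchronized raw samples:

```python
import numpy as np

# Toy illustration: one second of synchronized voltage and current
# samples for one leg, with the current lagging the voltage by an
# assumed 30 degrees (an inductive load such as a motor).
fs = 60_000                                   # assumed samples/second
t = np.arange(fs) / fs
lag = np.deg2rad(30)                          # current lags voltage

v = 170.0 * np.sin(2 * np.pi * 60 * t)        # ~120 V RMS
i = 14.1 * np.sin(2 * np.pi * 60 * t - lag)   # ~10 A RMS

v_rms = np.sqrt(np.mean(v ** 2))
i_rms = np.sqrt(np.mean(i ** 2))
real_power = np.mean(v * i)                   # watts actually consumed
apparent_power = v_rms * i_rms                # volt-amperes
power_factor = real_power / apparent_power    # equals cos(phase angle)

print(round(np.degrees(np.arccos(power_factor))))   # 30
```

That recovered angle is part of what distinguishes, say, a motor from a purely resistive heater drawing the same wattage, and it is exactly the kind of detail the 2 Hz total-power view throws away.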

You should really dig into the type of analysis Sense is doing on their end using machine learning. If you pushed into it a bit, you would learn that when Sense is doing recognition on roughly a half-second window into the data, the training (you indicating “the garbage disposal just turned on,” and similarly “the garbage disposal just turned off”) has to happen within about the same amount of time to be helpful. Plus, the signatures associated with flipping a breaker are different from the in-use signatures of many devices (think refrigerators or HVAC thermostats), so you can’t just use the breaker as part of training for those. The training that does work within these short recognition windows comes from consistent machine-to-machine feedback from smart plugs, and possibly things like thermostats in the future.
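A toy sketch of why that timing matters (this is not Sense's detector, just an illustration on made-up half-second readings): if an "on" edge is flagged from one or two 2 Hz samples, a user's label has to land inside roughly that same window to correlate with anything.

```python
# Toy edge detector over half-second total-power readings. The 200 W
# threshold and all wattage values below are made up for illustration.
def detect_edges(samples, threshold=200):
    """Yield (index, delta_watts) wherever total power jumps by at
    least `threshold` watts between consecutive half-second samples."""
    for i in range(1, len(samples)):
        delta = samples[i] - samples[i - 1]
        if abs(delta) >= threshold:
            yield i, delta

# Baseline of 413 W "always on"; a hypothetical ~550 W garbage
# disposal turns on at sample 3 and off at sample 6.
readings = [413, 415, 412, 965, 968, 966, 414, 413]
print(list(detect_edges(readings)))
# [(3, 553), (6, -552)]
```

Each flagged edge is a single half-second sample; a "that was the disposal" label arriving ten seconds later gives the trainer nothing unambiguous to attach it to, since several other edges may have fired in between.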

And please remember that there are two phases to this process:

  1. Identifying the patterns as some kind of device
  2. Classifying the device as an XYZ…

The toughest one, by far, to train is the first.
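A minimal sketch of how different those two phases are, using made-up wattage deltas (nothing like Sense's actual models, just an illustration): phase 1 is grouping unlabeled edges into "some kind of device" with no name attached; phase 2 is merely pinning a label on each group afterward.

```python
from collections import defaultdict

# Made-up watt deltas from detected on/off edges around a house.
edges = [553, -552, 1198, 562, -558, 1205, -1201]

# Phase 1: "some kind of device" -- group edges whose step size
# matches, with no idea yet what the devices are. (Crude 100 W
# bucketing via round(x, -2), purely for illustration.)
clusters = defaultdict(list)
for delta in edges:
    clusters[round(abs(delta), -2)].append(delta)

print(sorted(clusters))          # [600, 1200]

# Phase 2: classification is just attaching a name to each group,
# which is the easier step once phase 1 has done its job.
labels = {600: "garbage disposal (guess)", 1200: "space heater (guess)"}
```

Real disaggregation has to do phase 1 on overlapping, noisy, multi-device data rather than clean step sizes, which is why that phase is the hard one to train.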


I would also like to challenge everyone who thinks they can identify their own devices, mainly because I’d like to be proved wrong.

We’re merely saying that putting what the human brain is seeing into code is not as plain-English as my personality thinks it should be. I’m not sure how to explain the complexity of what is being done, as much as it frustrates me at times.

Here’s the whole picture vs. up close, same device…



I’d say the human brain is fully capable of doing what Sense does, just not at the same speed. After all, Sense has only “learned” what it was programmed to learn. @Grandpa2390 explained it very well in a post.


I understand the desire to have the on/off bubbles show up within 1/2 a second of the device switching, but I think it would also be nice to see if Sense can use the “macro” view of the waveform in their AI implementation to help with the energy disaggregation retroactively.

I guess I see two uses for Sense: the first being the immediate picture of what is currently on/off in the home, and the second being a historical understanding of the power usage of the devices in the home. I think the second problem might be easier for ML to solve. And I don’t believe you need the immediate view to make decisions about how you should be saving energy.

I’d like to hear some thoughts on this topic from users as well as @RyanAtSense. It may be that Sense just doesn’t have resources to develop a parallel ML to enable this kind of view.


Somewhere on this forum, there are indications from Sense that they are branching out from the immediate 1/2-second bubble recognitions, mostly for EVs initially, since EVs have an extended power ramp for a reason. I think they refer to it as post-identification updates, as opposed to immediate updates.


It depends on the device, and on your house. With my house, I can identify my own devices; there’s never more than 2 or 3 running at a time, and they all follow the same unique patterns that the human brain is powerful enough to readily recognize.

It’s no fault of Sense that it can’t do what the human brain can do. Our brains have been a work in progress for millions of years (or designed by God, whatever you believe), and each of us has been training 24 hours a day for decades.
The technology that powers Sense, on the other hand, has been a work in progress for a few decades, and has been training 24 hours a day for 1 or 2 years. Even if it were as capable as a human, it is still only a 2-year-old baby (and I’m just being figurative here).

Resolution makes a difference as well. Just because something looks unintelligible up close or far away doesn’t mean you have to look at it from those distances. I don’t know if that’s the point you are trying to make. To make sense of an elephant, we stand back 15 feet. To make sense of a virus, we zoom in very, very close with electron microscopes. :slight_smile:
But the main point is that I know I can identify the appliances in my own house. My house has a very small vocabulary, and I’ve been looking at it for a year now. @kevin1’s house, on the other hand… I don’t think I could do the same. But who knows.

@samwooly1 I agree with you, but I think it is Sense that can’t keep up with the brain, unless I am not following the conversation correctly and y’all are talking about something different.

@serovner86 That would be interesting. Eventually I’m going to have to figure out how to export all the data in such a way that I can do what you described after the fact. Because even when Sense detects devices, it doesn’t always capture all of the power the device used. It detects that my fridge turned on a second or 2 after it actually does, and that’s OK. What disturbs me is when it not only detects that a device has turned on a second or 2 late, but also starts recording the power used a second or 2 (or longer, in the case of some of my devices) after the device starts using power.


I don’t know. I’d be afraid they’d all break during hurricane season. And solar panels are bad for the environment when it comes to disposing of them. They’re also, as you said, very expensive. The only reason they are even halfway affordable is government subsidies. I can’t afford them.

But these tiles do solve one issue I have with solar panels, even if they were affordable: they look ugly. My roof is oriented in such a way that panels would have to go on the front of my house, so it is a no-go for me. Estimates say they would cost me more money than the energy they would provide down here in South Louisiana, mostly because of the shape and orientation of my roof. If only my house had been built on the other side of the street, lol.

Otherwise, I wish I could get solar panels. I would be selling power back to the electric company all the time; they’d be cutting me a check every year.
I agree with Bill Gates: we need a “Manhattan Project” for a renewable, affordable energy source. A Manhattan Project for solar panels that produce more energy for less money, for example.


Your 2-second problem may not be a real problem. As I suggested, Sense takes a half second to a second to detect the on signature (or off), plus there could be another second or so of interaction with the Sense mothership before the bubble pops up. That doesn’t mean Sense delayed the start of recording device usage. The real measure would be to look at the Device Power Meter to see what Sense has actually captured.