How often do the models update?

Just wondering if anyone knows what the typical update rate is for new models getting pushed to monitors. It seems like a lot of models get pushed in the first few weeks and then things slow down quite a bit. Is there any way I can prompt an update to happen?

Also, does anything special need to happen for a deleted device to be re-detected? I moved a coffee machine to a different outlet and it stopped being detected. I figured it would be picked up again pretty soon if I deleted it, but that doesn’t seem to be the case.

Is your question about firmware updates (Sense is now on 1.25.2594) or iOS/Android/Web updates?
Moving a detected device to another outlet on the same breaker should not make a difference. However, if the outlet is on a different breaker, Sense will have to rediscover the device. My first discovered device was an aquarium heater; when I moved the heater to a different tank, Sense lost track of it. The other tank was on a different breaker. As soon as I moved the heater back to its original tank, Sense picked it right up. I never deleted the discovered device.

Thanks, rverwij. I am referring to the way Sense models our devices on the monitor itself. Like when you moved the aquarium heater, the new signature didn’t match the existing model it had for that device. In theory, if you left it in the new place long enough, it would be detected as a new device (and then you could merge them if you wanted to).

I am wondering how often Sense decides to look over our history and create new models or update existing ones. Is it on a regular schedule, or based on something else? Does the number of detected devices matter at all?

Of course, the other question is how many models there are and how big they are.


The two things I have heard from Sense about model deployment over the years are:

  1. A new model for a device in your household is deployed when Sense has enough “confidence” in the model reliably and unambiguously detecting a device it has identified. How that confidence is measured and what the threshold is are confidential, though it’s gotta be some probabilistic confidence measure.

  2. The models are deployed via an automated process. There’s no human involved in looking at your inventory or initiating the push, though I’m sure some human interaction is involved in planning and maintaining “training cycles” for different models. And I would suppose that there are cases with new, evolving models where a Sense data scientist might deploy a model to some customer monitors just to check its behavior in-system.
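To make point 1 concrete, here’s a toy sketch of what a confidence-gated deployment check could look like. This is purely my own speculation — the function name, thresholds, and precision metric are all made up, not anything Sense has confirmed:

```python
# Hypothetical sketch of confidence-gated model deployment.
# All names and thresholds are invented for illustration.

def should_deploy(true_positives: int, false_positives: int,
                  min_events: int = 50, min_precision: float = 0.95) -> bool:
    """Deploy a device model only once it has seen enough events
    and its detection precision clears a confidence threshold."""
    total = true_positives + false_positives
    if total < min_events:          # not enough evidence yet
        return False
    precision = true_positives / total
    return precision >= min_precision

# A model with plenty of clean detections clears the gate...
print(should_deploy(true_positives=120, false_positives=3))   # True
# ...while a rarely-seen device keeps waiting, no matter how clean.
print(should_deploy(true_positives=10, false_positives=0))    # False
```

A gate like this would also explain the slowdown: early on, the frequent and distinctive devices pile up evidence fast; after that, what’s left accumulates events slowly or never clears the precision bar.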

I would guess that things slow down for the same reason I speculated here:

Sense moves through the population of “interesting transitions”, building models and categorizing the easiest ones to unambiguously determine first. Those are the ones that occur with the greatest frequency and the greatest differentiation. In my mind, that means well-defined clustering in the 17+ dimension “feature space”. The greatest enemies of “detection” are probably:

  • Device on/off transitions that don’t pass the Sense monitor “transition filter” - for instance EV charging, mini-splits, and electronics power supplies. Sense has other ways of dealing with some of these.
  • Device transitions that are passed by the monitor, but form a big blobby cluster in the feature space that is too big to be one device and too undifferentiated to separate.
  • Two or more devices with nearly the same feature-space signature. I have some heating elements in my house that get confused frequently: several in-floor heaters of about the same wattage, plus my dryer heating element on normal dry.
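That last failure mode is easy to see with a toy example. The sketch below is my own illustration, not Sense’s math: it reduces the 17+ dimensional feature space to a single feature (wattage) and compares how far apart two clusters of transitions sit relative to their spread. When the ratio is near or below 1, the clusters overlap and the devices are confusable:

```python
# Toy illustration (mine, not Sense's) of why devices with nearly the
# same feature-space signature are hard to separate. One feature
# (wattage) stands in for the real 17+ dimensional space.
from statistics import mean, pstdev

def separability(cluster_a, cluster_b):
    """Ratio of centroid distance to combined spread along one feature.
    Values near or below 1 mean the clusters overlap badly."""
    spread = pstdev(cluster_a) + pstdev(cluster_b)
    return abs(mean(cluster_a) - mean(cluster_b)) / spread

# Observed wattage of on-transitions for three hypothetical devices:
floor_heater  = [4980, 5010, 5005, 4995]
dryer_element = [5020, 4990, 5015, 5000]   # nearly identical draw
space_heater  = [1500, 1510, 1495, 1505]   # clearly different draw

print(separability(floor_heater, dryer_element))  # well below 1: confusable
print(separability(floor_heater, space_heater))   # far above 1: separable
```

Two ~5 kW heating elements land in the same blobby cluster, while a 1.5 kW space heater is trivially distinguishable — which matches my experience of the in-floor heaters and the dryer getting mixed up.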