I think what Sense should maybe do is also look at the data at a much lower resolution, to more easily detect these kinds of "large" consumers and more closely mimic how a human reads the data. I understand that at 1,000,000 samples/second things look very different, and that different devices can look very much alike, so that resolution will be needed to accurately tell apart the harder-to-detect devices. That said, for these large and "more easily" detected devices, maybe just look at 100 samples/second or even less, so Sense can "see" the distinctive low-resolution "shapes" of the waveforms that any human can easily interpret as "that's my EV charging" or "that's my heater turning on." Yes, at those lower resolutions you probably won't be able to tell "that's my Tesla Model S charging" or "that's my Honeywell model X heater turning on," but who cares initially? If my heater is first detected simply as "heater," and a few weeks or months later Sense finally determines it's a Honeywell model X heater, that's good enough for me. At least during those weeks or months I've already known how much power my heater has used.
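To make this more concrete, here's a rough sketch in Python of what I mean by "look at a lower resolution." I obviously don't know anything about Sense's internals, so the rates, the 1 kW step threshold, and the function names are all made up for illustration: average the high-rate samples down to ~100 samples/second, then flag any big step change as a "large consumer" turning on or off.

```python
import numpy as np

def downsample(power, rate_in=1_000_000, rate_out=100):
    """Average high-rate power samples down to a low-rate 'shape'."""
    block = rate_in // rate_out
    n = (len(power) // block) * block          # drop any partial block
    return power[:n].reshape(-1, block).mean(axis=1)

def find_large_steps(power_lowres, threshold_watts=1000.0):
    """Return indices where power jumps by more than the threshold
    between consecutive low-rate samples -- a large device switching."""
    steps = np.diff(power_lowres)
    return np.flatnonzero(np.abs(steps) > threshold_watts)

# Toy example: a ~7 kW EV charger switching on halfway through one
# second of simulated 1 MHz data (baseline ~300 W plus noise).
rng = np.random.default_rng(0)
raw = 300 + rng.normal(0, 20, 1_000_000)
raw[500_000:] += 7000                          # charger turns on

lowres = downsample(raw)                       # 100 samples for that second
for i in find_large_steps(lowres):
    print(f"large step of {lowres[i + 1] - lowres[i]:+.0f} W at sample {i}")
```

The real thing would of course look at the actual waveform shape, not just the step size, but that's the kind of coarse first pass I'm imagining: a cheap low-resolution check that says "something heater-sized just turned on" long before the high-resolution model figures out the exact make and model.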
Based on the responses I've seen in this and other detection threads, I believe Sense is currently working the "wrong way around": it's trying to detect everything from data sampled 1,000,000 times a second, when that's probably not necessary to detect a device's "larger category." Yes, it's needed for detecting brands/models, and probably for things like telling "this is an Xbox One" from "this is a Blu-ray player," but it's not really necessary for "this is a heater" or "this is an EV charger." The difference between an Xbox One and a heater should be pretty clear even at low resolution.
Just my $0.02.