As far as I understand the Sense architecture, device detection is done by models loaded into the probe (the local Sense box), but the models are selected, refined, and tuned in the cloud, with some human assistance. I'm fairly certain the models use some form of recurrent neural network, maybe an LSTM. If you're familiar with training neural networks, you'll understand that it's hard to convert feedback like "this device just turned on/off" into a useful training signal — a labeled example whose gradients, via backpropagation, actually tune the model for that specific device.
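To make the RNN idea concrete, here's a toy sketch of how a recurrent model folds a stream of power samples into a running hidden state and reads out an "is this device on?" probability. This is purely illustrative — the weights are random, the signature is fabricated, and Sense's real models (likely LSTMs, which add gating on top of this) are certainly far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated power signature: 20 samples of idle, then a ~1500 W device turning on
signature = np.concatenate([np.zeros(20), 1500 + 20 * rng.standard_normal(30)])

# Tiny vanilla RNN cell with random (untrained) weights
H = 8                                    # hidden units
Wx = rng.standard_normal((H, 1)) * 0.1   # input -> hidden weights
Wh = rng.standard_normal((H, H)) * 0.1   # hidden -> hidden (recurrent) weights
b  = np.zeros(H)
Wo = rng.standard_normal(H) * 0.1        # hidden -> "device on" logit

h = np.zeros(H)
for watts in signature:
    # each time step folds the new power sample into the running hidden state
    h = np.tanh(Wx[:, 0] * (watts / 1000.0) + Wh @ h + b)

p_on = 1.0 / (1.0 + np.exp(-(Wo @ h)))   # sigmoid readout
print(f"P(device on) = {p_on:.3f}")
```

Training would mean adjusting `Wx`, `Wh`, `b`, and `Wo` by backpropagation through all 50 time steps so that `p_on` matches the on/off labels — which is exactly where the "converting user feedback into gradients" difficulty comes in.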
Here’s an interesting article on the use of RNNs for learning languages; the end hints at how RNNs learn to model complex, not fully deterministic sequences — much like power signatures. The article also makes the case that lots and lots of training data is needed before complex structures are “understood”.