Refinement after device detection?

I can’t speak to Sense’s motivations, but I understand the challenges:

Adding to @kevin1’s explanation above, I would extend it like this:

The facial recognition analogy is a good one, up to a point. A face has a bounding box, much as you might imagine a Sense waveform has a bounding box marking “Device X turned on here and off there”. As a human (in the facial analogy), you can picture a crowd of faces and pick them out visually and spatially (stereo vision and motion). Sense’s challenge, though, is more like doing facial recognition on a movie where all the frames are superimposed over one another. In the extreme case, where things become effectively impossible, time is compressed too, i.e. the whole movie is collapsed into a single still frame. Where does my bounding box go? Tricky.
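To make the superposition point concrete, here’s a toy sketch (purely hypothetical numbers, nothing to do with Sense’s actual pipeline): each device on its own has an obvious on/off “bounding box” in time, but the meter only ever sees the sum, where the individual boxes merge into one blob.

```python
# Toy illustration with made-up waveforms: two devices with clear
# individual on/off spans become ambiguous once only their sum is visible.
fridge = [0, 0, 150, 150, 150, 0, 0, 0, 0, 0]      # watts per time step
heater = [0, 0, 0, 0, 1200, 1200, 1200, 0, 0, 0]

# The mains meter sees only the superposition of everything in the house.
mains = [f + h for f, h in zip(fridge, heater)]

def on_span(signal):
    """Return (first, last) index where the signal is non-zero."""
    on = [i for i, w in enumerate(signal) if w > 0]
    return (on[0], on[-1]) if on else None

print(on_span(fridge))  # (2, 4)  - fridge's own bounding box
print(on_span(heater))  # (4, 6)  - heater's own bounding box
print(on_span(mains))   # (2, 6)  - one merged box; the edges no longer
                        # cleanly attribute to either device
```

With only two devices the on/off edges are still recoverable; with dozens of overlapping loads the “bounding boxes” smear together, which is the still-frame problem above.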
