Smart Home Hub Integration

So, I use two types, but really if you integrate with one, you should be able to with almost all. Correct me if I’m wrong, but the vast majority of smart home hub systems are built on the common JVM language Groovy. Probably not the ones which are locked into their own proprietary protocol systems (FAIL), but the more inclusive ones are for sure.

The two I use are the Samsung SmartThings Hub v3 and the Hubitat Elevation Hub, which I know are based on Groovy. Others to consider are OpenHAB, Wink, etc. Whilst their capabilities are always getting better, I personally don’t consider the Google Home or Amazon Echo systems to be fully functional standalone hubs, but that will change at some point, and this should apply to them anyway.

So, integration, why and in what way? Firstly, I completely understand the reluctance Sense developers have to trusting direct customer involvement in the training of the system, based on the almost guaranteed infiltration of constant erroneous data points which would exponentially erode the integrity of the algorithms as they develop. It’s not anyone’s fault or a shortcoming of any customers; it’s simply the reality which accompanies the large-scale streaming of hundreds, if not thousands, of bias-burdened, untrained judgments, opinions, or assumptions which cannot be validated.

BUT, you could trust a smart home hub to inform the Sense system of what was activated, and at what time, and allow this data to be correlated with a specific current draw signature pattern detected by the Sense system. No, it won’t let me tell Sense that the huge cycling spike pattern is my stove top, because it’s not integrated into my hub (yet). But it could assist in parsing out the noise: the simple devices like light bulbs and other integrated devices.
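To make the idea concrete, here’s a toy sketch (all names, window sizes, and thresholds are invented for illustration, not anything Sense actually does) of how hub-reported on/off events could be lined up against whole-house power readings:

```python
# Hypothetical sketch: matching hub-reported on/off events to power deltas
# seen by an energy monitor. Window and threshold values are illustrative.

def correlate_events(hub_events, power_samples, window_s=5, min_delta_w=3.0):
    """hub_events: list of (timestamp, device, action) tuples.
    power_samples: list of (timestamp, total_watts) tuples, sorted by time.
    Returns a list of (device, action, watt_delta) candidate matches."""
    matches = []
    for ts, device, action in hub_events:
        before = [w for t, w in power_samples if ts - window_s <= t < ts]
        after = [w for t, w in power_samples if ts < t <= ts + window_s]
        if not before or not after:
            continue  # not enough data around the event to compare
        delta = sum(after) / len(after) - sum(before) / len(before)
        if abs(delta) >= min_delta_w:
            matches.append((device, action, round(delta, 1)))
    return matches

# A hub says the kitchen light turned on at t=100; the meter jumps 60 W.
events = [(100.0, "kitchen_light", "on")]
samples = [(t, 200.0 if t <= 100 else 260.0) for t in range(95, 106)]
print(correlate_events(events, samples))  # [('kitchen_light', 'on', 60.0)]
```

The point is only that a timestamped device event plus a matching step in the whole-house draw gives the training side a labeled example, without the user typing anything.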

I’m sure I’m missing something, so beat up the theory. I work in an industry where pride of theory is beaten out of people early, so you won’t hurt my feelers. The only way to make it better and figure out which solutions are valid is if we challenge them all to the stresses of the real world.



I don’t have, nor am I familiar with, a smart hub, except for the limited Amazon Echo. How do the hubs know what devices are connected?
I understand a bulb can identify itself but other devices like smart plugs, aren’t you telling it what is plugged in?
Without the hub being able to specifically and accurately tell what something is, it opens the can of worms for the corruption you mention.

Maybe all these devices are completely integrated without any user interaction now and I’m just not up on the technology.

Yeah, so in cases where the device is fully configured by the user, it wouldn’t work. However, many of these Hubs, and really the Google Home and Amazon Echo to a far greater extent, offer organic integration with a large list of partners. So, focusing on the Alexa you have, one could make the argument that a specific Alexa Skill wouldn’t work unless the user is actually using the device the skill is intended for. This would offer almost a pre-vetting to significantly lower the risk of bad data. Google works in the same way, only instead of calling them ‘skills’, they simply say ‘linking’ or ‘connecting’ with other partner devices.

To illustrate, I’ll use something I have integrated here and something my dad has as a skill on his Echo.

I have my Google Home system linked to a Geeni smart bulb (the poor person’s version of a Philips Hue). Most devices in my system I integrate using my Hubitat Hub, but in some cases such as this, I bring one in directly using the Google Home. This is done for a number of reasons, none of which are germane to this discussion. What should be noted is that many of the devices which I have integrated through my hub could instead be brought in through a linked account with Google Home, if I chose that route. Currently, one of the limitations of Google is the lack of Z-wave support, the virtues of which are a whole separate discussion.

So in order to get my Geeni bulb configured, I download the Geeni app and register it with the Geeni cloud. THAT right there is the validation we would need. After that, I simply link my Google account which is nothing more than simply giving Google access to my Geeni account through the Home interface so I can control it utilizing verbal commands.

Amazon is the same, only I would find and activate the Geeni skill. Activating an Echo skill and linking a Google Home account are literally two different names for the same exact thing. As with Google, I wouldn’t be able to activate the Geeni skill within Amazon Echo until I properly registered the bulb with Geeni, which once again acts as the validation.

In both cases, whenever a verbal or app command is sent through Google Home or Amazon Echo, there should be a way for that to trigger a signal (even delayed with an accurate time stamp) to Sense with the specific device identified enabling the system to correlate the detected change in current draw.

I’m trying to think of if and how this could be effective with the Groovy-based hubs. There are absolutely organic integrations with both systems I use, although far more with Samsung than Hubitat. Smart thermostat integration, like Nest or Ecobee, is what first jumps out to me, based on the high percentage of those manufacturers actively integrating as part of their business model.

Where you run into issues finding the same type of applications I described for Google and Amazon is, first, the fact that many of the devices are battery-powered sensors: not applicable. Second, almost all the smart locks are battery powered: not applicable. While some smart home device manufacturers put in the effort to integrate with the various hubs, most do not. Luckily there is a cadre of individual and amateur code writers in each hub community who are willing to provide their ‘apps’ free of charge, or in some cases for a nominal fee, on the community forums. This, however, presents a pretty obvious validation issue for Sense.

Lots to chew on, but after all that, it might be more advantageous to focus on the 4 big virtual assistants based on the device validation they would organically provide, if Sense agrees with this AWESOME idear!


I see your point.
I also have Geeni products along with many of the other “Poor Man’s” smart home devices.
Most of the products I have, like Geeni, Smart Life, and the others, aren’t capable of communicating the necessary information Sense would need. They don’t offer energy monitoring, which is the current requirement for Sense. I guess they would at least be able to supply what they are as far as make, model, and serial number.
Oops, got off topic of hub and got my mind stuck on the individual device. Time for bed!

So, if what you’re saying is accurate with respect to Sense requiring devices to provide energy usage data in order to be integrated, I would posit the following question to the Sense folks (respectfully): if the services connected to other devices are capable of, and in some way already providing, energy usage monitoring, what exactly is the added value of the Sense system in the end?

Regarding the vast majority of 3rd party device manufacturers, you’re spot on. The apps designed in-house by the device manufacturers are not likely to play well together in the Sense sandbox. Given the all too common, yet foolishly short-sighted, corporate obsession with proprietary protectionism, any attempt to do so would be tantamount to Sense trying to herd the proverbial cats. Doing so would require a tremendous level of effort and resources, which I contend would be wasted because the work has potentially already been done.

Why put in the effort to harmonize thousands of 3rd party device manufacturers to comply with Sense standards when Google and Amazon have already done most of the work for them? All Sense needs to do is determine which 3rd party devices are adequately validated by virtue of account linking with Google Home and Amazon Echo, and develop the Sense integration with just those two hubs so they provide the aforementioned data whenever those devices are activated or deactivated.

It’s important to remember, this concept isn’t meant to collate and harmonize the energy usage data provided by any other manufacturers. The way I understand it, that’s outside the intended evolutionary path of the Sense system.

This suggestion is based on the theory of the Sense algorithmic AI machine learning system leveraging currently deployed home automation architectures when validated 3rd party smart home devices are integrated for use with them. Depending on what Sense sets as validation standards, there could already exist a database of home automation devices from hundreds of 3rd party manufacturers, many of which may potentially meet or exceed any Sense-required standards and scrutiny, simply because validation occurred at the time the user linked accounts. Again, this would all rest on the Sense developers’ willingness to completely rethink how the Sense system should integrate with the various hubs and virtual assistants currently on the market.

The result would simply involve the Google Home or Amazon Echo, through an enhanced level of integration, sending a data packet, with the device type, which command was executed, and the timestamp, to the Sense cloud any time a pre-validated device is activated or deactivated using the hub. This data is then utilized by the Sense AI during the system’s initial device identification analysis through the correlation of two variables: 1. the Google Home / Amazon Echo reported device, the command, and the exact time, and 2. the variation in current draw signatures detected by Sense matching that time.
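For illustration only (Sense has no such public ingestion API that I know of, and every field name here is invented), the packet described above could be as small as:

```python
# Hypothetical event packet a hub could forward to an energy monitor's
# cloud. All field names are invented for illustration.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HubEvent:
    device_type: str   # e.g. "smart_plug"
    device_name: str   # user-assigned name, e.g. "kitchen_light"
    command: str       # "on" or "off"
    timestamp: float   # epoch seconds; may arrive delayed but must be accurate

def to_payload(event: HubEvent) -> str:
    """Serialize the event as JSON for transmission."""
    return json.dumps(asdict(event))

evt = HubEvent("smart_plug", "kitchen_light", "on", 1577836800.0)
print(to_payload(evt))
```

A delayed delivery is fine for this purpose as long as the timestamp itself is accurate, since the correlation happens after the fact against recorded current draw data.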

If the Sense AI integration is deliberately developed with the Google Home / Amazon Echo systems to this level of depth and complexity, the Sense AI algorithms would more quickly identify more types of devices, in more homes than just those of customers with Google or Amazon hubs. The Sense AI system as a whole would benefit from the perpetual feed of high-value validated data, enabling the AI to arrive at a higher level of confidence sooner during all device discoveries across the Sense enterprise.

The way I understand the Sense literature available on the Knowledge Base website, the greater the number of devices which Sense has successfully identified and which are operating simultaneously at a given home, the easier it becomes for the Sense AI to analyze and eventually identify those devices which produce more complex current draw signatures. This simply quickens the pace of that.

Did any of that make sense? Where am I? Where is my cane and my Meals on Wheels? Those kids are being loud and making me nervous…

I would love HomeSeer integration to give Sense ground truth of the dozens of Z-wave switches and devices.


Yeah, I’d love to someday learn brevity…

@BigNutz have you played around with HomeAssistant?


The core disaggregation methodology is Sense’s bread & butter.

All the integrations get you toast: sure, tasty, but not everybody has a toaster and they waste a lot of energy when you’re trying to feed the world, so to speak.

The value comes from the (intended) simplicity, i.e. just Sense on a panel is their “minimal” form of energy monitoring hardware. At scale, the simpler the hardware is to manufacture and install, the more reliable and more scalable the system will be. Imagine utility- or national-scale electricity providers drilling down into all the integration permutations … they can’t be expected to care, because where is the value added?

There are opposing forces with integration decisions that I’m guessing the Sense team think long and hard about.

So that’s a great question. The answer is a resounding YES. The reason is very simple. The one tablet-based system control dashboard which I feel is exactly what I want, and beyond cool looking, was being used, and could only be used, to control a HomeAssistant-based smart home.

The reason I switched from SmartThings to Hubitat was 100% based on the fact that SmartThings is almost all cloud-based control, and I need local control for my setup. But man, the guys at Hubitat don’t like me because I’m pretty vocal about how atrociously 1991 the Hubitat user interface is. It’s awful.

So what the what!? What am I waiting for!? The answer is pretty pathetic, but I’ll own it. When I sneak a look at the HomeAssistant website and setup, it always seems two notches above my nerd rating. I know what Groovy coding is, but I couldn’t code myself out of a wet paper bag.

Am I wrong? I could never find it stated: is HomeAssistant a locally based control system?


Yeah, I know I have the luxury of dancing around with the Good Idea Fairies in the wonderful vacuum of ignorance. Yes, it’s simple, but only to us. I don’t even want to look behind the curtain at the algorithms, for fear of a seizure. My point is a lot of the work is already done, they only need to deepen the integration with the hubs.

I believe it is, but I’m not entirely sure what you mean. I have Arlo cameras, so I need to log on to the cloud-based service to see them… the same goes for Sense, the Rainforest EAGLE, Rachio, Ring, and a few others. I guess that means they are NOT local. There are other devices that are configured locally, as in using private IP addresses.
HomeAssistant is installed on a computer or Raspberry Pi, and my understanding is that device is the “hub”. Integrations are then added… some are local, others are not. You can then configure remote access into the hub so your home devices can be seen/controlled via the internet. By default this is not enabled.

Yeah, you’ve got the basics down. The Z-wave and Zigbee protocols are what I see as the most widely used to enable local control. Basically, if internet connectivity is lost but the hub is still connected to a powered Wi-Fi router, locally controlled systems retain most wireless control functionality and automation capabilities.

Google Home and Amazon Echo are absolutely 100% cloud based, which becomes annoyingly obvious every time my internet drops and Google rudely interrupts me to let me know she’s worthless. SmartThings claimed at one point to be partially based on local control, but I never saw it demonstrated. So why the obsession with local vs. cloud-based control? I’m glad you asked!

Some folks insist on a local system because otherwise “They” will see what you’re doing and watch you and stuff… I won’t criticize that perception, but it certainly is NOT mine. Mine is a very minor distaste for the inherent operational inefficiencies of cloud-based control systems.

I’ll illustrate this using my own setup, because I regularly use both. I am a heavy Google Home user, and I have it as integrated with my Hubitat Elevation Hub as possible. When I use Google to turn on my kitchen lights, I do the requisite “Hey Google, turn on the kitchen lights.” She’s rocking about an 85% success rate overall, and some of that’s on me. So, the verbal command is received by my Google Home Mini, which digitizes it instantly in order to transmit it to my Wi-Fi router. The router determines where the packet is going and pushes it through my modem, over to the closest Comcast hub, and off to the cloud. After the packet redirects 3 or 4 times, it arrives at the Google Home AI interpretation and processing system somewhere in cyberspace. The data packet is then processed in ways which are a mystery to me. Does it analyze the command whilst still in its digitized wrappers, or does the system reconvert the command data back to analog? Whatever happens, let’s call it magic: magic happens, and the Google AI recognizes my command by validating the data integrity to confirm it’s actually coming from my Google Home device, and then scrutinizes the command against what my particular setup has integrated. As luck would have it, it matches the designation of “kitchen light” with a known device which resides on another system: Hubitat.

At this point, had I asked for a joke or something, the response would be sent back down the pipes the same way (sorta) that it came. But in this case, Google determines it needs to pass this command off to the appropriate partner app. So now Google sends an OAuth-authorized intent to the Hubitat servers, which are 3,000 miles away physically, but mere nanoseconds in Google speed. Once Google is able to confirm that its authorization is still valid, they execute a handshake, and like a drug deal in plain sight, Google slips the command to Hubitat, still in Google language. Once Hubitat bows and kisses the ring, Google goes into standby mode while Hubitat translates the Google command it just received into a Hubitat command. The Hubitat servers already know it’s associated with my hub, based on the OAuth handshake, so they determine whether the command matches what I have on my hub. They find that my kitchen light is actually listed, probably do a few more magic data validations, and, satisfied enough, take the command, now translated into Hubitat speak, package it up nice and neat into a data packet, and push it out across the expanse with my modem’s IP address in the To line. Once Hubitat confirms the data packets are successfully on their way, the Hubitat servers glance over at the Google server, which has been patiently waiting, and inform Google that the command was successfully sent. The OAuth session is now closed until next time.

Now it’s a race, almost always won by Google in my house. The Hubitat data packet is racing to my IP, but what of Google? Well, once Google receives the confirmation message from Hubitat, it immediately fires off a digital command back to my Google Home Mini, by way of Comcast servers and my modem’s IP address, through to my router, which, get ready for this, routes the data packet to the appropriate local IP address assigned to the specific Mini. Unless there is some weird built-in delay with Hubitat, the command packet from the Hubitat server appears to follow closely behind the Google Home data. As you’d suspect, the router this time pushes the command to the Hubitat Hub’s local IP address, where the Hub decrypts and validates the command and sends the appropriate signal via the appropriate protocol radio. In this case, it sends an “On” signal via the Zigbee radio, addressed with the unique device designation, in order to direct it along other Zigbee devices until it gets to the Peanut Smart Outlet which I have a lamp designated “Kitchen Light” plugged into. Once the command is received, the smart outlet validates it as legitimate and activates line power to the outlet, and tada, light! The outlet then sends a message back to the Hub to record the event and update the status to “On”.

Now, the Google data that came into my network ahead of the Hubitat command data: what happened to it? A fraction of a second prior to the actual light illuminating, my Google Home Mini announces “Ok, turning on one light” as its follow-up. It’s supposed to stop doing that and only play a tone as confirmation now, but all my Minis aren’t there yet.

So yes, that entire trip through cyberspace took probably 1.1 seconds total. But Jesus that’s a lot to just turn a light on. Let’s contrast that with what happens during local control. Same hub, same light, same command. The difference this time is instead of a voice command, I’m using a godawful looking dashboard UI on my cell phone.

When I launch the dashboard on my phone, what is really happening is I’m launching a preconfigured Google Chrome browser which always points to the local IP address my Wi-Fi router has designated for my Hubitat Hub. So up comes the dashboard, I cringe, but work through it. I find the square that has “Kitchen Light” in bold letters. I tap the square, and that fires off an HTTP request through my phone’s Wi-Fi radio to my router, and the router directs it to the Hubitat Hub. The Hub then translates the request into a device command, validates it in a few magical ways, and determines which radio to send an “On” command through and to which designated device. As before, the Zigbee radio is used and the message is sent; long story short, we’ve seen this part before: BOOM, light.
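For anyone curious what that local request can look like, here’s a hedged sketch modeled loosely on the URL layout of Hubitat’s Maker API. The IP address, app id, device id, and token are all placeholders, and your hub’s setup may differ:

```python
# Sketch of a LAN-only hub command, modeled on Hubitat's Maker API URL
# layout. HUB_IP, APP_ID, and TOKEN are placeholder values, not real ones.
import urllib.request

HUB_IP = "192.168.1.50"   # hub's router-assigned local address (placeholder)
APP_ID = "42"             # Maker API app instance id (placeholder)
TOKEN = "secret-token"    # Maker API access token (placeholder)

def build_url(device_id: str, command: str) -> str:
    """Assemble the local endpoint for a device command."""
    return (f"http://{HUB_IP}/apps/api/{APP_ID}/devices/"
            f"{device_id}/{command}?access_token={TOKEN}")

def send_command(device_id: str, command: str) -> str:
    """Fire the command at the hub over the LAN; no cloud round trip."""
    with urllib.request.urlopen(build_url(device_id, command), timeout=5) as r:
        return r.read().decode()

# send_command("7", "on")  # would turn device 7 on, if the hub were real
print(build_url("7", "on"))
```

The whole round trip stays inside the router, which is why it keeps working when the internet drops.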

SOOOOOO much less… EVERYTHING. So much cleaner, so much quicker (it actually is), and you know what, when the internet goes down for the 12th time in a day, I can still turn my kitchen lamp on without getting off my ass!! But the primary reason I wanted local control was because of the stupid internet. I wanted to automate the power cycle for my modem in the event of internet connectivity loss. So I got myself a Zooz Z-wave power strip, badda bing!

Lastly, if you’re not familiar with Z-wave or Zigbee, they are competing systems, both great, but both limited to simple, near-binary data transmitting and receiving. These are protocols for turning things on and off, not for streaming video or audio, or sending complex code.

That’s it!!!


You ask some good questions. At the risk of burdening you with knowledge :wink: , here are a few answers:

  • Sense prefers smart plugs that deliver energy info for two reasons: 1) energy results fit naturally into the Sense energy-focused UI and fill the gap for many devices that are likely near-impossible to detect today, plus 2) “ground truth” is best served up as energy/power measurements, not on-off.

  • Sense has one hub integration today, the Philips Hue lighting hub. The Hue is a bit unusual in that it has embedded, calibrated power calculations for each exact model of light bulb as a function of on/off/dimmer level. Not sure that is possible with all home automation systems or all forms of automated devices, but it might be a good example of how other hubs could be integrated. Two important features: 1) it computes energy/power for on/off/dimmed devices, plus 2) it communicates with Sense via the local Ethernet subnet.
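Here’s a toy sketch of that Hue-style idea: per-model calibrated power as a function of on/off/dim level. The model names and wattage numbers are invented for illustration, not Philips data, and a real calibration curve need not be linear.

```python
# Hedged sketch: per-model calibrated power estimates as a function of
# on/off/dim level. The model names and watt figures are invented.

CALIBRATION = {  # model -> (standby_watts, full_brightness_watts)
    "bulb_a19": (0.4, 9.0),
    "bulb_br30": (0.5, 11.0),
}

def estimated_watts(model: str, on: bool, dim_pct: int) -> float:
    """Estimate draw for a bulb; off means standby power only."""
    standby, full = CALIBRATION[model]
    if not on:
        return standby
    # crude linear interpolation between standby and full brightness
    return standby + (full - standby) * dim_pct / 100

print(round(estimated_watts("bulb_a19", True, 50), 2))  # 4.7
```

Because the hub already knows the exact bulb model and its current dim level, it can report a power figure instead of a bare on/off, which is the kind of “ground truth” described above.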

Thanks Kevin,

So help me out: when you have a plug capable of pushing out energy data, does the integration provide the Sense AI algorithms with a side-by-side comparison (of sorts), with the Sense-detected data on one side and the plug data on the other, playing back in sync?

I assume the AI device detection is based on a look-back analytic concept. All my devices were usually found after they had been off for a while. I can’t imagine the bandwidth of data you guys have flowing in at all times, at ever-increasing volumes.

I worked some correlation and pattern recognition analytics with some data sets as big as 2.5 million distinct data packets, or lines, but it was static once I finished the pull. This real time data tsunami kinda stuff blows my mind.

First off, I’m not a Sense guy, but a machine learning enthusiast, electrical engineer, and Sense user who likes trying to figure out what’s going on behind the scenes.

  • You are right in your thinking - the Sense “detection algorithm” for devices is created and refined by comparison between the Sense data and the plug data (or other “ground truth” sources). What really happens is that Sense creates a mathematical model for each type of device using machine learning techniques. The model takes in “features” (things like the two legs’ voltages, currents, and phases for your house) and tries to predict whether a device is on or off, plus its power usage. Sense then uses the incoming “ground truth” from the plug to adjust parameters of the model when they do training and model updates.

  • You have also hit on something else that is important - data flow. Sense certainly can’t ship all the raw 1-microsecond, 2-byte data back to the mothership for processing, so it does a lot of local processing and filtering. I’m guessing that there are 3 streams of data back to the mothership: 1) 1/2-second-resolution RMS power data for each leg of the whole house, 2) 2-second-resolution RMS power from the smart plugs, plus 3) a highly processed event signature for each major transition in power usage (my theory is that you see tags for each of these when you are watching the power meter live - the little waveform flags with device names or power numbers on them). That reduces the upstream traffic to Sense to something manageable. But it also means that slowly changing on and off waveform cycles, like EVs or heat pumps, aren’t spotted by the Sense monitor and have to be detected later from the 1/2-second-resolution data.

  • Even with the data reductions, Sense is probably dealing with hundreds of PetaBytes of data, some of it reliable, some only partially valid and some possibly garbage (I’ve had data dropouts due to a variety of factors, some my fault, some due to the monitor). So the machine learning approach is certainly not an easy way to go, but probably the best way to go given all the rapid advancements in both machine learning algorithms (the datasets are really the algorithms) and associated hardware acceleration.
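The half-second RMS reduction speculated about above can be sketched in a few lines. This is a generic illustration of RMS downsampling, not Sense’s actual pipeline; the sample rate and window size are made up:

```python
# Generic sketch of RMS downsampling: reduce raw instantaneous samples
# to one RMS value per window. Rates and windows are illustrative only.
import math

def rms(block):
    """Root-mean-square of a block of instantaneous samples."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def downsample(samples, rate_hz, window_s=0.5):
    """Return one RMS value per window of raw samples."""
    n = int(rate_hz * window_s)
    return [rms(samples[i:i + n]) for i in range(0, len(samples) - n + 1, n)]

# a 60 Hz sine at 10 A peak, sampled at 1 kHz for one second:
# each half-second window should come out near 10 / sqrt(2) ~ 7.07
raw = [10 * math.sin(2 * math.pi * 60 * t / 1000) for t in range(1000)]
print([round(v, 2) for v in downsample(raw, 1000)])  # [7.07, 7.07]
```

A second of kilohertz-rate samples collapses to two numbers, which is the kind of reduction that makes the upstream traffic manageable.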

I’m going to repost this article one more time, because it explains how we’re getting to the point where machine learning breakthroughs are really only limited by the availability of datasets with quality feedback/labeling, plus fast enough hardware for training.

That’s all awesome… Look at me, I’m pretending to understand everything you said! :rofl::rofl:

I seriously do appreciate you taking the time to teach it to me like I’m an 8 year old. I’m a civil aviation “guy”, and I have come to terms with the fact that at some point as a child, the language centers of my brain, where I’d process things like coding, speaking Yiddish, and successfully hitting on chicks, were deprived of oxygen… for like a LONG time. I’ve tried; I just can’t process any foreign language or coding language syntax which isn’t exactly like English, and that’s even American Blue Collar English!

That 2.5-million-line data analysis I did was related to one of the select subjects I’m able to hold an interest in: civil aviation. That said, it held my interest long enough to rock a career in whatever sector of the aviation economy I found myself in. Let me caveat that real quick: the part where I worked on federal environmental disclosure law compliance for air traffic modification actions was godawful, a 2½-year period of life which, despite all efforts, won’t be returned. EVERY… SINGLE… DAY… I pondered whether slowly suffocating myself in a 10 gallon bucket full of uncooked rice would provide me with less misery, at least until some soulless coworker revived me to punish me.

Thank EVERYTHING that the data analysis was a characteristic of the job which followed that desert-of-joy job. I mean, the actual processing and deduplication work required was mind-numbing, to say the least. Like I said, the goal of the job made up for any pain endured through monotonous hours of data crunching. Did I mention my shop was operating in a perpetual state of severe under-resourcing, which manifested as not enough people and no access to proper tools? Yeah, this was the job I loved. The point I’m leading you towards is that when I say 2.5 million packets or lines of data, I’m actually talking about Microsoft Excel lines… Yeah… And the only Artificial Intelligence we had was when one of us made an attempt to make people think we knew more about a certain topic than we clearly did. We deduplicated all that data with basic Excel functions and capabilities, and we LOVED IT!!!

Anyway, I might have digressed a bit there. It boils down to my being a committed full-disclosure kinda guy, often to the point of too much disclosure. That said, I don’t want people to think I’m anywhere near as smart on this stuff as Kevin is.

Hey Sense managers, we’re all going to return our Sense boxes and get refunds if you don’t hire Kevin. He’s nice and makes me way smarter than I am. We are, right? Hello… :microphone: This thing on?

@BigNutz I’m right there with you. I have used Sense for over a year now and just need to be able to integrate it into Hubitat to complete my dream smart home. Only real reason I bought the device was so I could make things happen with my smart devices based off of dumb devices powering on/off.

You may not have mastered any other language, but you have mastered at least a couple of flavors of English language comedy… I was dying at “EVERY… SINGLE… DAY… I pondered whether slowly suffocating myself in a 10 gallon bucket full of uncooked rice would provide me with less misery, at least until some soulless coworker revived me to punish me.” It’s not funny to live it, but incredibly funny to hear it…

And you were indeed doing the rudiments of machine learning with your massive dedupe effort using Excel. But today, the data is yuuuuuger !


So I’m not the only one with a Dream Smart Home!?! Lights gently brighten and dim as I walk through the house, delay free! Honestly, I have one wish for the Smart Home industry… Well two.

Will someone somewhere please finally create a single application which provides better than an Atari 2600 UI, oh… And… Wait for it… CONTROLS IT ALLLLLLLLLL???

That’s not much, is it? Honestly, the closest “hub” to doing this is my Google Home, oh well, except for the whole “CONTACT SENSOR!?! WHAT’S A CONTACT SENSOR?” old man attitude they apparently hold over in Mountain View. Friggen Alexa knows what they are. Although Alexa won’t swear, and Google will, soooo I’ve made my choice.

Second is a shortcoming which disappoints me pretty regularly. That is my belief that these talking virtual assistants will never evolve beyond their current novelty status, telling us dad jokes and setting timers, until they solve the natural language issues. Sadly, all the myopic manufacturers may never realize that they are all pouring barrels of resources into the exact same R&D initiative, with none any closer than the others. For the love of Oprah, PLEASE combine resources and share results when you finally design an assistant which is not fully dependent on a rigidly scripted set of commands with no syntax flexibility. I just want to tell Google to “Activate That Ther Illumination Device in the Kitchen” before AI Judgment Day. I have a Husqvarna robot lawnmower named Mr. Goat. I’m pretty certain he’ll someday become self-aware and realize how lazy I am, which will quickly develop into resentment, and everyone knows what’s next. Jesus, he’s a friggen robot lawnmower with blades and a motor. I love Mr. Goat, best purchase EVER!! I’ve never had my preconceived expectations for a product so soundly and completely blown out of the water as with Mr. Goat. That’s all I have to say about that.