Just adding a data point: 5 days to the minute since the last N/A event, all 5 of my KP115s went offline tonight. What I noticed is that it is 5 days to the minute since the last time they all went offline, NOT 5 days since they were rebooted. I was a bit slow on the reboots last time and they were scattered over a couple of days, but the next N/A event happened exactly 5 days after the last one.
I changed the DHCP lease time to 30 days just before restarting all 5 plugs tonight.
@Offthewall could you provide the exact time and time zone? Also, is the time always the same between this event and all previous events? (i.e. always at 5pm) Did you ever have these running on a previous firmware that didn’t experience the issue? How old are they?
Thanks as well for all the information. I know you’re getting frustrated with the whole thing.
Now that is … interesting. You mentioned that you were trying other apps that accessed Kasa earlier… did those use your Kasa cloud access? Or were they all local integration? Do any of them manage Kasa schedules, or timers or away mode?
Also, do you have anything else that might have used Kasa cloud access? Have you changed your Kasa cloud password since trying the apps?
The plugs are all roughly the same age, installed between April 23rd and May 3rd in three batches, all ordered from Amazon. All were updated to the latest firmware on install.
I am in the Eastern time zone, currently on daylight saving time.
Here are the date and times of the last N/A events. I don’t have data prior to this since I deleted the Kasa integration at some point in a troubleshooting attempt.
June 13, 10:29PM
June 18, 10:36PM
June 23, 10:36PM
June 28, 10:44PM
July 3, 10:46PM
July 8, 10:51PM
July 13, 10:52PM
Every 5 days like clockwork (since I recently figured out it’s the off-to-off interval, unrelated to how long it takes me to reset them). The time of day also seems to be creeping forward slightly.
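The pattern jumps out if you diff the timestamps. A quick sketch (the year is assumed, since it isn’t stated in the thread):

```python
from datetime import datetime, timedelta

# N/A event timestamps listed above (Eastern time; the year is an assumption)
events = [
    "2024-06-13 22:29",
    "2024-06-18 22:36",
    "2024-06-23 22:36",
    "2024-06-28 22:44",
    "2024-07-03 22:46",
    "2024-07-08 22:51",
    "2024-07-13 22:52",
]
times = [datetime.strptime(e, "%Y-%m-%d %H:%M") for e in events]
deltas = [b - a for a, b in zip(times, times[1:])]

# Show each gap as "5 days plus N minutes"
for start, delta in zip(times, deltas):
    extra_min = int((delta - timedelta(days=5)).total_seconds() // 60)
    print(f"{start:%b %d %H:%M} -> +5 days {extra_min:+d} min")
```

Every gap comes out to 5 days plus 0–8 minutes, and the clock time drifts forward a total of 23 minutes over the month, which matches the “creeping forward” observation.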
I only tried one other app (Watt) after I noticed this N/A happening, in order to troubleshoot, and I have since uninstalled it. It did use my Kasa cloud account, and I don’t think I’ve changed the password since.
Your issues are mind-warping. I’m experiencing issues too, but not as uniform as yours. My plugs went out at 12pm almost on the dot; I’m on PST, so there’s no correlation there. I’m also experiencing the weird power spikes others have described elsewhere. At 12pm I got a spike, and then the plug went crazy after that.
You got me paranoid, so I just changed the Kasa cloud password. Now nothing but the official Kasa cloud has access with the new password. Breaking troubleshooting rule #1, I’ve now changed two things: the lease time to 30 days and the Kasa password.
Normally I wouldn’t recommend shotgunning, but in this case it’s appropriate because the test cycle is so long, and the second change can be applied to a subset of the group: one or two plugs.
You know what else is startling: my Kasa plugs have been communicating with my Google Home hubs. I noticed it in the pcaps. I have 8 hubs, and this morning I woke up to find all 8 bricked.
Hard reboots and factory resets failed to resolve it; they are toast. All 8, while I slept. I haven’t seen anything on Reddit suggesting firmware or updates, but I’m still monitoring.
I just added to the survey. 6 of my stalwart HS110s, which really haven’t given me any issues in months, suddenly turned into zombies tonight. One of them came back with a reconnect command, but the other 5 refused to come back to life:
The HP LaserJet plug was one of them; it showed up as N/A in Sense.
None of the zombies responded to pings, and they remained ping zombies even after power-cycling the smart plug. FYI, all of my many smart plugs had DHCP-assigned (non-fixed) IP addresses.
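If you want to catch a zombie episode without manually pinging, Kasa plugs accept connections on TCP port 9999 (their local control protocol), so a plain socket connect works as a liveness probe even where ICMP is blocked. A minimal sketch; the plug names and IPs below are made up, so substitute your own DHCP-assigned addresses:

```python
import socket

def plug_reachable(ip: str, port: int = 9999, timeout: float = 2.0) -> bool:
    """Return True if something accepts a TCP connection at ip:port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# Hypothetical names/addresses -- substitute your plugs' actual IPs
plugs = {"laserjet-hs110": "192.168.1.50", "router-hs300": "192.168.1.51"}
for name, ip in plugs.items():
    print(f"{name}: {'up' if plug_reachable(ip) else 'DOWN (zombie?)'}")
```

Run it from cron every few minutes and log the output, and you’ll get a timeline of exactly when each plug goes dark.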
I finally tried to update the firmware on all of my HS devices, forgetting that my router and main switch are powered from an HS300 that was party to the firmware update. The brief glitch caused both of those to reboot. The overall effect was a short network outage, but all the zombies came back to life.
I did a little fishing on the HS110 issue and found that they were also giving Home Assistant a rough time: