A smart home vs an automated home

  • I did a blog post on my website about this topic and wanted to bring it up here to see what people think about a different realm of sensors. I'll start by talking a little about the topic title.

    Some people just have automated homes, and others with more complex setups start touching on the topic of smart homes. What is the difference? Here is a scenario:

    You normally wake up at 6:00 AM for work during the week, so you set up timers that do different things. At 5:55 AM the coffee pot turns on to automatically make your morning coffee. At 6:00 AM, the alarm sounds and some lights in the house turn on so they're ready for when you get up. At 6:50 AM, the garage door automatically opens for you to leave for work. And at 6:55 AM, the garage door closes, your house gets locked, and the alarm sets itself.
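    A routine like that boils down to a timer table. Here is a minimal sketch in Python; the device names and the `due_actions` helper are made up for illustration, not part of any particular HA system:

    ```python
    # Minimal sketch of the weekday morning routine as a timer table.
    # Device names and actions are hypothetical examples.
    WEEKDAY_SCHEDULE = [
        ("05:55", "coffee_pot", "on"),
        ("06:00", "alarm", "sound"),
        ("06:00", "bedroom_lights", "on"),
        ("06:50", "garage_door", "open"),
        ("06:55", "garage_door", "close"),
        ("06:55", "door_locks", "lock"),
        ("06:55", "security_alarm", "arm"),
    ]

    def due_actions(now_hhmm, schedule=WEEKDAY_SCHEDULE):
        """Return the (device, action) pairs scheduled for this minute."""
        return [(dev, act) for t, dev, act in schedule if t == now_hhmm]
    ```

    The point of the smart-home argument later in the thread is that everything in this table is fixed: nothing here knows whether anyone is actually home.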

    A simple scenario that could be handled by home automation. What then makes an automated home a smart home? Data, data, and more data. The more data you have your system collecting, the more informed the decisions you can have it make. One of the things that I wanted to delve into with this post is occupancy sensing, and to take that a step further, people sensing. I am not just talking about having motion sensors to tell you when someone walks by. I am talking full-on people sensing: your house knowing not only how many occupants are inside, but where each of those occupants is. This is one of the hardest parts of a true smart home.

    Let's look at the scenario I mentioned above. Normally you wake up at 6:00 AM to go to work, but you decided some time ago that you would take a vacation day. Your system looks at your calendar, sees this, and knows that you don't need to get up early, so it doesn't sound the alarm or turn on the coffee pot. What if the system not only knew that someone was in bed, but knew that it was you? So you sleep in a little and wake up at 6:30 to start your day. Your automation system sees that you have gotten up, so it then starts the coffee pot. It knows that you have the day off, so it doesn't open and close the garage door, and it doesn't set the alarm.

    What if your system not only knew you or other members of your family were home, but where in the house people were at any given time? What if your setup knew that you liked the temperature in a room at 70° F, but your wife liked it a bit cooler at 67° F, and could adjust the room temperature based on who was in the room? These are just some tip-of-the-iceberg examples of things that make up a smart home. I used to be a user of a controller software called Open Source Automation (OSA). Here is a video from one of OSA's creators, Vaughn Rupp. https://youtu.be/KTLPAW9YCwM

    So now on to my question: what are people's thoughts on ways to do people sensing? It could be MySensors-type ideas or others.

  • Hero Member

    Maybe carry around a weak Bluetooth beacon that's unique to you? Then put receivers in rooms that you want monitored. It would be preferable to have it built into a watch or something that you're always wearing anyway.
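    The simplest version of the beacon idea is to assign a person to whichever room's receiver hears their beacon strongest. A rough sketch, with hypothetical room names and RSSI values (real deployments would need smoothing and calibration, as discussed further down the thread):

    ```python
    # Sketch: assign a wearable beacon to the room whose receiver hears it
    # strongest. RSSI values are in dBm (less negative = stronger signal).
    def locate(rssi_by_room, threshold=-90):
        """Return the room with the strongest reading, or None if the
        beacon is too weak everywhere to be considered present."""
        room, rssi = max(rssi_by_room.items(), key=lambda kv: kv[1])
        return room if rssi > threshold else None
    ```

    In practice a single raw reading is noisy; a moving average per receiver, or proper trilateration with three or more receivers, would make this far more stable.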

  • @dbemowsk there is some promising research using radar: single source, with multiple receivers in a T shape. In theory it is 3-dimensional and whole-house. I'll see if I can find the paper I read; I think there is similar work that uses off-the-shelf wifi gear.

    I've also looked into (and not found enough to do anything with) lower frequency RF, but it seems to require running wire around the room, and probably a grid on the floor. It also seemed similar to capacitive sensing, but I don't understand the science enough to truly claim that.

    I tried to find research on what EM people absorb and emit, but didn't find much publicly available. I don't like the idea of carrying a device around for tracking like this.

    There are methods sensitive enough to detect respiration, used in fire detection, that may be of use. Maybe extremely sensitive microphones that can detect heartbeats or footsteps. Some serious signal processing there, which may lead to DSP devices or maybe some GPU acceleration.

    Any of these could be doctorate-earning projects, but I keep hoping. Things like Intel's Movidius USB stick may make this stuff more accessible.

    Sorry these aren't much more than speculation, but it's what I have found when looking for similar solutions.

    Oh, 2 more: multi-"pixel" IR (Omron makes some that are 4x4 or 1x8); several of these, or some kind of scanning, to get enough resolution. Or maybe webcams and OpenCV-type computer vision.

  • Maybe something like a Lytro camera could be adapted, if the data could be gathered live and processed quickly...

  • Back when I was with the OSA project, one thing people were using was cell phone data. Not so easy for detecting what rooms people were in, but you could get some idea of who was home or not. I have also heard of people doing cool hacks with Microsoft Kinect cameras. Here is an example of some good motion sensing with a Kinect. https://www.youtube.com/watch?v=3-kZjkNIFxY

    It's an interesting topic which is why I posted it. The key is gathering data. The more data you can gather, the better the chances you can figure out who is home and where they are in the house.

  • Hero Member

    I suggest figuring out what would be the easiest way to try it out. If you like it, then maybe you're more inclined to try computer vision or something that's not so easy.

  • I know Amazon Echo can now recognize different voices, so it could be listening to see who's home?

  • Hi, @dbemowsk

    Interesting topic 🙂 Not more than speculation here, just an idea: thermal cameras that would capture the size and shape of a subject. By the way, a Kinect could also be used for this purpose, and is maybe even more easily available. Basically that would allow you to easily count the people in each room, even in bed - if you have decided just not to get up for some reason 😉 Not sure if a Kinect could detect a motionless person under a blanket.

    Another interesting add-on to the solution search: the Xandem alarm system. They claim it can detect and track more than one object (subject) interfering with the radio waves. But they track changes in the RF "net", so if someone just sleeps like a baby - not an option.

    And if you are familiar with machine learning and video processing - I think the possibilities become very foggy, but endless. With some serious data processing, by shape, size, etc. you can start recognising not just whether someone is in the room, but who.

  • @matt-shepherd Something I hadn't thought of. Nice idea. I do have an Amazon Echo in my living room. My only question on that is: have they figured out a way to separate 2 Amazon devices, like an Echo in one room and a Dot in another room, to know what room a sound came from? Last I checked, that was not possible.

    I was looking into it a bit for other automation stuff like turning on a light in a room. For example, when I am in the living room and I say "turn the overhead light on", it turns on the living room overhead light. When I go into the office, where now I am being heard by the Dot, and I say the same phrase, it should turn the office overhead light on. From the reading that I have done, you can't use them that way, but it should be possible somehow.

  • Just read this.
    Sounds like Amazon actually IS looking to be able to define rooms. I can only hope that somewhere down the road they add some of that data to the Alexa app for my Vera Plus. I could then script things by room.

  • Mod

    In his latest video, Andreas Spiess talks about presence detection with an ESP32 by sniffing wifi traffic. There are a number of commercial products where 2 or more units can be installed to triangulate signals from smartphones' wifi and Bluetooth (I had to install 3 of those in a Mercedes dealership; they were from Netgear if I remember right). Basically there is a master unit and they talk to each other on their own separate wifi network; I think it is a kind of mesh network, because they can relay data from distant nodes to the master device. Of course, once installed they need to be calibrated, by me standing in a known position with my smartphone in my hands and turning 90° each time I was told to.
    I too would have liked to have the home automation system be aware of people in the house by means of BT devices, but it is still on the to-do list. As said before, the data analysis is going to be tricky, but it is going to be the main subject of the coming years, as more and more AI and machine learning cloud services are popping up (I took a quick peek at the IBM site and got scared by the number of services that are available that I will never be able to use, as my programming skills are not really the best).

  • @dbemowsk
    I use mine with OpenHAB as they have a skill, and I'm waiting for the new 'Routines' to be released. I think Amazon is raising its game, and with the new Routines https://www.theverge.com/2017/9/27/16375050/alexa-routines-echo-amazon-2017 and now that it can recognise different voices https://www.theverge.com/circuitbreaker/2017/10/11/16460120/amazon-echo-multi-user-voice-new-feature it could help my house turn from automated to smart very soon.

  • @matt-shepherd If they can then get multi tiered location setup down, that would be awesome. By multi tiered location setup I mean being able to define say 2 houses, maybe a main home and a vacation home, and then have devices that have a defined parent, such as main home or vacation home. That would allow for say having two devices named living room light.

    My only thought on that for occupancy sensing would be: what if you walked into the room and didn't say anything? Nonetheless, it gets back to what I said about data; the more you have, the more informed your scripting decisions can be. One other thing: if you used multiple Echos or Dots, you would have to make sure that more than one device doesn't hear the command.

  • Hero Member

    The question may turn out to be whether people are willing to accept less-than-perfect performance in exchange for occasionally more capability (when it works). I think Z-Wave and X10 were good examples (of unreliability) showing that's not what people want. People seem to prefer less capability, but have it work 100% of the time the way it's supposed to. At the very least, WAF is low on unreliable things.

  • @wallyllama interesting.

  • This sounds like a very doable basic solution to counting the number of people in a room.
    As the guy asks, how do you make it look nice? Would there be another way of doing the two beams? Small laser pointer modules, perhaps?

    If you put these on every doorway in your house, you could get the logic down to where it would know a fairly exact count of how many people are in a room at any given time.
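    The counting logic for a two-beam doorway is simple: the order in which the beams break tells you the direction of travel. A minimal sketch (beam labels "A"/"B" are hypothetical; "A" is assumed to be on the outside of the doorway):

    ```python
    # Two-beam doorway counter: direction comes from which beam breaks first.
    # Beam "A" is outside the room, beam "B" inside.
    def update_count(count, first_beam, second_beam):
        """Someone entering breaks A then B; leaving breaks B then A."""
        if (first_beam, second_beam) == ("A", "B"):
            return count + 1          # entered the room
        if (first_beam, second_beam) == ("B", "A"):
            return max(0, count - 1)  # left the room; never go negative
        return count                  # partial crossing or glitch: ignore
    ```

    A real node would also need a timeout between beam events and some debouncing, since people lingering in the doorway can break one beam repeatedly.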

  • @dbemowsk search "see through walls with wifi". I suspect it could be mounted to the wall and covered with art or something transparent at 10 GHz, or, with some loss of sensitivity, placed in the wall itself.

  • @wallyllama Though I see future potential in this, I don't see it as anything that can be put into operation easily at this stage.

  • Hero Member

  • @NeverDie nice! Sparkfun has a breakout that is 20% cheaper than just the Omron sensor. This is getting closer to my price range. The radar modules are cheap and might be fun, but this would likely yield a working solution sooner.

  • Hero Member

    @wallyllama said in A smart home vs an automated home:

    @NeverDie nice! Sparkfun has a breakout that is 20% cheaper than just the Omron sensor. This is getting closer to my price range. The radar modules are cheap and might be fun, but this would likely yield a working solution sooner.

    Is this the one you found? https://www.sparkfun.com/products/14289

  • Once a method of sensing people is selected/found, then MySensors can be used as the transport layer. This leads to the question of the actual "smarts". The various MySensors-supported packages seem to track state, allow control, and have scenes, which are good data and tools for the smarts to work on, but don't seem to be smart themselves. Am I overlooking something?

    Commercial products use the 'cloud' to gather a lot of data from local devices, and create an AI of sorts that local devices then query for the appropriate response to specific conditions. I'm not interested in sending all my data to the cloud, so I'm interested in completely local solutions.

    Again, this doesn't currently exist (that I know of), but many pieces do. Some are just pieces (Hadoop for storing data, e.g.), some are partway there (Mycroft AI, e.g.), some have large backers (the Movidius AI accelerator). Some assembly required.

    Are there more complete solutions that I may not know of?
    What goals do others have?

  • @NeverDie no, it was an AMG8833 breakout, and at Adafruit, not Sparkfun, sorry; $39 US. Mouser and Digikey have just the sensor from $22 US in small quantities.


  • The AMG8833 has an 8x8 grid and a 60° field of view, so with an 8' (2.4 m) ceiling it will cover a square with roughly 9' (2.8 m) sides at the floor (2 × height × tan 30°). One pixel will be about 1' 2" (35 cm) at the floor. That should be plenty of resolution even without interpolation. I suspect interpolation could give an effective grid of 16x16 at least, maybe more.
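    The coverage geometry is easy to check numerically. This sketch assumes the 60° figure from the datasheet is the full field-of-view angle and that the sensor points straight down:

    ```python
    import math

    # Floor coverage of a downward-facing sensor with a square field of view.
    # fov_deg is the full FOV angle; ceiling height and side length share units.
    def floor_footprint(ceiling, fov_deg=60.0, pixels=8):
        """Return (side length of covered square, per-pixel size at floor)."""
        side = 2 * ceiling * math.tan(math.radians(fov_deg / 2))
        return side, side / pixels
    ```

    For a 2.4 m ceiling this gives a square about 2.77 m on a side and roughly 35 cm per pixel; the same formula in feet gives about 9.2 ft for an 8 ft ceiling.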

    Careful planning and mounting in a corner or on a wall would have some trade offs, but might allow for covering a larger area with one sensor.

    One trade-off is identification. Is that heat blob a person or @gohan's cat? That might be doable, but telling whether it is Mom, Dad, or a teenager would probably need supplemental information.

    Stationary heat sources (lamps, vents, etc.) could be filtered out, probably in several different ways. I have some large windows that may blur the data, but this is where situational awareness would come in. E.g. if (curtains == open && tod == daytime) then apply a filter to pixels x through z; maybe time of year, etc.
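    One common way to filter stationary heat sources out of a thermal frame is background subtraction: keep a slowly updating running average of each pixel and flag only pixels noticeably warmer than that background. A sketch for an 8x8 grid; the `alpha` and `delta` values are guesses that would need tuning:

    ```python
    # Background subtraction for an 8x8 thermal frame (values in degrees C).
    def update_background(background, frame, alpha=0.02):
        """Exponential moving average per pixel; alpha is kept small so
        a person standing still is not absorbed into the background."""
        return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
                for brow, frow in zip(background, frame)]

    def hot_pixels(background, frame, delta=1.5):
        """Return (row, col) of pixels at least delta degrees above background."""
        return [(r, c)
                for r, brow in enumerate(background)
                for c, (b, f) in enumerate(zip(brow, frame[r]))
                if f - b >= delta]
    ```

    Lamps and vents end up baked into the background, while a person walking in shows up as a cluster of hot pixels; the curtains/time-of-day rule above could then mask specific pixel ranges.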

    Other obstacles would probably look like cold spots and, unless they are large, wouldn't affect detection of people. They might dim things a bit, so maybe a filter would be needed here too.

    This is quite doable. I've been thinking about it for a while, and seeing usable sensors for effectively half price has me a bit excited. I apologize if I have monopolized the podium a bit.

  • Mod

    You are pretty much facing the same problems as all the engineers working on self-driving cars or anything else using computer vision (which is going to be tricky to handle on an Arduino alone, and that is why many services rely on cloud computing).

  • @gohan true for the larger goals, but this sensor is 64 pixels (256 with interpolation) and we need to track a dot. I think an Arduino could gather the data, do a bit of preprocessing, and (the MySensors part) transmit the data to a Raspberry Pi for "whole house" tracking.

    This is pretty low-res and I think a Pi could handle it. If not, Intel has a Movidius USB stick meant for computer vision/AI acceleration; I believe OpenCV has been ported to it. So while this is on the edge, some of the blood has dried.

    The other plus is that houses move slower than cars; unless people are running indoors, a 2 to 3 second refresh rate should be accurate enough.

    This is a large project and MySensors would only be a portion of it, so for now I'll try to limit myself to talking about how a node based on this sensor would work and whether it fits into MySensors properly or not. There is plenty there to discuss.

    @dbemowsk again, sorry for hijacking your thread. I'm going to look at the guides for submitting a node to openhardware.io; I don't promise I'll be fast, so don't stop working on your own ideas.

  • Hero Member


    To what degree will detection range be an issue with these sensors?

  • Mod

    @wallyllama I think an easier way to do tracking of people in the house would be through BT tags; this way you also have identification. Image preprocessing on an Arduino I think would be hard to achieve; maybe on a Pi Zero.

  • @NeverDie the datasheet says 7 meters max, so there is probably enough margin, at least for typical room sizes in the US. I think the 60° FOV will be the bigger issue: getting coverage. Imagine you place the sensor in the center of the ceiling of a square room about 9 ft on a side and 8 ft high (~2.8 × 2.4 m). The sensor's field of view would exactly cover the floor, but it is shaped like a pyramid with the sensor at the peak, so if you stand flat against a wall, only your feet would be in view.

    @gohan's suggestion of Bluetooth tags doesn't have that problem; a tag can be seen anywhere the signal reaches. You can have multiple detectors for coverage and triangulation. If you have a smartwatch or phone you always carry, then you don't even need a separate tag. It is relatively cheap and simple, and most of the tech is done already.

    (Now here is where I loop around and start spinning in circles.) I don't want to have to carry anything; it should be possible to detect my presence by all the signals bouncing off me already, like light, or IR, or wifi, or radar... and then the googling happens.

  • Plugin Developer

    Perhaps an alternative definition of a smart home could be whether it connects to the cloud? Or whether it uses Big Data / machine learning / aggregation of the habits of many households to find solutions to things?

    Yet another, for me, is whether smart means 'ethical'. For example, a cloud-connected home that shares my life patterns with third parties (which most devices do these days...) should never be called smart.

  • Mod

    @alowhum well... it all comes down to the question: "do you have enough money/time/skills to invest in a homemade Big Data / machine learning project"? Do you even have an idea of how complicated that system would be to set up and maintain later on?

  • Hero Member

    Instead of going off on wild tangents about privacy and the like, I suggest we re-focus by asking what good or useful thing we might accomplish if we could make the thermal 8x8 pixel sensor work. After all, this is the first thread to consider it, and it would be a shame to waste the opportunity.

  • Mod

    @NeverDie I agree, but it is pretty much related, as I really don't think image processing could be done on a microcontroller without the help of a backend server that actually collects all the data from the sensors, correlates it, and then gives it a meaning that can actually be used.

  • Hero Member

    In that case, I suggest @wallyllama start a new thread devoted just to the sensor and how best to make use of it. I wager something can be accomplished without resorting to full-blown data fusion. Plainly, if you tie your success to difficult, unsolved research problems that have long resisted solution, you will quickly bog down.

    The two obvious things are direction of movement and, as has already been mentioned, finer location granularity within a room. Since it's thermal, it could know that you're sitting on the couch even if you're not moving. That's big. Just think of all the occupancy sensors that wrongly conclude the room is empty if nothing is moving. We've all had that experience, I'm sure.

  • You guys are thinking of a complex solution to this as a single package that does it all. What if you dumb down the scenario a bit? Don't try to make a determination using only one type of sensor. After doing some more research on the guy who did the infrared doorway sensors, he said it was a pretty reliable way of counting room occupants. Maybe you use the infrared doorway sensors to count the number of people in an area. Once you have a reliable count, you can start looking at ways of identifying who those occupants are, if needed. Thinking in a broad sense, putting some fuzzy logic behind data from a number of other sensors, whatever they may be, might give you some kind of fingerprint for a person that could be used to identify people. Using that approach may give you a little better accuracy too, depending on the sensors and logic you use.

  • Mod

    @dbemowsk that "fuzzy logic" is what big companies are spending millions to develop, and that's why I'm not expecting much from a few sensors and Arduinos.

    @gohan I get it, but at this stage, any bits and pieces that you can put together that do even a fraction of it are better than nothing. I figure if I can start with the people-counting part and somehow layer things on from there, I'll be a little ahead of the game.

    I don't want anyone to worry about hijacking my thread. Speak freely; this is how good ideas come to be. It's precisely why I started this thread: I figured it would spark some creativity from the community.

  • So looking at the MySensors end of this, would it be too far off to think of adding a new node type, "person"? A person node could have customizable properties that would allow you to define different useful bits of data related to that person. For example, preferred room temperature, or preferred light level. Heck, you could even have a room or area property that would get set when the system sees you move to a different area. So when you do figure out better occupancy sensing, you can automatically set user-preferred light levels and room temps based on who is in the room, and dial them down after a person leaves. If there is more than one person in a room, it could take an average of the properties of everyone in the room to determine a setting like room temperature, providing a happy medium.
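    On the controller side, the averaging idea is straightforward. A sketch of per-person preferences with a room setpoint averaged over current occupants; the names, preference values, and default are made-up examples:

    ```python
    # Sketch of the "person" idea: per-person preferences held on the
    # controller, with the room setpoint averaged over current occupants.
    PREFERENCES = {
        "dan":  {"temp_f": 70, "light_pct": 80},
        "wife": {"temp_f": 67, "light_pct": 60},
    }

    def room_setpoint(occupants, prefs=PREFERENCES, default_f=68):
        """Average the preferred temperatures of everyone in the room;
        fall back to a default when the room is empty or occupants unknown."""
        temps = [prefs[p]["temp_f"] for p in occupants if p in prefs]
        return sum(temps) / len(temps) if temps else default_f
    ```

    With both people from the example in the room, this lands on 68.5° F, the happy medium described above.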

  • I think @dbemowsk is hinting at something that fits my understanding of "emergent behavior": individual simple things interact and create more complex results. "How many" and "whom" are different questions. Counters in doorways plus a list of whose phones are at home, maybe with some historical data of who likes to sit in which chair. There are probably better combinations, but that is what I got from his recent comments.

  • @dbemowsk said in A smart home vs an automated home:

    So looking at the MySensors end of this, would it be too far off to think of adding a new node type, "person". A person node could have customizable properties that would allow you to define different useful bits of data related to that person.

    I'd like to hear more about how you would use it. Below is my two pennies worth.

    My thinking is that MySensors is a transport for relatively simple data (state, values, counts, etc.), things nodes need to set the environment up, or to report back to central command.

    A complex object like "person" could have all kinds of attributes and preferences, which would modify values sent to nodes. Example: the curtain controller knows to open during the day, close at night, and maybe close for an hour at 10 AM in the summer when the sun shines directly in and heats up the house (could also be a light sensor). But if the weather says it is clear, and Kent is in the living room, and it is night, open the curtains up; that would be an override coming from central. The node controlling the curtain doesn't need to know it is me in the room, it just needs to accept the modifiers.

    I say this mostly because, as @gohan points out, Arduinos aren't terribly powerful, and telling them too much may just confuse them.

    I liken it to the body. E.g. your finger doesn't have to know whether you are walking up as you push a doorbell; it just extends on command and reports that it made contact, moved forward slightly, and hit a stop. Your spine may get involved if the finger reports excessive heat, or something gooey on the switch, and pull the hand back in reflex.

  • @wallyllama As to your first comment about "emergent behavior", that's pretty much what I was getting at.

    As to the MySensors node, my thoughts when I mentioned the "person" node were of possibly some kind of MySensorized identifier or tag for a person, much like a Bluetooth tag. The more I thought about it, though, you are correct that there would be all kinds of attributes, and most of them wouldn't need to be tied to the tag. The "person", though, might live on the controller side, where the processing power is greater and where most of that data would be dealt with anyway.

  • @dbemowsk interestingly, as I thought about these IR cameras, they may require a smarter node (maybe something like a NanoPi NEO2) to preprocess the data, and then a person tag may be useful.

    For example: 64 pixels at 2 bytes per temperature value is 128 bytes per frame, or 1280 bytes/s at the sensor's 10 Hz refresh rate. Which, if I have been reading this right, is pretty high for MySensors. There are some ways to reduce that, but it is unknown whether an Arduino could keep up.
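    A rough sketch of those numbers, assuming the AMG8833's 10 fps mode and the roughly 25-byte MySensors payload limit (so each raw frame would have to be split across several messages):

    ```python
    # Back-of-the-envelope bandwidth for shipping raw 8x8 thermal frames:
    # 64 pixels x 2 bytes each, at the sensor's refresh rate, split into
    # messages no larger than the assumed 25-byte payload limit.
    def frame_bandwidth(pixels=64, bytes_per_px=2, hz=10, max_payload=25):
        frame = pixels * bytes_per_px        # bytes per frame
        msgs = -(-frame // max_payload)      # ceiling division: messages/frame
        return frame * hz, msgs              # (bytes per second, messages per frame)
    ```

    That works out to 1280 bytes/s and 6 messages per frame at 10 Hz (128 bytes/s at 1 Hz), which supports the point that preprocessing on the node and sending only blob positions would be far friendlier to the radio link.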

    I've mostly been doing research on sensors, and have only built one node and a gateway, so a lot of what I have been saying about MySensors is assumption.

    Does it have a defined method of extending the data types? Or a board that decides? A Glorious Leader we need to cajole? Maybe "user defined" types?

    I'm kind of in love with these IR array sensors, and I'm probably not objective about what is best for MySensors as a whole, but I have boxes of opinions I'd like to get rid of, so just ask if you want some.

  • Again, after doing some more brain-cell searching and reflecting on the subject of a "person" node for MySensors, I am more and more starting to realize that this part of things may not be in the realm of MySensors. That is not to say it shouldn't be part of an HA system, just not handled by MySensors. I think it could be a different module/plugin for whatever controller people are using, e.g. Vera, Domoticz, OpenHAB, etc. As was mentioned, a "person" node would probably have a great number of properties and attributes that define a person. That in itself is, I think, a great argument for why it should NOT be a MySensors node. Some of those properties and attributes may be defined by one or more MySensors nodes, but it may also take data from a different kind of node, like what @wallyllama mentions, which might require a more complex processor such as a NanoPi.

    The ways in which a person may be identified could differ greatly between systems and could range from simple to complex. Again I get back to the simple IR doorway occupancy sensor that can count the number of people in a room. I think that could be a great starting point and simple enough for MySensors to handle. Going with something like that, and later finding varying ways to determine who the occupants of a space are, may be the way to get this started.

  • Mod

    @dbemowsk that's something Netatmo did with their smart IP camera, which is able to recognize who entered the room or your garden; with that you can set some rules in an HA system. I bet it is far from simple to do as a DIY project.

  • @gohan OpenMV has a single-board camera with OpenCV-style machine vision and MicroPython, another option.

    I think @dbemowsk's idea of door sensors fits nicely with MySensors, as he has said. Does anyone know of a more appropriate forum for the more complex devices? I'm thinking if I come up with a node, I can add it like any other, but there will be a lot of talk that ends up a bit off topic.

    On topic: are there controllers that are more amenable to the kind of combining of different nodes to identify people that we are talking about? I've used MisterHouse for other things, and Domoticz for my one test node, but not enough to really have an opinion.

  • @wallyllama I was actually a MisterHouse user prior to finding MySensors. The death of the Raspberry Pi 2 that I was running MisterHouse on is what got me looking for other options, which is how I found MySensors. I then tried Domoticz for a bit, mainly because I found Perl, which is what MisterHouse is written in, hard to work with. Domoticz had some limitations too. I now have my Vera controller, which I like. All of these have deficiencies in certain areas, but the nice thing is that they all support many different types of HA hardware.

  • A thought experiment: 5 known people and 1 pet in a room, with an IR door sensor in place. 1 living being leaves the room. What information do we want about the new situation? And what sensors would we need to gather it?

  • @wallyllama Are you talking about all the way down to the system knowing who each of those people are?

  • I don't know. I guess I'm wondering what you want from this. Do you care about pets vs. humans? Adults vs. children? A general person count only? If the goal is to not shut the lights off on people, then the last one is good enough.

  • I have received 2 AMG8832 chips for experimentation, after some delays because of import rules. The labeling on the baggie says they need to be mounted within 108 hours of opening the bag. I would recommend getting a breakout board and not raw chips.

    I'll post in the appropriate category any additional progress.

  • @wallyllama If I am reading the information on these correctly, they could see the direction a person is moving, correct? If so, that would be a way to count people entering or leaving a room.

  • I've been thinking of them as low-resolution cameras that can see in the dark, though there may be more clever ways to think of them that I haven't come across; any computer vision algorithm would work. Motion detection for sure.

    I have a fairly large living room with a high ceiling. If I mount one in the center, I should cover most of the room; I estimate a person would be about 1 pixel at the floor. The coverage is a pyramid, so at the edges the height is zero. Corner mounting, like the video shows, would probably fix that.

    I think these would work better than my idea for a giant capacitive touch screen.

  • @dbemowsk this can be crudely done via the Routines feature on Amazon Alexa. It lets you rename any IoT device state (ON or OFF) with any name, so in your living room lights example you can just say "Alexa, living room overheads" and it will turn on the living room overhead lights; similarly for the office you can say "Alexa, office overheads" and it will switch on the office overhead lights. Of course you have to have a different phrase for the OFF state, but you get the idea. But yes, the same phrase for all rooms, with Alexa sensing which room you are in and acting accordingly, is an actual smart home. I will keep searching to see if I can find and build something like that.

  • @sam9s what you are describing, while a nice way to control things, has the same basic flaw as a PIR device: you have to tell it you are there. The PIR (Alexa) knows what room it is in, but you have to signal it somehow. Alexa is signaled by a voice command, a PIR by motion, but if you are quietly reading a book, both of them forget you are there. Alternatively, you can have them assume you are there for a set amount of time after the signal, or until they get an off signal.

    The trick is to get them to detect you without actively addressing them. If Alexa can detect breathing, or heat or CO2, etc, then it would solve the problem.

    You can combine Alexa with door sensors: if Alexa is triggered and no one has left the room, then someone is still here. That is the idea that @dbemowsk pointed out earlier in the thread.

  • Mod

    If you could tell Alexa which room it is in, you wouldn't have any problems, since it normally doesn't wander around the house 😁

  • @wallyllama said in A smart home vs an automated home:

    The trick is to get them to detect you without actively addressing them. If Alexa can detect breathing, or heat or CO2, etc, then it would solve the problem.

    If you enter a room and do not say anything, then Alexa has no way of identifying WHO you are. Even if the echo could detect breathing, heat, or CO2, you are back to knowing that someone is there, but not who.
