The smart home may not be far off

A single sensor can make your home scary-smart, connecting devices that offer convenience and safety and help you save money on your energy bills.

If you want to set up a connected home, you have two options. You can buy a bunch of smart gadgets that may or may not communicate with one another, or you can retrofit all of your appliances with sensor tags, creating a slapdash network. The first is expensive and the second is a hassle. But there may be a better option than either: one simple device that plugs into an electrical outlet and connects everything in the room.

That’s the idea behind Synthetic Sensors, a Carnegie Mellon University project that promises to create a smart, context-aware home. The tiny device, unveiled this week at the ACM CHI conference on human-computer interaction, captures all of the environmental data needed to transform a wide variety of ordinary household objects into smart devices. It’s a prototype for now, but as a proof of concept, it’s damn impressive.

How does the module work? Simply plug it into an electrical outlet and it becomes the eyes and ears of the room: its embedded sensors log information such as sound, humidity, electromagnetic noise, motion, and light (the researchers excluded a camera for privacy reasons). Machine learning algorithms then translate that raw data into context-specific information about what’s happening in the room. Synthetic Sensors can tell you, for example, if you forgot to turn off the oven, how much water your leaky faucet is wasting, or whether your roommate is swiping your snacks.
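To make that pipeline concrete, here is a minimal sketch in Python of how a short window of multi-channel readings might be summarized into a feature vector and classified. The channel list, event names, and all of the data are illustrative placeholders, not the actual Synthetic Sensors implementation.

```python
# Minimal sketch of the sensing pipeline described above: summarize a short
# window of multi-channel readings into a feature vector, then classify it.
# Channels, events, and data are hypothetical stand-ins for real readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CHANNELS = ["sound", "humidity", "emi", "motion", "light"]  # assumed channels
rng = np.random.default_rng(0)

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a (samples x channels) window into simple per-channel statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0), window.max(axis=0)])

def fake_window(event: str) -> np.ndarray:
    """Synthetic stand-in for 100 samples of raw readings from the module."""
    w = rng.normal(0.0, 0.1, size=(100, len(CHANNELS)))
    if event == "oven_on":            # strong electromagnetic noise, a little sound
        w[:, 2] += 1.0
        w[:, 0] += 0.3
    elif event == "faucet_running":   # sound plus rising humidity
        w[:, 0] += 0.8
        w[:, 1] += 0.6
    return w

events = ["oven_on", "faucet_running", "idle"]
X = np.array([featurize(fake_window(e)) for e in events for _ in range(50)])
y = np.array([e for e in events for _ in range(50)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([featurize(fake_window("oven_on"))]))   # expected: ['oven_on']
```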

Researchers have long explored the concept of ubiquitous sensing, and it has started making its way into homes with products from Nest, Sense, and Notion. The Carnegie Mellon University researchers have gone a step further, building a device that can make all the unconnected things in a room smart by packing several sensing functions into a single module. It’s like a universal remote for connected homes. “Our initial question was: can you actually sense all these things from a single point?” says lead researcher Gierad Laput.

It turns out they can. In fact, sensors have become so small and sophisticated that gathering the data wasn’t the hard part. The challenge is putting it to use. Laput now wants the system to answer questions people have about their environments (How much water do I use each month?) or to do things like monitor home security. But first, he needs to translate that data into relevant information. “The average user doesn’t care about a spectrogram of EMI emissions from their coffee maker,” he says. “They want to know when their coffee is brewed.”

Using data captured by the sensor module, the researchers assigned each object or action a unique signature. Opening the fridge, for example, produces a wealth of data: first the creak of the door, then the light, then the movement. To a suite of sensors, it looks and sounds very different from a running faucet, which produces its own distinct signature. Laput and his team trained machine learning algorithms to recognize these signatures, building a vast library of detectable objects and actions. The variety of sensors is key. “These are all inferences from the data,” says Irfan Essa, director of Georgia Tech’s Interdisciplinary Research Centre for Machine Learning. “If you had just one sensor, it would be much harder to distinguish.”
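Essa’s single-sensor point can be shown with a toy experiment: the sketch below, again on made-up numbers, trains one classifier on a lone sound channel and another on several channels at once, and only the multi-channel signature reliably separates the two events. The channels and event names are assumptions for illustration.

```python
# Toy illustration of why a variety of sensors matters: two events with
# similar sound levels are hard to tell apart from sound alone, but easy
# once light and humidity are added. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# "fridge_open" vs "faucet_running": similar loudness, different light/humidity.
sound    = np.concatenate([rng.normal(0.60, 0.20, n), rng.normal(0.65, 0.20, n)])
light    = np.concatenate([rng.normal(0.90, 0.10, n), rng.normal(0.10, 0.10, n)])
humidity = np.concatenate([rng.normal(0.10, 0.10, n), rng.normal(0.70, 0.10, n)])
y = np.array(["fridge_open"] * n + ["faucet_running"] * n)

sound_only   = cross_val_score(LogisticRegression(), sound.reshape(-1, 1), y, cv=5).mean()
all_channels = cross_val_score(LogisticRegression(),
                               np.column_stack([sound, light, humidity]), y, cv=5).mean()
print(f"sound only: {sound_only:.2f}   all channels: {all_channels:.2f}")
```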

Laput says the technology can identify different activities and devices simultaneously, though not without issues.

“Doing this type of machine learning across a bunch of different sensor feeds and making it truly reliable under different circumstances is a pretty tough problem,” says Anthony Rowe, a Carnegie Mellon University researcher working in sensor technology. By that, he means human environments are complex. A useful universal sensor must recognize and tolerate variations in its inputs. It should, for example, be able to tell your coffeemaker from your blender even if you move the appliance from one counter to another. Similarly, adding a new appliance to your kitchen can’t be allowed to derail the whole system. Ensuring that level of robustness is a matter of improving the machine learning, and part of that work could fall to the system’s end user. “The easy solution in the short term is coming up with an interface that makes it easier for users to point out problems and retrain the system,” Rowe says.
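One way such a retraining interface could work is sketched below on placeholder features: when the user flags a mislabeled event, the correction is fed back into an incrementally trainable classifier. The event names, features, and app flow are hypothetical, not part of the CMU system.

```python
# Sketch of a user-driven retraining loop on placeholder features: when an
# appliance moves and its signature drifts, the user's corrections are fed
# back into an incrementally trainable classifier. Purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
CLASSES = np.array(["coffeemaker", "blender", "idle"])
CENTERS = {"coffeemaker": [1.0, 0.2, 0.1], "blender": [0.9, 0.8, 0.7], "idle": [0.0, 0.0, 0.0]}

def signature(event: str) -> np.ndarray:
    """Stand-in for a feature vector extracted from the sensor module."""
    return rng.normal(CENTERS[event], 0.1)

clf = SGDClassifier(random_state=0)
X0 = np.array([signature(e) for e in CLASSES for _ in range(30)])
y0 = np.array([e for e in CLASSES for _ in range(30)])
clf.partial_fit(X0, y0, classes=CLASSES)   # initial signature library

# The blender is moved and its signature drifts toward the coffeemaker's.
drifted = signature("blender") + np.array([0.05, -0.45, -0.45])
print("before corrections:", clf.predict([drifted]))

# The user flags mislabels in a (hypothetical) app; each correction nudges
# the decision boundary toward the appliance's new signature.
for _ in range(10):
    clf.partial_fit([drifted], ["blender"])
print("after corrections: ", clf.predict([drifted]))
```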

That’s hard to do with Carnegie Mellon’s current prototype. Though the technology is solid, the interface remains practically non-existent. Laput says he might build an app to control the system, but the bigger idea is to incorporate Synthetic Sensors technology into smart home hubs as a way to capture more fine-grained data without the need for a camera. “If you embed more sensors, you are more knowledgeable,” he says, referring to Amazon’s digital assistant. That’s the end goal of a smart home: building an environment that knows more about itself than you do.

More information can be found at Carnegie Mellon University.



