Physical interaction with the IoT: animating everyday objects

Our concept in a nutshell:

Upon hearing “The Internet of Things”, our mind day-dreams into meshes of entangled devices working around the clock, carefully sampling the environment with their tiny sensors and reporting back to us from a distance, satisfying mankind’s voracious and inexplicable appetite for efficiency and ever more data. Many also know that the Internet of Things (IoT) has become both a buzzword and a trillion-dollar market (1.9 trillion USD, to be more precise). Forbes further cites an astonishing 16 billion interconnected devices as of last year’s estimates.

So two questions came to our minds: (1) where are all those “smart” devices? and (2) why are those devices not enhancing my (human) experience?

(1) where are all those “smart” devices?

This first question is easy. IoT devices are hidden inside the objects around us and attached to us: connected cars, smart homes, sensor networks and wearables. The list could go on endlessly as more and more objects are embedded with the required IoT components: a sensor, a battery, a microprocessor and a (usually wireless) transmission modem. Next time you are bored, play this game: find the closest IoT device and spot these components. Your wifi-controlled coffee machine has them all… and yes, it is a real IoT device.

Many people have written lovely tales around IoT-enabled houses and IoT daily interactions. These stories usually feature ghost-like devices that act upon your environment, such as “smart” thermostats that talk to your smartphone, “smart” doors that open only to you (or to your digital self, whose identity is an encrypted RFID pocket card) and that “smart” refrigerator that texts you when you are out of soy milk. So much “smart” around us, yet most of us are not that excited about wifi coffee machines, nor about smartphone-controlled thermostats. So where is the missing “smartness” that, we would argue, would really change our experience?

(2) why are IoT devices not enhancing my (human) experience?

What do these devices do for us, really? The industry enjoys terminology, so essentially there is a split in the IoT: some devices are hidden as they sample and report the world to us, while others are exposed to our interaction. Take the example of those 23 million IoT-enabled cars. You can find these devices inside a contemporary car in different forms: voice-activated systems, navigation systems (mostly just interfaces to Google Maps and the like) and the hidden workers, such as mechanical health sensors that report to your mechanic.

So it seems that IoT vendors, tech makers, researchers and so forth focus mainly on sensing and communication. Why is that? It happens because most everyday objects cannot easily be augmented with actuators. A sensor that counts how many sips of coffee you take can probably fly on a tiny ATmega processor and a low-power, latest-generation Bluetooth modem running on a coin-cell battery; but it is unlikely that the coffee cup can run away as a reminder that you have had too much coffee. So why can’t the coffee cup be animated? Because adding a motor to it would not work with that tiny battery. And yet we think that would be an interesting means of expression, both for the coffee mug and for you. Some researchers have explored this notion of animated objects through artistic projects, like a water faucet that curls up and plays with you, or a sofa that changes shape as you sit on it. But to bring that to ALL the devices in your household would mean a lot of batteries and a lot of motors and actuators. That is why we are stuck with a web of interconnected objects that do nothing but sense and report. We would like to think there is another type of experience to be had, a much more personal and physical one. One that goes beyond fitbit bracelets, coffee machines that know how many espressos you had and other monitoring devices.

Our concept: Physical interaction with the IoT: animating everyday objects

Here’s our contribution to the Internet of Things: affordance++, a concept that allows everyday objects to become animated as we interact with them. For instance: a drinking mug that does not want to be grasped by its body when it is too hot and instead suggests being held by its handle; a spray can that always shakes before you use it; a door handle that helps you open it correctly even when you have no clue in which direction handles rotate in that particular part of the world; or a meeting room that reminds you to knock before entering if a meeting is being held inside.

As a matter of fact, none of these examples are IoT devices with a big battery. In fact, none of them have a single motor. They are as simple as the IoT devices we already know. So what makes the spray can shake? Your muscles. Your own body. Instead of actuating motors, we actuate the user’s muscles with electrical muscle stimulation. Electrical muscle stimulation works by sending electrical impulses, through electrodes attached to the user’s skin, into the motor nerves. The impulses act as a control signal that triggers muscle fiber contraction. This is the same technique physicians use in muscle rehabilitation. In fact, we even use the same device you find in clinics, except that we control it from a computer and retrofit it into a wearable bracelet. In short, once you reach out for an object, the object helps you manipulate it better by telling your muscles how to do it.
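Conceptually, this control loop can be thought of as a lookup from (object, interaction event) pairs to stimulation parameters for the bracelet’s electrode channels. The sketch below is purely illustrative and not the actual affordance++ implementation: the object names, channel layout, and current/duration values are our assumptions, and a real system would drive a medically compliant stimulator rather than return data objects.

```python
# Illustrative sketch only: mapping object-interaction events to EMS pulse
# parameters. All names and numeric values here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class EmsPulse:
    channel: int          # which electrode pair on the (hypothetical) bracelet
    intensity_ma: float   # stimulation current, in milliamperes
    duration_ms: int      # how long to drive the muscle, in milliseconds


# Hypothetical rules: when the tracker reports the hand interacting with an
# object, look up which muscles to actuate and with what parameters.
RULES = {
    ("hot_mug", "reach_body"): EmsPulse(channel=0, intensity_ma=8.0, duration_ms=300),  # open hand, steer to handle
    ("spray_can", "grasp"):    EmsPulse(channel=1, intensity_ma=6.5, duration_ms=500),  # shaking gesture
    ("door_handle", "grasp"):  EmsPulse(channel=2, intensity_ma=5.0, duration_ms=400),  # rotate wrist correctly
}


def pulse_for(obj: str, event: str) -> Optional[EmsPulse]:
    """Return the stimulation command for an (object, event) pair, or None."""
    return RULES.get((obj, event))


if __name__ == "__main__":
    print(pulse_for("spray_can", "grasp"))
```

In this sketch the object does not carry any actuator at all; the table of rules and the stimulator live on the user’s side, which is exactly the point of moving actuation from objects to the body.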

In a lab study, we found that participants understood the poses that the objects suggested as the best way to use them. This also worked for the harder case of unfamiliar objects, such as a patented avocado slicer or a magnetic sweeper, where the visual affordance alone is not sufficient to suggest how to operate the object properly.

Concluding…

Affordance++ is a research prototype built to explore far-out concepts, and for that it uses impractical technologies such as optical motion tracking, which requires lots of calibration. Still, we crafted it to explore the futuristic notion that the affordance of an object could be extended beyond the current attributes of our relationship with these objects. Furthermore, the notion is not limited to actuating the user with electrical stimulation; this is simply the most direct and mobile way to actuate hand poses. Other methods, such as hand-worn exoskeletons like Dexmo [6], are possible.

Lastly, we would like to re-emphasize that moving the actuation to the user, instead of to the objects, is what allows affordance++ to minimize the technological effort of animating every single object we interact with (of which there may be hundreds or thousands), and it provides a more embodied experience with the Internet of Things.

by Patrik Jonell &  Pedro Lopes

This entry was posted in HCI, IoT by Pedro Lopes.

About Pedro Lopes

Pedro is a PhD student in Prof. Patrick Baudisch’s Human Computer Interaction lab at the Hasso Plattner Institut, Berlin. Pedro creates wearable interfaces that read and write directly to the user’s body through our muscles [proprioceptive interaction]. He augments humans and their realities by using electrical muscle stimulation to actuate human muscles as interfaces to new virtual worlds. His work has been published at ACM CHI and UIST. A believer in the unification of art and research, he often gives talks about it [Campus Party’13, A MAZE’14, NODE’15]. He makes and writes music using turntables [in eitr], and enjoys writing about music [in jazz.pt magazine] and tech [as digital content editor at ACM XRDS].
