CHI Day 2

The second day of CHI started off quite happily for me, as I was presenting my new work on Proprioceptive Interaction (sorry for the shameless link!) in the muscle-interfaces session, which was very interesting. In this session researchers discussed how future muscle sensing could be improved for higher-resolution input, for example by combining multiple technologies such as EMG and MMG. After that I could relax a bit and attend more interesting sessions on a variety of topics! Later on there were sessions on smartwatch interactions, which demonstrate that we are no longer in the smartwatch hype but really in the wearables era! Great to see that researchers are also already thinking beyond wearables: skin interaction, smaller devices, haptic wearables and so forth, which will be presented tomorrow (Day 3, check that post too). Looking forward to it!

Soft 3D printer from Disney

Later on I attended a very interesting and futuristic session on 3D fabrication, which in the same vein demonstrates that 3D printing has moved beyond the maker community and into the HCI community! In this session researchers showed their new ideas for the world of fabrication, such as 3D printing with soft fabric (great for plush toys!); check out their video here.

The day ended with the job fair… a great opportunity for more junior people to find internships and perhaps a new position, either in industry or at research labs!

CHI Day 1

The first day of CHI started with a great opening plenary by Lou Yongqi (check the keynotes here, and yes, check out WHO is giving the closing plenary!), who highlighted the importance of sustainability in research! It was followed by an amazing program of novel technology (think Virtual Reality!), human augmentation (check this totally new way of embodying another person by Prof. Rekimoto), user studies (“Understanding and Evaluating User Performance”), and understanding of older users (“Designing for 55+”) and communities (a great session on activism in Wikipedia, one on privacy and one on the “Maker Community”)!

Taken from http://blog.johnrooksby.org/post/116870237912/lou-yongqi-keynote-at-chi2015 Copyright remains with original page.

This was also the day of the video showcase, a non-academic venue in which authors can submit their videos for further appreciation. It is an amazing opportunity to get a glimpse of CHI by sitting in the theater and watching great research in motion. This year’s winner was the Transform project by the MIT Media Lab (see it here), one of whose authors is our dear editor Sean Follmer, so congrats to him and his team!

ACM CHI has started! XRDS is following!

For over 30 years, the CHI conference has been the top-tier venue for developments in the field of Human-Computer Interaction (HCI). CHI has truly been a place to share ground-breaking research and novel ideas about the ever-evolving interaction between humans and machines. This year the conference takes place in the vibrant city of Seoul, in the heart of South Korea!


Unlike most conferences in HCI, CHI spans a broad spectrum of disciplines: computer science, cognitive psychology, design, social science, human factors, artificial intelligence, graphics, visualization, multimedia design and many others, making it a huge conference: this year, at the opening keynote, there were more than 2,800 researchers!

CHI is an important venue not just for professors and senior researchers but primarily for younger ones, such as myself. CHI is a prime moment to reflect, learn and observe the field. There can be no rupture, innovation, or ground-breaking thinking without a clear understanding of where HCI is right now.

If you are not familiar with CHI, or even with HCI, don’t be afraid! The field is very approachable for non-experts: because CHI itself is a mix of the aforementioned and very idiosyncratic disciplines, people try to be as clear as possible, and we keep things lively with videos, animations and short summaries. Have a look at the program and you’ll find many videos to watch. In fact, just to make things really exciting, this year the chairs created a YouTube playlist that allows you to browse through this massive program from the comfort of your laptop (wherever you are!). If you are more into academic reading, then you’ll be happy to know that CHI papers are published immediately during the conference, so you can already access them through the ACM Digital Library!

I (Pedro) will be covering some highlights of CHI on the XRDS blog over the next four days, so stay tuned here (and also follow us on twitter).

The changing nature of (ubiquitous) computing

Lately I seem to be having recurring conversations on the same theme: the changing nature of computing and the movement from desktop to mobile/ubiquitous computing (aside: what is ubicomp? You could start with this defining video from 1991). Humorous anecdotes about children interacting with technology often come up in these conversations (or vague recollections of YouTube videos—anecdotes 2.0). Kids futilely trying to pinch-zoom their parents’ magazines, as in the video below; or throwing their parents’ smartphone around without a concept of its cost or—relative to, say, the family computer—the novelty of its interaction. Novel technology for the parents, mundane for the child.

New generations live and breathe—not adopt—new technology, giving them a fundamentally different perspective

I think, like social change, much of technological change comes through new generations that grow up with realities their parents had to adopt—computers, the Internet, social media. Wonderful clichés like, “back in my day, we had to know how to read a map!” betray fundamentally different views of the world that are symptomatic of technological shift. When kids are so used to a technology being there that they can’t conceive of its absence—the 2-year-old pinch-zooming a magazine in vain—that is when a new generation of people, whose underlying worldview is not shaped by old ideas but built on a foundation of new technology, develop solutions that are truly native to that technological landscape.

So, what does this have to do with ubicomp? Ubiquitous computing is a thing—separate from other instantiations of interactive computing—only insofar as it isn’t ubiquitous. Once it underlies, as it increasingly does, so much of how we interact with technology on a day-to-day basis, it becomes less meaningful to say one does work in ubiquitous computing apart from other areas of human–computer interaction (HCI).

For example, my own interest in pervasive health sensing and feedback (i.e. mobile, in-home, or ubiquitous health tech) did not arise from my interest in ubiquitous computing as an area—I had none. It arose, broadly, from my interest in human–computer interaction and a particular application area. It happens that many of the problems and questions I am interested in draw on ubicomp solutions and are appropriate for a ubicomp audience, but if (for example) my research takes me toward web-based or desktop-based solutions, I will follow it there. I suspect many other people ostensibly in ubicomp today feel similarly. Ten or twenty years from now, when the kids frustratedly pinch-zooming magazines today have become researchers and app developers, it won’t occur to them that building interactive systems could not involve ubicomp, since in their technological landscape the two will be the same.

Ubicomp becoming ubiquitous?

As the computing everyone uses moves off the desktop, more and more questions in human–computer interaction involve ubiquitous computing technology, such as smartphones, even if only as a platform. Does that research then become ubicomp work? Or will the notion of ubicomp become so embedded in much of the rest of HCI that this distinction is meaningless? Like most things there is a grey area here, but as ubicomp becomes integral to much of HCI it might be useful to ask if we need to rethink the boundaries of these concepts. I suspect the coming generation of pinch-zoomers will have difficulty seeing the difference.

DIS 2012 Highlights

Here are a few of my own highlights from DIS 2012 sessions that I attended…

At the seams: DIYbio and Opportunities for HCI (Stacey Kuznetsov, Alex S. Taylor, Tim Regan, Nicolas Villar, Eric Paulos): A fascinating look at issues facing the DIY biology community, including community building, materials management, ethics, etc. Some good examples of how interaction design might have a role in supporting the DIY biology community.

How Learning Works in Design Education: Educating for Creative Awareness Through Formative Reflexivity (Katheryn Richard, Haakon Faste).  How traditional principles of good education break down when applied to creative design education.

Reflective Design Documentation (Peter Dalsgaard, Kim Halskov). A system for design documentation, this time thinking about how it could be useful to researchers who do research through design. Very thoughtful, particularly during the Q&A.

Framing, Aligning, Paradoxing, Abstracting, and Directing: How Design Mood Boards Work (Andrés Lucero). Mood Boards 101: what the benefits of using them are, and what interaction design can borrow from this practice that’s common in industrial design, fashion design, textiles, etc.

Understanding Participation and Opportunities for Design from an Online Postcard Sending Community. (Ryan Kelly, Daniel Gooch). Nifty short paper about the life and times of http://www.postcrossing.com/

Exquisite Corpse 2.0: Qualitative Analysis of a Community-based Fiction Project (Peter Likarish, Jon Winet). Nifty short paper about crowdsourcing a novel line-by-line over Twitter, looking at how the narrative is constructed and managed in a lightweight, distributed medium.

Experiences: A year in the Life of an Interactive Desk. (John Hardy).  One computer science researcher’s reflection on spending a year living and working on an interactive desk. Brought up lots of longitudinal issues that realistically must be considered if interactive work environments are going to be supported in the long run.

… Oh, and if you’re still curious about that “cool bit of electronics” that came with the conference nametag, it turns out it’s part of Tom Bartindale‘s to-be-published research project at Newcastle University’s Culture Lab. The board has an IR transmitter that is picked up by the cameras at the conference that are recording talks and interviews with authors. This “who’s on camera?” metadata allows videographers to search through stacks of footage and find clips with particular subjects.
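To give a feel for how that metadata could be used, here is a minimal, purely hypothetical sketch in Python. The data model and function name are my own assumptions for illustration, not part of the actual Culture Lab system: each clip is indexed by time segments, each tagged with the set of badge IDs the camera detected, and a search returns the segments in which a given badge appears.

```python
# Hypothetical sketch: searching conference footage by IR badge metadata.
# The index structure and find_clips() are illustrative assumptions only.

def find_clips(footage_index, badge_id):
    """Return (clip_name, start_sec, end_sec) for every segment
    in which the given badge was detected on camera."""
    matches = []
    for clip_name, segments in footage_index.items():
        for start, end, badges in segments:
            if badge_id in badges:
                matches.append((clip_name, start, end))
    return matches

# Example index: clip -> list of (start_sec, end_sec, {badge IDs seen}).
index = {
    "keynote.mp4": [(0, 120, {"A12", "B07"}), (120, 300, {"B07"})],
    "interview_03.mp4": [(0, 60, {"A12"})],
}

print(find_clips(index, "A12"))
# [('keynote.mp4', 0, 120), ('interview_03.mp4', 0, 60)]
```

A real system would presumably log badge detections per frame and merge them into segments like these, but the lookup a videographer needs is essentially this simple filter.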