DIS 2012 Highlights

Here are a few of my own highlights from the DIS 2012 sessions I attended…

At the seams: DIYbio and Opportunities for HCI (Stacey Kuznetsov, Alex S. Taylor, Tim Regan, Nicolas Villar, Eric Paulos): Fascinating look at issues facing the DIY biology community, including materials management, ethics, etc. Some good examples of how interaction design might have a role in supporting the DIYbio community.

How Learning Works in Design Education: Educating for Creative Awareness Through Formative Reflexivity (Katheryn Richard, Haakon Faste).  How traditional principles of good education break down when applied to creative design education.

Reflective Design Documentation (Peter Dalsgaard, Kim Halskov). A system for design documentation, this time considering how it could be useful to researchers who do research through design. Very thoughtful, particularly during the Q&A.

Framing, Aligning, Paradoxing, Abstracting, and Directing: How Design Mood Boards Work (Andrés Lucero). Mood Board 101: what are the benefits of using them, and what can interaction design borrow from this practice that’s common in industrial design, fashion design, textiles, etc.

Understanding Participation and Opportunities for Design from an Online Postcard Sending Community. (Ryan Kelly, Daniel Gooch). Nifty short paper about the life and times of http://www.postcrossing.com/

Exquisite Corpse 2.0: Qualitative Analysis of a Community-based Fiction Project (Peter Likarish, Jon Winet). Nifty short paper about crowdsourcing a novel line-by-line over Twitter, looking at how the narrative is constructed and managed in a lightweight, distributed medium.

Experiences: A Year in the Life of an Interactive Desk (John Hardy). One computer science researcher’s reflection on spending a year living and working on an interactive desk. Brought up lots of longitudinal issues that realistically must be considered if interactive work environments are going to be supported in the long run.

… Oh, and, if you’re still curious about that “cool bit of electronics” that came with the conference nametag, it turns out it’s part of Tom Bartindale’s to-be-published research project at Newcastle University’s Culture Lab. The board has an IR transmitter that is picked up by the conference cameras recording talks and interviews with authors. This ‘who’s on camera?’ metadata allows videographers to search through stacks of footage and find clips featuring particular subjects.

Reporting from DIS 2012

I’ll be blogging this week (June 12-15) from DIS 2012 in Newcastle, UK.  This year’s DIS conference is actually part of a two-week conference series that also includes Pervasive 2012 and the International Symposium on Wearable Computers (ISWC 2012).

When I arrived I was very pleasantly surprised that my registration “bag” included:

  • My badge/nametag.
  • A cool bit of electronics (more on this later).
  • A conference program, which fit into my plastic name badge.  The reverse side has a map, for easy reference.
  • A USB key with conference proceedings.
  • The ubiquitous conference bag … which is actually an Onya Bag that fits into a tiny stuff sack and attaches to a keychain.
  • A lanyard, to which everything is attached.

I tend to a) recycle 90% of the flyers that come in conference bags within 10 minutes; b) continually forget my conference program; and c) begrudgingly lug former conference bags to the grocery store.  Thank you, DIS 2012 organizing committee, for thoughtfully designing registration and being well-organized.

You may still be wondering about that QR code and cool bit of electronics near the bottom of my name tag.  Registration let me know that it’s being used to identify me automatically in video taken at the conference, and that it works with interactive coffee tables in the main lobby area.  I’ll do a bit more investigating on how this works and will report back soon!

Dear HCI, Thank you. Love, Mechanical Engineering

My entire academic background – BS, MS, PhD – is in Mechanical Engineering.  However, in addition to conferences hosted by the American Society of Mechanical Engineers, I also attend the suite of ACM’s Human-Computer Interaction (HCI) conferences. So, why should Mechanical Engineering care about HCI?

First, product design includes interfaces.  ‘Product design’ refers to the blend of mechanical engineering and industrial design. Design is the ‘outward facing’ side of mechanical engineering; product designers conceptualize, design, and implement many of the physical products you interact with on a daily basis.  In the cafe I’m currently writing from, a design engineer was involved in everything from the teacup and teapot to the table, the chair, and the laptop I’m writing on… and all the packaging that each of those products arrived in.  These traditional products still have interfaces – examples from Don Norman’s famous “The Design of Everyday Things” address how people physically interact with ‘non-smart’ products and devices such as teapots, doorknobs, or rotary telephones.  Today’s product designers are asked not only to design the physical product, but also to weigh in on how the user should interact with smart products.

Second, design research in mechanical engineering can learn from findings in interaction design.  The early stages of new product development – particularly user research and concept generation – are agnostic to whether the final ‘product’ is a physical product, software, a physical or digital service, or an architectural space.  As a result, many of the design theory principles coming out of the interaction design community are broadly applicable to other design domains, including product design and new product development, with some level of translation.

Finally, engineers deserve well-designed technology. Engineers are people too – and, while computer scientists frequently design new programming environments for themselves, mechanical engineers and new product developers are not always the subject of thoughtful, human-centered technology design. Taking an HCI perspective to understand how engineers and designers are users of software opens up the possibility for better-designed tools in the future (I’m looking at you, CAD!).

… so why should HCI care about Mechanical Engineering?

It’s sometimes easy to get lost in cognition, perception, algorithms, and pixels.  When mechanical engineers check their gut, however, they see the physical interface between humans and computers.  You’ll see plenty of relevant contributions from Mechanical Engineering in the areas of ergonomics, haptic feedback, and tangible interfaces. But more broadly, mechanical engineers offer the reminder that humans (and computers) still primarily exist in a physical world.

How secure is your software?

When you are implementing an application, your first goal is to achieve a specific functionality. For instance, if you want to implement a specific algorithm that was given to you as an exercise by your informatics course professor, or you just want to create your personal website, the first thing that comes to mind is how to “make it work”. Then, of course, you will follow some code conventions during implementation while simultaneously checking your code quality. But what about security? How secure is your code? Is there a way for a malicious user to harm you or your application by taking advantage of potential bugs that exist in your code?

Unfortunately, most programmers have been trained to write code that implements the required functionality without considering its many security aspects. Most software vulnerabilities derive from a relatively small number of common programming errors that lead to security holes. For example, according to the SANS Institute, two programming flaws alone were responsible for more than 1.5 million security breaches during 2008.

In 2001, when software security was first introduced as a field, information security was mainly associated with network security, operating system security, and malicious software. Until then, hundreds of millions of applications had been implemented without security in mind. As a result, the vulnerabilities “hidden” in these (now legacy) applications can still be used as backdoors that lead to security breaches.

Although computer security is nowadays standard fare in academic curricula around the globe, few courses emphasize secure coding practices. For instance, during a standard introductory C course, students may not learn that using the gets function could make their code vulnerable to an exploit. Even if someone includes it in a program, all he or she will get when building it is the obscure warning: “the ‘gets’ function is dangerous and should not be used.” Well, gets is dangerous because it performs no bounds checking: it has no way of knowing how large the destination buffer is, so a user who types more characters than the buffer can hold will overwrite adjacent memory. At best this causes a segmentation fault and crashes the program; at worst, a carefully crafted input can hijack the program’s control flow and execute arbitrary code.
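To make the danger concrete, here is a minimal sketch contrasting gets with its bounded replacement, fgets (the buffer name and its size are, of course, just illustrative):

    #include <stdio.h>

    int main(void)
    {
        char name[16];

        /* DANGEROUS: gets() has no way to know that 'name' holds only
         * 16 bytes. Any input longer than 15 characters (one byte is
         * reserved for the terminating '\0') silently overflows the
         * buffer into adjacent memory. */
        /* gets(name); */

        /* Safer: fgets() takes the buffer size and stops reading there. */
        if (fgets(name, sizeof name, stdin) != NULL)
            printf("Hello, %s", name);

        return 0;
    }

Incidentally, gets was eventually considered dangerous enough to be removed from the language altogether in the C11 standard.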

The situation is similar in web programming. Programmers are often unaware of the security loopholes inherent in the code they write; in fact, knowing that they program in higher-level languages than those classically prone to security exploits, they may assume that this renders their applications immune to exploits stemming from coding errors. Common traps into which programmers fall concern user input validation, the sanitization of data that is sent to other systems, the lack of defined security requirements, the encoding of data that comes from an untrusted source, and others which we will have the opportunity to discuss later on this blog.
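SQL injection is the canonical example of unsanitized input being sent to another system. The sketch below uses SQLite’s C API; the users table, the column names, and the find_user helper are made up purely for illustration:

    #include <stdio.h>
    #include <sqlite3.h>

    /* 'username' arrives from an untrusted source, e.g. a web form. */
    static void find_user(sqlite3 *db, const char *username)
    {
        /* UNSAFE: splicing the input into the SQL string by hand.
         * An input such as   ' OR '1'='1   changes the meaning of
         * the query entirely (SQL injection):
         *
         *   sprintf(sql, "SELECT id FROM users WHERE name = '%s'",
         *           username);
         */

        /* Safer: a parameterized query. The '?' placeholder is bound
         * to the raw value, so the input is treated strictly as data,
         * never as SQL. */
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("id = %d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);
    }

The same principle—keep data and code separate—applies whether the downstream system is a database, a shell, or an HTML page.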

Today there are numerous books, papers, and security bulletin providers covering software security. Building Secure Software by Viega et al., Writing Secure Code by Howard et al., and Secure Coding: Principles & Practices by Graff et al. are three standard textbooks that you can refer to. Furthermore, there are some interesting lists of secure coding practices, like OWASP’s (The Open Web Application Security Project), CERT’s (Carnegie Mellon’s Computer Emergency Response Team), and Microsoft’s. It is also interesting to check from time to time the various lists of top software defects, like CWE’s (Common Weakness Enumeration) Top 25 and OWASP’s Top 10. But do not panic: you are not obliged to become an expert in secure coding. There are numerous tools that can help you either build secure applications or protect existing ones.
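Your compiler is one such tool. Recent versions of GCC and Clang, for example, accept warning and hardening flags that catch or blunt many classic C errors at build time (the flags below are a sampler, not a complete hardening recipe):

    # Warnings, plus two common hardening options: bounds-checked
    # libc calls and stack-smashing detection.
    gcc -Wall -Wextra -O2 -D_FORTIFY_SOURCE=2 \
        -fstack-protector-strong -o app app.c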

Eyes Clouded by Distributed Systems

You are probably reading this article with a dual- or quad-core processor, and perhaps with even more cores. Your computer is already a distributed system, with multiple computing components—cores—communicating with each other via main memory and other channels, such as the physical buses—or wires—between them. As you browse multiple web pages, you are interacting with the largest distributed system ever created—the Internet.  We recently celebrated IPv6 Day [0]: IPv6 is a new form of addressing devices connected to the Internet, introduced because the Internet’s sheer scale has outgrown the previous standard IPv4’s list of addresses—all 4+ billion of them (32-bit IPv4 allows about 2^32 ≈ 4.3 billion addresses; 128-bit IPv6 allows about 3.4 × 10^38).  Every Internet company depends on distributed systems, and, by extension, the economies of the world are now tied to them.

Companies such as Google, Facebook, and Amazon are all interested in building highly efficient large-scale distributed systems to power their businesses. Over the past decade, Google has described its Google File System (GFS) [1]—a file system spanning thousands of computers to store more data than any single computer system could—and a technology that has shaped almost every form of large-scale computing since its publication: MapReduce [2].  MapReduce is distributed computing for the masses because it distills everything down to two functions—Map and Reduce—and once they are specified, it handles all other aspects of coordinating thousands of computers on behalf of the programmer. Facebook has released open source projects such as Thrift [3] for implementing communication between programs in different programming languages. Amazon built the first, and largest, public cloud, EC2 [4], by inventing new distributed systems designed to bring datacenter scale to the masses—with EC2 you can easily start 100 servers within minutes.  Amazon has offered many other services to enhance its overall cloud, such as a storage substrate called S3 [5]—think of it as a building block for a GFS—and CloudFront [6], a content distribution network (CDN) designed to distribute data around the world for low-latency and high-bandwidth access. Akamai [7] also helps deliver the web’s content with one of the largest CDNs in the world. Netflix has its own distributed CDN [8], as it outgrew the solutions provided by Akamai and Amazon.
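To give a flavor of why this model is so approachable, here is a toy, single-process word-count sketch; a real MapReduce run would partition the input across thousands of machines, shuffle the intermediate pairs by key, and recover from machine failures, but the programmer’s half of the contract really is just the two functions below (all names here are illustrative, not Google’s actual API):

    #include <stdio.h>
    #include <string.h>

    #define MAX_PAIRS 1024

    /* Intermediate key/value pairs. In a real framework these would be
     * shuffled across the network and grouped by key. */
    struct pair { char key[32]; int value; };
    static struct pair pairs[MAX_PAIRS];
    static int npairs;

    static void emit(const char *key, int value)
    {
        if (npairs < MAX_PAIRS) {
            snprintf(pairs[npairs].key, sizeof pairs[npairs].key, "%s", key);
            pairs[npairs].value = value;
            npairs++;
        }
    }

    /* Map: for word count, emit (word, 1) for every word in the record. */
    static void map(char *record)
    {
        for (char *w = strtok(record, " \t\n"); w; w = strtok(NULL, " \t\n"))
            emit(w, 1);
    }

    /* Reduce: combine all values emitted for one key (here: sum them). */
    static int reduce(const char *key)
    {
        int sum = 0;
        for (int i = 0; i < npairs; i++)
            if (strcmp(pairs[i].key, key) == 0)
                sum += pairs[i].value;
        return sum;
    }

    int main(void)
    {
        char input[] = "the quick fox jumps over the lazy dog the end";
        map(input);

        /* The "framework": group by key, reducing each distinct key once. */
        for (int i = 0; i < npairs; i++) {
            int seen = 0;
            for (int j = 0; j < i; j++)
                if (strcmp(pairs[j].key, pairs[i].key) == 0)
                    seen = 1;
            if (!seen)
                printf("%-8s %d\n", pairs[i].key, reduce(pairs[i].key));
        }
        return 0;
    }

Everything outside map and reduce—splitting, grouping, scheduling—is the framework’s job, which is precisely what lets the same two functions scale from this toy to thousands of machines.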
