The Curious Case of a Sick Google Glass

During recent experiments for a research paper, my research group observed very strange symptoms from our Google Glass. Most of our experiments studied the impact of latency on cognitive assistance applications, such as programs that remind you who is in front of you or notify you that it is safe to cross the street. We observed a large variation in latency that could not be explained by the usual culprits, such as a poorly performing WiFi network. We had ruled out every possible source outside of the Google Glass, yet the unknown source of latency jitter was still ruining our experimental results. At this point, we knew we had to figure out what was going on inside the Google Glass itself.

Someone suggested early on that the device might be doing some tricks to cool itself down. During our experiments, we ran the device continuously, capturing video and transmitting frames over the network. A Google Glass is, at heart, a bundle of electronics that generates heat while doing its work. There is no space for a fan, and one would probably be uncomfortable anyway, so the only “heatsink” is the human head. Of course, people can only tolerate so much heat, so the Google Glass must carefully manage its thermal footprint. Being mobile requires tricks most developers and researchers are not accustomed to!

After investigating, we found that the Google Glass was dynamically scaling the frequency of its CPU to reduce energy consumption and the heat it generates. It may also have been turning off its radio or putting other components into low-power states more often, but CPU frequency was the easiest metric for us to follow. Dynamic frequency scaling is a common engineering trick used to reduce energy usage on laptops, desktops, and even servers. Normally, though, when the workload increases, the CPU frequency scales back up to match demand.
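
Below is a minimal sketch of one way to watch the frequency move, assuming the Glass exposes the standard Linux cpufreq sysfs interface and is reachable over adb with USB debugging enabled; the path and one-second polling interval are illustrative rather than a description of our exact tooling.

```python
# Sketch: poll the Glass's current CPU frequency over adb.
# Assumes the standard Linux cpufreq sysfs interface and a device
# connected with USB debugging enabled; path/interval are illustrative.
import subprocess
import time

CPUFREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

def current_freq_khz():
    """Read the instantaneous CPU frequency (in kHz) from the device."""
    out = subprocess.check_output(["adb", "shell", "cat", CPUFREQ_PATH])
    return int(out.strip())

if __name__ == "__main__":
    # Print a timestamped frequency sample once per second.
    while True:
        print(f"{time.time():.1f} {current_freq_khz()}")
        time.sleep(1)
```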

Counterintuitively, when under heavy load the Google Glass reduces its CPU frequency. Instead of trying to do more work to keep up, it does less work! This is how it manages heat: by limiting the amount of work it completes at any given moment. I have not seen end-to-end system latency versus CPU frequency studied much in the research literature, but Figure 1 shows the effect of CPU frequency on overall application response time (think of the application here as a ping test) over a WiFi network to a server a couple of hops away.

[Image: latency vs. CPU frequency]

Figure 1. Latency trace over time as CPU frequency varies, measured over WiFi to a server a couple of hops away.
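
The measurement behind a trace like this is just a tight request/response loop. The sketch below assumes a hypothetical UDP echo server at HOST:PORT (the address is a placeholder) and records round-trip times in milliseconds; it illustrates the idea rather than reproducing our actual benchmark code.

```python
# Sketch of an application-level "ping": send a tiny packet, wait for
# the echo, and record the round-trip time. HOST/PORT point at a
# hypothetical UDP echo server, not our actual setup.
import socket
import time

HOST, PORT = "192.0.2.10", 9999   # placeholder echo-server address
COUNT = 1000

def run_probe():
    """Return a list of round-trip times in milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for i in range(COUNT):
        start = time.perf_counter()
        sock.sendto(str(i).encode(), (HOST, PORT))
        try:
            sock.recvfrom(1024)                      # wait for the echo
        except socket.timeout:
            continue                                 # skip lost packets
        rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

if __name__ == "__main__":
    samples = run_probe()
    print("min/median/max (ms):",
          min(samples),
          sorted(samples)[len(samples) // 2],
          max(samples))
```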

There are a couple of ways to stop CPU frequency scaling on the Google Glass. The first, and more reliable, method would be to modify the Google Glass software so that it does no scaling whatsoever. Although desirable for research experiments, we chose not to do this given our short deadline: we were not very familiar with modifying the Google Glass OS, and we didn’t want any more surprises. The second method, which proved reliable enough, is to actively cool the Google Glass with an external device.
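
For readers who do want the first approach, it typically amounts to pinning the cpufreq governor. The sketch below assumes a rooted device exposing the standard Linux cpufreq sysfs layout; we did not run this on the Glass, and the paths and governor names there may well differ.

```python
# Sketch of the first approach (which we did not take): pin every core
# to its maximum frequency via the standard Linux cpufreq sysfs
# interface. Requires root; paths and governor names may differ on a
# real Glass.
import glob

def pin_max_frequency():
    for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        with open(cpu_dir + "/cpuinfo_max_freq") as f:
            max_freq = f.read().strip()
        # The "performance" governor keeps the CPU at its highest frequency.
        with open(cpu_dir + "/scaling_governor", "w") as f:
            f.write("performance")
        # Raise the floor too, so nothing scales below the maximum.
        with open(cpu_dir + "/scaling_min_freq", "w") as f:
            f.write(max_freq)

if __name__ == "__main__":
    pin_max_frequency()
```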

We chose the technologically advanced ice pack from a freezer in our office. In Figure 2, you can see our advanced experimental setup featuring a Google Glass ($1,500 Explorer Edition) and a CVS ice pack ($5.99). Trained professionals were on hand during all experiments; you may or may not want to try this at home. I haven’t asked Google if this voids any warranties yet…

[Image: Google Glass with ice pack]

Figure 2. Google Glass with CVS Ice Pack

Just as a sick person holds a damp cloth or ice against a feverish forehead, we wrapped an ice pack around our Google Glass to keep its temperature down. Once we had done this, the Google Glass’s CPU frequency stayed stable for our latency measurements. We made sure to keep an eye on it, but with the ice pack in place the latency jitter disappeared!

What about your desktop? Could CPU frequency affect network latency there as well? Yes, as Figure 3 shows. At the highest frequency my desktop’s CPU can run at, there is minimal jitter and the maximum observed latency to a server one hop away was under 4 milliseconds. Contrast this with the other frequencies the CPU supports: each shows more jitter as well as a higher maximum observed latency.

[Image: latency traces for a desktop with CPU scaling]

Figure 3. Raw latency traces (y-axis in milliseconds, 10,000 packets) for a desktop with CPU scaling, to a server one hop away.
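
A sweep like this is straightforward to script: pin each advertised frequency in turn and collect a latency trace at that level. The sketch below assumes root, a cpufreq driver that offers the “userspace” governor and scaling_available_frequencies, and reuses the hypothetical run_probe() helper from the earlier echo-probe sketch; it illustrates the methodology rather than our exact scripts.

```python
# Sketch of a frequency sweep like the one behind Figure 3: pin each
# advertised frequency in turn and collect a latency trace at that
# level. Assumes root and a cpufreq driver that offers the "userspace"
# governor; run_probe() is the hypothetical echo probe from earlier,
# saved here as probe.py.
import glob
import json

from probe import run_probe

CPUFREQ_DIRS = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")

def available_frequencies():
    """Frequencies (kHz) the driver advertises for the first core."""
    with open(CPUFREQ_DIRS[0] + "/scaling_available_frequencies") as f:
        return sorted(int(x) for x in f.read().split())

def set_frequency(freq_khz):
    """Pin every core to freq_khz using the userspace governor."""
    for cpu_dir in CPUFREQ_DIRS:
        with open(cpu_dir + "/scaling_governor", "w") as f:
            f.write("userspace")
        with open(cpu_dir + "/scaling_setspeed", "w") as f:
            f.write(str(freq_khz))

if __name__ == "__main__":
    traces = {}
    for freq in available_frequencies():
        set_frequency(freq)
        traces[freq] = run_probe()        # latency samples at this level
    with open("latency_traces.json", "w") as f:
        json.dump(traces, f)
```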

Investigating the outliers in Figure 4 shows a clear differentiation between the highest CPU frequency my desktop supports and the other levels it can run at. All of the lower frequencies experienced greater latencies overall.

[Image: latency outliers]

Figure 4. Six outliers showing the tail of the latency traces.
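
Pulling those tails out of a trace is simple. This sketch reads the hypothetical latency_traces.json written by the sweep above and prints the 99th-percentile and maximum latency at each frequency; the file name and format are assumptions carried over from the previous sketch.

```python
# Sketch of the tail comparison: read the (hypothetical)
# latency_traces.json written by the sweep above and report the 99th
# percentile and maximum latency observed at each frequency.
import json

def tail_stats(samples, percentile=99.0):
    """Return (p99, max) of a list of latencies in milliseconds."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100.0))
    return ordered[idx], ordered[-1]

if __name__ == "__main__":
    with open("latency_traces.json") as f:
        traces = json.load(f)            # JSON keys come back as strings
    for freq, samples in sorted(traces.items(), key=lambda kv: int(kv[0])):
        p99, worst = tail_stats(samples)
        print(f"{int(freq) / 1000:.0f} MHz: p99={p99:.2f} ms, max={worst:.2f} ms")
```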

A lower CPU frequency implies higher latency for any task: the system simply cannot do the same amount of work as one running at a higher frequency. You can imagine that as you lower the CPU frequency, more and more work queues up waiting for CPU time because the processor is servicing it more slowly. This queue is essentially unbounded; you could fill all of RAM, or even spill over to swap space on disk, with work waiting for your CPU. And the larger the queue grows, the higher the latency climbs. Intuitively this makes sense, but I haven’t seen many people explore it quantitatively!
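
To make that queueing intuition concrete, here is a toy single-server queue simulation (not measurement code): requests arrive at a fixed average rate, and the “CPU frequency” is modeled as the service rate. The rates are arbitrary round numbers chosen for illustration.

```python
# Toy illustration of the queueing argument (not measurement code):
# work arrives at a fixed average rate, and "CPU frequency" is modeled
# as the service rate of a single server. As the service rate drops
# toward the arrival rate, latency climbs sharply.
import random

def simulate(service_rate, arrival_rate=100.0, n_items=100_000, seed=1):
    """Mean time each item spends waiting plus being serviced (seconds)."""
    rng = random.Random(seed)
    clock = 0.0        # time of the most recent arrival
    free_at = 0.0      # time the server finishes its current item
    total = 0.0
    for _ in range(n_items):
        clock += rng.expovariate(arrival_rate)           # next arrival
        start = max(clock, free_at)                      # wait if busy
        free_at = start + rng.expovariate(service_rate)  # service time
        total += free_at - clock                         # queueing + service
    return total / n_items

if __name__ == "__main__":
    for rate in (400.0, 200.0, 120.0):   # "high", "medium", "low" frequency
        print(f"service rate {rate:>5.0f}/s -> mean latency "
              f"{simulate(rate) * 1000:.2f} ms")
```

The exact numbers are meaningless, but the shape is the point: latency does not degrade gracefully as the processor loses headroom, it blows up.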
