How secure is your software?

When you are implementing an application, your first goal is to achieve a specific functionality. For instance, if you want to implement an algorithm assigned as an exercise by your informatics professor, or you just want to create your personal website, the first thing that comes to mind is how to “make it work”. Then, of course, you will follow some code conventions during implementation while simultaneously checking your code quality. But what about security? How secure is your code? Is there a way for a malicious user to harm you or your application by taking advantage of bugs that exist in your code?

Unfortunately, most programmers have been trained to write code that implements the required functionality without considering its security aspects. Most software vulnerabilities derive from a relatively small number of common programming errors that lead to security holes. For example, according to the SANS Institute, two programming flaws alone were responsible for more than 1.5 million security breaches during 2008.

In 2001, when software security was first introduced as a field, information security was mainly associated with network security, operating systems security, and malicious software. Until then, hundreds of millions of applications had been implemented, but not with security in mind. As a result, the vulnerabilities “hidden” in these (now legacy) applications can still be used as backdoors that lead to security breaches.

Although computer security is nowadays standard fare in academic curricula around the globe, few courses emphasize secure coding practices. For instance, during a standard introductory C course, students may not learn that using the gets function could make their code vulnerable to an exploit. Even if someone uses it in a program, the compiler will only emit the following obscure warning: “the ‘gets’ function is dangerous and should not be used.” Well, gets is dangerous because it performs no bounds checking: it has no way to know how large the destination buffer is, so a user who types more input than the buffer can hold will overwrite adjacent memory. At best this crashes the program with a segmentation fault; at worst it hands an attacker a classic buffer-overflow entry point.
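As a minimal sketch of the standard fix, the snippet below uses fgets, which takes the buffer size explicitly (the 16-byte buffer is an arbitrary choice for illustration):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[16];

    /* Vulnerable: gets(buf) would copy input into buf with no bounds
     * check, so any line longer than 15 characters overflows the buffer.
     * (gets was removed from the language entirely in C11.) */

    /* Safer: fgets never writes more than sizeof(buf) - 1 characters
     * plus the terminating NUL. */
    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline, if any */
        printf("You typed: %s\n", buf);
    }
    return 0;
}
```

Overly long input is simply truncated instead of trampling whatever happens to sit next to buf in memory.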

The situation is similar in web programming. Programmers are often unaware of security loopholes inherent in the code they write; in fact, because they program in higher-level languages than those traditionally prone to security exploits, they may assume that this renders their applications immune to exploits stemming from coding errors. Common traps into which programmers fall concern user input validation, the sanitization of data that is sent to other systems, the lack of defined security requirements, the encoding of data that comes from an untrusted source, and others, which we will have the opportunity to discuss later on this blog.
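To make one of these traps concrete, here is a minimal sketch of the difference between pasting untrusted input into a SQL query and binding it as a parameter. It uses SQLite’s C API purely as an example; the users table and its columns are hypothetical:

```c
#include <stdio.h>
#include <sqlite3.h>

int find_user(sqlite3 *db, const char *user_input) {
    /* BAD (shown only as a comment): building the query by hand.
     *   snprintf(sql, sizeof sql,
     *            "SELECT id FROM users WHERE name = '%s'", user_input);
     * An input such as  ' OR '1'='1  would change the query's meaning. */

    /* GOOD: a parameterized query; the input is bound strictly as data
     * and can never be interpreted as SQL. */
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
        "SELECT id FROM users WHERE name = ?1", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("found user id %d\n", sqlite3_column_int(stmt, 0));
    return sqlite3_finalize(stmt);
}
```

The same idea, usually called prepared statements or parameterized queries, exists in virtually every language’s database layer.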

Today there are numerous books, papers, and security bulletin providers that you can refer to about software security. Building Secure Software by Viega et al., Writing Secure Code by Howard et al., and Secure Coding: Principles & Practices by Graff et al. are three standard textbooks. Furthermore, there are some interesting lists of secure coding practices, like those of OWASP (The Open Web Application Security Project), CERT (Carnegie Mellon’s Computer Emergency Response Team), and Microsoft. It is also worth checking from time to time the various lists of top software defects, like CWE’s (Common Weakness Enumeration) Top 25 and OWASP’s Top 10. But do not panic; you are not obliged to become an expert in secure coding. There are numerous tools that can help you either build secure applications or protect existing ones.

Eyes Clouded by Distributed Systems

You are probably reading this article on a machine with a dual- or quad-core processor, and perhaps with even more cores. Your computer is already a distributed system, with multiple computing components—cores—communicating with each other via main memory and other channels such as physical buses—or wires—between them. As you browse multiple web pages, you are interacting with the largest distributed system ever created—the Internet. We recently celebrated IPv6 Day [0]: IPv6 is a new scheme for addressing devices connected to the Internet, introduced because the Internet’s sheer scale has outgrown the previous standard IPv4’s pool of addresses—all 4+ billion of them. Every Internet company depends on distributed systems, and, by extension, the economies of the world are now tied to them.

Companies such as Google, Facebook, and Amazon are all interested in building highly efficient large-scale distributed systems to power their businesses. Over the past decade, Google has described its Google File System (GFS) [1], a file system spanning thousands of computers to store more data than any single computer could, as well as MapReduce [2], a technology that has shaped almost every form of large-scale computing since its publication. MapReduce is distributed computing for the masses because it distills everything down to two functions—Map and Reduce—and once they are specified, it handles all other aspects of coordinating thousands of computers on behalf of the programmer. Facebook has released open source projects such as Thrift [3] for implementing communication between programs in different programming languages. Amazon built the first, and largest, public cloud, EC2 [4], by inventing new distributed systems designed to bring datacenter scale to the masses—with EC2 you can easily start 100 servers within minutes. Amazon offers many other services to enhance its overall cloud, such as a storage substrate called S3 [5]—think of it as a building block for a GFS—and CloudFront [6], a content distribution network (CDN) designed to distribute data around the world for low-latency and high-bandwidth access. Akamai [7] also helps deliver the web’s content with one of the largest CDNs in the world. Netflix built its own distributed CDN [8] as it outgrew the solutions provided by Akamai and Amazon.
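To give a feel for the model, here is a toy, single-process word count written in the MapReduce style (a sketch of the programming model only, not of Google’s distributed implementation): the map step emits a (word, 1) pair per word, a sort stands in for the shuffle that groups pairs by key, and the reduce step sums the counts for each word.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Comparison function for qsort over an array of C strings. */
static int cmp(const void *a, const void *b) {
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void) {
    /* Map: a real framework runs many mappers in parallel over shards
     * of the input; here each emitted (word, 1) pair is implicit in
     * this small array of emitted words. */
    const char *words[] = { "the", "cat", "sat", "on", "the", "mat", "the" };
    size_t n = sizeof(words) / sizeof(words[0]);

    /* Shuffle: group identical keys together. A real system routes
     * pairs between machines; here a sort does the same job. */
    qsort(words, n, sizeof(words[0]), cmp);

    /* Reduce: for each distinct word, sum its emitted 1s. */
    for (size_t i = 0; i < n;) {
        size_t count = 1;
        while (i + count < n && strcmp(words[i], words[i + count]) == 0)
            count++;
        printf("%s\t%zu\n", words[i], count);
        i += count;
    }
    return 0;
}
```

Everything outside the map and reduce logic, such as splitting the input, moving intermediate data, and recovering from machine failures, is the framework’s job, which is exactly why the abstraction scales to thousands of machines.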


If You Think Network Security is a Safety Issue, You’ll Need to Deal with Cost-Benefit Analysis

The first thing I want to do on this blog is give credit where much-deserved credit is due: a conversation I had at the wonderful Hive76 in Philadelphia inspired this first post for XRDS. (Support your local hackerspace!)

I gave a short presentation at Hive76 recently, and after it was over I hung around answering questions. One man asked me, repeatedly, whether the FDA has codified security standards for networked and wireless medical devices like insulin pumps and pacemakers. The answer wasn’t satisfactory for either of us; he was sincerely alarmed, and I couldn’t reassure him. When I said they had no such standards, he asked me why not. If their mandate is to ensure the safety of medical devices, he said, how could they reasonably neglect such an obvious security risk? I could only say that, in general, network security is thought of as being categorically separate from “health and safety.” For days I tried to figure out why it had felt like we were talking past each other. It wasn’t until weeks later that I realized something important was missing from our conversation.

Many of us believe that this kind of security bears seriously on human safety and human rights. Insecure networks can pose a very real threat to people living under authoritarian regimes, or to members of persecuted minorities. Both are examples of people for whom safe, private communication may be profoundly important. The man I spoke to was deeply worried that poorly designed or nonexistent security in medical devices endangers people’s lives. I think this perspective is common among engineers, designers, and hackers — people for whom network and wireless security are tangible realities, who are accustomed to translating the abstract into the concrete.

And yet, government agencies tasked with ensuring our safety seem oblivious to the danger posed by insecure networks. That’s why the man I spoke to was so frustrated — he saw technological security as a natural extension of the FDA’s mandate to ensure the safety of medical implants and devices. It seemed to him like they were failing to adequately do their job. Look: civil servants are as able as anyone to comprehend the danger that insecure pacemakers or insulin pumps might pose. But they’re also required to prepare and present cost-benefit analysis reports to the Office of Management and Budget before they can do anything.

Cost-benefit analysis (CBA) is probably familiar to most readers of this blog. It’s a method of decision-making that aims to maximize welfare according to an economic model, and the easiest way to explain it is by illustration: a project is any action that causes a change in the status quo. To evaluate a project, we compare the future “project state of the world” (P) with the “status quo state of the world” (S) — any benefits that arise from maintaining the status quo are treated explicitly as benefits that “S” enjoys and “P” lacks. After accumulating data on the benefits and costs of each project, the analysis determines whether the benefits of project P outweigh the costs. (If, say, a hypothetical security mandate would cost $50 million to implement but avert $80 million in expected harm, it passes; reverse the numbers and it fails.) In a simplified world, the only problem is the practical challenge of collecting the data. In the real world, of course, CBA is exponentially more complicated, but at its heart it always weighs the costs and benefits of action against the costs and benefits of inaction.

Since 1981, all federal agencies have been required by law to use cost-benefit analysis to determine how they will carry out their individual mandates. Even if they have other reasons for considering a course of action, they must show that their choice is justified by CBA. The question they need to answer is not whether a project would be useful, or a logical extension of an agency’s mandate, but whether the project is economically rational.

Economic rationality is a special value. Its boundaries are more clearly defined than those of “rationality” as it is generally understood, let alone “responsibility” or “ethicality.” My conversation in Philadelphia reached an impasse because we assumed we were talking about the same values – or maybe we ignored the value dimension of the discussion altogether. To use the language of economics again: I assumed the FDA wants to maximize its economic rationality; he assumed the FDA wants to maximize some other value. I still don’t know how he would have described his idea of the agency’s responsibilities and duties. But the exchange made me more careful about assumptions. Even if we use the classic economic catchall term “welfare” to describe what we want to improve or maximize, it’s not enough. People understand “welfare” in different ways, and I think that’s why the man in Philadelphia was so frustrated. Ethical, responsible behavior may coincide with economically rational behavior, or it may not. It can be hard (believe me) to understand how anyone could justify something unethical, but in the tightly bounded world of cost-benefit analysis, the only values permitted are economic.

The risk posed to human life by bad or nonexistent security is difficult to quantify. CBA requires the quantification and precise evaluation of risk, which is complicated and time-consuming even for comparatively straightforward situations (quantifying lives saved by seatbelts in cars, for example). And each federal agency has, over time, developed some institutional competency in evaluating certain kinds of risk and harm. The FDA would need to do so much outside work just to prepare itself for these initial evaluations that the effort is unlikely ever to seem worthwhile compared to the status quo. It is possible that investing in security — developing standards, hiring or outsourcing experts, training employees in an area with which they are presumably not familiar — just isn’t an economically rational thing for the FDA to do.

There is no federal agency that regulates the network security of consumer devices, and it seems unlikely that existing agencies will find it economically rational in the near future to invest in learning how to evaluate the risk and harm that might come of bad security. If security experts want to persuade the administrative state to take a serious interest in this problem — and many of us believe that it should — the language and values of cost-benefit analysis are important to consider. If CBA can be used to make a case for device security, we should make it. If the numbers don’t add up in our favor, it’ll be all the more essential to articulate why agencies like the FDA should look at network security in a different way.

 

Lea Rosen writes about technology and law, with a focus on civil and human rights issues inherent in the creation and adoption of new technologies. She holds a JD from Rutgers School of Law, Camden, and a B.A. in Humanities from New College of Florida. She has worked with the Electronic Frontier Foundation, the ACLU, the Center for Constitutional Rights, and the National Lawyers Guild, and she tweets as @lea_rosen. 

2012 ICPC World Finals

Students from St. Petersburg State University of Information Technology, Mechanics & Optics take the Gold at the IBM-Sponsored ACM International Collegiate Programming Contest.

Out of 112 universities from around the world, St. Petersburg State University of IT, Mechanics & Optics emerges as the winner of the “World’s Smartest Trophy”

The Winning Team - Andrey Stankevich (Coach), Eugeny Kapun (Contestant), Mikhail Kever (Contestant), Niyaz Nigmatullin (Contestant)

The competition begins…

Once a team solves a problem, a balloon is tied to their workstation as a visual indicator of who has solved the most problems; each problem is assigned its own balloon color. There are also live scoreboards projected on the wall; however, during the last hour of competition the scoreboards are frozen, so contestants don’t know the exact standings!

Top 12 Finishers

The Ending

The reading of the results had everyone on the edge of their seats. It was announced that the University of Warsaw had successfully solved 9 problems, moving it ahead of St. Petersburg State University of IT, Mechanics & Optics; St. Petersburg, however, still had one submission awaiting judgment.

After moments of heart-pounding anticipation, the last problem was judged, and it turned out that St. Petersburg State University of IT, Mechanics & Optics had also solved 9 problems. Since their total time was 337 minutes shorter, they became the first-place World Champions!

About the Contest

The contest challenges students to solve the most computer programming problems in the least amount of time. Demonstrating their elite problem-solving and programming skills, St. Petersburg State University of IT, Mechanics & Optics successfully solved nine problems in five hours. The World Champions will return home with “The World’s Smartest Trophy,” as well as awards and a guaranteed offer of employment or internship with IBM.

“This is a sport. Many teams go to camps specifically to train; it’s mental gymnastics.”
– Sal Vella of IBM

The problem set and official results of the 2012 ICPC Finals are available online.

Coming soon…

We will have an interview with the winning team once they return home from Warsaw. Stay posted!

2012 ICPC Winners

Congratulations to the World Champions, St. Petersburg State University of IT, Mechanics & Optics. The hometown favorite, the University of Warsaw, placed second, tying with the gold-medal team for the number of problems solved. In third place was the Moscow Institute of Physics & Technology. There were a total of 12 problems, only 9 of which were solved. Harvard University and the University of Waterloo were the only North American teams to place in the Top Ten.

A full list of the final results is available online.

Check back later for Shawn Freeman’s riveting recap of the competition.