Unbreakable Cryptography in 5 Minutes

What if I told you that unbreakable cryptography exists? What if I told you that this article has content which is illegal in certain countries, and may be under export control in the US? Well, that’s precisely what I’m about to tell you. Unbreakable cryptography exists. Cryptographic technology is illegal, or heavily restricted, in certain countries [0], and some forms remain under export control within the vagaries of US law [1].

This article will teach you: (1) a bit of the history of cryptography, and (2) how to implement unbreakable cryptography on your own in 5 minutes or less.

To share or not to share: The Security Risks of Social Networking

If you are not using Facebook, Twitter, YouTube, LinkedIn, or any of the other social networks currently monopolizing the interest of web users, please stop reading. Given that many of them appear in the Top 20 list of the most visited websites, I presume you are still here.

There are numerous reasons to join such a network: keeping up with old friends, sharing music, photos, and videos, finding job opportunities, starting up and promoting a small business, connecting to causes, and many others. To deliver such services, social networks contain and distribute huge amounts of sensitive information. This raises many security threats involving scam artists, stalkers, identity thieves, and companies that gather information for marketing advantage. Even if it is impossible to escape every social-network-related risk, there are a few steps you can take to reduce them.

First of all, be skeptical about the information you share. Some people share too much, and their information may be distributed more widely than they intended if they do not set the network's privacy controls. Before you share something, be discreet and wary: never post anything that would expose you to unwanted persons, and remember that people on the Internet are not always who they seem to be. Also keep in mind that some social networks do not guarantee the security of information shared through a profile, a group, etc. For instance, as of May 7, 2010, Facebook's privacy policy stated that it could not guarantee that only authorized persons would view a user's information. The social-network security flaws that regularly come to light bear this out.

Most social networks employ some sort of application system through which developers can write third-party applications that execute within the social network and access content otherwise available only to the network's provider (for example, a user's public information). Such applications include quizzes, games to play with other users, polls, and more. Beyond a user's public information, a third-party application may access private information without the user being aware of it. Such an application may even gain access to the personal information of a user's contacts, even though those contacts never granted it explicit permission. Applications of this kind may also contain malware designed to attack your computer or misuse your data, and scammers can use them to waste your time and resources. So before trying a new application that was suggested to you, always think twice.

A malicious user does not need to develop an application to harm you; a simple personal message can be enough to mount a phishing-like attack. Hence, do not click on everything that is sent to you, especially the shortened URLs that pop up everywhere and are commonly accepted as links to relevant and valuable information despite their "disguise".

Advertisers can be very interested in the data that social networks collect (e.g., exploring the "favorite movies" section of millions of user profiles could be vital for a film studio). Such data can serve as a basis for behavioral targeting, yet there are currently no limits on the ways advertisers can gather and use it. As a result, there are several concerns about this kind of advertising, since user privacy receives little consideration. For example, some third-party applications transmit specific information to companies without notifying users.

Social network security has repeatedly attracted the interest of researchers and practitioners. If you want to learn more about this field, you can check the white paper "Social networking and security risks" by Brad Dinerman. You can also look at how a person's mishandling of social networks can pose a security threat to his or her university or college network in "Who's really in your top 8: network security in the age of social networking" by Robert Gibson. Finally, if you want to learn more about security and privacy in social networks and how they can be ensured, refer to "Security and privacy in online social networks: A survey", a very good survey by Novak et al.

How secure is your software?

When you implement an application, your first goal is to achieve a specific functionality. Whether you are implementing an algorithm assigned as an exercise by your informatics professor or building your personal website, the first thing that comes to mind is how to "make it work". Then, of course, you follow some coding conventions during implementation while simultaneously checking your code quality. But what about security? How secure is your code? Is there a way for a malicious user to harm you or your application by taking advantage of bugs in your code?

Unfortunately, most programmers have been trained to write code that implements the required functionality without considering its security aspects. Most software vulnerabilities derive from a relatively small number of common programming errors that lead to security holes. For example, according to SANS (Security Leadership Essentials for Managers), two programming flaws alone were responsible for more than 1.5 million security breaches during 2008.

In 2001, when software security was first introduced as a field, information security was mainly associated with network security, operating system security, and viral software. By then, hundreds of millions of applications had been implemented, but not with security in mind. As a result, the vulnerabilities "hidden" in these (now legacy) applications can still be used as backdoors that lead to security breaches.

Although computer security is nowadays standard fare in academic curricula around the globe, few courses emphasize secure coding practices. For instance, during a standard introductory C course, students may not learn that using the gets function can make their code exploitable. If someone does use it in a program, the compiler emits only the terse warning: "the 'gets' function is dangerous and should not be used." gets is dangerous because it has no way of knowing how large the destination buffer is: input longer than the buffer overflows adjacent memory, which at best crashes the program with a segmentation fault and at worst lets an attacker execute arbitrary code.

The situation is similar in web programming. Programmers are often unaware of security loopholes inherent in the code they write; in fact, knowing that they program in higher-level languages than those traditionally prone to exploits, they may assume that this renders their applications immune to attacks stemming from coding errors. Common traps into which programmers fall concern user input validation, the sanitization of data sent to other systems, the lack of defined security requirements, the encoding of data that comes from an untrusted source, and others, which we will have the opportunity to discuss later on this blog.

Today there are numerous books, papers, and security bulletin providers on software security. Building Secure Software by Viega et al., Writing Secure Code by Howard et al., and Secure Coding: Principles & Practices by Graff et al. are three standard textbooks. There are also useful lists of secure coding practices, such as those from OWASP (the Open Web Application Security Project), CERT (Carnegie Mellon's Computer Emergency Response Team), and Microsoft. It is likewise worth checking, from time to time, the various lists of top software defects, such as CWE's (Common Weakness Enumeration) Top 25 and OWASP's Top 10. But do not panic: you are not obliged to become an expert in secure coding. There are numerous tools that can help you build secure applications or protect existing ones.

If You Think Network Security is a Safety Issue, You’ll Need to Deal with Cost-Benefit Analysis

The first thing I want to do on this blog is give credit where much-deserved credit is due: a conversation I had at the wonderful Hive76 in Philadelphia inspired this first post for XRDS. (Support your local hackerspace!)

I gave a short presentation at Hive76 recently, and after it was over I hung around answering questions. One man asked me, repeatedly, whether the FDA has codified security standards for networked and wireless medical devices like insulin pumps and pacemakers. The answer wasn’t satisfactory for either of us; he was sincerely alarmed, and I couldn’t reassure him. When I said they have no such standards, he asked me why not. If their mandate is to ensure the safety of medical devices, he said, how could they reasonably neglect such an obvious security risk? I could only say that, in general, network security is thought of as being categorically separate from “health and safety.” For days I tried to figure out why it had felt like we were talking past each other. It wasn’t until weeks later that I realized something important was missing from our conversation.

Many of us believe that this kind of security bears seriously on human safety and human rights. Insecure networks can pose a very real threat to people living under authoritarian regimes, or to members of persecuted minorities. Both are examples of people for whom safe, private communication may be profoundly important. The man I spoke to was deeply worried that poorly designed or nonexistent security in medical devices endangers people's lives. I think this perspective is common among engineers, designers, and hackers: people for whom network and wireless security are tangible realities, who are accustomed to translating the abstract into the concrete.

And yet, government agencies tasked with ensuring our safety seem oblivious to the danger posed by insecure networks. That’s why the man I spoke to was so frustrated — he saw technological security as a natural extension of the FDA’s mandate to ensure the safety of medical implants and devices. It seemed to him like they were failing to adequately do their job. Look: civil servants are as able as anyone to comprehend the danger that insecure pacemakers or insulin pumps might pose. But they’re also required to prepare and present cost-benefit analysis reports to the Office of Management and Budget before they can do anything.

Cost-benefit analysis (CBA) is probably familiar to most readers of this blog. It is a method of decision-making that aims to maximize welfare under an economic model, and the easiest way to explain it is by illustration. A project is any action that causes a change in the status quo. To evaluate a project, we compare the future "project state of the world" (P) with the "status quo state of the world" (S); any benefits that arise from maintaining the status quo are treated explicitly as benefits that S enjoys and P lacks. After accumulating data on the benefits and costs of each state, the analysis determines whether the benefits of project P outweigh its costs. In a simplified world, the only problem is the practical challenge of collecting the data. In the real world, of course, CBA is vastly more complicated, but at its heart it always weighs the costs and benefits of action against the costs and benefits of inaction.

Since 1981, all federal agencies have been required by law to use cost-benefit analysis to determine how they will carry out their individual mandates. Even if they have other reasons for considering a course of action, they must show that their choice is justified by CBA. The question they need to answer is not whether a project would be useful, or a logical extension of an agency’s mandate, but whether the project is economically rational.

Economic rationality is a special value. Its boundaries are more clearly defined than those of "rationality" as generally understood, let alone "responsibility" or "ethicality." My conversation in Philadelphia reached an impasse because we assumed we were talking about the same values, or perhaps because we ignored the value dimension of the discussion altogether. To use the language of economics again: I assumed the FDA wants to maximize its economic rationality; he assumed the FDA wants to maximize some other value. I still don't know how he would have described his idea of the agency's responsibilities and duties, but the exchange made me more careful about assumptions. Even the classic economic catchall term "welfare" is not enough to describe what we want to improve or maximize: people understand "welfare" in different ways, and I think that is why the man in Philadelphia was so frustrated. Ethical, responsible behavior may coincide with economically rational behavior, or it may not. It can be hard (believe me) to understand how anyone could justify something unethical, but in the tightly bounded world of cost-benefit analysis, the only values permitted are economic.

The risk posed to human life by bad or nonexistent security is difficult to quantify. CBA requires the quantification and precise evaluation of risk, which is complicated and time-consuming even for comparatively straightforward situations (quantifying lives saved by seatbelts in cars, for example). And each federal agency has, over time, developed institutional competency in evaluating certain kinds of risk and harm. The FDA would need to do so much outside work just to prepare itself for these initial evaluations that the effort is unlikely ever to seem worth it compared to the status quo. It is possible that investing in security, developing standards, hiring or outsourcing experts, and training employees in an area with which they are presumably not familiar just isn't an economically rational thing for the FDA to do.

There is no federal agency that regulates the network security of consumer devices, and it seems unlikely that existing agencies will find it economically rational in the near future to invest in learning how to evaluate the risk and harm that might come of bad security. If security experts want to persuade the administrative state to take a serious interest in this problem — and many of us believe that it should — the language and values of cost-benefit analysis are important to consider. If CBA can be used to make a case for device security, we should make it. If the numbers don’t add up in our favor, it’ll be all the more essential to articulate why agencies like the FDA should look at network security in a different way.


Lea Rosen writes about technology and law, with a focus on civil and human rights issues inherent in the creation and adoption of new technologies. She holds a JD from Rutgers School of Law, Camden, and a B.A. in Humanities from New College of Florida. She has worked with the Electronic Frontier Foundation, the ACLU, the Center for Constitutional Rights, and the National Lawyers Guild, and she tweets as @lea_rosen.