Language Bureaucracy

Laziness, impatience, and hubris are the three great virtues that every programmer should have, at least according to Larry Wall [1]. My experience so far has shown me that he was right. All programmers have these characteristics; those who do not are usually not real programmers. Since programmers express these virtues through the programming languages they use, they tend to compare those languages, and the comparison usually ends in a phenomenon known as flame wars: endless quarrels in which programmers exchange arguments about language features, their standard (or third-party) libraries, and so on. Continue reading

Big Data, Communication and Lower Bounds

As the amount of available data grows, massive data sets can no longer be stored in their entirety in the memory of a single machine. Furthermore, because of the limited memory and computational power available on a single node, the data and the computation must be distributed across multiple machines.

However, transferring large amounts of data is very expensive; in fact, it is often more expensive than the computation performed on the data itself. Thus, in the distributed model, communication plays an important role in the total cost of an algorithm, and the aim is to minimize the amount of communication among processors (CPUs). This is one of the main motivations for studying the theory of communication complexity in the context of big data processing. Continue reading
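As a toy illustration (not from the post) of why shipping data is the dominant cost, consider computing the mean of a data set split across several machines: instead of sending the raw partitions to a coordinator, each machine can send a constant-size summary. The Python sketch below uses plain lists as stand-in "machines"; all names are hypothetical.

# Toy sketch: each "machine" sends only a (sum, count) pair to the
# coordinator, so communication is constant per machine regardless of
# how much data it holds. The lists below stand in for machines.

def local_summary(partition):
    """Computed locally on each machine; only two numbers leave the node."""
    return (sum(partition), len(partition))

def global_mean(partitions):
    """Coordinator combines the constant-size per-machine summaries."""
    total = count = 0
    for s, n in (local_summary(p) for p in partitions):
        total += s
        count += n
    return total / count

machines = [[1, 2, 3], [10, 20], [5, 5, 5, 5]]  # data split across 3 "machines"
print(global_mean(machines))  # 6.222..., with 2 numbers communicated per machine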

Software Packages for Theoreticians by Theoreticians

Brown University’s ICERM recently hosted a workshop titled “Electrical Flows, Graph Laplacians, and Algorithms,” where top researchers convened to present and discuss their recent progress in spectral graph theory and algorithms. Richard Peng opened the workshop with an overview talk on efficient solvers for linear systems whose coefficient matrix is a graph Laplacian. He presented a thorough history of the topic and set the stage for the technical talks on fast algorithms for graph sparsification, spectral clustering, and computing maximum flow, as well as a variety of other local and approximation algorithms.
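To make the object of these solvers concrete, here is a minimal sketch (not from the workshop) of the kind of problem they target: building the Laplacian L of a small weighted graph and solving L x = b, where b encodes current injected into and extracted from the graph, as in electrical flow. The example graph, the grounding trick, and the generic conjugate-gradient call are illustrative stand-ins for the near-linear-time solvers discussed in the talks.

# Sketch of a Laplacian linear-system solve on a tiny weighted graph.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def graph_laplacian(n, weighted_edges):
    """Build the sparse Laplacian L = D - A of an undirected weighted graph."""
    rows, cols, vals = [], [], []
    for u, v, w in weighted_edges:
        rows += [u, v, u, v]
        cols += [v, u, u, v]
        vals += [-w, -w, w, w]
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Example: a path on 4 vertices with unit edge weights (unit resistors).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
L = graph_laplacian(4, edges)

# Inject one unit of current at vertex 0 and extract it at vertex 3.
b = np.array([1.0, 0.0, 0.0, -1.0])

# L is singular (constant vectors lie in its null space), so we "ground"
# vertex 3 and solve the reduced, positive-definite system with a generic
# sparse conjugate-gradient routine.
L_red = L[:-1, :-1]
x_red, info = spla.cg(L_red, b[:-1])
x = np.append(x_red, 0.0)  # the grounded vertex has voltage 0
print(x)                   # voltages approximately [3, 2, 1, 0] along the path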

His talk, as well as many of the others, is archived and available thanks to ICERM. I will focus on one highlight, a point that resonated with the conclusion of Richard Peng’s talk: a call for more software implementing these new, fast algorithms. In this light, I’d like to briefly discuss some of the software packages for spectral graph theory and the analysis of large graphs that are being developed by theoreticians active in the area. Continue reading

How You Can Be Part of the Future of Computing

With 50 years of history behind IBM’s mainframe computers, these powerful machines are here to stay. IBM has been making a continuous push to encourage industry and educational institutions to adopt the technology, and to provide more educational tools and resources for teaching mainframe computing.

In celebration of the 50th anniversary of the IBM mainframe, this year’s Master the Mainframe competition was one of a kind. Not only was it the first World Championship, but a record number of students participated: about 20,000 students from all over the world competed over a three-month period. Those who qualified had completed all three stages of the competition, but only the 43 contestants with the highest scores were invited to the World Championship. Continue reading

Yong-Siang Dominates at the 2014 IBM Competition

…and we have a new IBM Mainframe World Champion!

On Tuesday, April 8th, all six student finalists were officially introduced during the IBM Mainframe50 event, and the final results were delivered to a New York City audience that could not wait to meet the winners. Yes, it was a tight competition, and all 40 contestants are already winners. Here we leave you with the top three. Continue reading