
Parallel Programming through Dependence Analysis – Part I

“As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise — by what course of calculation can these results be arrived at by the machine in the shortest time?”

Charles Babbage (1864)

Points to Ponder

Would it not be wonderful if we could write all our simulations as serial programs and have the compiler automatically generate parallelized code, highly optimized for any given supercomputer? Why is this not the case today? Why do supercomputing centers still require teams of highly trained developers to write simulations?

Introduction

Scientists around the world develop mathematical models and write simulations to understand natural systems. In many cases, simulation performance becomes an issue as datasets (problem sizes) grow larger or as higher accuracy is required. Parallel processing resources can help resolve these performance issues. Since many of these simulations are developed in high-level tools such as Matlab, Mathematica, or Octave, the obvious choice for the scientist is to use the parallel processing features provided within the tool. A case in point is Matlab's parfor construct, which executes the iterations of a for-loop in parallel. However, when such a tool fails to parallelize a for-loop, it can be hard to understand why parallelization failed and how one might change the code to help the tool succeed. Continue reading
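Whether parfor (or any parallelizing tool) can split a loop across workers comes down to dependence analysis: does any iteration read a value written by another iteration? Here is a minimal Python sketch of that distinction (my illustration, not code from the post; simulate_step is a made-up stand-in for the per-iteration work):

from multiprocessing import Pool

def simulate_step(i):
    # Hypothetical per-iteration work: the result depends only on i,
    # so the iterations are independent of one another.
    return i * i

if __name__ == "__main__":
    n = 1000

    # Parallelizable, roughly like "parfor i = 1:n, a(i) = simulate_step(i); end":
    # no iteration reads a value written by another iteration.
    with Pool() as pool:
        a = pool.map(simulate_step, range(n))

    # Not parallelizable as written: b[i] reads b[i - 1], a loop-carried
    # dependence, so the iterations must execute in order.
    b = [0] * n
    for i in range(1, n):
        b[i] = b[i - 1] + simulate_step(i)

    print(a[10], b[10])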

Big Data, Communication and Lower Bounds

As the size of available data increases, massive data sets can no longer be stored in their entirety in the memory of a single machine. Furthermore, because a single node has only limited memory and computation power, the data and the computation must be distributed among multiple machines.

However, transferring large amounts of data between machines is expensive; in fact, it is often more expensive than the computation performed on the data itself. Thus, in the distributed model, the amount of communication plays an important role in the total cost of an algorithm, and the aim is to minimize the communication among processors (CPUs). This is one of the main motivations for studying the theory of Communication Complexity, a motivation that arises directly from Big Data processing. Continue reading
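To make the communication-versus-computation trade-off concrete, here is a small Python sketch (my illustration, not from the post) of the textbook randomized EQUALITY protocol from communication complexity: two machines can check whether their n-bit inputs agree by exchanging an O(log n)-bit fingerprint instead of the inputs themselves. All names below are hypothetical.

import random

def is_prime(m):
    # Trial-division primality test; adequate for the small moduli used here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def random_prime(upper):
    # Pick a uniformly random prime below `upper`.
    while True:
        p = random.randrange(2, upper)
        if is_prime(p):
            return p

def equal_with_little_communication(x, y, n_bits):
    # "Alice" holds x and "Bob" holds y, each an n_bits-bit integer.
    # Alice transmits only the pair (p, x mod p), a few dozen bits,
    # rather than all n_bits bits of x.
    p = random_prime(n_bits ** 2)
    fingerprint = x % p
    # Bob's check: if x != y it accepts wrongly only with small probability
    # (roughly log(n_bits) / n_bits), since x - y has at most n_bits prime factors.
    return fingerprint == y % p

if __name__ == "__main__":
    n_bits = 4096
    x = random.getrandbits(n_bits)
    print(equal_with_little_communication(x, x, n_bits))      # True
    print(equal_with_little_communication(x, x ^ 1, n_bits))  # almost always False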

How You Can Be Part of the Future of Computing

With 50 years of history behind IBM's mainframe computers, these powerful machines are here to stay. IBM has been making a continuous push to encourage industry and educational institutions to adopt this technology, and to provide more educational tools and resources for teaching mainframe computing.

In celebration of IBM’s Mainframe 50th Anniversary, this year’s Master the Mainframe competition was one of a kind. Not only was it the first World Championship, but a record number of students participated: about 20,000 students from all over the world competed over a three-month period. Those who qualified completed all three stages of the competition, but only the 43 contestants with the highest scores were invited to the World Championship. Continue reading

Yong-Siang Dominates at the 2014 IBM Competition

…and we have a new IBM Mainframe World Champion!

On Tuesday, April 8, all six student finalists were officially introduced during the IBM Mainframe50 event, and the final results were delivered to a New York City audience that could not wait to meet the winners. Yes, it was a tight competition, and all 40 contestants are already winners. Here we leave you with the top three. Continue reading

2014 IBM Master the Mainframe World Championship

4/7/14, 10:30am: Mainframe 50th Anniversary

Only the highest-scoring students have the opportunity to participate in the IBM Master the Mainframe World Championship. In this competition, 43 student contestants from five continents will compete for the grand prize in mainframe computing, a field with 50 years of history behind it. The two-day event takes place in New York City, April 7-8, and XRDS will be there covering every detail live for you. Continue reading