Jan 30, 2009

Besides Cell/B.E., a Programming Contest for GPUs Has Just Started in Japan



January 28 was the application deadline for “Cell Challenge 2009,” the multi-core programming contest in Japan.

The specified assignment is "Calculation of the Edit Distance between Two Character Strings." The champion will be determined by the sum of scores from the preliminary and final rounds; the final rounds conclude on March 20.
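As background on the assignment: the edit distance (Levenshtein distance) between two strings is the minimum number of single-character insertions, deletions, and substitutions needed to turn one into the other. The contest is of course about optimizing far beyond the textbook form on the Cell/B.E., but for orientation, here is the standard O(mn) dynamic program as a plain C sketch (all names are mine, not from the contest materials):

#include <stdlib.h>
#include <string.h>

/* Textbook dynamic-programming edit distance (Levenshtein).
 * Two rolling rows keep the memory footprint at O(n). */
static size_t min3(size_t x, size_t y, size_t z)
{
    size_t m = x < y ? x : y;
    return m < z ? m : z;
}

size_t edit_distance(const char *a, const char *b)
{
    size_t m = strlen(a), n = strlen(b);
    size_t *prev = malloc((n + 1) * sizeof *prev);
    size_t *curr = malloc((n + 1) * sizeof *curr);

    for (size_t j = 0; j <= n; j++)
        prev[j] = j;                    /* distance from the empty prefix of a */

    for (size_t i = 1; i <= m; i++) {
        curr[0] = i;                    /* distance to the empty prefix of b */
        for (size_t j = 1; j <= n; j++) {
            size_t cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            curr[j] = min3(prev[j] + 1,          /* deletion     */
                           curr[j - 1] + 1,      /* insertion    */
                           prev[j - 1] + cost);  /* substitution */
        }
        size_t *tmp = prev; prev = curr; curr = tmp;
    }

    size_t result = prev[n];
    free(prev);
    free(curr);
    return result;
}

The interesting contest question is how to parallelize this recurrence, since each cell depends on its left, upper, and upper-left neighbors; one common approach is to sweep the anti-diagonals of the table, whose cells are mutually independent.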

The contest is led by three special interest groups of the Information Processing Society of Japan and supported by the Kitakyushu Foundation for the Advancement of Industry, Science and Technology (FAIS) and four Cell/B.E. developers, including Toshiba.

“GPU Challenge 2009” was newly launched on January 21 as a companion program to Cell Challenge 2009. It is supported by the Global Scientific Information and Computing Center (GSIC) of the Tokyo Institute of Technology, NVIDIA, and others.

The specified problem of GPU Challenge 2009 is the same as that of Cell Challenge 2009, but applicants must run their programs under a CUDA programming environment provided by the GPU Challenge 2009 executive committee, on machines with NVIDIA GPUs (equivalent to the Tesla S1070-400) hosted at the Global Scientific Information and Computing Center (GSIC), Tokyo Institute of Technology.

The application and final program submission deadlines are February 13 and March 25, respectively.

Over almost the same period, Fixstars, which has been building up a proven track record as a Cell/B.E. total solution company, is running its own Cell programming contest in Japan called “Hack the Cell 2009.”

Its application and final program submission deadlines are January 31 and March 6, respectively.

The assignment of Hack the Cell 2009 is "Optimization of the Mersenne Twister Random Number Generator."
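For context: MT19937, the standard 32-bit Mersenne Twister by Matsumoto and Nishimura, keeps a state of 624 words and regenerates all of them in one batch pass before tempering the outputs, a loop that looks well suited to the SPEs' 128-bit SIMD units. A compact, untuned reference-style sketch in C (the mt_ names are mine):

#include <stdint.h>

/* Minimal MT19937 (32-bit Mersenne Twister), following the
 * Matsumoto-Nishimura reference algorithm. */
#define MT_N 624
#define MT_M 397
#define MT_MATRIX_A   0x9908b0dfUL
#define MT_UPPER_MASK 0x80000000UL
#define MT_LOWER_MASK 0x7fffffffUL

static uint32_t mt[MT_N];
static int mti = MT_N + 1;          /* MT_N + 1 means "not seeded yet" */

void mt_seed(uint32_t s)
{
    mt[0] = s;
    for (mti = 1; mti < MT_N; mti++)
        mt[mti] = 1812433253UL * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + mti;
}

uint32_t mt_next(void)
{
    uint32_t y;

    if (mti >= MT_N) {              /* regenerate all 624 state words at once */
        if (mti == MT_N + 1)
            mt_seed(5489UL);        /* default seed from the reference code */
        for (int k = 0; k < MT_N; k++) {
            y = (mt[k] & MT_UPPER_MASK) | (mt[(k + 1) % MT_N] & MT_LOWER_MASK);
            mt[k] = mt[(k + MT_M) % MT_N] ^ (y >> 1)
                    ^ ((y & 1) ? MT_MATRIX_A : 0UL);
        }
        mti = 0;
    }

    y = mt[mti++];
    y ^= (y >> 11);                 /* tempering improves equidistribution */
    y ^= (y << 7)  & 0x9d2c5680UL;
    y ^= (y << 15) & 0xefc60000UL;
    y ^= (y >> 18);
    return y;
}

Presumably the batch regeneration loop and the tempering pipeline are where contestants will hunt for SIMD and dual-issue speedups.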

The champion of the student category will be awarded priority for a Fixstars scholarship of 600,000 yen per year and an invitation to a five-day trip to San Francisco.

Fixstars' CTO has posted a message in Japanese on the contest's web site, which reads roughly as follows: "We clearly lack smart programmers who can extract great performance from current high-performance processors like the Cell/B.E. Many capable programmers also remain hidden, without opportunities to demonstrate their superiority. We recognize that these situations are serious problems, and that it is meaningful to provide opportunities for excellent programmers to be recognized."

I imagine most HPC people would agree with his message.

Jan 25, 2009

Potsdam Scientists to Tackle New Type of Weather Simulations with IBM iDataPlex

The new iDataPlex computer at the Potsdam Institute for Climate Impact Research (PIK) was put into operation in January and offers 30 teraflops of processing power, as described below.

Potsdam Scientists to Tackle New Type of Weather Simulations with IBM iDataPlex

One of the key reasons iDataPlex stands apart from other high-performance computing platforms is its energy efficiency, estimated at approximately 230 megaflops of performance per watt.
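As a back-of-the-envelope check of my own (not a figure from the press release): if the whole machine ran at that efficiency, 30 teraflops would imply a total power draw on the order of 130 kW,

\frac{30 \times 10^{12}\ \mathrm{flop/s}}{230 \times 10^{6}\ \mathrm{flop/s\ per\ W}} \approx 1.3 \times 10^{5}\ \mathrm{W} \approx 130\ \mathrm{kW}.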

Part of IBM’s “Big Green” initiative, iDataPlex maximizes performance per watt with innovative cooling techniques such as a rear-door heat exchanger.

Although the IBM iDataPlex deserves to be much better known as one of the smarter commodity-based clusters, good explanations of it are rarely encountered (or is that only in Japan?).

I just happened upon a chatty and interesting article about iDataPlex by Linux Magazine's HPC editor, Douglas Eadline.

Doug Meets The iDataPlex

He visited Dave Weber, program director of the Wall Street Center of Excellence, Worldwide Client Centers for IBM, and held a dialogue with him about iDataPlex.

First things first: he learned that iDataPlex stood for "Large Scale Internet Data Center" and hoped he was not in the wrong meeting, being an HPC guy. During the meeting, however, he found that "this was obviously not your typical 1U server node. Indeed, it was almost like someone asked a group of HPC users to design a node," and he went on to describe his findings about low-power fans, cabling, the smarter combination of commodity parts, energy efficiency, cost performance, TCO, and so on.

Finally Doug commented:
Instead of calling it the “iDataPlex for Web 2.0”, they should have just called it the “iDataPlex for HPC 2.0”!

I share Doug's sentiment very well, and IBM should talk about iDataPlex's value in HPC better and more often, as he does, shouldn't they?

Jan 21, 2009

HPC Will Grow under the Obama Administration

The House Appropriations Committee released the bill text and the accompanying committee report "THE AMERICAN RECOVERY AND REINVESTMENT ACT OF 2009 [DISCUSSION DRAFT]" on January 15.

The Computing Research Policy Blog quickly analyzed it in

More Detail on 2009 House Dem Stimulus and Recovery Plan (January 15, 2009):
http://www.cra.org/govaffairs/blog/archives/000715.html

The blog concluded that "In summary, though, this looks awfully good to us and will likely go a long way towards recharging the Nation's innovation engine."

The conclusion looks natural because a large amount of additional investment is scheduled for 2009 in the report. For example, the Office of Science in the Department of Energy will see an increase of $2 billion under this plan, NSF will see an increase of $3 billion overall, and NIH will receive $1.5 billion for grants to improve university research facilities plus another $1.5 billion in new research funding. These are well-known players driving HPC.

Therefore I believe that the possibility of postponing U.S. petascale computing projects, such as the ongoing Blue Waters project, due to the U.S. economic crisis has clearly been wiped away.

According to a revised IDC HPC market forecast that I received from Earl Joseph yesterday morning, the new base-case forecast predicts a decline in 2008, followed by a further modest decline in 2009, and then a return to growth from 2010 through 2012, resulting in an overall revenue CAGR of 3.1% from 2007 to 2012.
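For readers not used to the term, CAGR (compound annual growth rate) is the constant yearly rate that compounds to the same overall change; a 3.1% CAGR over 2007-2012 amounts to only about 16.5% total revenue growth across the five years, since

\left(\frac{R_{2012}}{R_{2007}}\right)^{1/5} - 1 = 0.031 \quad\Rightarrow\quad \frac{R_{2012}}{R_{2007}} = 1.031^{5} \approx 1.165,

where R_t denotes worldwide HPC revenue in year t.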

For 2008-2012, although the HPC market looks very severe in major industrial sectors such as U.S. automotive and finance, they assume that the government and academic sectors will be neutral, and that the national security, energy, game, and digital-content sectors, together with the petascale initiatives, will act as accelerators. I think this is reasonably consistent with the stimulus and recovery plan.


While the Obama Administration rapidly launched such a stimulus and recovery plan for economic resuscitation, our Japanese government is spending its time on childish arguments in the Diet and cannot yet show an aggressive plan of its own, falling far behind. That makes me very anxious.