Jan 30, 2009

Besides Cell/B.E., a GPU Programming Contest Has Just Started in Japan



January 28 was the application deadline for “Cell Challenge 2009,” a multi-core programming contest in Japan.

The specified assignment is "Calculation of the Edit Distance between Two Character Strings." The champion will be determined by the combined scores of the preliminary and final rounds; the final rounds conclude on March 20.
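For readers unfamiliar with the problem, the sketch below shows the textbook dynamic-programming algorithm for edit (Levenshtein) distance in plain C, using two rolling rows so memory stays O(n). This is only an illustration of the underlying computation; the contest's exact input format, scoring, and the Cell/B.E.-specific parallelization are of course defined by the official rules.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Textbook Levenshtein distance: cost 1 for insert, delete, substitute.
       Two rolling rows keep memory at O(n) instead of O(m*n). */
    int edit_distance(const char *a, const char *b)
    {
        size_t m = strlen(a), n = strlen(b);
        int *prev = malloc((n + 1) * sizeof *prev);
        int *curr = malloc((n + 1) * sizeof *curr);
        for (size_t j = 0; j <= n; j++)
            prev[j] = (int)j;                  /* distance from the empty prefix */
        for (size_t i = 1; i <= m; i++) {
            curr[0] = (int)i;
            for (size_t j = 1; j <= n; j++) {
                int sub = prev[j - 1] + (a[i - 1] != b[j - 1]);
                int del = prev[j] + 1;
                int ins = curr[j - 1] + 1;
                int best = sub < del ? sub : del;
                curr[j] = best < ins ? best : ins;
            }
            int *tmp = prev; prev = curr; curr = tmp; /* roll the rows */
        }
        int d = prev[n];
        free(prev); free(curr);
        return d;
    }

    int main(void)
    {
        printf("%d\n", edit_distance("kitten", "sitting")); /* prints 3 */
        return 0;
    }

The interesting part for contestants is presumably restructuring this quadratic recurrence, for example along anti-diagonals, so the work can be spread across the Cell/B.E.'s SPEs.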

The contest is led by three special interest groups of the Information Processing Society of Japan and supported by the Kitakyushu Foundation for the Advancement of Industry, Science and Technology (FAIS) and four Cell/B.E. developers, including Toshiba.

“GPU Challenge 2009” newly started on January 21 as a companion program to Cell Challenge 2009. It is supported by the Global Scientific Information and Computing Center (GSIC) of the Tokyo Institute of Technology, NVIDIA, and others.

The specified problem of GPU Challenge 2009 is the same as that of Cell Challenge 2009. Applicants must run their programs under a CUDA programming environment provided by the GPU Challenge 2009 executive committee, on computers with NVIDIA GPUs (equivalent to the Tesla S1070-400) hosted at GSIC, Tokyo Institute of Technology.

The deadlines for application and final program submission are February 13 and March 25, respectively.

Over almost the same period, Fixstars, which has been building a proven track record as a Cell/B.E. total-solution company, is running its own Cell programming contest in Japan, “Hack the Cell 2009.”

Its deadlines for application and final program submission are January 31 and March 6, respectively.

The assignment of Hack the Cell 2009 is "Optimization of the Mersenne Twister Random Number Generator."
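For context, the core of the standard MT19937 generator (after Matsumoto and Nishimura's well-known reference implementation) looks roughly like the sketch below; which variant Fixstars actually targets, and how entries are scored, are defined by the contest rules, so this is only illustrative.

    #include <stdint.h>

    /* MT19937: 624-word state, regenerated in batches, then tempered. */
    #define N 624
    #define M 397
    #define MATRIX_A   0x9908b0dfUL  /* constant vector a */
    #define UPPER_MASK 0x80000000UL  /* most significant bit */
    #define LOWER_MASK 0x7fffffffUL  /* lower 31 bits */

    static uint32_t mt[N];
    static int mti = N + 1;          /* mti == N+1 means mt[] is not seeded */

    void init_genrand(uint32_t s)
    {
        mt[0] = s;
        for (mti = 1; mti < N; mti++)
            mt[mti] = 1812433253UL * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + mti;
    }

    uint32_t genrand_uint32(void)
    {
        uint32_t y;
        static const uint32_t mag01[2] = { 0x0UL, MATRIX_A };

        if (mti >= N) {              /* regenerate all N words at once */
            int kk;
            if (mti == N + 1)
                init_genrand(5489UL); /* default seed */
            for (kk = 0; kk < N - M; kk++) {
                y = (mt[kk] & UPPER_MASK) | (mt[kk + 1] & LOWER_MASK);
                mt[kk] = mt[kk + M] ^ (y >> 1) ^ mag01[y & 0x1UL];
            }
            for (; kk < N - 1; kk++) {
                y = (mt[kk] & UPPER_MASK) | (mt[kk + 1] & LOWER_MASK);
                mt[kk] = mt[kk + (M - N)] ^ (y >> 1) ^ mag01[y & 0x1UL];
            }
            y = (mt[N - 1] & UPPER_MASK) | (mt[0] & LOWER_MASK);
            mt[N - 1] = mt[M - 1] ^ (y >> 1) ^ mag01[y & 0x1UL];
            mti = 0;
        }

        y = mt[mti++];
        /* Tempering */
        y ^= (y >> 11);
        y ^= (y << 7) & 0x9d2c5680UL;
        y ^= (y << 15) & 0xefc60000UL;
        y ^= (y >> 18);
        return y;
    }

The recurrence looks serial, but the batch regeneration of 624 words at a time and the bitwise tempering are exactly the kind of code that rewards SIMDization on the SPEs, which is presumably the point of the assignment.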

The champion of the student category will be awarded priority for a Fixstars scholarship of 600,000 yen per year and an invitation to a five-day trip to San Francisco.

Fixstars' CTO posted a Japanese message on the contest's web site to this effect: "We clearly lack skilled programmers who can extract great performance from current high-performance processors such as Cell/B.E. Many capable programmers also remain hidden, with no opportunity to demonstrate their abilities. We recognize that this is a serious problem, and it is meaningful to provide opportunities for excellent programmers to be recognized."

Most HPC people would agree with his message, I imagine.

Jan 25, 2009

Potsdam Scientists to Tackle New Type of Weather Simulations with IBM iDataPlex

The Potsdam Institute for Climate Impact Research (PIK)'s new iDataPlex computer was put into operation in January and offers 30 teraflops of processing power, as described below:

Potsdam Scientists to Tackle New Type of Weather Simulations with IBM iDataPlex

One of the key reasons iDataPlex stands apart from other high-performance computing platforms is its energy efficiency, estimated at approximately 230 megaflops of performance per watt.

As part of IBM’s “Big Green” initiative, iDataPlex maximizes performance per watt with innovative cooling techniques such as a rear-door heat exchanger.

Although the IBM iDataPlex deserves to be far better known as one of the smarter commodity-based clusters, good explanations of it are hard to come by (perhaps only in Japan?).

I happened upon a chatty and interesting article about iDataPlex by Linux Magazine’s HPC editor, Douglas Eadline:

Doug Meets The iDataPlex

He visited Dave Weber, program director of the Wall Street Center of Excellence, Worldwide Client Centers for IBM, and discussed iDataPlex with him.

First things first: he knew iDataPlex meant "Large Scale Internet Data Center," and he hoped he was not in the wrong meeting, being an HPC guy. During the meeting, however, he found that "this was obviously not your typical 1U server node. Indeed, it was almost like someone asked a group of HPC users to design a node," and went on to describe its low-power fans, cabling, smart combination of commodity parts, energy efficiency, cost performance, TCO, and so on.

Finally, Doug commented:
"Instead of calling it the 'iDataPlex for Web 2.0', they should have just called it the 'iDataPlex for HPC 2.0'!"

Doug's message resonates with me very well, and IBM should talk about the value of iDataPlex in HPC better and more often, as he does, shouldn't they?

Jan 21, 2009

HPC Will Grow under the Obama Administration

The House Appropriations Committee released the bill text and the accompanying committee report "THE AMERICAN RECOVERY AND REINVESTMENT ACT OF 2009 [DISCUSSION DRAFT]" on January 15.

The Computing Research Policy Blog quickly analyzed it in

More Detail on 2009 House Dem Stimulus and Recovery Plan (January 15, 2009):
http://www.cra.org/govaffairs/blog/archives/000715.html

The blog concluded: "In summary, though, this looks awfully good to us and will likely go a long way towards recharging the Nation's innovation engine."

The conclusion looks natural because a large amount of additional investment is scheduled for 2009 in the report. For example, the Office of Science in the Department of Energy will see an increase of $2 billion under this plan, NSF will see an increase of $3 billion overall, and NIH will receive $1.5 billion in grants to improve university research facilities and another $1.5 billion in new research funding. These are well-known players driving HPC.

Therefore I believe that the possibility of postponing U.S. petascale computing projects, such as the ongoing Blue Waters project, due to the U.S. economic crisis has clearly been wiped away.

According to a revised IDC HPC market forecast that I received from Earl Joseph yesterday morning, the new base-case forecast predicts a decline in 2008, followed by a further modest decline in 2009 and a return to growth from 2010 to 2012, resulting in an overall CAGR in revenue of 3.1% from 2007 to 2012.

For 2008-2012, although the HPC market looks very weak in major industrial sectors such as U.S. automotive and finance, IDC assumes that the government and academic sectors will be neutral and that the national security, energy, gaming, and digital-content sectors, together with the petascale initiatives, will act as accelerators. I think this is reasonably consistent with the stimulus and recovery plan.


While the Obama administration rapidly set such a stimulus and recovery plan in motion, our Japanese government is wasting time on childish arguments in the Diet and has yet to show an aggressive plan of its own, lagging far behind. That is quite worrying.

Jan 18, 2009

Japan's Next Generation Supercomputer R&D Budget in FY2009

According to an ATIP news article (issued 12 January 2009), Japan's Next Generation Supercomputer (NGSC) project will receive a budget of 19,000 Myen (million yen) for FY2009. The total plan (FY2006 – FY2012) is expected to reach 115,447 Myen.
The budget was recently sent to the Diet and is presently under discussion.

The contents of the NGSC Project budget in FY2009 are as follows:
- Pilot manufacturing and testing of the system: 10,992 Myen
- Grand Challenge Software R&D: 1,877 Myen
- Facility Construction: 6,131 Myen

The Grand Challenge Software R&D Budget includes the following projects:
1. "Next Generation Integrated Nanoscience Simulation Software" Project, managed by the Institute for Molecular Science (IMS) at Okazaki
2. “Next-Generation Integrated Living Matter Simulation” Project, managed by RIKEN Wako Institute.

Jan 10, 2009

HPC Symposia in Tokyo in January



・ International Workshop for Peta-scale Application Development Support Environment

The 3rd Advanced Supercomputing Environment (ASE) meeting will be held as an international workshop on peta-scale application development support environments on January 20, 2009, in the large conference room of the Information Technology Center (ITC) of the University of Tokyo.

The invited speaker is Dr. Jonathan Carter (NERSC, Lawrence Berkeley National Laboratory), who will speak on "Optimizing Scientific Applications for Multicore Architectures and T2K Open Supercomputer."

The details of the announcement are available at
http://nkl.cc.u-tokyo.ac.jp/seminars/0901-ASE003/


・ HPCS2009
High Performance Computing and Computational Science Symposium 2009


This symposium will be held on January 22 - 23, 2009, at the VLSI Design and Education Center (VDEC) of the University of Tokyo. A discounted registration fee applies through January 15. In particular, students can register free of charge instead of paying the normal 17,000-yen registration fee.
The organizers must be generous-minded.

In the morning session on large-scale applications, a presentation about Blue Gene/P, a machine not covered so often in Japan, is scheduled: "Optimization of the first-principles molecular dynamics software PHASE on Blue Gene/P," by H. Imai and T. Moriyama (IBM Japan).

Dr. Jonathan Carter (NERSC) will deliver the keynote address, "Optimizing Scientific Applications for Multicore Architectures," on the second day.

The final session will focus on GPUs. The symposium's best-paper award winner, "Speed-up by CUDA of the singular value decomposition for the square matrix," by T. Fukaya, Y. Yamamoto (Nagoya University), T. Uneyama, and Y. Nakamura (Kyoto University), will be presented in this session. This may imply that interest in and expectations for accelerators are growing in Japan, too.

The details of the symposium are available at http://www.hpcc.jp/hpcs/

Jan 1, 2009

Happy New Year from Tokyo!




Greetings for the new year!

Hope this year is filled with good times, happiness, and success!







(Mt. Fuji from Amagi highland)