There have to be some security implications regarding this feat:
A home-brew supercomputer, assembled from off-the-shelf personal computers in just one month at a cost of slightly more than $5 million, is about to be ranked as one of the fastest machines in the world.
Word of the low-cost supercomputer, put together by faculty, technicians and students at Virginia Polytechnic Institute, is shaking up the esoteric world of high performance computing, where the fastest machines have traditionally cost from $100 million to $250 million and taken several years to build.
The Virginia Tech supercomputer, put together from 1,100 Apple Macintosh computers, has been successfully tested in recent days, according to Jack Dongarra, a University of Tennessee computer scientist who maintains a listing of the world’s 500 fastest machines.
The official results for the ranking will not be reported until next month at a supercomputer industry event. But the Apple-based supercomputer, which is powered by 2,200 I.B.M. microprocessors, was able to compute at 7.41 trillion operations a second, a speed surpassed by only three other ultra-fast computers.
How is this going to impact exports?
Well, you could probably still export the computers separately, but the plans or coding for letting them co-process information would likely be export controlled.
I did hear a rumor back when the PS2 came out that they couldn’t export it to unfriendly countries like Iraq because it was the first console system that had enough processing power to calculate ballistic missile trajectories. Never could confirm if that was true or not, though.
Damn, and I only budgeted $4.7 million for my next PC…
What, and pass up tint control? For shame, RW…
Affect exports of what? You’ve always been able to cluster processors together in a similar way. FYI, the number one supercomputer in the world is in Yokohama. Note also that number ten is in France; so much for export controls.
Cluster computing is not a new thing; cluster computing operating systems are open-source and freely available anywhere. There are a number of benchmarks with which a cluster computer can be assessed; the larger clusters (such as the ones at Los Alamos and LLNL) aren’t designed expressly to score well on those benchmarks, they’re designed to attack a particular class of computational problems.
I’m thinking the export-control issue is a non-issue. Even Sweden, the Russian Federation, and China show up in the top 100 supercomputer list.
It is sorta interesting to note that TotalFinaElf comes in at #66.
What is unstated here is that while computing power is merrily doubling on the schedule laid out by Moore’s law, the problems we’re worried about (nuclear weapons development simulations, biowar scenario gaming, et al.) stay fixed in their complexity. Moore’s law can be expressed as a function of dollars as well as of computing power: for a fixed computational capability, the cost halves every couple of years. We’re at the point where serious export control would mean controlling our garbage dumps, because the computing power thrown away each month by the average large metro garbage system is sufficient to do these calculations. Once you realize that, it’s obvious that the game is up.
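The cost side of that claim is easy to sketch numerically. A minimal example, assuming a two-year cost-halving period; the $5 million figure comes from the article, and the ten-year horizon is purely illustrative:

```python
# Moore's law expressed as cost: for a fixed amount of computing power,
# the price halves every `halving_period` years (two years assumed here).

def cost_for_fixed_power(initial_cost, years, halving_period=2.0):
    """Cost of a fixed computational capability after `years` years."""
    return initial_cost * 0.5 ** (years / halving_period)

# A $5,000,000 cluster's worth of computing power, a decade later:
print(cost_for_fixed_power(5_000_000, 10))  # -> 156250.0
```

Five halvings in ten years divides the price by 32, which is the whole export-control problem in one line.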
This is one more reason why it is no longer tolerable to isolate and contain the various lunatics running sovereign nations around the world. They now have access to too much destructive power.
Computing power cannot be export-controlled unless you somehow regulate processors, which would require something akin to an NRC for Intel, NEC, HP and a host (NPI) of other processor makers. Therefore, large-scale computing capability cannot be export-controlled.
Hell, even encryption methodologies can’t be controlled; encryption is just math, and even countries that are still fairly far behind us in technology aren’t necessarily lagging at all in the mathematics of encryption. We can export-control advanced encryption algorithms all we want, but that’s not going to do much to prevent equally good algorithms from being used.
Still, the ability to simulate nuclear bombs is of limited usefulness. Who is really going to mourn when Sim Milwaukee is blown off the simulated map? What the large-scale sims MIGHT be able to do is enable simulated testing of advanced nuclear weapons designs that could lead to more compact weapons. One would STILL have to obtain the materials, and one would STILL have to test to validate the simulations. And, of course, one would STILL have to engage in a rather significant programming effort to make use of all that computing power. The programming effort is a much, much larger technological challenge than throwing together a Beowulf cluster. So is actually getting the cluster to work properly, and figuring out how to separate the computational problem into pieces that can be solved in parallel.
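That last step, carving a problem into independently solvable pieces, is worth a sketch. The example below splits an embarrassingly parallel sum across local threads; a real simulation code would need far more communication between the pieces, and a real cluster would distribute them across machines (e.g., with MPI) rather than across threads:

```python
# Sketch of decomposing a problem into pieces that can be solved in parallel.
# This is the trivially parallel case; the hard part on a real cluster is the
# communication between pieces, which this example deliberately avoids.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """Solve one independent piece of the problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Decompose: one contiguous chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Solve the pieces concurrently, then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

print(parallel_sum_of_squares(range(1000)))  # -> 332833500
```

The decomposition (the `chunks` line) is the easy part here; for problems where each piece depends on its neighbors' results, choosing the split is where the real programming effort goes.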
Emperor Misha I
I’m just thinking:
If you can get that sort of power out of 1,100 *Macs*, imagine the sort of power you’d get by using a similar number of *real* computers!
[runs away, cackling crazily ;-) ]
Lonewacko: currently in NC and homesick for the southwest. At least a little bit.
1100 Macs? What’s that, this month’s production?
But seriously, people have been stringing together computers for decades. However, most of them would (hopefully) choose to string together cheaper computers, not ones with such a large profit margin. Unless Apple gave them a discount, of course, but in that case their success isn’t very repeatable for others.
The cheaper solution would seem to have been buying a bunch of bare boards and putting them in nitrogen-cooled cages.
Partially agree, Lonewacko. Still need ethernet support and (possibly) a local hard drive. Almost without question, a cluster of, say, 2500 P4s could be constructed that would be faster and cheaper. For the dual G5, you’re paying nearly $3k just for the box. I believe you can get 2.4 GHz P4 complete systems (sans monitor) for under $750. From what I’ve seen, Apple has once again skewed its benchmark testing to hobble the P4s as compared with the G5; neutral testing has shown that the PPC used in the G5 is not faster than comparable P4s.
If you can build said wire cage cheaply, and can more than realize the cost difference by elimination of the unneeded hardware (video card, case, etc) then I fully agree that your idea is probably a better one.
Um, the technical specs are available at the respective sites, as are prices, but no, P4s are not so price-competitive that you can buy 2500 for the price of 1100 PPC G5 machines. Nitrogen cooling costs money.
The 1100 G5 machines are also dual-processor, something the P4 is not built for. Also, the VMX/Altivec/Velocity Engine vector processing unit runs rings around Intel’s offering, which is the real reason they went PPC. The problems they’re studying are particularly compatible with vector processing, and PPC’s where it’s at in that field.
Finally, Intel’s roadmap has the P4 going from 3.2GHz to 3.6GHz over the next 12 months, a gain of 400MHz. IBM’s roadmap for the same period has the PPC 970 (IBM’s designation for the chip in the Mac G5) going from 2.0GHz to 3.0GHz, a gain of 1000MHz. Since 2.0GHz PPC machines are already performance-competitive with 3.2GHz P4 machines, I can only imagine what the results will be when the clock-speed gap is cut 50% in PPC’s favor.
Intel’s likely to be the slow chip vendor for the coming year as Apple and AMD both blow their doors off. AMD is likely to edge out Apple but for heavy vector processing tasks, Apple will be #1.
Lutas, you can buy a 2.4 GHz machine, right now, in volume of 1, for $750. For an extra forty bucks you can throw in a 10/100/1000 ethernet card. So, if you care to do the math, 1100 dual G5s will cost in the neighborhood of three million bucks. 2500 P4s will cost under two million. Dual-processor machines are actually slower, when doing parallel tasks, than two single-processor machines.
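The arithmetic behind those figures, using the prices quoted in this thread (the exact dual-G5 price is an assumption; "nearly $3k" is all that's stated):

```python
# Back-of-the-envelope cluster cost comparison using the thread's numbers.
p4_node = 750 + 40    # $750 for a 2.4 GHz P4 system plus $40 gigabit NIC
g5_node = 2960        # assumed dual-G5 price; the thread says "nearly $3k"

p4_cluster = 2500 * p4_node
g5_cluster = 1100 * g5_node
print(p4_cluster)     # -> 1975000 ("under two million")
print(g5_cluster)     # -> 3256000 ("in the neighborhood of three million")
```

Note the comparison assumes 2500 single-P4 nodes really do match 1100 dual-G5 nodes on the workload in question, which is exactly the benchmark dispute the thread is arguing about.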
Not saying the P4 is the best or most cost-effective approach, mind you. The Athlon is arguably better. The hard part is knowing which set of benchmarks to believe; that determines how you calculate the best way to hit your performance goals.