Heterogeneous computing, where hardware vendors mix a variety of processors (graphics processors, CPUs, embedded chips or DSPs) on a server to increase energy efficiency and processing speed, will become a reality in the data center in the next decade, says an IBM executive. Such arrangements increase complexity and can cause headaches for developers and customers, but cloud computing could alleviate some of those problems.
“The cloud could make heterogeneous computing possible,” says Tom Bradicich, fellow and VP of Systems and Technology Group at IBM, who spoke with me yesterday about the changes IBM is seeing in server design. Commodity boxes packed with CPUs are less compelling, and hybrid computing is on the rise (although hybrid servers still account for a very small number of systems today). Packing servers with multiple CPUs is like throwing a team of day laborers at building a house. They’ll get the job done, but there’s likely a better way to divvy up the job among a smaller number of experts, who can do it with less wasted time and energy.
IBM makes one of these “expert” chips, called the Cell, which it’s pushing into data centers and high-performance computing. Other chip vendors, such as Texas Instruments and Tensilica, which are pushing DSPs for specialty computing, and Nvidia, which is pushing GPUs, would agree. Even data center operators are experimenting with different CPUs for different tasks, creating custom workflows that save energy.