For the sake of clarity and sanity, let me first point out that you are reading an article written in the fall of 2008. The importance of this will become obvious when I reveal my topic: parallel programming for the multicore age.
You thought I was about to claim first-past-the-post on a new technological challenge for semiconductor design? Well, there are more on the way – each successive process node beyond 130nm has brought forth something nasty and unexpected. However, given our existing nexus of low-power pressures, expanding verification burdens and so forth, I am in no rush to add to the list. More to the point, though, the whole issue of multicore programming is one that, I fear, may be even thornier than is already believed.
Elsewhere in this issue, Critical Blue’s David Stewart explains the practical reasons why the Multicore Association has set up a working group aimed at discovering and promoting best practice in programming based on what software engineers are doing in the parallel world today. Yes, the future may bring forth a language that sweeps away current concerns. A certain rhetorical cynicism aside, the semiconductor business is built on such timely innovation. But, without stealing too much of David’s thunder, he and his Multicore Programming Practices (MPP) partners are right to observe that we cannot simply stop and wait for that to happen. Products that exploit parallelism are already on sale – they include some of the latest EDA tools.
My beef lies elsewhere. A little research around the multicore and parallel programming fields quickly reveals that this is hardly virgin territory. Languages that promise to overcome Amdahl’s Law and deliver 20X, 40X and even 60X boosts in performance? Well, there are hundreds that have tried to do just that – literally hundreds.
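As a back-of-the-envelope check on those claims, recall what Amdahl’s Law actually says: speedup is capped by the serial fraction of a program, no matter how many cores you throw at it. The sketch below (function name is my own, for illustration) computes that ceiling – even code that is 95 percent parallel tops out at roughly 15X on 64 cores, well short of the 60X on the brochure.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup under Amdahl's Law.

    parallel_fraction: share of runtime that can run in parallel (0.0-1.0)
    cores: number of processing cores applied to the parallel portion
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even an impressively parallel program hits a hard ceiling:
print(amdahl_speedup(0.95, 64))   # roughly 15.4, not 60
print(amdahl_speedup(0.50, 1000)) # under 2.0 - serial code dominates
```

The second call makes the editorial point numerically: with half the runtime serial, an unlimited core count never even doubles performance.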
And there’s the rub. Powerful, smart players in both industry and academia (among them the high-end operations at IBM, Intel and Cray) have been trying to crack this nut in the supercomputing and extreme communications spaces for years, decades even. Yet we come back to the fact that most projects are based on the clever use of existing languages, tools and coding methodologies, with an occasional resort to brute force. A great deal more money and research time will now be thrown at the problem as it becomes ‘mainstream’. However, you might suspect that it will still be with us in fall 2009, fall 2010 and beyond. That background fear actually makes the MPP project even more relevant. What if it is the only game in town?