You do not see anyone publishing papers with titles like “N-fold speed-up of algorithm X by using N computers”.
The first time I heard a similar criticism was from Alexandros Stamatakis, who pointed out that we should always compare the performance of a GPU algorithm against the same effort invested in a multi-core CPU implementation. Since then I have tried to be more cautious about grand statements of improvement. And yes, I completely agree with this criticism, even though I don’t place that much importance on speed gains.
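To make the point concrete, here is a minimal sketch of the difference between a headline speed-up and a fair one. The timing numbers are hypothetical, purely for illustration: the same workload measured on a naive single-core baseline, on a tuned multi-core CPU implementation, and on a GPU.

```python
# Hypothetical timings (seconds) for the same workload; illustrative only.
t_cpu_single = 120.0   # naive single-core baseline
t_cpu_multi = 18.0     # equal-effort multi-core CPU implementation
t_gpu = 12.0           # GPU implementation

def speedup(baseline, candidate):
    """Speed-up of `candidate` relative to `baseline`."""
    return baseline / candidate

naive = speedup(t_cpu_single, t_gpu)  # the headline number
fair = speedup(t_cpu_multi, t_gpu)    # against an equal-effort CPU port
print(f"naive speed-up: {naive:.1f}x, fair speed-up: {fair:.1f}x")
```

With these (made-up) numbers, the paper abstract would say “10x faster”, while the honest comparison is closer to 1.5x.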
My feeling is that thinking about novel GPU algorithms is always worth the effort, since a different path for doing a given computation, though currently slow or inefficient, can lead to faster algorithms in the future. It can also give vendors feedback on where they should direct their efforts – provided there’s competition, which in the end is the problem with the GPU approach…
(via Jason Stajich)