The following paper describes gputools, an R package that implements selected R functions in parallel on a graphics processing unit (GPU):
- Buckner et al., "The gputools package enables GPU computing in R", Bioinformatics, Vol. 26, No. 1, 2010, pp. 134–135.
In the experimental section, they compare, on a 4-core computer, the performance of the program with the GPU against its performance without the GPU. They write the following:
We chose to use a single thread of the R environment in our test, since this is the way that most users interact with R.
So the authors establish their baseline by running the experiments on a single core (in serial).
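
For concreteness, I imagine the comparison looks roughly like the following (a sketch only; the data dimensions are made up, and I am assuming gputools' gpuCor accepts a matrix much like base R's cor does):

    library(gputools)

    set.seed(1)
    x <- matrix(rnorm(10000 * 200), nrow = 10000)  # hypothetical test data

    system.time(cor(x))     # baseline: one R thread on one CPU core
    system.time(gpuCor(x))  # GPU version provided by gputools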
But the experimental conditions on the GPU side are unclear (to me). When using a GPU, for efficiency we should simultaneously make use of the CPU cores. If the authors used the remaining CPU cores in the computer (which would be the sensible thing to do in an optimised algorithm), then the speedup would be due to the additional CPU cores as well as the GPU over the baseline, and would thus be artificially inflated by a factor of slightly less than 4.
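
If I wanted to separate the two effects myself, I would add a third measurement: the same operation on all 4 CPU cores with no GPU. A rough sketch using the parallel package, with matrix multiplication as a stand-in kernel (my choice of kernel and block-splitting, not something taken from the paper):

    library(parallel)

    set.seed(1)
    A <- matrix(rnorm(2000 * 2000), nrow = 2000)
    B <- matrix(rnorm(2000 * 2000), nrow = 2000)

    # Serial baseline, matching the paper's single-threaded setup
    system.time(C1 <- A %*% B)

    # 4-core CPU reference: split B's columns across workers
    # (mclapply forks, so this runs on Unix-alikes, not Windows)
    blocks <- splitIndices(ncol(B), 4)
    system.time({
      parts <- mclapply(blocks,
                        function(j) A %*% B[, j, drop = FALSE],
                        mc.cores = 4)
      C4 <- do.call(cbind, parts)
    })

If the GPU timings in the paper also had the other 3 cores doing useful work, they should arguably be judged against the second number, not the first.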
How should this experiment be interpreted?
In particular, I would like to know whether my interpretation above is correct, and if so, what this experiment actually tells us.