Methods like k-means rely on a distance measure such as the Euclidean distance. As you correctly assumed, the authors' intention was probably to make the clustering problem on somewhat trivial data sets like iris harder by adding extra features that are not discriminative with respect to the actual cluster structure in the data.
As I understand it, the authors want to 'weaken' the notion of similarity that the distance measure provides by adding these unrelated features, in order to demonstrate the quality of their approach.
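To illustrate why such features hurt (a quick sketch, not the authors' exact setup), appending uniform noise on roughly the same scale as the iris measurements noticeably degrades a plain k-means clustering:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

# Baseline: k-means on the original four iris features.
base = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Append two uniform-noise features on the same scale as the data,
# so they contribute comparably to the Euclidean distance.
noise = rng.uniform(X.min(), X.max(), size=(X.shape[0], 2))
X_noisy = np.hstack([X, noise])
noisy = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_noisy)

print("ARI original:  ", adjusted_rand_score(y, base))
print("ARI with noise:", adjusted_rand_score(y, noisy))
```

The noise dimensions add a random offset to every pairwise distance, so points that are close in the informative features no longer have to be close overall.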
Unfortunately I cannot view the referenced paper, otherwise I could go into more detail. Personally I'm a bit surprised, because this is the first time I have heard of such a procedure.
Edit:
Regarding your question:

> when the author mentions two or four new noisy features, how to predict which of the original feature(s) he used to add noise (uniformly random noise)?
I only skimmed the paper briefly, and I saw no indication that the noisy features are actually derived from existing features, merely that the additional data sets are derived from the original data set.
The only detail they give regarding this is:

> we have derived datasets from the four originals, containing features with uniformly random noise.
This sounds to me like they simply generated several uniformly distributed random variables and added them as additional features. Given that the contribution of their approach is some kind of feature weighting, this probably serves to demonstrate that their method recognizes these new features as mere distractions and ignores them in the clustering process.
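The paper's actual weighting scheme is presumably more involved, but a per-feature weighted Euclidean distance is a minimal sketch of the mechanism: driving the weights of the noise features toward zero removes them from the distance entirely.

```python
import numpy as np

def weighted_euclidean(a, b, w):
    # Per-feature weighted Euclidean distance; w >= 0 elementwise.
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Hypothetical example: 4 informative iris features + 2 appended noise features.
a = np.array([5.1, 3.5, 1.4, 0.2, 0.73, 0.11])
b = np.array([4.9, 3.0, 1.4, 0.2, 0.02, 0.95])
w = np.array([1, 1, 1, 1, 0, 0])  # zero weight effectively drops the noise
print(weighted_euclidean(a, b, w))
```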