**Update**
As a comment mentions, Keogh and Lin have a great paper on this subject. To my mind, they conclusively show that although the technique is commonly applied as detailed in my answer below, its usefulness is called into question. Read the paper yourself; it is well worth it:
http://www.cs.ucr.edu/~eamonn/meaningless.pdf
**Update**
I have seen k-means used successfully on time-series data.
The general concern with k-means here is the number of dimensions. More specifically, when you structure a time-series problem for the k-means algorithm, you usually slice the data into windows, each containing a contiguous portion of the time-series. Depending on your use-case, you will produce many window samples, which may be non-overlapping or overlapping.
In this setup, each time-step within a window is treated as a dimension by the k-means algorithm. If the number of time-steps within your window is large, you are therefore introducing a lot of dimensionality, and it is in this case that k-means usually suffers.
Example
If you have a raw time series comprising n time-steps, your objective might be to establish whether any arbitrary period of time within this series is similar to a previous period.
Preparation may include creating rolling windows of width m time-steps over the series:
ts = [1,2,3,4,1,2,9,8,3,7,9,8,1,7,9,8,1,7,2,9,3,8,7,1,9,8,2,7,3]
window1 = [1,2,3]
window2 = [2,3,4]
window3 = [3,4,1]
window4 = [4,1,2]
...
where m is 3 in this example (the windows here advance with a stride of 1). You now have a number of samples created from your original data, each with a dimension of 3: the window width m and the dimensionality are the same thing. Imagine each time-step within the window as a separate independent variable from your experiment. Using the k-means algorithm as normal, you can now produce clusters that are prototypical of your underlying time-series.
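To make this concrete, here is a minimal sketch of the windowing-plus-clustering approach, assuming NumPy and scikit-learn are available; the window width, the cluster count of 4, and the variable names are my own illustrative choices, not part of the original example:

```python
# Minimal sketch: turn a 1-D time series into overlapping windows,
# then cluster the windows with k-means. Assumes NumPy >= 1.20 and
# scikit-learn; m = 3 and n_clusters = 4 are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

ts = np.array([1, 2, 3, 4, 1, 2, 9, 8, 3, 7, 9, 8, 1, 7, 9,
               8, 1, 7, 2, 9, 3, 8, 7, 1, 9, 8, 2, 7, 3])

m = 3  # window width: each window becomes one m-dimensional sample

# Build overlapping windows with stride 1; each row is one sample,
# and each of the m time-steps within it is treated as a dimension.
windows = np.lib.stride_tricks.sliding_window_view(ts, m)

# Cluster the windows; the centroids are prototypical sub-sequences.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(windows)
print(kmeans.cluster_centers_)  # prototype shapes of length m
print(kmeans.labels_[:5])       # cluster assignment of the first windows
```

Note that each centroid is itself a length-m sequence, so you can plot the centroids to inspect the prototypical shapes the clustering has found.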
You can read more on this topic here:
How to understand the drawbacks of K-means
Furthermore, it is a known problem that judging when two time-series are 'close together' in Euclidean space can be misleading. A good write-up on this topic, with code, is available here: http://alexminnaar.com/time-series-classification-and-clustering-with-python.html. In short, finding a suitable similarity measure is challenging, and Dynamic Time Warping (DTW) is one option to help address this.
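To illustrate, here is a minimal sketch of the standard DTW recurrence in plain Python/NumPy; the function name dtw_distance and the toy sequences are my own illustrative choices, not taken from the linked article, and a dedicated library (e.g. dtaidistance or tslearn) would be preferable in practice:

```python
# Minimal sketch of Dynamic Time Warping, written from the standard
# textbook recurrence. Local cost is the absolute difference.
import numpy as np

def dtw_distance(a, b):
    """Return the DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    # cost[i, j] = cheapest warped alignment of a[:i] and b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two series with the same shape, shifted by one time-step.
a = [1, 1, 2, 3, 2, 1]
b = [1, 2, 3, 2, 1, 1]
print(dtw_distance(a, b))
```

In this toy example the two sequences trace the same shape shifted by one step, so DTW aligns them at zero cost, whereas a point-wise Euclidean comparison would report a nontrivial distance.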