I cannot think of a general case where one choice would consistently prove better or worse. Data-specific cases may arise, but the only thing the k-means algorithm really requires is some metric for measuring distances between observations.
In general, for $L_p$ being the norm of choice, i.e. $d_p(x_1, x_2) = \left(\sum_{i} |x_{1,i} - x_{2,i}|^p\right)^{1/p}$, the higher you choose $p$, the more important the largest single feature/variable distance becomes. Taking this to the extreme, for $p \rightarrow \infty$ and observations $x_1$ and $x_2$, the distance becomes $d_\infty(x_1, x_2) = \max_i |x_{1,i} - x_{2,i}|$. (Here, we assume $x_1, x_2 \in \mathbb{R}^n$ and $1 \leq i \leq n$, so $i$ indexes the features.)
Building on this, you could say that the larger you choose $p$, the more weight your metric puts on the largest coordinate-wise difference between two observations when clustering. The opposite extreme is $p = 1$, where every coordinate-wise absolute difference receives the same weight and the distance is simply their (linear) sum; see the small numerical sketch below.
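To make this concrete, here is a minimal NumPy sketch of that behaviour. The two vectors and the `lp_distance` helper are purely illustrative assumptions (not from any particular library); the point is only that as $p$ grows, the $L_p$ distance is increasingly dominated by, and eventually equals, the largest single coordinate-wise difference.

```python
import numpy as np

# Two illustrative observations in R^3 (made-up numbers for demonstration).
x1 = np.array([1.0, 5.0, 2.0])
x2 = np.array([4.0, 0.0, 2.5])
# Coordinate-wise absolute differences: [3.0, 5.0, 0.5]

def lp_distance(a, b, p):
    """Minkowski (L_p) distance between two vectors."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

# For p = 1 all differences simply add up; as p grows, the distance
# converges to the largest single difference, here |5.0 - 0.0| = 5.
for p in [1, 2, 4, 10, 100]:
    print(f"p = {p:>3}: {lp_distance(x1, x2, p):.4f}")

print("L_inf :", np.max(np.abs(x1 - x2)))
```

Running this, the $p = 1$ distance is $3 + 5 + 0.5 = 8.5$, while for $p = 10$ and beyond the value is already essentially $5$, the largest coordinate-wise gap.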
I hope this helps you - let me know if I have not been clear enough.