Entropy and therefore mutual information depend on the probability distribution of your data, not on the values your data takes.
In your case, you have the vectors:
V1 = {1,2,3} and V2 = {1,1,1}
In the first case, the probability of each element is 1/3; in the second case, the probability of the single element is 1:
p(V1) = {1/3,1/3,1/3} and p(V2) = {1}
This means that your first vector (V1) has an entropy (total information) of log2(3) ≈ 1.585 bits. You need on average 1.585 bits to describe your vector.
In the case of your second vector, there is no information. You know it always contains a 1. Hence its entropy equals 0.
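If it helps, here is a minimal Python sketch (my own, not part of your question) that computes the empirical entropy in bits from the value counts of a vector:

    from collections import Counter
    from math import log2

    def entropy(values):
        """Shannon entropy H(X) in bits of the empirical distribution of `values`."""
        counts = Counter(values)
        n = len(values)
        return sum(-(c / n) * log2(c / n) for c in counts.values())

    print(entropy([1, 2, 3]))  # V1: log2(3) ~ 1.585 bits
    print(entropy([1, 1, 1]))  # V2: 0.0 bits, no uncertainty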
When you compute the mutual information between a random variable and itself, you always end up with the total information of your random variable:
I(X;X) = H(X)
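A similar sketch estimates the mutual information from the joint and marginal empirical distributions and reproduces I(X;X) = H(X) for both of your vectors (again, the function name is just for illustration):

    from collections import Counter
    from math import log2

    def mutual_information(xs, ys):
        """I(X;Y) in bits, estimated from the paired samples (xs, ys)."""
        n = len(xs)
        count_x = Counter(xs)
        count_y = Counter(ys)
        count_xy = Counter(zip(xs, ys))
        return sum((c / n) * log2((c / n) / ((count_x[x] / n) * (count_y[y] / n)))
                   for (x, y), c in count_xy.items())

    v1 = [1, 2, 3]
    v2 = [1, 1, 1]
    print(mutual_information(v1, v1))  # ~1.585 bits = H(V1)
    print(mutual_information(v2, v2))  # 0.0 bits    = H(V2)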