Some recommended standards for statistical notation are presented in Halperin, Hartley and Hoel (1965) and Sanders and Pugh (1972). Most of the current notation comes from conventions that were established by the biometric statisticians in the late 19th and early 20th century (most of it was done by Pearson and Fisher and their associates). A useful list of early uses of notation is maintained by the economist John Aldrich here, and a historical account of the English biometric school is published in Aldrich (2003). (If you have further enquiries about this topic, Aldrich is probably the world's foremost living expert in the history of notation in statistics.)
Aside from this explicit work, many books give introductions to the field, and these are careful to define notation as they go, consistent with the common conventions. Many well-known conventions run consistently through the statistical literature, and statisticians become well-acquainted with them through practice, even without having read the recommendations of these researchers.
Ambiguity of the distribution-centric notation: The "distribution-centric" notation is a standard convention used throughout the statistical literature. However, one interesting thing to point out about this notation is that there is a bit of wiggle-room as to what it actually means. The standard convention is to read the object on the right-hand side of these statements as some description of a probability measure (e.g., a distribution function, density function, etc.) and then read the $\sim$ relation as meaning "...has distribution..." or "...has probability measure...", etc. Under this interpretation the relation compares two distinct kinds of object: the object on the left-hand side is a random variable and the object on the right-hand side is a description of a probability measure.
However, it is equally valid to interpret the right-hand side as a reference to a random variable (as opposed to a distribution) and read the $\sim$ relation as meaning "...has the same distribution as...". Under this interpretation the relation is an equivalence relation comparing random variables: the objects on the left- and right-hand sides are both random variables, and the relation is reflexive, symmetric and transitive.
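To illustrate this reading, for random variables $X$, $Y$ and $Z$ the three equivalence-relation properties can be written directly in the notation:

$$X \sim X, \qquad X \sim Y \implies Y \sim X, \qquad X \sim Y \ \text{ and } \ Y \sim Z \implies X \sim Z.$$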
This gives two possible (and equally valid) interpretations of a statement like:
$$X \sim \text{N}(\mu, \sigma^2).$$
Distributional interpretation: "$X$ has probability distribution $\text{N}(\mu, \sigma^2)$". This interpretation takes the latter object to be some description of a normal probability measure (e.g., its density function, distribution function, etc.).
Random variable interpretation: "$X$ has the same probability distribution as $\text{N}(\mu, \sigma^2)$". This interpretation takes the latter object to be a normal random variable.
Each interpretation has advantages and disadvantages. The advantage of the random-variable interpretation is that it uses the standard symbol $\sim$ to denote an equivalence relation, but its disadvantage is that it refers to random variables with notation closely resembling that of their distributions. The advantage of the distributional interpretation is that it uses similar notation for a distribution as a whole and for its functional form with a given argument value; its disadvantage is that it uses the $\sim$ symbol in a way that is not an equivalence relation.
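To illustrate the advantage of the distributional interpretation, one common convention (a sketch, not the only notation in use) is to use the same symbol for the distribution as a whole and for its density evaluated at a point:

$$X \sim \text{N}(\mu, \sigma^2), \qquad \text{N}(x \mid \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2 \pi}} \exp \bigg( -\frac{(x-\mu)^2}{2 \sigma^2} \bigg),$$

so that the object on the right-hand side of the $\sim$ statement and the density function share the same name.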
Aldrich, J. (2003) The Language of the English Biometric School. International Statistical Review 71(1), pp. 109-131.
Halperin, M., Hartley, H.O. and Hoel, P.G. (1965) Recommended Standards for Statistical Symbols and Notation. The American Statistician 19(3), pp. 12-14.
Sanders, J.R. and Pugh, R.C. (1972) Recommendation for a Standard Set of Statistical Symbols and Notations. Educational Researcher 1(11), pp. 15-16.