As far as I can tell, Kohonen-style SOMs peaked in popularity around 2005 and haven't seen as much favor since. I haven't found any paper saying that SOMs have been subsumed by another method, or proven equivalent to something else (at higher dimensions, anyhow). But it seems like t-SNE and other methods get a lot more ink nowadays, for example in Wikipedia or in scikit-learn, and SOM is mentioned more as a historical method.
(Actually, a Wikipedia article seems to indicate that SOMs continue to have certain advantages over their competitors, but it's also the shortest entry in the list. EDIT: Per gung's request, one of the articles I'm thinking of is: Nonlinear Dimensionality Reduction. Note that SOM has less written about it than the other methods. I can't relocate the article that mentioned the advantage SOMs seem to retain over most other methods.)
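To make the "more ink" point concrete, here's a minimal sketch of embedding the same data with both methods. It assumes scikit-learn's built-in t-SNE and the third-party MiniSom package for the SOM side (scikit-learn itself has no SOM implementation); the grid size and iteration count are arbitrary illustrative choices, not tuned values.

```python
# t-SNE ships with scikit-learn; a SOM needs a third-party package such as
# MiniSom (pip install minisom). Parameters below are illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from minisom import MiniSom

X = load_digits().data  # 1797 samples, 64 features

# t-SNE: a one-liner in scikit-learn.
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

# SOM: a 20x20 Kohonen map via MiniSom; each sample is mapped to the
# grid coordinates of its best-matching unit (BMU).
som = MiniSom(20, 20, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)
X_som = np.array([som.winner(x) for x in X])  # 2-D grid position per sample

print(X_tsne.shape, X_som.shape)  # (1797, 2) (1797, 2)
```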
Any insights? Someone else asked why SOMs are not being used and got references from a while ago, and I have found proceedings from SOM conferences, but I was wondering whether the rise of SVMs, t-SNE, and the like has simply eclipsed SOMs in popular machine learning.
EDIT 2: By pure coincidence, I was just reading a 2008 survey on nonlinear dimensionality reduction this evening, and as examples it mentions only: Isomap (2000), locally linear embedding (LLE) (2000), Hessian LLE (2003), Laplacian eigenmaps (2003), and semidefinite embedding (SDE) (2004).