If you have lots of inputs but only a few that matter, the neural network may overfit to the noise coming from the unimportant ones. L1 regularization can help here, as it leads to sparse weight vectors: the weights linked to noisy / unimportant variables get driven to 0 (you still need to find an appropriate regularization strength). You can read L1 Norm Regularization and Sparsity Explained for Dummies by Shi Yan if you want an intuitive grasp of how that works.
The reason for using L1 norm to find a sparse solution is due to its special shape. It has spikes that happen to be at sparse points. Using it to touch the solution surface will very likely find a touch point on a spike tip and thus a sparse solution.
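As a rough illustration, here is a minimal sketch of adding an L1 penalty to the training loss, assuming a PyTorch model; the layer sizes and the l1_strength value are placeholders you would tune for your own problem.

```python
# Minimal sketch of L1-regularized training in PyTorch.
# The model architecture, data shapes and l1_strength are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
l1_strength = 1e-4  # regularization strength: needs tuning per problem

def training_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # L1 penalty on all parameters; this is what pushes weights tied to
    # unimportant inputs toward exactly zero, rather than merely shrinking them
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    loss = loss + l1_strength * l1_penalty
    loss.backward()
    optimizer.step()
    return loss.item()

# example call with random data:
# training_step(torch.randn(64, 100), torch.randn(64, 1))
```

In practice you would sweep l1_strength: too small and nothing gets pruned, too large and useful weights get zeroed as well.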
With lots of noisy inputs, training might require more data, since the noise effectively increases the search space, and may call for larger batch sizes and smaller learning rates to avoid stepping in the wrong direction.
However, adding a little noise can prove beneficial in some cases.
Train Neural Networks With Noise to Reduce Overfitting by Jason Brownlee explains how:
The addition of noise during the training of a neural network model has a regularization effect and, in turn, improves the robustness of the model. It has been shown to have a similar impact on the loss function as the addition of a penalty term, as in the case of weight regularization methods.
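For instance, one common way to do this is to perturb the inputs with a small amount of zero-mean Gaussian noise at training time. Below is a minimal sketch in PyTorch; the model, data shapes and noise_std value are illustrative assumptions, not a prescription from the article.

```python
# Minimal sketch of input-noise injection during training in PyTorch.
# The model, data shapes and noise_std are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
noise_std = 0.05  # standard deviation of the injected Gaussian noise

def noisy_training_step(x, y):
    optimizer.zero_grad()
    # perturb the inputs with zero-mean Gaussian noise during training only;
    # evaluation and inference should use the clean inputs
    x_noisy = x + noise_std * torch.randn_like(x)
    loss = criterion(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# example call with random data:
# noisy_training_step(torch.randn(64, 100), torch.randn(64, 1))
```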