It's not clear what your exact scenario is. When you say "set the same weights", it can be interpreted as either a) initialising the weights before training or b) constraining the weights during training.
Weights are typically initialised to small random values (not 0 or 1), so I am not sure whether a) is what you mean.
I will therefore assume b): that the weights are constrained during training by a well-known technique such as weight sharing in a CNN/pooling architecture, or a regulariser like weight decay.
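For reference, here is a minimal NumPy sketch (my own, with hypothetical layer sizes) of the usual practice of initialising a layer to small random values rather than constants like 0 or 1:

```python
import numpy as np

rng = np.random.default_rng(42)

fan_in, fan_out = 128, 64   # hypothetical layer sizes, for illustration only

# Small random values scaled by the fan-in (He-style initialisation),
# rather than constants such as 0 or 1.
W = rng.normal(loc=0.0, scale=np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
b = np.zeros(fan_out)       # biases are commonly initialised to zero

print(W.std())              # roughly sqrt(2 / 128) ~= 0.125
```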
What will happen if we set the same weights for two neurons in the same layer?
This kind of weight "sharing" will cause the two neurons to learn the same feature: if their incoming and outgoing weights start out identical, they receive identical gradients at every step and therefore stay identical throughout training. In the context of a CNN this is done deliberately; sharing weights reduces the effective number of parameters that have to be learnt, which improves the runtime efficiency of the network.
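To make the symmetry argument concrete, here is a minimal NumPy sketch (my own illustration with made-up toy data, not something from the question) of a two-neuron hidden layer whose incoming and outgoing weights start out identical; the two neurons then receive identical gradients and remain exact copies of each other after training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 3 input features, scalar regression target.
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

# Hidden layer with 2 neurons. Both rows of W1 (incoming weights) and both
# entries of w2 (outgoing weights) are set to the same values.
shared_row = rng.normal(size=(1, 3))
W1 = np.vstack([shared_row, shared_row])       # shape (2, 3), identical rows
w2 = np.array([[0.5], [0.5]])                  # shape (2, 1), identical entries

lr = 0.1
for step in range(100):
    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(X @ W1.T)                      # (8, 2)
    err = H @ w2 - y                           # (8, 1) residuals

    # Backward pass for the squared-error loss.
    grad_w2 = H.T @ err / len(X)               # (2, 1)
    grad_H = err @ w2.T                        # (8, 2)
    grad_W1 = ((1 - H**2) * grad_H).T @ X / len(X)   # (2, 3), tanh' = 1 - tanh^2

    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

# The two neurons are still exact copies of each other, i.e. they have
# learnt the same feature.
print(np.allclose(W1[0], W1[1]), np.allclose(w2[0], w2[1]))   # True True
```

Breaking this symmetry is precisely why weights are normally initialised to random values; in a CNN the same symmetry is imposed deliberately across spatial locations to cut the parameter count.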
What will happen if weights are 0 (zero)?
Setting weights to zero has the effect of reducing the model's capacity; this is similar to what L1 weight decay (lasso) regularisation achieves, since the L1 penalty drives many weights exactly to zero.
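As an illustration of that connection (a sketch of my own, not from the question), the NumPy code below fits a linear model with an L1 penalty using proximal gradient descent (ISTA); the soft-thresholding step drives the weights of the irrelevant features exactly to zero, leaving a sparser, lower-capacity model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear regression where only 3 of the 20 input features actually matter.
n, d = 200, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.normal(size=n)

def fit(lmbda, steps=2000, lr=0.01):
    """Proximal gradient descent (ISTA) for 0.5 * MSE + lmbda * ||w||_1."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n           # gradient of the squared loss
        w = w - lr * grad
        # Soft-thresholding step: shrinks small weights exactly to zero.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lmbda, 0.0)
    return w

w_plain = fit(lmbda=0.0)   # no penalty: essentially no weight ends up exactly zero
w_l1 = fit(lmbda=0.5)      # L1 penalty: the irrelevant weights are zeroed out

print("nonzero weights without L1:", np.count_nonzero(np.abs(w_plain) > 1e-8))
print("nonzero weights with L1:   ", np.count_nonzero(np.abs(w_l1) > 1e-8))
```

With the penalty switched off all 20 weights typically stay nonzero, while with the penalty on only the few genuinely useful ones survive, which is the reduced-capacity effect described above.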