Minor but real effects are more likely to be identified if you have a larger sample size.
This is generally a good thing, so long as the results are presented in a helpful way. For example, reporting confidence intervals in sensible units (the intervals will be narrower with a larger sample) rather than $p$ values can help the reader judge whether a reported result is worth worrying about.
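To illustrate the narrowing, here is a minimal simulation sketch (the population mean of 50, standard deviation of 10, and sample sizes are arbitrary choices for the example): the half-width of a 95% confidence interval for a mean shrinks roughly in proportion to $1/\sqrt{n}$, so a 100-fold larger sample gives an interval about 10 times narrower.

```python
import random
import statistics

def ci_halfwidth(n, mu=50.0, sd=10.0, z=1.96, seed=0):
    """Approximate 95% CI half-width for the mean of a sample of size n."""
    rng = random.Random(seed)
    sample = [rng.gauss(mu, sd) for _ in range(n)]
    se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    return z * se

small = ci_halfwidth(100)    # n = 100
large = ci_halfwidth(10000)  # n = 10,000
print(f"n=100:   +/- {small:.2f}")
print(f"n=10000: +/- {large:.2f}")
```

The reader sees "the mean is 50 plus or minus 0.2" rather than "$p < 0.001$", which makes it much easier to decide whether an effect of that size matters in practice.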
False positives, i.e. apparently statistically significant results despite the null hypothesis being correct, are just as likely with small samples as with large ones: the Type I error rate is fixed by your significance threshold, not by the sample size. But with large sample sizes they will probably seem less important when quantified, because the estimated effect behind a large-sample false positive will typically be small.
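A quick simulation sketch of this point (a $z$-test on a standard normal population, with arbitrary sample sizes and trial counts chosen for illustration): the rejection rate under a true null stays near 5% at any $n$, but the average estimated effect among those false positives shrinks markedly as $n$ grows.

```python
import random

def false_positive_rate(n, trials=1000, seed=1):
    """Simulate repeated samples under a true null (mean 0, sd 1).

    Returns the share of samples rejected at the 5% level by a
    two-sided z-test, and the mean |estimated effect| among rejections.
    """
    rng = random.Random(seed)
    hits, effects = 0, []
    for _ in range(trials):
        xbar = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        if abs(xbar) * n ** 0.5 > 1.96:  # z-test with known sd = 1
            hits += 1
            effects.append(abs(xbar))
    return hits / trials, sum(effects) / len(effects)

rate_small, eff_small = false_positive_rate(25, trials=2000)
rate_large, eff_large = false_positive_rate(2500, trials=1000)
print(f"n=25:   rate={rate_small:.3f}, mean |effect| among FPs={eff_small:.3f}")
print(f"n=2500: rate={rate_large:.3f}, mean |effect| among FPs={eff_large:.3f}")
```

Both rates hover near 0.05, but the false positives at $n = 2500$ come with estimated effects an order of magnitude smaller, which is why they tend to look less alarming once quantified.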
As for bias, such as Glen_b's self-selection point, it may be more, less, or the same. You should consider why the response rate was higher than originally expected: did a single person or group run a campaign to generate particular responses, or did almost everybody you asked agree to respond when you had expected only a few to do so?