I would first suggest that you think about what you really need. If your intent is to address failure rates, then construct a sample estimate of the observed Weibull failure rates over time. A regression of the natural log of the observed failure rates (also called hazard rates) against the log of time should be linear, with the slope equaling the shape parameter minus one. For a two-parameter Weibull distribution, the implied scale parameter can then be determined by dividing the shape parameter by exp(regression intercept) and raising the result to the power (1/shape parameter). Source: the Wikipedia article on the Weibull distribution, to quote: "Linear regression can also be used to numerically assess goodness of fit and estimate the parameters of the Weibull distribution. The gradient informs one directly about the shape parameter k and the scale parameter lambda can also be inferred".
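As a minimal sketch of the recipe above (the simulated data, bin choices, and true parameter values are my own illustrative assumptions, not anything from the question): estimate the empirical hazard in time bins, regress log hazard on log time, and back out the shape and scale as described.

```python
import numpy as np

rng = np.random.default_rng(42)
k_true, lam_true = 1.5, 2.0                     # hypothetical shape and scale
t = lam_true * rng.weibull(k_true, size=20000)  # simulated failure times

# Empirical hazard in time bins: failures / (number at risk * bin width).
# Quantile-based edges stop at the 0.9 quantile, censoring the upper tail.
edges = np.quantile(t, np.linspace(0, 0.9, 31))
mids = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)
failures = np.histogram(t, bins=edges)[0]
at_risk = (t[:, None] >= edges[:-1]).sum(axis=0)  # still surviving at each left edge
hazard = failures / (at_risk * widths)

# ln h(t) = ln(k / lambda^k) + (k - 1) ln t, so slope = k - 1
slope, intercept = np.polyfit(np.log(mids), np.log(hazard), 1)
k_hat = slope + 1
lam_hat = (k_hat / np.exp(intercept)) ** (1 / k_hat)  # (k / exp(b))^(1/k)
```

With enough observations, `k_hat` and `lam_hat` should land close to the simulating values, and a plot of log hazard versus log time lets you eyeball linearity directly.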
I would argue this regression approach has a couple of advantages over MLE beyond numerical simplicity and the ability to visualize the appropriateness of the failure hazard rate model. One comment above notes a possible issue with autocorrelated errors, which can be readily addressed in a regression setting. There is also a more subtle difference relating to experimental design, with a possibly significant increase in the efficiency of the parameter estimates. While the log regression is linear, the Weibull hazard rate itself can be highly nonlinear in time in the early phase for a shape parameter different from one (see, for example, the graph of the hazard rate for various shape parameters at http://www.weibull.com/hotwire/issue14/relbasics14.htm ). This suggests to me, based on experience, the need for more observations in this early phase to gain a more accurate estimate of the scale parameter (as derived from the estimate of the regression intercept). Such a censored sampling scheme, by reducing tail observations, could also reduce the possible introduction of other unwanted noise distributions (for example, in a Weibull model of web browsing time until exiting a website, noise from the likes of a phone call or doorbell could inflate the estimate of the actual browsing time).
Conveniently, regression theory provides the basis for confidence intervals for the parameters and even prediction intervals for future failures. One can also compare slope estimates across related regression regimes.
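To sketch the confidence-interval point (the log-hazard data points below are hypothetical stand-ins for the binned estimates one would actually compute): the standard error of the regression slope translates directly into an interval for the shape parameter, since shape = slope + 1.

```python
import numpy as np
from scipy import stats

# Hypothetical log bin midpoints and log empirical hazard estimates
log_t = np.log([0.5, 0.8, 1.2, 1.7, 2.3, 3.0, 3.8, 4.7])
log_h = np.log([0.30, 0.38, 0.47, 0.55, 0.66, 0.74, 0.85, 0.93])

res = stats.linregress(log_t, log_h)
k_hat = res.slope + 1                     # shape = slope + 1

# 95% CI for the slope, hence for the shape parameter (n - 2 residual df)
tcrit = stats.t.ppf(0.975, df=len(log_t) - 2)
k_lo = res.slope - tcrit * res.stderr + 1
k_hi = res.slope + tcrit * res.stderr + 1
```

The same machinery (standard errors on the intercept, prediction intervals from the residual variance) covers the scale parameter and future observations.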
For those sticking with MLE, here is a reference showing a derivation via MLE on the hazard rate (see http://www.weibull.com/hotwire/issue131/relbasics131.htm ). Note that, in general, one can also compute the MLE in the case of censored data.
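For comparison, a minimal MLE fit on complete (uncensored) data is a one-liner with SciPy; the simulated data and parameter values here are again illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = 2.0 * rng.weibull(1.5, size=5000)  # hypothetical failures: shape 1.5, scale 2

# MLE for the two-parameter Weibull: fix the location at zero
k_mle, _, lam_mle = stats.weibull_min.fit(data, floc=0)
```

This gives a useful cross-check against the regression-based estimates, keeping in mind that the MLE on the raw failure times weights observations differently from a regression on binned hazard estimates.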
[Edit] Nonparametric techniques are sensitive to violations of the independence assumption. Also, since hazard rate plots can confirm the validity of the well-known, versatile Weibull distribution as an appropriate failure distribution in this case, the argument for the use of nonparametric models is suspect, in my opinion.