First, presumably you're getting NAs because the function you're using for the power transformation isn't returning real roots of negative real numbers. So either you've made a programming error (for example, you'd need to write your own function to return the real cube root of a negative real number rather than rely on ^(1/3)), or no real roots exist. If the latter, bear in mind that it's a sound principle to avoid assuming a data-generating process under which your observed values would be impossible. In any case you can hardly hope to proceed with the analysis after mangling the data like this.
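To illustrate the programming-error case: in R, `^` with a fractional exponent and a negative base returns NaN rather than the real root, so a small workaround like the sketch below is needed (the helper `cbrt` is just an illustrative name, not a built-in):

```r
x <- c(-8, -1, 0, 1, 8)
x^(1/3)                                     # NaN for the negative values

# One way to get the real cube root of a negative real number:
cbrt <- function(x) sign(x) * abs(x)^(1/3)
cbrt(x)                                     # -2 -1  0  1  2
```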
Second, as you're modelling the expectation of the dependent variable conditional on the values taken by the independent variables, there's no reason to expect it to follow a normal distribution in general. Imagine making several rather precise measurements of the length of a column of mercury at 10, 20, & 30 °C: the distribution of the length measurements will be trimodal, but that's no reason to shun linear regression. It's the errors that are (sometimes) assumed to follow a normal distribution. See "What if residuals are normally distributed, but y is not?" & "What is a complete list of the usual assumptions for linear regression?". (Also bear in mind that the ordinary least-squares standard-error estimates don't apply when using the LASSO: see "Standard errors for lasso prediction using R".)
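A quick simulation of that thought experiment makes the point; the intercept, slope, & noise level below are made-up numbers purely for illustration:

```r
set.seed(1)
temp      <- rep(c(10, 20, 30), each = 200)               # three fixed temperatures
length_mm <- 100 + 0.18 * temp + rnorm(600, sd = 0.05)    # precise measurements

hist(length_mm, breaks = 50)       # marginal distribution of y: clearly trimodal
fit <- lm(length_mm ~ temp)
hist(residuals(fit), breaks = 50)  # residuals: approximately normal
```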
So you may have no need of any transformation; but @ars's answer to "How should I transform non-negative data including zeros?" explains an extension of the Box–Cox transformation to include a shift parameter. Also see Yeo & Johnson (2000), "A new family of power transformations to improve normality or symmetry", Biometrika, 87(4).
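If it helps, here's a minimal base-R sketch of the Yeo–Johnson family from that paper, which handles zeros and negative values without any ad-hoc shift; a packaged implementation such as powerTransform() with family = "yjPower" in the car package should do the same job (using car here is an assumption, not something required by the approach):

```r
# Yeo-Johnson transformation psi(y, lambda), following Yeo & Johnson (2000)
yeo_johnson <- function(y, lambda) {
  out <- numeric(length(y))
  pos <- y >= 0
  if (abs(lambda) > 1e-8) {
    out[pos] <- ((y[pos] + 1)^lambda - 1) / lambda          # lambda != 0, y >= 0
  } else {
    out[pos] <- log1p(y[pos])                               # lambda == 0, y >= 0
  }
  if (abs(lambda - 2) > 1e-8) {
    out[!pos] <- -(((-y[!pos] + 1)^(2 - lambda) - 1) / (2 - lambda))  # lambda != 2, y < 0
  } else {
    out[!pos] <- -log1p(-y[!pos])                           # lambda == 2, y < 0
  }
  out
}

# Works for negative values and zeros alike:
yeo_johnson(c(-3, -0.5, 0, 0.5, 3), lambda = 0.5)
```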