Let $Y = \sum a_iX_i$.
In general, the cost of computing $\mathbb{E}(|Y|)$ exactly is likely to grow exponentially with $n$, because calculating $\mathbb{P}(|Y| = y)$ requires deciding whether there is a subset of the $a_i$ whose sum is either $y$ or $-y$, which is the NP-complete subset-sum problem (a special case of 0-1 knapsack).
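To make the exponential cost concrete, here is a brute-force sketch of the exact computation, assuming the $X_i$ are i.i.d. Bernoulli($p$) (the function name and example values are just placeholders); it enumerates all $2^n$ outcomes and so is only feasible for small $n$:

```python
import itertools
import numpy as np

def exact_mean_abs(a, p):
    """Exact E|Y| for Y = sum(a_i * X_i) with X_i i.i.d. Bernoulli(p),
    by enumerating all 2^n outcomes -- cost grows exponentially in n."""
    n = len(a)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        y = sum(ai * b for ai, b in zip(a, bits))   # value of Y for this outcome
        k = sum(bits)                               # number of X_i equal to 1
        total += p**k * (1 - p)**(n - k) * abs(y)   # P(outcome) * |Y|
    return total

# Toy example: a 2-dimensional unit vector
print(exact_mean_abs(np.array([0.6, 0.8]), p=0.3))
```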
There are variants of the Central Limit Theorem for independent but non-identically distributed random variables like your $a_iX_i$, such as the Lyapunov CLT. Based on this, for large $n$ you could try approximating $Y$ by a normal distribution with mean $\sum a_i p$ and variance $\sum a_i^2 p(1-p)$, and $|Y|$ by the corresponding folded normal distribution.
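Using the standard formula for the mean of a folded normal, this approximation is $\mathbb{E}(|Y|) \approx \sigma\sqrt{2/\pi}\,e^{-\mu^2/(2\sigma^2)} + \mu\left(1 - 2\Phi(-\mu/\sigma)\right)$ with $\mu = \sum a_i p$ and $\sigma^2 = \sum a_i^2 p(1-p)$. A minimal sketch (the function name is just illustrative, and it assumes $0 < p < 1$ and at least one nonzero $a_i$, so that $\sigma > 0$):

```python
import numpy as np
from scipy.stats import norm

def folded_normal_mean_abs(a, p):
    """Approximate E|Y| by treating Y as N(mu, sigma^2) and taking the
    mean of the corresponding folded normal distribution."""
    a = np.asarray(a, dtype=float)
    mu = p * a.sum()                              # E(Y)
    sigma = np.sqrt(p * (1 - p) * (a**2).sum())   # sd(Y)
    return (sigma * np.sqrt(2 / np.pi) * np.exp(-mu**2 / (2 * sigma**2))
            + mu * (1 - 2 * norm.cdf(-mu / sigma)))
```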
But this will be inaccurate if most of the variance in $Y$ comes from only a few of the $a_i$ - for example, if one of the $a_i$ is much larger in magnitude than all the others, then the distribution of $Y$ will be bimodal rather than approximately normal. (If your unit vector is chosen uniformly from the $n$-dimensional unit sphere, this is unlikely to happen for large $n$. A uniform random unit vector can be generated by taking $n$ i.i.d. standard normal coordinates and normalising the result, and since normal distributions have thin tails it is unlikely that a small number of these coordinates will be much bigger than the rest.)
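A minimal sketch of that construction (the function name is just illustrative):

```python
import numpy as np

def random_unit_vector(n, rng=None):
    """Uniform random point on the n-dimensional unit sphere: draw n i.i.d.
    standard normal coordinates and normalise."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)
```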
It might be easiest to just approximate the expectation using Monte Carlo (simulate $Y$ many times and take the mean of the simulated values of $|Y|$).
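A minimal Monte Carlo sketch, again assuming the $X_i$ are i.i.d. Bernoulli($p$); the standard error of the estimate shrinks like $1/\sqrt{N}$ in the number of simulations $N$:

```python
import numpy as np

def monte_carlo_mean_abs(a, p, num_samples=100_000, rng=None):
    """Monte Carlo estimate of E|Y|: draw Bernoulli(p) vectors X,
    compute |a . X| for each draw, and average."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(a, dtype=float)
    X = rng.binomial(1, p, size=(num_samples, len(a)))  # rows of i.i.d. Bernoulli(p) draws
    return np.abs(X @ a).mean()
```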