I have two sets of data, one of which (shown in red below) is considered correct. I'm trying to quantify the magnitude of the difference between the correct data (in red) and the comparison data (in green). The motivation for the comparison is to quantify the effect of making certain additional assumptions, which cause the difference seen below. My issue is that if I use the standard percent error formula, the calculated value goes to infinity (or negative infinity, as the case may be) as the actual data approaches zero.
$$ \%\ \text{error} = \frac{\text{approximate} - \text{actual}}{\text{actual}} \times 100\% $$
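To illustrate the problem numerically (with made-up values, purely for demonstration), a fixed absolute offset between the two curves produces a percent error that blows up as the actual value approaches zero and flips sign as it crosses zero:

```python
import numpy as np

# Hypothetical values for illustration only: the approximate data differ
# from the actual data by a constant offset of 0.5.
actual = np.array([10.0, 5.0, 1.0, 0.1, 0.01, -0.01, -0.1, -1.0])
approximate = actual + 0.5

# Standard percent error diverges as |actual| -> 0 and changes sign at zero.
percent_error = 100.0 * (approximate - actual) / actual
for a, pe in zip(actual, percent_error):
    print(f"actual = {a:7.2f}  ->  percent error = {pe:10.1f} %")
```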
I should note that the fact that the actual value crosses zero is mostly an artifact of the situation being analysed. In some cases, the actual result would look more like the green line and be entirely negative.
I also found this post (link), which is insightful, but I don't think it applies in my case since I consider the green results to be significantly less reliable than the red results.
EDIT:
Ultimately, the goal of the percent error is to compare the distance between the actual value and the approximate value relative to the actual value. This assumes that the distance increases as the magnitude of the actual value increases. However, in my case, that isn't necessarily true. In fact, I'd guess that in most cases where both positive and negative data are present, the error is not going to depend on the magnitude of the actual value. Still, some kind of relative comparison is necessary because it's hard to define limits on the error in absolute terms; people are much more accustomed to understanding that the error should be less than 5% or 10%.
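As a rough sketch of that implicit assumption (again with made-up numbers), a fixed 5% tolerance permits a much larger absolute deviation where the actual value is large than where it is small, even though in this problem the absolute error need not scale with the magnitude of the actual value:

```python
import numpy as np

# Hypothetical actual values spanning several magnitudes.
actual = np.array([100.0, 10.0, 1.0, 0.1])
tolerance = 0.05  # a "5 %" relative limit

# The absolute deviation allowed by a fixed relative tolerance shrinks
# with |actual|, which is the assumption questioned above.
allowed_abs_deviation = tolerance * np.abs(actual)
for a, d in zip(actual, allowed_abs_deviation):
    print(f"actual = {a:6.1f}  ->  allowed |error| at 5 % = {d:6.3f}")
```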
EDIT 2:
The data shown in the graph above are the deformation of the centerline (oriented vertically) of a plate. The graph is oriented the way it is because, next to it, I'm showing an image of the deformed plate from the simulation.