There are many, many, many ways of assessing forecast or prediction accuracy. This chapter of a highly recommended free online forecasting textbook covers a number of them.
Unfortunately, as you have found, different accuracy measures can give different answers to the question of which forecast (or forecasting method) is better. Here, the first prediction has a lower Absolute Percentage Error (APE) but a higher Absolute Error (AE), and the other way around for the second prediction. Scientific publications in forecasting therefore usually report multiple accuracy measures and hope that a consistent picture emerges.
Unfortunately, there is no "right" accuracy measure. You will need to think about what you actually want to do with your forecast. Which decision will you base on the forecast? Why are you forecasting the price of used cars? What are the consequences of a wrong forecast?
- Do the consequences depend on the absolute error of the forecast? If so, use MAE.
- Or do they depend on the percentage error? If so, use MAPE.
- Do you need to get an interval or quantile forecast so you have enough "safety stock" in cash? If so, assess interval coverage.
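As a sketch of how these measures can disagree, here is a minimal Python comparison of two forecasts (the prices below are made-up numbers for illustration, not taken from your question):

```python
# Hypothetical used-car prices in EUR (illustrative numbers only)
actuals = [5_000, 20_000]
forecast_a = [6_000, 20_500]
forecast_b = [5_200, 18_500]

def mae(forecast, actual):
    """Mean Absolute Error: average of |forecast - actual|."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def mape(forecast, actual):
    """Mean Absolute Percentage Error: average of |forecast - actual| / actual."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

print(mae(forecast_a, actuals), mape(forecast_a, actuals))  # MAE 750.0, MAPE ~11.25%
print(mae(forecast_b, actuals), mape(forecast_b, actuals))  # MAE 850.0, MAPE ~5.75%
# Forecast A "wins" on MAE, forecast B "wins" on MAPE - same data, opposite rankings.
```

Note that the disagreement comes from the cheap car: a 1,000 EUR miss on a 5,000 EUR car is a huge percentage error but a modest absolute one.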
However, note in particular that MAPE has a serious asymmetry problem. It is bounded for underforecasts: since you will (probably) never forecast below zero, an underforecast cannot have an APE worse than 100%. But there is in principle nothing to keep you from forecasting too high - the car you predicted to cost 27,000 EUR could end up costing only 3,000 EUR, yielding an 800% error. Thus, the MAPE incentivizes you to forecast too low, that is, to bias your forecast. This effect is stronger if your actuals have a larger spread - as they seem to do in your case. Here is a simple illustration of this effect (full disclosure: I wrote that article).
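You can see this bias incentive numerically with a small simulation. The lognormal price distribution, its parameters, and the search grid below are my own illustrative assumptions, not anything from your data; the sketch just finds the single point forecast that minimizes average AE versus average APE over simulated prices:

```python
import random

random.seed(42)
# Simulated spread-out car prices: lognormal with median exp(10) ~ 22,000 EUR
# (an arbitrary assumption chosen only to illustrate the effect).
actuals = [random.lognormvariate(10, 1) for _ in range(20_000)]

def mean_ae(f):
    """Average absolute error of a constant point forecast f."""
    return sum(abs(f - a) for a in actuals) / len(actuals)

def mean_ape(f):
    """Average absolute percentage error of a constant point forecast f."""
    return sum(abs(f - a) / a for a in actuals) / len(actuals)

grid = range(5_000, 60_001, 1_000)
best_mae = min(grid, key=mean_ae)    # close to the median, ~22,000 EUR
best_mape = min(grid, key=mean_ape)  # much lower
print(best_mae, best_mape)
# The MAPE-optimal forecast sits far below the MAE-optimal one:
# MAPE rewards systematically underforecasting when actuals are spread out.
```

The intuition: MAPE weights each error by 1/actual, so errors on cheap cars dominate, and pulling the forecast down limits the damage they can do.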
Bottom line: think about what you want to do with your forecast. Think carefully about whether your accuracy measure incentivizes you to get a "good" forecast. Consider looking at multiple accuracy measures.