> How is this fixed in general?

By having the reward function represent what you want the agent to achieve. If the sum of rewards does not differ between behaviours, then you have defined a problem where all behaviour is optimal and there is nothing to solve.
You might be missing here that the optimal policy $\pi^*(s)$ is derived from the optimal value function $V^*(s)$ like so:
$$\pi^*(s) = \text{argmax}_a \sum_{r,s'} p(r,s'|s,a)(r + \gamma V^*(s'))$$
or in other words, the expected immediate rewards for transitioning into next states matter and are taken into account, not just the values of those next states.
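As a concrete illustration, here is a minimal sketch of that argmax in Python, assuming a hypothetical tabular model where `model[s][a]` is a list of `(probability, reward, next_state)` tuples; the state names, action names and values are made up for the example:

```python
def greedy_policy(model, V, gamma):
    """pi*(s) = argmax over a of sum over (r, s') of p(r, s'|s, a) * (r + gamma * V[s'])."""
    policy = {}
    for s, actions in model.items():
        # Expected immediate reward plus discounted value of the next state, per action
        q = {a: sum(p * (r + gamma * V[s2]) for p, r, s2 in outcomes)
             for a, outcomes in actions.items()}
        policy[s] = max(q, key=q.get)
    return policy


# Hypothetical two-state example: "T" is an absorbing terminal state with V["T"] = 0
model = {
    "A": {"stay": [(1.0, 0.0, "A")], "go": [(1.0, 0.0, "B")]},
    "B": {"stay": [(1.0, 0.0, "B")], "finish": [(1.0, 1.0, "T")]},
}
V = {"A": 0.81, "B": 0.9, "T": 0.0}
print(greedy_policy(model, V, gamma=0.9))  # {'A': 'go', 'B': 'finish'}
```

Note that state `"B"` prefers `finish` even though the terminal state has value 0, precisely because the immediate reward is part of the expectation.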
> Do I have to introduce an extra reward for finishing or is this just a sign of badly formulated problem?

You don't have to introduce a reward for finishing, but it is normal to do so if you are setting up a problem where the goal is to finish the episode in a particular way. An absorbing terminal state, with $V^*(s) = 0$, would then be attractive because of the immediate reward associated with transitioning into it. If the problem is open-ended (the agent has control over whether to end the episode at all), then you may also need a discount factor $\gamma < 1$ to make actions with a high probability of transitioning into the terminal state more attractive than actions that lead to other states.
A common alternative, where the goal is to finish as quickly as possible, is to set a fixed negative reward for every state/action pair, except for the transition from the absorbing terminal state to itself. The absorbing terminal state, with $V^*(s) = 0$, is then attractive because the other non-terminal states all have a negative value.
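To make both reward designs concrete, here is a minimal value-iteration sketch; the 4-state corridor, the state indices and the number of sweeps are all invented for the example, with state 3 as the absorbing terminal state:

```python
def value_iteration(reward_fn, gamma, n_states=4, terminal=3, sweeps=200):
    """Tabular value iteration on a corridor; reward_fn(s, a, s_next) sets the scheme."""
    V = [0.0] * n_states
    for _ in range(sweeps):
        for s in range(terminal):  # the absorbing terminal state keeps V = 0
            # Action 0 moves left (or stays at state 0), action 1 moves right
            candidates = [(0, max(s - 1, 0)), (1, s + 1)]
            V[s] = max(reward_fn(s, a, s2) + gamma * V[s2] for a, s2 in candidates)
    return V


def finish_bonus(s, a, s_next):
    # Scheme 1: reward only for entering the terminal state
    return 1.0 if s_next == 3 else 0.0


def step_cost(s, a, s_next):
    # Scheme 2: fixed -1 per step, so finishing sooner costs less
    return -1.0


print(value_iteration(finish_bonus, gamma=0.9))  # approx [0.81, 0.9, 1.0, 0.0]
print(value_iteration(step_cost, gamma=1.0))     # [-3.0, -2.0, -1.0, 0.0]
```

In the first scheme the values rise towards the terminal state because of the finishing reward and the discount; in the second they are all negative and become less negative the closer you get to the end. In both cases the greedy policy heads for the terminal state.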