This "sawtooth pattern" is somewhat common in PR curves; it is due to batches of sample instances that correlate well with the outcome variable, resulting mostly in True Positives. Similarly, sharp drops are usually batches of sample instances that result mostly in False Positives, causing a drop in Precision without any gain in Recall.
(usually these sample instances have very similar features, so we either get them all right or all wrong)
In a bit more detail: we have a little "good batch" of correctly classified samples, which improves both Precision and Recall at once, so the PR line moves upwards (almost) diagonally. If this "good batch" is then followed by a "bad batch" of wrongly classified points, we get a sharp drop in Precision while Recall stays unchanged, so the PR line goes straight down. A few of these batches in a row give us the nice sawtooth pattern all around.
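As a minimal sketch of this mechanism (assuming `scikit-learn` and `matplotlib` are available; the batch sizes and score values below are arbitrary illustrative choices, not anything from a real classifier), we can construct such batches by hand and plot the resulting curve:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(42)

y_true, y_score = [], []
score = 0.95
for _ in range(5):
    # "good batch": true positives with very similar, high scores
    y_true.extend([1] * 20)
    y_score.extend(score + rng.normal(0, 0.001, 20))
    score -= 0.1
    # "bad batch": false positives clustered just below them in score
    y_true.extend([0] * 10)
    y_score.extend(score + rng.normal(0, 0.001, 10))
    score -= 0.1

precision, recall, _ = precision_recall_curve(y_true, y_score)
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Sawtooth PR curve from batched TPs and FPs")
plt.show()
```

As the threshold sweeps downwards, each "good batch" of 20 true positives traces a diagonal rise, and each "bad batch" of 10 false positives just below it produces a vertical drop at fixed Recall, reproducing the sawtooth.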
It is usually nothing pathological about our classifier and is most likely related to the sample at hand.
Please also see the CV.SE thread *Starting point of the PR-curve and the AUCPR value for an ideal classifier*, where I have a simulated example on exactly how to recreate such a sawtooth in the PR curve. I think it will help your understanding further.