You can't convert perplexity to the $F_1$ measure. They are fundamentally different evaluation metrics that assess a model under different assumptions.
The $F_1$ measure is the harmonic mean of precision and recall: $F_1 = 2\cdot\frac{\text{precision}\,\cdot\,\text{recall}}{\text{precision}+\text{recall}}$. The underlying assumption is that a model's performance can be determined by running it on a set of test samples whose true answers are known, and comparing the model's predictions against those true answers. This is an *extrinsic* evaluation method: we judge the model purely by its external behavior, without looking inside it.
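
For concreteness, here is a minimal sketch of computing $F_1$ by hand for binary classification; the toy label vectors are made up for illustration:

```python
# Toy ground-truth labels and model predictions (made up for illustration).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f1)  # 0.75 for these toy labels
```

Note that nothing here depends on how the model produced `y_pred`; only the final predictions and the known true answers matter.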
Perplexity, on the other hand, is an *intrinsic* evaluation method, typically used when you don't have a set of test problems with known true answers. Instead, you run your model on a set of test samples and monitor how it behaves internally: if the model behaves internally in a way that looks good, you assume the model is probably good. Perplexity in particular is used (as far as I have seen) to evaluate probabilistic models. It is the exponentiated average negative log-probability that the model assigns to the test samples, so a lower perplexity means the model is less "surprised" by them. The assumption is that a model is good if the probabilities associated with its predictions are sharp and confident (concentrated on a few outcomes rather than spread nearly uniformly across all of them).
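
Here is a minimal sketch of that computation, assuming a made-up list of probabilities that some model assigned to each held-out sample (e.g. each token, for a language model):

```python
import math

# Probability the model assigned to each held-out sample
# (numbers made up for illustration).
p_test = [0.2, 0.5, 0.1, 0.4, 0.3]

# Perplexity is the exponentiated average negative log-probability:
# PP = exp(-(1/N) * sum(log p_i)). Lower is better.
n = len(p_test)
perplexity = math.exp(-sum(math.log(p) for p in p_test) / n)
print(perplexity)  # ~3.84: roughly as uncertain as picking among 4 options
```

Notice that the computation never compares a prediction against a true label; it only asks how much probability the model itself placed on the test data, which is why it is called intrinsic.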
Trying to convert the $F_1$ measure to perplexity is like trying to convert a car's speed into its weight.