I would leave the data as is, unless you have some domain knowledge that makes a categorical interpretation sensible. There is no data-independent best approach here, though: you can't know for sure which feature transformations will help without experimenting.
XGBoost (at least the default tree booster) with the "exact" split-finding method considers every possible split point when building a tree. So you should not be too worried about how the model will handle skewed data: the predictions are almost (*) invariant to monotonic transformations anyway.
The other thing to know is that, for computational reasons, xgboost also offers approximate split-finding heuristics that bin the data before searching for a split (though it won't treat the bins as categorical):
https://xgboost.readthedocs.io/en/latest/treemethod.html
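To make this concrete, here is a minimal sketch of how the split-finding method is selected, assuming you have the `xgboost` package installed; `tree_method` and `max_bin` are documented at the link above, and `dtrain` is a placeholder for your own `DMatrix`:

```python
# Sketch: selecting xgboost's split-finding method via the `tree_method` parameter.
params = {
    "tree_method": "hist",  # "exact" scans every candidate split; "hist"/"approx" bin first
    "max_bin": 256,         # number of histogram bins when a binning method is used
    "objective": "reg:squarederror",
}

# Assumes xgboost is installed and dtrain is an xgboost.DMatrix of your data:
# import xgboost as xgb
# booster = xgb.train(params, dtrain)
```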
*: I say almost invariant because xgboost uses the midpoint between two adjacent training samples as its split point, and that midpoint is affected by monotonic transformations.
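You can see both points with a toy exact-split search (a sketch, not xgboost's implementation): the best split puts the same samples on each side before and after a monotonic transform, but the midpoint threshold itself moves.

```python
import math

def best_split(x, y):
    """Exact split search for a single-feature stump: try the midpoint between
    every pair of adjacent sorted samples, minimizing total squared error."""
    pairs = sorted(zip(x, y))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    best_t, best_sse = None, float("inf")
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        sse = sum((v - lm) ** 2 for v in left) + sum((v - rm) ** 2 for v in right)
        if sse < best_sse:
            best_t, best_sse = (xs[i - 1] + xs[i]) / 2, sse  # midpoint split point
    return best_t

x = [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]  # skewed feature
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

t_raw = best_split(x, y)                              # splits between 3 and 10
t_log = best_split([math.log(v) for v in x], y)       # same partition of samples
# But the threshold is not the same point: exp(t_log) != t_raw,
# because the midpoint of the transformed values moves.
```

Both searches separate {1, 2, 3} from {10, 20, 30}, so the fitted leaf values and predictions on these data are identical; only the numeric threshold differs, which is the "almost" in almost invariant.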