This question was largely answered in an earlier question concerning Cook's distance, How to read Cook's distance plots?, but I will provide a summary for expository purposes.
First, the terms DFFIT and DFBETA mean "difference in fit" and "difference in beta".
These are descriptive names: they measure how leaving out an individual observation changes, respectively, that observation's fitted value and the estimated beta coefficients. DFFIT is the change in an observation's fitted value when that observation is removed, whereas Cook's D summarizes the change across all fitted values when that observation is removed. From https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/, on Cook's D and DFITS [note that Stata uses a single "F" in the command for DFFIT]: "measures both combine information on the residual and leverage. Cook's D and DFITS are very similar except that they scale differently but they give us similar answers." Thus in practice Cook's D and DFITS lead to similar conclusions.
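These quantities can all be computed in closed form from one fit, without actually refitting the model n times. Below is a minimal numpy sketch of the standard leave-one-out formulas; the function name and interface are illustrative only, not any particular package's API:

```python
import numpy as np

def influence_diagnostics(X, y):
    """Cook's D, DFFITS, and DFBETA from a single OLS fit.

    X: (n, p) design matrix (include an intercept column); y: (n,) response.
    Uses the closed-form leave-one-out identities, so no refitting is needed.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages h_ii
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                              # ordinary residuals
    s2 = e @ e / (n - p)                          # full-model residual variance
    # leave-one-out residual variance s^2_(i), via the standard identity
    s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)
    # Cook's D: scaled squared change in ALL fitted values when obs i is dropped
    cooks_d = e**2 * h / (p * s2 * (1 - h)**2)
    # DFFITS: change in the i-th fitted value, in standard-error units
    dffits = e * np.sqrt(h) / ((1 - h) * np.sqrt(s2_i))
    # DFBETA: row i is the change in each coefficient when obs i is dropped
    dfbeta = (XtX_inv @ X.T).T * (e / (1 - h))[:, None]
    return cooks_d, dffits, dfbeta
```

Each row of `dfbeta` equals the full-sample beta minus the beta refit without that observation, which is exactly the "difference in beta" the name describes.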
As @whuber says in a comment, "I would add that influential cases are not usually a problem when their removal from the dataset would leave the parameter estimates essentially unchanged: the ones we worry about are those whose presence really does change the results." This suggests that my request in the question for "guidance of when to prefer one or the other" poses a false dichotomy, as the two measures do different things. A typical workflow might be: first run DFFIT or Cook's distance and use it to identify influential data points, which is useful information for understanding your data set better. If there are influential points, then run DFBETA to investigate whether those points change the betas. Alternatively, if all you care about is the "stability" of the regression coefficients, you might just run DFBETA.
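That two-stage workflow can be sketched directly: flag observations with large Cook's D first, then look at DFBETA only for the flagged points. This is a hedged illustration using numpy and the common D_i > 4/n rule of thumb; the function name and threshold choice are my own, not from the original answers:

```python
import numpy as np

def flag_influential(X, y):
    """Stage 1: flag high-influence points by Cook's D (rule of thumb D_i > 4/n).
    Stage 2: return the DFBETA rows for only those flagged points,
    so you can see which coefficients they actually move.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    s2 = e @ e / (n - p)
    cooks_d = e**2 * h / (p * s2 * (1 - h)**2)
    flagged = np.flatnonzero(cooks_d > 4 / n)     # stage 1: candidates
    # stage 2: per-coefficient change if each flagged observation were dropped
    dfbeta = (XtX_inv @ X.T).T * (e / (1 - h))[:, None]
    return flagged, dfbeta[flagged]
```

The sign and size of each DFBETA row then tell you whether a flagged point is merely unusual or is actually driving a coefficient.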
@gung-ReinstateMonica says in the answer, "Cook's distance can be contrasted with dfbeta. Cook's distance refers to how far, on average, predicted y-values will move if the observation in question is dropped from the data set. dfbeta refers to how much a parameter estimate changes if the observation in question is dropped from the data set." Also, "Cook's distance is presumably more important to you if you are doing predictive modeling, whereas dfbeta is more important in explanatory modeling." I interpret "explanatory modeling" to be part of statistical inference.
The final paragraph of that answer offers useful perspective and is copied below in full. "There is one other point worth making here. In observational research, it is often difficult to sample uniformly across the predictor space, and you might have just a few points in a given area. Such points can diverge from the rest. Having a few, distinct cases can be discomfiting, but merit considerable thought before being relegated outliers. There may legitimately be an interaction amongst the predictors, or the system may shift to behave differently when predictor values become extreme. In addition, they may be able to help you untangle the effects of colinear predictors. Influential points could be a blessing in disguise."