I am working on problems in the field of medical imaging where a simple and interpretable model is important from a clinical perspective. This means I have to explain the algorithm's predictions to non-experts (well, non-experts in mathematics, at least).
My question is twofold:
1. As far as I know, an interpretable model is one that assigns a weight to each feature (or weak classifier) and combines them into a strong classifier, so that you can see how much each feature contributes to the prediction. Are there other ways of making a model interpretable?
2. Which classification methods are interpretable? I know that linear SVMs and AdaBoost are. Are there other methods?
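To make the first point concrete, here is a minimal sketch (Python with scikit-learn, on a public toy dataset rather than my imaging data, so the feature names are just stand-ins for the kind of clinical features I work with) of the per-feature weights I have in mind:

```python
# Minimal sketch: reading per-feature weights from a linear SVM.
# The breast-cancer toy dataset stands in for my actual imaging features.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

data = load_breast_cancer()
X, y = data.data, data.target

# Standardise features so the learned weights are on a comparable scale.
model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
model.fit(X, y)

# Each coefficient is the weight the linear SVM assigns to one feature;
# its sign and magnitude are what I would explain to a clinician.
weights = model.named_steps["linearsvc"].coef_[0]
for name, w in sorted(zip(data.feature_names, weights), key=lambda t: -abs(t[1])):
    print(f"{name:25s} {w:+.3f}")
```

This kind of ranked list of signed weights is roughly what I can present to clinicians today; I am asking what other models allow a comparably simple explanation.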