Explaining the concept of causality from a philosophical perspective is a big issue, and it's probably best left to philosophers, though it is useful for a statistician to have a working knowledge of this philosophical topic. Personally, I have found Peikoff's explanation to be the best I have heard (i.e., that causality is the "law of identity" applied to action). Under this view, asking about causality is really just asking about the actions that things take when subjected to various kinds of stimulus/intervention by other things. Asking why something happens is really just asking for a deeper description of what happens when one thing interacts with another thing.
I have always found that the best way to look at probabilistic causality is within the "subjective" Bayesian framework. Under the Bayesian view, probability is merely an epistemological tool that allows the user to measure his or her uncertainty about unknown quantities, constrained by rationality criteria. Under this viewpoint, probability is interpreted epistemologically, and so probabilistic causality is likewise interpreted epistemologically, not metaphysically. This view recognises that we are often exposed to data that appears to us to be "stochastic" (i.e., well described by probabilistic methods) even if it is generated by an underlying process that is deterministic. One of the advantages of this framework is that it is agnostic on determinism --- i.e., it is compatible with both deterministic and non-deterministic metaphysical accounts of causality.
The point here is that these different "types" of causality need not be seen as competitors that contradict each other. In the "subjective" Bayesian framework, when we talk about "probabilistic causality", we are really just talking about how we can infer causality using the tools of probability theory. "Causality" may still be interpreted as an underlying metaphysical condition that we are seeking to infer with epistemological tools. Moreover, it can be interpreted deterministically. In this framework the probability conditions are not a form of causality, but rather, conditions that allow inference of an underlying metaphysical causality. For this reason I have never really liked the name "probabilistic causation"; it should be called "probabilistic methods of inferring causation" instead.
Example (smoking): Applying this framework to smoking allows you to accept a simple cellular medical explanation of what cancer is, while also using probabilistic tools to try to infer causation. At a deep physical level, cancer is "caused" by smoking in the sense that various cells in the body (e.g., in the lungs) begin to engage in abnormal (harmful) growth when exposed to smoke particles in a particular way. This process might be entirely deterministic, despite the fact that it occurs at a level of detail that precludes our ability to accurately predict when a particular cell will become cancerous under exposure to smoke. Indeed, it is reasonable to conjecture that smoking leads cells to be exposed to smoke particles of varying chemical composition, from different angles, at different frequencies, and so on, and that even if the process of cancer growth in cells is deterministic, the exposure conditions vary so wildly that they appear to us to be stochastic.
We can accept a deterministic account of causation here and still use statistical trials and probability theory to infer causality. Ideally this involves controlled trials; where only uncontrolled observation is available, the inference comes with appropriate caveats. We try to control for confounders and avoid controlling for colliders, so that we obtain an inference about the underlying causality rather than just a predictive inference.
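To make the confounder/collider distinction concrete, here is a minimal simulation sketch in Python (not part of the original argument; the variable names and effect sizes are entirely hypothetical). It shows that adjusting for a common cause of the treatment and the outcome removes bias, whereas adjusting for a common effect introduces it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def ols_coef(y, *regressors):
    """OLS coefficients of y on an intercept plus the given regressors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Confounder: a common cause of both the treatment and the outcome.
genes = rng.normal(size=n)                              # hypothetical confounder
smoke = 0.8 * genes + rng.normal(size=n)                # "treatment", influenced by the confounder
risk  = 0.5 * smoke + 0.8 * genes + rng.normal(size=n)  # true causal effect of smoke is 0.5

print(ols_coef(risk, smoke))         # biased: the confounder is omitted
print(ols_coef(risk, smoke, genes))  # ~0.5: controlling for the confounder

# Collider: a common effect of both the treatment and the outcome.
smoke2   = rng.normal(size=n)
risk2    = 0.5 * smoke2 + rng.normal(size=n)    # true causal effect is again 0.5
hospital = smoke2 + risk2 + rng.normal(size=n)  # collider, caused by both

print(ols_coef(risk2, smoke2))            # ~0.5: the collider is (correctly) left alone
print(ols_coef(risk2, smoke2, hospital))  # biased: conditioning on a collider
```

The point of the sketch is only that the choice of what to condition on determines whether the estimate can be read causally; the probabilistic machinery is the same either way.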
Explaining this in lay terms: Boy, that's tough! The first step is to explain it to ourselves in technical terms (see above), so that we get our concepts properly integrated and consistent before subjecting those poor laypersons to our views! If I were trying to explain the above in lay terms, I would say something like this:
When we ask about cause, we are really just asking for a more detailed explanation of how a thing acts when it is exposed to a certain type of condition. For example, how do the lungs of a person react to smoking? Do they become cancerous? Statisticians tell us that the best way to figure that out is by running an experiment where you randomly expose lungs to smoke (e.g., randomly make some people smoke and others not smoke) and then see who gets cancer. We then use statistical methods on this data to see if the people who smoked were more likely to get cancer.
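As a rough illustration of that last step, here is a minimal sketch (with made-up counts, purely for illustration) of comparing cancer rates between the two randomly assigned groups:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: assigned to smoke / assigned not to smoke; columns: cancer / no cancer
table = np.array([[90, 910],    # hypothetical smoking arm:      90 of 1000 get cancer
                  [30, 970]])   # hypothetical non-smoking arm:  30 of 1000 get cancer

rates = table[:, 0] / table.sum(axis=1)
print(f"cancer rate (smoking arm):     {rates[0]:.3f}")
print(f"cancer rate (non-smoking arm): {rates[1]:.3f}")

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared test p-value: {p_value:.2g}")
```

Because randomisation balances the other causes of cancer across the two groups (on average), a difference in rates like this can be read as evidence about causation rather than mere association.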
Sometimes we can't do a randomised controlled trial like this. In the smoking case, it would be unethical and impractical to instruct people who don't want to smoke to smoke. (The trial only works if the randomly assigned smokers agree to smoke!) In these cases we still observe who smokes and who gets cancer, but it is harder to figure out whether smoking is causing cancer. There is quite a bit of complicated theory for how to do this; it is quite hard and it is easy to make mistakes.
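To give a flavour of what that "complicated theory" involves in the simplest possible case, here is a hedged sketch using simulated data and a single measured confounder (a hypothetical "age" variable); real observational analyses are far more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

old    = rng.random(n) < 0.5                      # hypothetical confounder: older vs younger
smoke  = rng.random(n) < np.where(old, 0.6, 0.2)  # in this toy world, older people smoke more
# true effect of smoking on cancer risk: +0.05; of being older: +0.10
cancer = rng.random(n) < (0.02 + 0.05 * smoke + 0.10 * old)

def risk_diff(subgroup):
    """Cancer-risk difference between smokers and non-smokers within a subgroup."""
    return cancer[subgroup & smoke].mean() - cancer[subgroup & ~smoke].mean()

crude    = cancer[smoke].mean() - cancer[~smoke].mean()
adjusted = old.mean() * risk_diff(old) + (1 - old.mean()) * risk_diff(~old)

print(f"crude risk difference:    {crude:.3f}")     # inflated, because smokers tend to be older
print(f"adjusted risk difference: {adjusted:.3f}")  # close to the true 0.05
```

Even this toy adjustment only works because the confounder is measured and handled correctly; with unmeasured confounding, both the crude and the adjusted comparisons can mislead, which is why the caveats matter so much for observational data.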
Statisticians have a lot of well-developed theory about how to tell when one thing causes another. They need this because it is hard to tell when things cause each other. Anyway, this theory of using probability to figure out cause is sometimes referred to as "probabilistic causality"; it tells us the conditions we need to be able to infer that some stimulus causes a particular response.