In many queuing models it is assumed that the service time follows an exponential distribution with rate parameter $\mu$, so the mean service time is $1/\mu$. An example might be a bank teller who, on average, is able to serve customers at a rate of 1 every 10 minutes, i.e. $\mu = 1/10$ per minute.
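To spell the assumption out, the service time $T$ is taken to have density

$$f(t) = \mu e^{-\mu t}, \qquad t \ge 0,$$

so in the teller example $\mu = 0.1$ per minute and the mean service time is $E[T] = 1/\mu = 10$ minutes.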
This assumption is obviously unrealistic in some respects. For example, there is certainly a minimum service time below which a teller could never complete a service routine. However, the exponential distribution has no such lower bound (other than a service time of 0), and its density is actually highest near 0, as if extremely short service times were the most likely outcome. How can exponential service times be a reasonable approximation when they imply that very short service times are the most likely (which seems unrealistic) and allow no floor on how short a service time can be?
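To make the concern concrete, here is a minimal simulation sketch (assuming NumPy and the 10-minute-mean teller example above; the thresholds of 1 minute and 30 seconds are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1 / 10  # service rate: 1 customer per 10 minutes

# Draw exponential service times with mean 1/mu = 10 minutes
times = rng.exponential(scale=1 / mu, size=100_000)

print(f"mean service time: {times.mean():.2f} minutes")          # ~10
print(f"fraction under 1 minute: {(times < 1).mean():.3f}")      # ~1 - exp(-0.1) ≈ 0.095
print(f"fraction under 30 seconds: {(times < 0.5).mean():.3f}")  # ~1 - exp(-0.05) ≈ 0.049
```

Roughly 1 in 10 simulated services finish in under a minute and 1 in 20 in under 30 seconds, which no real teller could manage, yet the exponential model seems to be used everywhere anyway.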