Preliminaries
Write
$$\mathcal{I}_p(\epsilon) = \int_0^\infty p(x) \log\left(\frac{p(x)}{(1+\epsilon)p(x(1+\epsilon))}\right)\, dx.$$
The logarithms and the relationship between $p(x)$ and $p(x(1+\epsilon))$ suggest expressing both $p$ and its argument as exponentials. To that end, define
$$q(y) = \log(p(e^y))$$
for all real $y$ for which the right hand side is defined, and set $q(y) = -\infty$ wherever $p(e^y)=0$. Notice that the change of variables $x=e^y$ entails $dx=e^y\, dy$ and (taking $p$ to be the density of a distribution) that the normalization of $p$ to unit total probability can thereby be expressed as
$$1 = \int_0^\infty p(x)dx = \int_\mathbb{R} e^{q(y)+y} dy.\tag{1}$$
Let us assume $e^{q(y)+y}\to 0$ when $y\to\pm\infty$. This rules out probability distributions $p$ with infinitely many spikes in density near $0$ or $\infty$. In particular, if the tails of $p$ are eventually monotonic, $(1)$ implies this assumption, showing it is not a severe one.
To make working with the logarithms easier, also observe that
$$1+\epsilon = e^\epsilon + O(\epsilon^2).$$
Because the following calculations will be performed up to multiples of $\epsilon^2$, define
$$\delta = \log(1+\epsilon).$$
We might as well replace $1+\epsilon$ by $e^\delta$, with $\delta=0$ corresponding to $\epsilon=0$ and positive $\delta$ corresponding to positive $\epsilon$.
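For reference, the expansions connecting $\delta$ and $\epsilon$ (nothing beyond Taylor's Theorem is involved) are
$$\delta = \log(1+\epsilon) = \epsilon - \frac{\epsilon^2}{2} + O(\epsilon^3), \qquad \delta^2 = \epsilon^2 + O(\epsilon^3),$$
so any bound of the form $\mathcal{I}_p(\epsilon) \le c\,\delta^2$ for small $\delta$ is equivalent to a bound of the form $\mathcal{I}_p(\epsilon) \le c^\prime\,\epsilon^2$ for small $\epsilon$.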
Analysis
One obvious way in which the inequality can fail would be for the integral $\mathcal{I}_p(\epsilon)$ to diverge for some $\epsilon \in (0, 1]$. This would happen if, for instance, there were to be any proper interval $[u, v]$ of positive numbers, no matter how small, in which $p$ were identically zero but $p$ were not identically zero on the interval $[u/(1+\epsilon), v/(1+\epsilon)]$. For $x$ in the latter interval, $p(x) \gt 0$ while $p(x(1+\epsilon)) = 0$, making the integrand infinite on a set of positive probability.
Because the question is unspecific concerning the nature of $p$, we could get bogged down in technical issues concerning how smooth $p$ might be. Let's avoid such issues, still hoping to gain some insight, by assuming that $q$ everywhere has as many derivatives as we might care to use. (Two will suffice if $q^{\prime\prime}$ is continuous.) Because that guarantees $q$ remains bounded on any bounded set, it implies that $p(x)$ is never zero when $x \gt 0$.
Note that the question really concerns the behavior of $\mathcal{I}_p(\epsilon)$ as $\epsilon$ approaches zero from above. Since this integral is a continuous function of $\epsilon$ on the interval $(0,1]$, it attains some maximum $M_p(a)$ when $\epsilon$ is restricted to any interval $[a,1]$ with $0 \lt a \le 1$, enabling us to choose $c = M_p(a)/a^2$: for every $\epsilon \in [a, 1]$, $$c\epsilon^2 = M_p(a) \left(\frac{\epsilon}{a}\right)^2 \ge M_p(a) \ge \mathcal{I}_p(\epsilon),$$
so the inequality holds there. This is why we need only be concerned with the behavior of $\mathcal{I}_p(\epsilon)$, modulo $\epsilon^2$, as $\epsilon \to 0^+$.
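This reduction is easy to illustrate numerically. Here is a minimal sketch (assuming SciPy is available; the function name `I_p` and the choice of a unit-scale Gamma family are ours, introduced purely for illustration) that evaluates $\mathcal{I}_p(\epsilon)$ by quadrature and tabulates the ratio $\mathcal{I}_p(\epsilon)/\epsilon^2$ over $(0,1]$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

def I_p(eps, k=0.0):
    """I_p(eps) for the Gamma(k+1) density with unit scale (k = 0: Exponential)."""
    rv = gamma(k + 1)
    # Work with logpdf so the ratio of densities never under- or overflows.
    integrand = lambda x: rv.pdf(x) * (
        rv.logpdf(x) - np.log1p(eps) - rv.logpdf(x * (1 + eps))
    )
    return quad(integrand, 0, np.inf)[0]

for eps in (1.0, 0.5, 0.1, 0.01, 0.001):
    print(f"eps = {eps:6.3f}   I_p(eps)/eps^2 = {I_p(eps) / eps**2:.6f}")
```

The ratio stays bounded on $(0,1]$ (tending to $1/2$ for the Exponential), which is all the quadratic bound requires.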
Solution
Using the changes of variable from $x$ to $y$, from $p$ to $q$, and $\epsilon$ to $\delta$, let's calculate $\mathcal{I}_p(\epsilon)$ through second order in $\epsilon$ (or $\delta$) in the hope of achieving a simplification. To that end define
$$\mathcal{R}(y, \delta) \delta^2 = q(y+\delta) - q(y) - \delta q^\prime(y)$$
to be the order-$2$ remainder in the Taylor expansion of $q$ around $y$.
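Since we have assumed $q^{\prime\prime}$ exists and is continuous, Taylor's Theorem with the Lagrange form of the remainder gives, for each $y$ and each $\delta \gt 0$, some $\theta \in (0,1)$ with
$$\mathcal{R}(y, \delta) = \frac{q^{\prime\prime}(y + \theta\delta)}{2},$$
a representation we will use when checking specific families below. Now compute: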
$$\eqalign{
\mathcal{I}_p(\epsilon) &= \int_\mathbb{R}e^{q(y) + y} \left(q(y) - q(y+\delta) - \delta\right)\, dy \\
&=-\int_\mathbb{R}e^{q(y) + y} \left(\delta + \delta q^\prime(y) + \mathcal{R}(y, \delta) \delta^2 \right)\, dy \\
&= -\delta\int_\mathbb{R}e^{q(y) + y} \left(1+q^\prime(y)\right)\, dy
-\delta^2\int_\mathbb{R}e^{q(y) + y} \mathcal{R}(y, \delta)\, dy.
}$$
The integrand of the left hand integral is exactly the derivative of $e^{q(y)+y}$, so that integral equals the difference of the limiting values of $e^{q(y)+y}$ at $\pm\infty$, which vanish by the assumption following $(1)$. Changing variables back to $x=e^y$ in the right hand integral gives
$$\mathcal{I}_p(\epsilon) = - \delta^2 \int_0^\infty p(x)\, \mathcal{R}(\log(x), \delta)\, dx = -\delta^2\, \mathbb{E}_p\left(\mathcal{R}(\log(X), \delta)\right).$$
The inequality holds (under our various technical assumptions) if and only if the expectation on the right hand side remains bounded as $\delta$ (equivalently $\epsilon$) approaches $0^+$.
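As a sanity check on the algebra, here is a sketch (again assuming SciPy; the shape $k = 1.5$ is chosen arbitrarily for illustration) comparing a direct quadrature of $\mathcal{I}_p(\epsilon)$ against $-\delta^2\, \mathbb{E}_p\left(\mathcal{R}(\log(X), \delta)\right)$ for a Gamma density:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma
from scipy.special import gammaln

def both_sides(eps, k=1.5):
    """Compare I_p(eps) with -delta^2 * E_p[R(log X, delta)] for Gamma(k+1)."""
    delta = np.log1p(eps)
    rv = gamma(k + 1)
    # Left hand side: direct quadrature of I_p(eps).
    lhs = quad(lambda x: rv.pdf(x) * (rv.logpdf(x) - delta
                                      - rv.logpdf(x * (1 + eps))), 0, np.inf)[0]
    # Right hand side: -delta^2 times the expected Taylor remainder of
    # q(y) = k*y - exp(y) - log Gamma(k+1).
    q  = lambda y: k * y - np.exp(y) - gammaln(k + 1)
    dq = lambda y: k - np.exp(y)
    R  = lambda y: (q(y + delta) - q(y) - delta * dq(y)) / delta**2
    rhs = -delta**2 * quad(lambda x: rv.pdf(x) * R(np.log(x)), 0, np.inf)[0]
    return lhs, rhs

print(both_sides(0.1))  # the two values agree to quadrature accuracy
```

(For the Gamma family a short calculation reduces both sides to $(k+1)\left(\epsilon - \log(1+\epsilon)\right)$, which the numbers reproduce.)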
Interpretation
This is a good point to stop, because it appears to uncover the essential issue: $\mathcal{I}_p(\epsilon)$ is bounded by a quadratic function of $\epsilon$ precisely when the quadratic error in the Taylor expansion of $q$ doesn't explode (relative to the distribution) as $y$ approaches $\pm\infty$.
Let's check some of the cases mentioned in the question: the Exponential and Gamma distributions. (The Exponential is a special case of the Gamma.) We never have to worry about scale parameters, because they merely change the units of measurement. Only non-scale parameters matter.
Here, because $p(x) = x^k e^{-x}/\Gamma(k+1)$ for $k \gt -1$, $$q(y) = -e^y + k y - \log\Gamma(k+1).$$ The Taylor expansion of $q(y+\delta)$ around an arbitrary $y$ is $$\text{Constant} + (k-e^y)\delta - \frac{e^y}{2}\delta^2 + \cdots.$$ Because $q^{\prime\prime}(y) = -e^y$, Taylor's Theorem with Remainder implies $|\mathcal{R}(\log(x),\delta)|$ is dominated by $e^{y+\delta}/2 = x e^\delta/2 \lt x$ for sufficiently small $\delta$ (indeed for all $\delta \lt \log 2$). Since the expectation of $X$ under a Gamma distribution is finite, the inequality holds for Gamma distributions.
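As a check (a short worked computation for the Exponential case $k=0$, where everything is explicit): the logarithm in the integrand simplifies to $x\epsilon - \log(1+\epsilon)$, whence
$$\mathcal{I}_p(\epsilon) = \int_0^\infty e^{-x}\left(x\epsilon - \log(1+\epsilon)\right)\, dx = \epsilon - \log(1+\epsilon) \le \frac{\epsilon^2}{2},$$
exhibiting the quadratic bound with $c = 1/2$.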
Similar calculations imply the inequality for Weibull distributions, Half-Normal distributions, Lognormal distributions, etc. (the Lognormal case is worked below). In fact, to obtain counterexamples we would need to violate at least one assumption, forcing us to look at distributions where $p$ vanishes on some interval, or is not continuously twice differentiable, or has infinitely many modes. These are easy tests to apply to any family of distributions commonly used in statistical modeling.
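For instance, for the standard Lognormal distribution $$q(y) = -y - \frac{y^2}{2} - \frac{1}{2}\log(2\pi)$$ is a quadratic polynomial in $y$, so the Taylor remainder is the constant $\mathcal{R}(y,\delta) = -1/2$ and
$$\mathcal{I}_p(\epsilon) = \frac{\delta^2}{2} = \frac{\log(1+\epsilon)^2}{2} \le \frac{\epsilon^2}{2}$$
exactly.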