In a comment, I proposed Data Clustering, but it appears to be overkill, since we have one-dimensional data.
Consider the following algorithm.
1) Take a series and calculate the lengths between visits
2) Obtain all unique values in the differenced series, as well as their frequencies
3) Single out the value in the differenced series with the highest frequency.
(check: is this value also the minimum length value? For an automated visitor, it should be, since it will be the "base period")
4) Divide all other values by this maximum-frequency value
5) Criterion: are all results of the previous divisions integers? If yes, you have an automated visitor.
The probability that a non-automated visitor will satisfy the criterion in 5) is next to zero. And I think the above is easy to program.
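A minimal sketch of steps 1)-5) in Python (the function name is my own; I assume strictly increasing integer timestamps):

```python
import numpy as np

def looks_automated(visit_times):
    """Steps 1)-5) above: are all inter-visit gaps integer multiples
    of the most frequent (base) gap?

    visit_times: strictly increasing integer timestamps.
    """
    gaps = np.diff(np.asarray(visit_times))               # step 1
    values, counts = np.unique(gaps, return_counts=True)  # step 2
    base = values[np.argmax(counts)]                      # step 3
    # check from step 3: the modal gap should also be the minimum gap
    if base != values.min():
        return False
    # steps 4-5: every gap must divide evenly by the base gap
    return bool(np.all(gaps % base == 0))

# Gaps 5, 5, 10, 5: base is 5 and every gap is a multiple of it
print(looks_automated([0, 5, 10, 20, 25]))   # True
# Gaps 5, 6, 9: not all multiples of a common base
print(looks_automated([0, 5, 11, 20]))       # False
```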
ADDENDUM
New information from the OP revealed that the observed lengths between visits are noisy. So here is a model, with the assumptions needed to provide a tangible result. The estimation method used is the Method of Moments.
Let $\ell_i$ be the length between visits at times $t_{i-1}$ and $t_i$. Let $\tilde \ell$ be the "base length" followed by a visitor (not necessarily an automaton), without the noise. Let $u_i$ be the noise contaminating the exact time of the visit.
Assumption 1: $u_i$ is i.i.d. white noise (zero mean, constant variance, no autocorrelation). I understand that the contamination also comes in integer values, so we assume that $u_i$ follows a discrete distribution taking integer values, symmetric around zero.
Finally, let $m_i=1,2,3,...$ be the parameter indicating whether the current length between visits is a multiple of the base period and, if it is ($m_i>1$), how many multiples it is. Assumption 2: $m_i$ follows a geometric distribution of the first variant (support $\{1,2,3,\dots\}$), with parameter $0 < p \leq 1$.
Assumption 3: $u_i$ and $m_i$ are independent, across indices also.
Given all the above, the value of $\ell_i$ can be written as
$$\ell_i= m_i\tilde \ell +u_i $$
and it is a strictly stationary process. Note that this is a valid representation of lengths between visits from both automated and non-automated visitors.
We also have
$$E(\ell_i)= \frac 1{p}\tilde \ell, \;\;\;\; \text{Var}(\ell_i) = \frac {1-p}{p^2}\tilde \ell^2 + \sigma^2_u$$
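As a sanity check, the model and the two moment formulas can be simulated; the parameter values below are illustrative only:

```python
import numpy as np

# Simulate ell_i = m_i * ell_tilde + u_i and check the two moment
# formulas above; parameter values here are illustrative only.
rng = np.random.default_rng(42)
p, ell_tilde, n = 0.7, 12, 500_000
m = rng.geometric(p, size=n)          # Assumption 2: support {1, 2, 3, ...}
u = rng.integers(-2, 3, size=n)       # Assumption 1: symmetric integer noise
ell = m * ell_tilde + u

sigma2_u = np.var(np.arange(-2, 3))   # exact noise variance (= 2)
print(ell.mean(), ell_tilde / p)                            # approximately equal
print(ell.var(), (1 - p) / p**2 * ell_tilde**2 + sigma2_u)  # approximately equal
```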
But of greater interest is the third central moment of $\ell_i$, due to the assumptions of a) symmetry of the distribution of $u_i$ around zero, and b) independence of $m_i$ and $u_i$. A little careful algebra gives
$$E(\ell_i - E(\ell_i))^3 = \tilde \ell^3E[m_i-E(m_i)]^3 = \tilde \ell^3\left(\frac{1-p}{p^2}\right)^{3/2}\cdot \frac {2-p}{\sqrt {1-p}}$$
where for a quick derivation we have used the relation between the third central moment and the skewness coefficient (of the $m_i$ r.v.). This magnitude is of interest because it contains neither the variance of $u_i$ nor its third raw moment (which is zero, given our symmetry-around-zero assumption). Simplifying, we get
$$E[\ell_i - E(\ell_i)]^3 = \tilde \ell^3\frac{(1-p)(2-p)}{p^3}$$
From the expression for the mean of $\ell_i$ we have $\tilde \ell = pE(\ell_i)$. Substituting into the third central moment, re-arranging, simplifying and decomposing, we get
$$\frac {E[\ell_i - E(\ell_i)]^3}{\Big [E(\ell_i)\Big]^3} = (1-p)(2-p)$$
$$\Rightarrow p^2 - 3p + \left(2-\frac {E[\ell_i - E(\ell_i)]^3}{\Big [E(\ell_i)\Big]^3}\right) = 0 $$
The ratio of expected values can be calculated from the sample; call this estimate simply $\hat q$. Then only one of the roots of the polynomial is meaningful,
$$\hat p = \frac {3 - \sqrt {1+4\hat q}}{2}$$
which in turn will give us an estimate of $\tilde \ell$, and then, through the sample variance, an estimate also of $\sigma^2_u$.
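The whole estimation chain (sample moments, then $\hat q$, then $\hat p$, then $\hat{\tilde\ell}$ and $\hat\sigma^2_u$) takes only a few lines of code; a sketch, with a function name of my choosing:

```python
import numpy as np

def estimate_parameters(lengths):
    """Method-of-Moments estimates of (p, base length, noise variance)
    under the model ell_i = m_i * ell_tilde + u_i described above."""
    ell = np.asarray(lengths, dtype=float)
    mean, var = ell.mean(), ell.var()
    q_hat = np.mean((ell - mean) ** 3) / mean**3   # sample version of the ratio
    # the meaningful root of p^2 - 3p + (2 - q) = 0
    p_hat = (3.0 - np.sqrt(1.0 + 4.0 * q_hat)) / 2.0
    ell_tilde_hat = p_hat * mean                   # from E(ell) = ell_tilde / p
    # solve Var(ell) = (1-p)/p^2 * ell_tilde^2 + sigma_u^2 for sigma_u^2
    sigma2_u_hat = var - (1.0 - p_hat) / p_hat**2 * ell_tilde_hat**2
    return p_hat, ell_tilde_hat, sigma2_u_hat

# Quick check on simulated data (true values: p = 0.8, ell_tilde = 10)
rng = np.random.default_rng(0)
m = rng.geometric(0.8, size=200_000)
u = rng.integers(-1, 2, size=200_000)   # symmetric noise, variance 2/3
p_hat, ell_hat, s2_hat = estimate_parameters(10 * m + u)
print(p_hat, ell_hat, s2_hat)           # p_hat ~ 0.8, ell_hat ~ 10; s2_hat is noisier
```

Note that third-moment estimators converge slowly, so a long series is needed for a stable $\hat p$.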
What have we learned?
Well, consider the interpretation of $p$: it is the probability that $m_i$ takes the value $1$, so in a long series $1-p$ is the proportion of times the visitor has missed one or more visits.
It seems to me that the lower $p$ is (the higher $1-p$ is, i.e. the more often the visitor misses one or more visits in a row), the less probable it is that the visitor is an automaton.
Also, the value of $\sigma^2_u$ should be expected to be lower for an automated visitor (assuming that it is less moody and idiosyncratic than a human being).
As a composite measure, the coefficient of variation standardizes the variability with respect to the mean value
$$\text{CV}(\ell_i) = \frac {\sqrt {\text{Var}(\ell_i)}}{E(\ell_i)} = \sqrt {(1-p)+\left(p/\tilde \ell\right)^2\sigma^2_u}$$
and we expect low values for it, for automated visitors.
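A sample version of this composite measure is simply the sample standard deviation over the sample mean of the gaps; a trivial sketch:

```python
import numpy as np

def coefficient_of_variation(lengths):
    """Sample CV of the inter-visit gaps: std / mean. Lower values
    point towards a more regular, hence possibly automated, visitor."""
    ell = np.asarray(lengths, dtype=float)
    return ell.std() / ell.mean()

print(coefficient_of_variation([5, 5, 5, 5]))    # 0.0 (perfectly regular)
print(coefficient_of_variation([3, 9, 14, 2]))   # larger: irregular gaps
```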
Of course, since the above will be applied to each series individually, these estimates will be obtained for all visitors, automated or not, so it remains to determine a classification criterion. I'll leave this task to the OP if, that is, he decides to implement this method (for example, if there exist some series that we know with certainty come from automated visitors, we can work with them to obtain benchmark values for the thresholds related to the parameters).
A final note: if one wants to assume a specific distribution for $u_i$, one could also apply maximum likelihood, by obtaining the probability mass function of $\ell_i$, through convolution for discrete random variables.