The sign test statistic is binomially distributed, so your R code works and your intuition was right. The test statistic is the number of pairs for which one outcome (say bicycling) was greater than the other (say walking). So if you adopt the convention of counting the pairs where bicycling > walking, the test statistic is 25.
Because the binomial distribution is symmetric about the mean $np$ when $p=0.5$, it doesn't matter which way you pick the sign: the $p$-value will be the same.
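For completeness, here is a minimal sketch of how the test statistic would be computed from the raw paired data; walking and bicycling are hypothetical vector names, and pairs with tied outcomes are conventionally dropped before the sign test:

diffs <- bicycling - walking   # hypothetical vectors of paired outcomes, one element per subject
diffs <- diffs[diffs != 0]     # drop tied pairs, as is conventional for the sign test
x_bike <- sum(diffs > 0)       # pairs where bicycling > walking (25 here)
n <- length(diffs)             # number of untied pairs (55 here)

Plugging either count (30 or 25 out of 55) into binom.test gives the same p-value: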
> binom.test(x=30, n=55, p=0.5, alternative="two.sided")
Exact binomial test
data: 30 and 55
number of successes = 30, number of trials = 55, p-value = 0.5901
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.4055449 0.6802993
sample estimates:
probability of success
0.5454545
Compare to:
> binom.test(x=25, n=55, p=0.5, alternative="two.sided")
Exact binomial test
data: 25 and 55
number of successes = 25, number of trials = 55, p-value = 0.5901
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.3197007 0.5944551
sample estimates:
probability of success
0.4545455
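You can check directly what does and does not change between the two calls; a quick sketch (as.numeric strips the conf.level attribute from the conf.int component so the comparison is clean):

p30 <- binom.test(30, 55, 0.5)$p.value
p25 <- binom.test(25, 55, 0.5)$p.value
all.equal(p30, p25)                                  # TRUE: same p-value either way
ci30 <- as.numeric(binom.test(30, 55, 0.5)$conf.int)
ci25 <- as.numeric(binom.test(25, 55, 0.5)$conf.int)
all.equal(ci30, rev(1 - ci25))                       # TRUE: intervals are mirror images around 0.5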
The p-value is identical either way; the sample estimate and the confidence interval do change, but they are just mirror images of each other around 0.5. And if you look at ?binom.test you will find the curious admonition:
Confidence intervals are obtained by a procedure first given in Clopper and Pearson (1934). This guarantees that the confidence level is at least conf.level, but in general does not give the shortest-length confidence intervals.
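For the curious, the Clopper-Pearson limits have a standard closed form in terms of beta quantiles; this sketch should reproduce the interval reported above for x = 30 (the edge cases x = 0 and x = n need special handling, which is omitted here):

x <- 30; n <- 55; alpha <- 0.05
c(lower = qbeta(alpha / 2, x, n - x + 1),      # ~0.4055, lower Clopper-Pearson limit
  upper = qbeta(1 - alpha / 2, x + 1, n - x))  # ~0.6803, upper Clopper-Pearson limit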