Bayes' theorem tells us:
$$
p(n \mid k, \pi) = \frac{p(k \mid n, \pi) p(n \mid \pi)}{\sum_{m=k}^\infty p(k \mid m, \pi) p(m \mid \pi)}
$$
We know $p(k \mid n, \pi)$: that's the Binomial distribution. However, we don't know the form of the prior, $p(n \mid \pi)$, and every choice of prior for $n$ gives a different answer, so the question as posed is underspecified. You would get one answer if $n$ were Poisson distributed, another if it were Negative Binomial, another still if it came from a second Binomial distribution (and different answers again for different parameter values). But regardless, if you knew the prior you would simply evaluate the expression above.
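To make that concrete, here is a minimal numerical sketch. The function name `posterior_n`, the truncation point `n_max`, and the parameter values are my own illustrative choices: it approximates the posterior over $n$ by truncating the infinite sum in the evidence, and shows that two different priors yield two different posterior means for the same $k$ and $\pi$.

```python
import numpy as np
from scipy.stats import binom, poisson

def posterior_n(k, pi, prior_pmf, n_max=200):
    """Posterior p(n | k, pi) under an arbitrary prior on n.

    prior_pmf: callable returning (possibly unnormalised) prior weights
    for an array of integers n. n_max truncates the infinite sum in the
    evidence, which is fine when the prior's tail beyond n_max is negligible.
    """
    ns = np.arange(k, n_max + 1)                  # n >= k always
    unnorm = binom.pmf(k, ns, pi) * prior_pmf(ns)
    return ns, unnorm / unnorm.sum()

k, pi = 3, 0.4
ns, post_pois = posterior_n(k, pi, lambda n: poisson.pmf(n, 10))  # Poisson(10) prior
_,  post_geom = posterior_n(k, pi, lambda n: 0.9 ** n)            # geometric-type prior
print(ns @ post_pois, ns @ post_geom)  # different priors, different posterior means
```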
As to your comment that we have $p(\pi \mid k, n)$ in the form of the Beta distribution: that has the same issue as $n$. We need a prior over $\pi$ to obtain a posterior distribution for $\pi$. For example, a uniform prior gives the posterior $\text{Beta}(k+1, n-k+1)$, which has expected value $(k+1)/(n+2)$, whereas a frequentist would estimate $\pi$ as $k/n$ (the maximum-likelihood estimate). Neither is "wrong"; it just depends on your assumptions.
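For instance, with the illustrative values $k = 3$ and $n = 10$ the two estimates already differ; a quick check with `scipy.stats`:

```python
from scipy.stats import beta

k, n = 3, 10
post = beta(k + 1, n - k + 1)      # posterior for pi under a uniform prior
print(post.mean())                 # Bayesian posterior mean: (k+1)/(n+2) = 1/3
print(k / n)                       # frequentist (maximum-likelihood) estimate: 0.3
```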
$k$ is different from $n$ and $\pi$ in that knowing the other two quantities determines the distribution of $k$ exactly: $k \mid n, \pi \sim \text{Binomial}(n, \pi)$. The same is not true of either $n$ or $\pi$.
If you want to try an example with an assumed prior for $n$, suppose $n$ is Poisson distributed with mean $\lambda$. Keep in mind this is only an example; the point above still stands: there is no definite answer to your question, because it depends on the prior distribution of $n$.
If $n$ is distributed as $\text{Poisson}(\lambda)$ then the evidence function (the denominator) is,
$$
\begin{split}
p(k) &= \sum_{m=k}^\infty {m \choose k} \pi^k (1-\pi)^{m-k} \frac{\lambda^m e^{-\lambda}}{m!} & \\
&= \frac{ \left( \pi \lambda \right)^k } {k!} e^{- \lambda} \sum_{m=k}^\infty \frac{ \left[ \lambda (1-\pi) \right]^{m-k} } {(m-k)!} \\
&= \frac{ \left( \pi \lambda \right)^k } {k!} e^{-\lambda} e^{\lambda (1-\pi)}\\
&= \frac{ \left( \pi \lambda \right)^k } {k!} e^{-\lambda \pi}.
\end{split}
$$
It's interesting to note that the evidence, $p(k)$, is itself a Poisson distribution with mean $\lambda \pi$. This is the thinning property of the Poisson distribution: if the number of trials is Poisson($\lambda$) and each trial independently succeeds with probability $\pi$, the number of successes is Poisson($\lambda \pi$).
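If you want to verify the algebra numerically, here is a quick sanity check (the cutoff of 500 and the values of $\lambda$, $\pi$, $k$ are arbitrary choices; the cutoff just needs to be large enough that the Poisson tail beyond it is negligible):

```python
import numpy as np
from scipy.stats import binom, poisson

lam, pi, k = 10.0, 0.4, 3
ms = np.arange(k, 500)  # truncate the infinite sum in the evidence
evidence = np.sum(binom.pmf(k, ms, pi) * poisson.pmf(ms, lam))
print(evidence, poisson.pmf(k, lam * pi))  # the two agree (~0.1954)
```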
The numerator is simply the summand in the first line of the derivation above, with $n$ in place of $m$. Dividing it by the evidence, we get,
$$
p(n \mid k, \pi) = \frac{ \left[ \lambda (1-\pi) \right]^{n-k} } {(n-k)!} e^{-\lambda (1-\pi)}.
$$
Thus, a posteriori, $n - k$ (the number of failures) is Poisson distributed with mean $\lambda (1 - \pi).$
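A Monte Carlo check of this result, using the same arbitrary values $\lambda = 10$, $\pi = 0.4$, $k = 3$ as before: simulate the generative process, condition on the observed $k$, and confirm that $n - k$ has mean and variance $\lambda(1 - \pi) = 6$, as a Poisson distribution must.

```python
import numpy as np
rng = np.random.default_rng(0)

lam, pi, k = 10.0, 0.4, 3
n = rng.poisson(lam, size=1_000_000)   # n ~ Poisson(lam)
k_obs = rng.binomial(n, pi)            # k | n ~ Binomial(n, pi)
n_post = n[k_obs == k]                 # keep only draws where we observed k
excess = n_post - k
print(excess.mean(), excess.var())     # both ~ lam * (1 - pi) = 6
```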