You cannot get exactly the same result without implementing an optimization for the "MinChisq" estimate of the mean of your Poisson, $\hat{\lambda}$.
So below is an example using the "ML" option to estimate $\hat{\lambda}$, and you still get a chi-squared test at the end. This is convenient because the ML estimate of $\lambda$ is simply the sample mean:
library(vcd)   # goodfit() lives in the vcd package
set.seed(111)
x.poi <- rpois(n = 200, lambda = 2.5)
gf <- goodfit(x.poi, type = "poisson", method = "ML")
summary(gf)
Goodness-of-fit test for poisson distribution

                      X^2 df  P(> X^2)
Likelihood Ratio 6.874807  7 0.4420304
gf$par
$lambda
[1] 2.545
mean(x.poi)
[1] 2.545
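(For completeness, the fact that the ML estimate equals the sample mean follows directly from maximizing the Poisson log-likelihood:

$$\ell(\lambda)=\sum_{i=1}^{n}\left(x_i\log\lambda-\lambda-\log x_i!\right),\qquad \frac{d\ell}{d\lambda}=\frac{\sum_i x_i}{\lambda}-n=0\;\Rightarrow\;\hat{\lambda}=\bar{x}.$$

)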
write.csv(x.poi,"x.poi.csv")
We calculate the expected counts using the sample mean as $\hat{\lambda}$:
import pandas as pd
import numpy as np
import scipy.stats   # 'import scipy' alone does not reliably load the stats subpackage
x_poi = pd.read_csv("x.poi.csv")['x']
obs = np.bincount(x_poi)   # observed counts for 0, 1, ..., max(x_poi)
Lambda = x_poi.mean()      # ML estimate of lambda
expected = scipy.stats.poisson.pmf(np.arange(len(obs)), Lambda) * len(x_poi)
You need to use the G-test (the likelihood-ratio statistic), not the Pearson chi-square, so in Python it will be scipy's power_divergence with lambda_="log-likelihood"; ddof=1 because one parameter ($\lambda$) was estimated from the data:
scipy.stats.power_divergence(obs, expected, ddof=1, lambda_="log-likelihood")
Power_divergenceResult(statistic=6.874807063434596, pvalue=0.44203040359775747)
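If you want to skip the CSV round-trip, here is a self-contained sketch of the same procedure entirely in Python. The seed and the tail-folding step are my additions, not part of the R comparison above: recent SciPy versions require the observed and expected totals to agree, and the truncated pmf sums to slightly less than $n$, so the sketch folds $P(X > \max(x))$ into the last cell. It also computes the Pearson statistic on the same cells for contrast:

```python
import numpy as np
from scipy import stats

# Draw a fresh Poisson sample (seed chosen arbitrarily for this sketch).
rng = np.random.default_rng(111)
x = rng.poisson(lam=2.5, size=200)

obs = np.bincount(x)            # observed counts for 0, 1, ..., max(x)
lam = x.mean()                  # ML estimate of lambda
k = np.arange(len(obs))
expected = stats.poisson.pmf(k, lam) * len(x)

# Fold the upper-tail probability P(X > max(x)) into the last cell so
# the expected counts sum to n (recent SciPy enforces matching totals).
expected[-1] += stats.poisson.sf(k[-1], lam) * len(x)

# G-test (likelihood ratio); ddof=1 because lambda was estimated.
g = stats.power_divergence(obs, expected, ddof=1, lambda_="log-likelihood")

# For contrast: the Pearson chi-square on the same cells.
pearson = stats.power_divergence(obs, expected, ddof=1, lambda_="pearson")

print(g.statistic, g.pvalue)
print(pearson.statistic, pearson.pvalue)
```

The two statistics are asymptotically equivalent but numerically different in finite samples, which is why matching goodfit's "Likelihood Ratio" line requires the log-likelihood variant.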