That is not how it works: the usual logistic-regression inference is no longer valid once you incorporate the weights. You need to estimate the variance of the IPTW estimator itself, and that variance is inversely related to the propensity score, so large weights also lead to large variance estimates and thus larger p-values.
(Also, with IPTW, every weight is at least one, since each weight is the inverse of a probability.)
Here is an ultra-mini lesson on IPW estimators. Suppose you observe the data structure $(X,A,Y)$, where $X$ is a vector of covariates, $A$ is a binary treatment, and $Y$ is some outcome. Let $\pi_0(x) := P(A=1|X=x)$ be the propensity score. Suppose we are interested in estimating the treatment-specific mean parameter $\Psi := E_XE[Y|A=1,X]$. Consider the identity
$$E_XE[Y|A=1,X] = E_X \left[\frac{E[Y|A,X]1(A=1)}{\pi_0(X)}\right] = E_X \left[\frac{Y1(A=1)}{\pi_0(X)}\right],$$
which follows from a conditioning argument: by the tower property, $E[Y1(A=1)|X] = E[Y|A=1,X]\,P(A=1|X)$, and dividing by $\pi_0(X)$ cancels the propensity score. This identity suggests the following IPW estimator of $\Psi$:
$$\hat \Psi_n := \frac{1}{n} \sum_{i=1}^n \frac{Y_i1(A_i=1)}{\pi_0(X_i)}$$
where we (unrealistically) assume that $\pi_0$ is known. Since $\hat \Psi_n$ is just an average of i.i.d. random variables, inference is easy: by the central limit theorem,
$$\sqrt{n}(\hat \Psi_n - \Psi) \rightarrow_d N\left(0, \text{Var}_0\{Y1(A=1)/\pi_0(X)\} \right).$$
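To make this concrete, here is a minimal simulation sketch in Python (the data-generating process and all variable names are illustrative inventions for this answer, not part of the question): it draws $(X,A,Y)$ with a known propensity score, computes $\hat \Psi_n$, and forms a CLT-based 95% confidence interval from the sample variance of the weighted terms.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Illustrative data-generating process with a known propensity score.
x = rng.normal(size=n)
pi0 = 1.0 / (1.0 + np.exp(-x))      # pi_0(X) = P(A=1 | X)
a = rng.binomial(1, pi0)
y = x + rng.normal(size=n)          # E[Y | A=1, X] = X, so Psi = E[X] = 0

# IPW estimator: the average of the terms Y * 1(A=1) / pi_0(X).
terms = y * (a == 1) / pi0
psi_hat = terms.mean()

# CLT-based inference: estimate Var{Y 1(A=1) / pi_0(X)} by the sample variance.
se = terms.std(ddof=1) / np.sqrt(n)
print(f"psi_hat = {psi_hat:.4f}, "
      f"95% CI = ({psi_hat - 1.96 * se:.4f}, {psi_hat + 1.96 * se:.4f})")
```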
Now, note that this variance depends on the weight $w_0 = \frac{1}{\pi_0(X)}$: the smaller the propensity score, the larger the weight, and the larger the variance.
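You can see the weight-variance link directly by degrading the overlap (again a sketch; the steepness parameter $\beta$ is something I made up for illustration): as the propensity model becomes steeper, some $\pi_0(X_i)$ approach zero, the weights blow up, and the Monte Carlo variance of $\hat \Psi_n$ grows.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n, reps = 2_000, 500

for beta in (0.5, 1.5, 3.0):        # steeper beta => poorer overlap, larger weights
    estimates = []
    for _ in range(reps):
        x = rng.normal(size=n)
        pi0 = 1.0 / (1.0 + np.exp(-beta * x))
        a = rng.binomial(1, pi0)
        y = x + rng.normal(size=n)
        estimates.append(np.mean(y * (a == 1) / pi0))
    print(f"beta = {beta}: largest weight in last draw = {1 / pi0.min():.0f}, "
          f"Monte Carlo variance of psi_hat = {np.var(estimates):.5f}")
```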
You are free to replace $\pi_0$ with a logistic regression estimate $\pi_n$, but the resulting plug-in estimator is usually not $\sqrt{n}$-consistent or asymptotically normal without very strong conditions (in essence, a correctly specified propensity model).
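For completeness, the plug-in version would look like this (a sketch using statsmodels, which is my choice here, not the question's; note that the CLT-based standard error from the first sketch is no longer automatically valid once $\pi_0$ is estimated):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=3)
n = 10_000
x = rng.normal(size=n)
pi0 = 1.0 / (1.0 + np.exp(-x))
a = rng.binomial(1, pi0)
y = x + rng.normal(size=n)

# Estimate pi_0 with a logistic regression (correctly specified here).
design = sm.add_constant(x)
pi_hat = sm.Logit(a, design).fit(disp=0).predict(design)

# Plug-in IPW estimator; its asymptotics differ from the known-pi_0 case.
psi_hat = np.mean(y * (a == 1) / pi_hat)
print(f"plug-in IPW estimate: {psi_hat:.4f}")
```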
Note, $E[Y|A=1]$ and $E_XE[Y|A=1,X]$ are two very different parameters: the first averages over the covariate distribution among the treated, while the second averages over the covariate distribution of the whole population.
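A toy example (my own, just to make the gap concrete): let $X \sim \text{Bernoulli}(1/2)$, $P(A=1|X=x) = 0.2 + 0.6x$, and $E[Y|A=1,X=x] = x$. Then
$$E_XE[Y|A=1,X] = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 1 = 0.5,$$
while $P(X=1|A=1) = \frac{0.8 \cdot 0.5}{0.8 \cdot 0.5 + 0.2 \cdot 0.5} = 0.8$, so
$$E[Y|A=1] = 0.2 \cdot 0 + 0.8 \cdot 1 = 0.8.$$
The treated are enriched in $X=1$, so naively averaging their outcomes confounds the treatment with the covariate distribution.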