Since you don't specify a particular test or comparison, I am going to talk "loosely" about the general idea that you could compare a component from PCA with the dataset from which it was generated and test for a statistical relationship. This will necessarily be a bit vague, but I don't think it matters too much to my answer. The short answer here is: yes, your intuition is correct --- PCA is a statistical fitting procedure, so it involves an underlying optimisation taken with respect to the observed data. The standard caveats apply when taking the outputs of such methods and then testing them for "significance".
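To make the "fitting procedure" point concrete, here is a minimal numpy sketch (on toy data, and using eigendecomposition of the sample covariance, which is one standard way to compute PCA). The first principal component direction is precisely the unit vector that maximises the variance of the projected data, so it is the solution to an optimisation problem over the observed data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # toy dataset (n=200, p=5)
X = X - X.mean(axis=0)                 # centre the columns

# PCA via eigendecomposition of the sample covariance matrix
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # sort largest-first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The first PC direction maximises the variance of the projection:
# var(X @ w) over unit vectors w is largest at w = eigvecs[:, 0],
# where it equals the top eigenvalue.
w = eigvecs[:, 0]
best_var = np.var(X @ w, ddof=1)

# No random unit direction can beat it (Rayleigh quotient bound)
for _ in range(1000):
    v = rng.normal(size=5)
    v /= np.linalg.norm(v)
    assert np.var(X @ v, ddof=1) <= best_var + 1e-9
```

The point of the sketch is just that the component you get out is already "tuned" to the data you put in, which is what creates the testing problem discussed below.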
Because PCA is a fitting procedure, any use of the components rests on an underlying optimisation step. Moreover, PCA orders the principal components by their explained-variance contribution (largest to smallest), which means that the variance contributions of the components are order statistics. As with any application of hypothesis tests to ordered quantities, a proper test should take account of the "optimisation" implicit in this ordering. If you were to take the first principal component (the one with the largest explained-variance contribution) but treat it in a test as if it were just a random component (e.g., perform a hypothesis test using the distribution of a random component's variance as the null) then you would get a biased test, in a manner analogous to p-hacking.
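You can see the order-statistic effect directly by simulation. In the sketch below (pure-noise data, so there is no real structure at all), every direction is exchangeable under the null, yet the explained-variance share of the *first* component is systematically above the 1/p share a "typical" component would get --- simply because it is the maximum of p ordered values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 100, 10, 2000

# Under pure-noise data all directions are statistically "equal",
# yet the largest eigenvalue (an order statistic) always sits above
# the average eigenvalue.
top_share = np.empty(reps)
for i in range(reps):
    X = rng.normal(size=(n, p))
    X -= X.mean(axis=0)
    eig = np.linalg.eigvalsh(X.T @ X / (n - 1))
    top_share[i] = eig.max() / eig.sum()   # explained-variance share of PC1

# A naive null for a "random" component would centre on 1/p = 0.10;
# the maximised share exceeds that in every replication.
```

So a test that compares the first component's share against a null built for an arbitrary (unordered) component will over-state the evidence, even when the data are nothing but noise.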
So yes, you are essentially correct. If a researcher applies PCA to a dataset and uses this to identify a "principal" component (especially the one with the highest explained-variance contribution), they will automatically tend to pick a component that is highly predictive of the overall data, since it is the result of a fitting procedure on that data. If this were then analysed or presented as if it were a component selected a priori, that would bias the analysis in favour of confirming a statistical relationship between the component and the data it was selected from. Instead, the researcher should adjust the p-value for any testing they do to take account of the fact that they "maximised" over the explained variance of the components. (The latter is quite complicated, but it can be done.)
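One simple way to build a valid test --- in the spirit of permutation tests and "parallel analysis" --- is to recompute the *maximum* eigenvalue under a permutation null, so that the reference distribution incorporates the same maximisation you performed on the real data. A hedged sketch on toy data (the factor structure and all parameter choices here are illustrative, not a recommendation for your dataset):

```python
import numpy as np

rng = np.random.default_rng(2)

def top_eigenvalue(X):
    """Largest eigenvalue of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)
    return np.linalg.eigvalsh(Xc.T @ Xc / (len(Xc) - 1)).max()

# Toy data with one genuine common factor plus noise
n, p = 150, 8
factor = rng.normal(size=(n, 1))
X = factor @ rng.normal(size=(1, p)) + rng.normal(size=(n, p))

obs = top_eigenvalue(X)

# Permutation null: shuffle each column independently, which destroys
# cross-column correlation while keeping the marginals. Crucially, we
# recompute the *maximum* eigenvalue in each permuted dataset, so the
# null distribution reflects the same maximisation as the observed value.
B = 500
null = np.empty(B)
for b in range(B):
    Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    null[b] = top_eigenvalue(Xp)

p_value = (1 + np.sum(null >= obs)) / (B + 1)
```

Because the null distribution is built from maximised eigenvalues, the resulting p-value already accounts for the fact that you picked the best component, which is exactly the adjustment described above.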