First, let's start with the $p$-value, because it seems you have a misunderstanding of $p$-values. They are not the "probability of observing this individual observation, or a more extreme one, from the distribution". Quoting Wikipedia:
> $p$-value is a function of the observed sample results (a statistic)
> that is used for testing a statistical hypothesis. Before performing
> the test a threshold value is chosen, called the significance level of
> the test, traditionally 5% or 1% and denoted as $\alpha$. If the
> $p$-value is equal to or smaller than the significance level
> ($\alpha$), it suggests that the observed data are inconsistent with
> the assumption that the null hypothesis is true, and thus that
> hypothesis must be rejected and the alternative hypothesis is accepted
> as true. When the $p$-value is calculated correctly, such a test is
> guaranteed to control the Type I error rate to be no greater than
> $\alpha$.

> An equivalent interpretation is that $p$-value is the probability of
> obtaining the observed sample results, or "more extreme" results, when
> the null hypothesis is actually true (where "more extreme" is
> dependent on the way the hypothesis is tested).
To learn more about $p$-values, check this thread.
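To make the quoted definition concrete, here is a minimal sketch (the numbers are hypothetical): suppose you flip a coin 12 times, observe 9 heads, and test the null hypothesis that the coin is fair against the one-sided alternative that it favors heads. The $p$-value is the probability, under the null, of seeing 9 *or more* heads:

```r
# Hypothetical example: 9 heads in 12 flips, H0: fair coin
heads <- 9
flips <- 12

# P(X >= 9) where X ~ Binomial(12, 0.5) under the null
p_value <- sum(dbinom(heads:flips, size = flips, prob = 0.5))
p_value  # ~0.073, so we would not reject H0 at alpha = 0.05

# built-in equivalent
binom.test(heads, flips, p = 0.5, alternative = "greater")$p.value
```

Note that the probability is computed over the *null distribution*, not over the observed data themselves.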
As for checking the probability of observing a certain value $x$ in your empirical distribution (i.e. your data): for this you simply count how many times $x$ occurred in your data.
data <- c(1,2,3,4,2,1,4,3,1,2,6,7,8,8,1,2)
x <- 5
mean(data == x) # empirical P(data = x);  here 0, since 5 never occurs
mean(data >= x) # empirical P(data >= x); here 4/16 = 0.25
However, let me say it again: this is not a $p$-value.
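The empirical proportion above only becomes a $p$-value once you compare an observed statistic to a distribution generated *under a null hypothesis*. A simulation-based sketch, using a hypothetical null model (Poisson with mean 3) purely for illustration:

```r
set.seed(42)  # reproducibility
data <- c(1,2,3,4,2,1,4,3,1,2,6,7,8,8,1,2)
obs <- mean(data)  # observed test statistic (the sample mean)

# simulate the statistic's distribution under a hypothetical null:
# each observation drawn from Poisson(lambda = 3)
null_stats <- replicate(10000, mean(rpois(length(data), lambda = 3)))

# one-sided simulation p-value: how often the null produces a mean
# at least as extreme as the one we observed
mean(null_stats >= obs)
```

The counting step looks the same as before, but now it counts over draws from the null model rather than over the raw data, which is exactly what separates a $p$-value from an empirical frequency.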