You may regard the empirical sample distribution as your best estimate of the true population distribution. To sample according to that distribution, simply sample from the dataset itself: np.random.choice() with its default parameters (discrete uniform distribution, with replacement) will randomly pick one of the 200 observed values, and voilà, that is a random value drawn according to the observed distribution.
This idea is used in a number of statistical methods, collectively known as bootstrapping.
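A minimal sketch of this resampling approach, using the same small example array as in the KDE code further below as a stand-in for your 200 values:

import numpy as np

data = np.array([0, 0, 0, .05, .05, .05, .05, .05, .05, .05, .05,
                 .1, .2, .3, .4, .5, .6, 1])

# Default behavior of np.random.choice: uniform over the given
# values, with replacement.
resampled = np.random.choice(data, size=400)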
To generate genuinely new examples instead, you will have to make some assumptions and model the distribution accordingly; the results will then of course depend on the chosen model and its hyperparameters.
For example, you could use the kernel density estimate (KDE) that you plotted. I don't know how to extract the fitted KDE from the seaborn function, so I would use scikit-learn instead, whose KernelDensity even has a convenient sample() method.
Note that the KDE is not bound by the range of the original data values. To restrict the results to the interval from 0 to 1, you can simply discard the sampled values that fall outside it.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.neighbors import KernelDensity

data = np.array([0, 0, 0, .05, .05, .05, .05, .05, .05, .05, .05,
                 .1, .2, .3, .4, .5, .6, 1])

# Plot the observed sample with seaborn's KDE overlay for comparison.
sns.histplot(data, kde=True, color='blue', alpha=.3, stat='density')

# scikit-learn expects a 2D array of shape (n_samples, n_features).
X = data.reshape(-1, 1)
kde = KernelDensity(bandwidth=.1).fit(X)

# Draw new samples from the fitted density and keep only those in [0, 1].
# (Boolean indexing also flattens the (400, 1) array returned by sample().)
data_new = kde.sample(400)
data_new = data_new[(0 <= data_new) & (data_new <= 1)]

plt.hist(data_new, color='red', alpha=.3, density=True);
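One caveat with the rejection step above: after discarding out-of-range values, you end up with fewer than the 400 requested samples. If you need an exact count, a simple workaround is to keep drawing until enough in-range values have accumulated; sample_in_range below is a hypothetical helper sketching this idea, not part of scikit-learn:

def sample_in_range(kde, n, low=0., high=1.):
    # Hypothetical helper: oversample, reject out-of-range values,
    # and repeat until at least n samples remain, then truncate.
    out = np.empty(0)
    while out.size < n:
        draw = kde.sample(2 * n).ravel()
        out = np.concatenate([out, draw[(low <= draw) & (draw <= high)]])
    return out[:n]

data_new = sample_in_range(kde, 400)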
