I have two distributions, A and B, each consisting of samples in the range 1.0 to 10.0. These distributions are NOT simple parametric functions, like a Gaussian; they are merely empirical counts.
Essentially, I want to build a model of the probability that any given number came from A. This is easy to imagine with histograms: create 1.0-wide bins, count the A samples in each bin, Bin(A), and the B samples in each bin, Bin(B), and then build a new histogram over the same range whose height in each bin is the proportion Bin(A) / (Bin(A) + Bin(B)).
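For concreteness, the histogram version I have in mind would look something like this (the array names `a` and `b` and the uniform placeholder data are just for illustration):

```python
import numpy as np

a = np.random.uniform(1.0, 10.0, 500)   # placeholder samples from A
b = np.random.uniform(1.0, 10.0, 800)   # placeholder samples from B

bins = np.arange(1.0, 11.0, 1.0)        # 1.0-wide bin edges over [1.0, 10.0]
count_a, _ = np.histogram(a, bins=bins)
count_b, _ = np.histogram(b, bins=bins)

# Per-bin estimate of P(A | x in bin): Bin(A) / (Bin(A) + Bin(B))
# (a bin empty in both samples would divide by zero)
p_a = count_a / (count_a + count_b)
```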
My question is how to do this using continuous random distributions. Either Python or R is fine. I feel as though I am missing or failing to understand something critical about this problem, because while it seems trivial to me, I can find little information on how to solve it in either of those languages, both of which I am fairly experienced with.
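To show what I mean, here is a rough sketch of what I imagine the continuous version might look like, substituting kernel density estimates (`scipy.stats.gaussian_kde`) for the histogram counts. Whether KDE is even the right tool here is part of what I am unsure about; note that each density has to be weighted by its sample size to mirror the raw bin counts above:

```python
import numpy as np
from scipy.stats import gaussian_kde

a = np.random.uniform(1.0, 10.0, 500)   # placeholder samples from A
b = np.random.uniform(1.0, 10.0, 800)   # placeholder samples from B

kde_a = gaussian_kde(a)
kde_b = gaussian_kde(b)

def p_a(x):
    """Estimate P(A | x) as n_A*f_A(x) / (n_A*f_A(x) + n_B*f_B(x)),
    the continuous analogue of Bin(A) / (Bin(A) + Bin(B))."""
    fa = len(a) * kde_a(x)
    fb = len(b) * kde_b(x)
    return fa / (fa + fb)

xs = np.linspace(1.0, 10.0, 200)
print(p_a(xs)[:5])
```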