I am going to reverse-engineer this from experience with discrimination cases. I can definitely establish where the values of "one in 741," etc., came from. However, so much information was lost in translation that the rest of my reconstruction relies on having seen how people do statistics in courtroom settings. I can only guess at some of the details.
Since the time anti-discrimination employment laws were passed in the 1960s (Title VII of the Civil Rights Act of 1964), the courts in the United States have learned to look at p-values and compare them to thresholds of $0.05$ and $0.01$. They have also learned to look at standardized effects, typically referred to as "standard deviations," and compare them to a threshold of "two to three standard deviations." In order to establish a prima facie case for a discrimination suit, plaintiffs typically attempt a statistical calculation showing a "disparate impact" that exceeds these thresholds. If such a calculation cannot be supported, the case usually cannot advance.
Statistical experts for plaintiffs often attempt to phrase their results in these familiar terms. Some of the experts conduct a statistical test in which the null hypothesis expresses "no adverse impact," assuming employment decisions were purely random and ungoverned by any other characteristics of the employees. (Whether it is a one-tailed or two-tailed alternative may depend on the expert and the circumstances.) They then convert the p-value of this test into a number of "standard deviations" by referring it to the standard Normal distribution--even when the standard Normal is irrelevant to the original test. In this roundabout way they hope to communicate their conclusions clearly to the judge.
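This conversion is easy to illustrate. Using a hypothetical p-value (the numbers here are made up for illustration), R's `qnorm` maps a lower-tail probability to the corresponding point on the standard Normal scale, and its magnitude is the "number of standard deviations":

```r
# Convert a (one-tailed) p-value into a "number of standard deviations"
# by referring it to the standard Normal distribution.
p <- 1/1280            # a hypothetical p-value
z <- qnorm(p)          # the standard Normal quantile for this lower-tail area
abs(z)                 # about 3.16 "standard deviations"
```

Note that this referral to the standard Normal is purely a change of scale for communication; it does not depend on the original test having anything to do with a Normal distribution.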
The favored test for data that can be summarized in contingency tables is Fisher's Exact Test. The occurrence of "Exact" in its name is particularly pleasing to the plaintiffs, because it connotes a statistical determination that has been made without error (whatever that might be!).
Here, then, is my (speculative) reconstruction of the Department of Labor's calculations.
1. They ran Fisher's Exact Test, or something like it (such as a $\chi^2$ test with a p-value determined via randomization). This test assumes a hypergeometric distribution as described in Matthew Gunn's answer. (For the small numbers of people involved in this complaint, the hypergeometric distribution is not well approximated by a Normal distribution.)

2. They converted its p-value to a normal Z score ("number of standard deviations").

3. They rounded the Z score to the nearest integer: "exceeds three standard deviations," "exceeds five standard deviations," and "exceeds six standard deviations." (Because some of these Z scores were rounded up to a greater number of standard deviations, I cannot justify the "exceeds"; all I can do is quote it.)

4. In the complaint these integral Z scores were converted back to p-values! Again the standard Normal distribution was used.

5. These p-values are described (arguably in a misleading way) as "the likelihood that this result occurred according to chance."
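The hypergeometric basis of Fisher's Exact Test mentioned above can be verified directly in R. With a small, made-up $2\times 2$ table (the counts here are purely illustrative), the one-tailed p-value from `fisher.test` coincides with a hypergeometric tail probability from `phyper`:

```r
# A hypothetical pool: 10 Asian and 10 non-Asian applicants, 5 hired,
# of whom only 1 was Asian.
x <- matrix(c(1, 4, 9, 6), nrow = 2,
            dimnames = list(Race = c("Asian", "non-Asian"),
                            Status = c("Hired", "Not hired")))
p.fisher <- fisher.test(x, alternative = "less")$p.value
# The same probability from the hypergeometric distribution:
# P(at most 1 Asian among 5 draws from 10 Asians and 10 non-Asians).
p.hyper <- phyper(1, m = 10, n = 10, k = 5)
all.equal(p.fisher, p.hyper)   # TRUE
```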
To support this speculation, note that the p-values for Fisher's Exact Test in the three instances are approximately $1/1280$, $1/565000$, and $1/58000000$. These are based on assuming pools of $730$, $1160$, and $130$ corresponding to "more than" $730$, $1160$, and $130$, respectively. These numbers have normal Z scores of $-3.16$, $-4.64$, and $-5.52$, respectively, which when rounded are three, five, and six standard deviations, exactly the numbers appearing in the complaint. They correspond to (one-tailed) normal p-values of $1/741$, $1/3500000$, and $1/1000000000$: precisely the values cited in the complaint.
Here is some R code used to perform these calculations.
f <- function(total, percent.asian, hired.asian, hired.non.asian) {
  # Reconstruct the 2 x 2 table of hiring outcomes by race and
  # return the p-value of Fisher's Exact Test.
  asian <- round(percent.asian / 100 * total)
  non.asian <- total - asian
  x <- matrix(c(asian - hired.asian, non.asian - hired.non.asian,
                hired.asian, hired.non.asian),
              nrow = 2,
              dimnames = list(Race = c("Asian", "non-Asian"),
                              Status = c("Not hired", "Hired")))
  fisher.test(x)$p.value
}
# Convert each p-value to a Z score, round it to the nearest integer, and
# express the rounded value as "one in ..." on the standard Normal scale.
1/pnorm(round(qnorm(f(730, 77, 1, 6))))      # one in 741
1/pnorm(round(qnorm(f(1160, 85, 11, 14))))   # one in 3.5 million
1/pnorm(round(qnorm(f(130, 73, 4, 17))))     # one in 1 billion