You can use an F-test of $H_0: \sigma_1^2=\sigma_2^2$ against $H_a: \sigma_1^2\ne\sigma_2^2$. Sometimes this is written in terms of ratios of variances:
$H_0: \sigma_1^2/\sigma_2^2 = 1$ against $H_a: \sigma_1^2/\sigma_2^2 \ne 1.$
However, you will find that three observations in each group (method) are usually not
enough for useful testing---unless the population variances are hugely different.
For example, according to the procedure var.test in R, there is no statistically
significant difference between the variances of your two samples.
x1 = c(80,80,79); x2 = c(90,91,93)
var.test(x1, x2)
F test to compare two variances
data: x1 and x2
F = 0.14286, num df = 2, denom df = 2, p-value = 0.25
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
0.003663004 5.571428571
sample estimates:
ratio of variances
0.1428571
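As a check on what var.test is doing, the F statistic is just the ratio of the two sample variances, and the two-sided p-value doubles the smaller tail probability of the $F$ distribution with $n_1-1$ and $n_2-1$ degrees of freedom. A minimal hand computation (the names F.stat and p.val are my own):

```r
# Reproduce var.test's F statistic and two-sided p-value by hand.
x1 <- c(80, 80, 79); x2 <- c(90, 91, 93)
F.stat <- var(x1) / var(x2)              # ratio of sample variances
df1 <- length(x1) - 1; df2 <- length(x2) - 1
p.val <- 2 * min(pf(F.stat, df1, df2), 1 - pf(F.stat, df1, df2))
F.stat; p.val                            # 0.1428571, then 0.25
```

These agree with the F = 0.14286 and p-value = 0.25 in the var.test output above.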
If there is a 4:1 ratio of population standard deviations (that's a 16:1 ratio
of variances), then the power of this F-test (its ability to reject $H_0,$ indicating a
significant difference) with only three observations in each group is below $0.3 = 30\%.$ (Such F-tests are notorious for their poor power.)
set.seed(2020)
pv = replicate(10^5, var.test(rnorm(3,0,4), rnorm(3,0,1))$p.val)
mean(pv <= 0.05)
[1] 0.28955
Ten observations in each group would give power above 95% for detecting such
a large difference between population variances. (There are online 'power and sample size' procedures for this test, and many statistical programs also provide them.)
set.seed(915)
pv = replicate(10^5, var.test(rnorm(10,0,4), rnorm(10,0,1))$p.val)
mean(pv <= 0.05)
[1] 0.97468
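Instead of simulating, the power can be computed exactly from the $F$ distribution: if the true ratio is $\rho = \sigma_1^2/\sigma_2^2,$ then under $H_a$ the statistic $(S_1^2/S_2^2)/\rho \sim F(n_1-1,\, n_2-1).$ A sketch of that calculation (the function name f.power is my own):

```r
# Exact power of the two-sided F-test at level alpha when the true
# variance ratio is rho = sigma1^2/sigma2^2. Under H_a, the F statistic
# divided by rho has an F(n1-1, n2-1) distribution, so we find the
# probability it lands in the rejection region.
f.power <- function(n1, n2, rho, alpha = 0.05) {
  df1 <- n1 - 1; df2 <- n2 - 1
  lo <- qf(alpha/2, df1, df2)          # lower critical value
  hi <- qf(1 - alpha/2, df1, df2)      # upper critical value
  pf(lo/rho, df1, df2) + 1 - pf(hi/rho, df1, df2)
}
f.power(3, 3, 16)    # about 0.29, agreeing with the first simulation
f.power(10, 10, 16)  # about 0.97, agreeing with the second
```

This confirms the simulated values 0.28955 and 0.97468 above to within simulation error.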