Are you looking for a way to test whether there is a difference in the mean readings between the two devices, considering all variables at once? If so, I'd recommend using a simple multivariate test called the Multivariate Paired Hotelling's $T^2$-test. You can see how this test works here: https://online.stat.psu.edu/stat505/lesson/7/7.1/7.1.8.
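As a minimal sketch (with made-up data, purely for illustration): the paired Hotelling's $T^2$-test is equivalent to a one-sample Hotelling's $T^2$-test on the within-subject differences, which the `rrcov` package (used again below) can carry out:

```r
#Install the package if you don't have it already installed
#install.packages("rrcov")
library(rrcov)

#Made-up data: 10 subjects, each measured on 3 variables by each device
set.seed(1)
device1 <- matrix(rnorm(30), ncol = 3)
device2 <- matrix(rnorm(30, mean = 0.5), ncol = 3)

#Paired differences per subject, then a one-sample T^2 test of H0: mean difference = 0
diffs <- device1 - device2
res <- T2.test(diffs, mu = c(0, 0, 0))
print(res)
```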
Alternatively, assuming your data meets the necessary assumptions, you could simply analyze this data using a simple linear mixed effects model where each subject is treated as a random effect and the device is a fixed effect (this is essentially a special case of a mixed effects ANOVA, so your intuition was pointing you in the right direction). Additional information about this approach can be found here: https://web.stanford.edu/class/psych252/section/Mixed_models_tutorial.html
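A minimal sketch of the mixed-model approach, assuming your data are in long format with hypothetical columns `reading`, `device`, and `subject` (the `lme4` package is one common choice for fitting such models):

```r
#install.packages("lme4")
library(lme4)

#Made-up example data: 10 subjects, each measured once by each device
set.seed(2)
dat <- data.frame(
  subject = factor(rep(1:10, each = 2)),
  device  = factor(rep(c("A", "B"), times = 10)),
  reading = rnorm(20)
)

#Random intercept for subject, fixed effect for device
fit <- lmer(reading ~ device + (1 | subject), data = dat)
summary(fit)  #the device coefficient estimates the mean difference between devices
```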
Another approach that might be appropriate in this case is to fit a Generalized Estimating Equations model, which is similar to a linear mixed effects model, but the interpretation is slightly different. You can find more information about this approach here: https://online.stat.psu.edu/stat504/node/180/. See my previous answer in this question to understand the difference between this method and the linear mixed effects model: Conditional vs. Marginal models
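A corresponding GEE sketch, using the `geepack` package and the same hypothetical long-format layout (`reading`, `device`, `subject`); note the device effect here has a marginal (population-averaged) interpretation:

```r
#install.packages("geepack")
library(geepack)

#Made-up example data: 10 subjects, each measured once by each device
set.seed(3)
dat <- data.frame(
  subject = rep(1:10, each = 2),
  device  = factor(rep(c("A", "B"), times = 10)),
  reading = rnorm(20)
)

#Exchangeable working correlation within subject
fit_gee <- geeglm(reading ~ device, id = subject, data = dat,
                  corstr = "exchangeable")
summary(fit_gee)
```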
Best of luck to you!
Update Based on Additional Information from Comments
Thanks for providing some clarifying details in the comments. Based on what you've now told me, I think your best bet is to simply carry out a Two-Sample Hotelling's $T^2$-Test, which compares the mean vectors of your $x$, $y$, and $z$ variables between the two datasets. If we denote the mean vectors of dataset1 and dataset2 by $\bar{X}_1$ and $\bar{X}_2$ respectively, the test assesses whether they differ. In R, this can simply be done with the following:
#Install the package if you don't have it already installed
#install.packages("rrcov")
library(rrcov)
x1<-c(-0.364761, -0.879730, 2.001495, 0.450623, -2.164352)
y1<-c(8.793503, 9.768784, 11.109070, 12.651642, 13.928436)
z1<-c(1.055084, 1.016998, 2.619156, 0.184555, -4.422485)
x2<-c(7.091625, 4.972757, 3.253720, 2.801216, 3.770868)
y2<-c(-0.591667, -0.158317, -0.191835, -0.155922, -1.051354)
z2<-c(8.195502, 6.696732, 6.107758, 5.997625, 7.731027)
#Create the two datasets
X_1<-data.frame(x1,y1, z1)
X_2<-data.frame(x2,y2,z2)
#Carry out Hotelling's T^2 test
T2.test(x=X_1, y=X_2)
Which returns the results:
Two-sample Hotelling test
data: X_1 and X_2
T2 = 305.627, F = 76.407, df1 = 3, df2 = 6, p-value = 3.596e-05
alternative hypothesis: true difference in mean vectors is not equal to (0,0,0)
sample estimates:
x1 y1 z1
mean x-vector -0.191345 11.250287 0.0906616
mean y-vector 4.378037 -0.429819 6.9457288
Since the $p$-value of this test is quite small (p-value = 3.596e-05), there is sufficient evidence to reject the null hypothesis: \begin{eqnarray*}
H_{0}:\boldsymbol{\mu_{1}} & = & \boldsymbol{\mu_{2}}
\end{eqnarray*}
or equivalently, if we denote the means of $x$, $y$, and $z$ from the $i$th dataset as $\mu_{ix}$, $\mu_{iy}$, and $\mu_{iz}$ respectively, for $i=1,2$, the null hypothesis:
\begin{eqnarray*}
H_{0}:\begin{pmatrix}\mu_{1x}\\
\mu_{1y}\\
\mu_{1z}
\end{pmatrix} & = & \begin{pmatrix}\mu_{2x}\\
\mu_{2y}\\
\mu_{2z}
\end{pmatrix}
\end{eqnarray*}
and conclude, based on these data, that the two measuring devices are measuring differently.
I hope this helps!