I think you're confused by the usage of the word "input". Naturally, in a deep learning context, the input usually means a vector $x$. However, in this passage it is the matrix $\textbf A$ that is referred to as the input.
Think of the matrix $\textbf A$ not as a constant, predetermined matrix, but as a parameter that is estimated, e.g. from training data. So, in a way, it is a random quantity itself: a random matrix. Hopefully, your estimation routine is consistent, so that you can improve its precision by increasing the amount of training data.
Now, what the passage states is that if the matrix property called the "condition number" is very large, then the inverse $\textbf A^{-1}$ is very sensitive to the input $\textbf A$: any random variation (noise) in the estimate $\hat{\textbf A}$ will result in a widely different outcome of the matrix inversion routine, $\hat{\textbf A}^{-1}$. The condition number tells you how much the input noise is amplified in the output of the inversion routine.
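To see this amplification numerically, here is a minimal NumPy sketch (the particular matrix and noise scale are just illustrative assumptions, not from the passage). It perturbs an ill-conditioned $\textbf A$ by a tiny amount and compares the relative change in $\textbf A^{-1}$ to the relative change in $\textbf A$; roughly, the ratio of the two is bounded by the condition number.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ill-conditioned matrix: its columns are nearly linearly dependent.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6]])
print("condition number:", np.linalg.cond(A))  # on the order of 4e6

# A tiny random perturbation, standing in for estimation noise in A_hat.
noise = 1e-8 * rng.standard_normal(A.shape)
A_hat = A + noise

# Relative change in the input vs. relative change in the inverse.
rel_in = np.linalg.norm(noise) / np.linalg.norm(A)
rel_out = (np.linalg.norm(np.linalg.inv(A_hat) - np.linalg.inv(A))
           / np.linalg.norm(np.linalg.inv(A)))
print("relative input noise: ", rel_in)
print("relative output change:", rel_out)
print("amplification factor:  ", rel_out / rel_in)
```

With a well-conditioned matrix (say, the identity) the amplification factor stays near 1; with the nearly singular matrix above, a perturbation of relative size $\sim 10^{-8}$ produces a change in the inverse that is many orders of magnitude larger.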