PCA can be used to scale and rotate data if we keep all of the transformed features instead of a subset of them. My answer to "How to decide between PCA and logistic regression?" gives an example of scaling and rotating the data without dimension reduction.
Your plot shows that if we use all 12 features, the variance explained is 100%, i.e., there is no information loss. But if you select fewer than 12 features, there will be some information loss.
Note that in most cases PCA can reduce the dimension, but at the cost of losing information. If you want to keep 99% of the variance, then unless you have highly correlated (redundant) features, PCA will not be able to help. In other words, your plot shows that there is not much redundancy in your data set.
Here is an example of each case, both with 5 features:
set.seed(0)
# x1: 200 observations of 5 independent features (no redundancy)
x1 <- matrix(rnorm(1000), ncol = 5)
# x2: 3 independent features, plus two near-copies of the 3rd column
# (small additive noise keeps them highly correlated with column 3)
x2 <- matrix(rnorm(600), ncol = 3)
x2 <- cbind(x2, x2[, 3] + runif(200) * 0.01)
x2 <- cbind(x2, x2[, 3] + runif(200) * 0.01)
You may run PCA on x1 and x2 and compare the variance explained as a function of the number of components selected. You will see that for x2, 3 components explain almost all of the variance, because the other two features are highly correlated with the third column of x2.
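A minimal sketch of that comparison, using base R's `prcomp` (the helper name `cum_var` is my own, and the data are regenerated here so the snippet is self-contained; the near-duplicate columns are built with small additive noise):

```r
set.seed(0)
# 5 independent features: no redundancy
x1 <- matrix(rnorm(1000), ncol = 5)
# 3 independent features plus two near-copies of column 3
x2 <- matrix(rnorm(600), ncol = 3)
x2 <- cbind(x2, x2[, 3] + runif(200) * 0.01)
x2 <- cbind(x2, x2[, 3] + runif(200) * 0.01)

# Cumulative proportion of variance explained by the first k components
# (with scale. = TRUE, the component variances sum to ncol(x))
cum_var <- function(x) cumsum(prcomp(x, scale. = TRUE)$sdev^2) / ncol(x)

round(cum_var(x1), 3)  # grows roughly linearly; each PC adds about 1/5
round(cum_var(x2), 3)  # first 3 PCs already explain nearly 100%
```

For x1 you need all 5 components to reach 100%, while for x2 the last two components contribute almost nothing.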