I would argue that your claims, and diagram, are incorrect generalizations and can be misleading.
Definitions and terminology
Based on the terminology and definitions from *Cross Validation and Confidence Interval of the True Error*, we can use an $L_2$ loss function and define the Cross Validation estimator $CV$ as follows:
The data set $D$ of size $n$ is split into $K$ disjoint subsets of equal size $m = n / K$. Let us write $T_k$ for the $k$-th such block and $D_k$ for the training set obtained by removing the elements of $T_k$ from $D$.
The Cross Validation estimator is the average of the errors on the test blocks $T_k$ obtained when training the algorithm $A$ on the corresponding $D_k$:
$$ CV(D) = \frac{1}{K} \sum_{k=1}^K \frac{1}{m} \sum_{z_i \in T_k} L(A(D_k), z_i)$$
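To make the notation concrete, here is a minimal numpy sketch of this estimator for a regression problem with the squared ($L_2$) loss. The names (`cv_estimator`, `ols_fit`) and the choice of ordinary least squares for the algorithm $A$ are illustrative assumptions, not taken from the paper cited above; the sketch also assumes $n$ is divisible by $K$.

```python
import numpy as np

def cv_estimator(D_x, D_y, K, fit, rng=None):
    """K-fold CV estimator with L2 loss, following the definition above.

    D_x, D_y : arrays holding the n inputs / targets of the data set D
    K        : number of folds; assumes n is divisible by K, so m = n / K
    fit      : the learning algorithm A, mapping (X_train, y_train) to a predictor
    """
    n = len(D_y)
    m = n // K
    idx = np.arange(n) if rng is None else rng.permutation(n)
    fold_errors = []
    for k in range(K):
        test_idx = idx[k * m:(k + 1) * m]            # block T_k
        train_idx = np.setdiff1d(idx, test_idx)      # D_k = D without T_k
        predictor = fit(D_x[train_idx], D_y[train_idx])        # A(D_k)
        residuals = D_y[test_idx] - predictor(D_x[test_idx])
        fold_errors.append(np.mean(residuals ** 2))  # (1/m) * sum of L2 losses over T_k
    return np.mean(fold_errors)                      # average over the K folds

# An illustrative algorithm A: ordinary least squares with an intercept
def ols_fit(X, y):
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta
```

For example, `cv_estimator(x, y, K=5, fit=ols_fit, rng=np.random.default_rng(0))` would return the 5-fold estimate for a one-dimensional regression data set `(x, y)`.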
Bias of the CV estimator
The effect of $K$ on the bias of $CV$ depends on the shape of the learning curve:
- If the learning curve has considerable slope at the given training set size, increasing $K$ tends to reduce the bias: each fold trains on a larger data set ($n(1 - 1/K)$ points), so the estimate becomes less pessimistic about the error of the model trained on all $n$ points (see the simulation sketch after this list).
- If the learning curve is flat at the given training set size, increasing $K$ tends not to affect the bias significantly.
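The simulation below is a sketch of this effect under purely illustrative assumptions (OLS on $p = 5$ Gaussian features, $n = 60$, squared loss): it compares the average $K$-fold CV estimate with the average true error of a model trained on all $n$ points, so the printed gap is the (pessimistic) bias for each $K$.

```python
import numpy as np

# Sketch: how the slope of the learning curve drives the bias of K-fold CV.
# All settings (OLS, p = 5, n = 60, Gaussian noise, 200 repetitions) are
# illustrative assumptions, not results from a specific reference.
rng = np.random.default_rng(0)
p, n, n_test, n_reps = 5, 60, 100_000, 200
beta_true = rng.normal(size=p)

def make_data(n_samples):
    X = rng.normal(size=(n_samples, p))
    return X, X @ beta_true + rng.normal(size=n_samples)

def ols_fit(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Xnew: Xnew @ coef

def cv_estimate(X, y, K):
    folds = np.array_split(rng.permutation(len(y)), K)
    errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        pred = ols_fit(X[train_idx], y[train_idx])
        errs.append(np.mean((y[test_idx] - pred(X[test_idx])) ** 2))
    return np.mean(errs)

X_test, y_test = make_data(n_test)            # large hold-out set to approximate the true error
for K in (2, 5, 10, n):                       # K = n is leave-one-out
    cv_vals, true_vals = [], []
    for _ in range(n_reps):                   # average over repeated data sets
        X, y = make_data(n)
        cv_vals.append(cv_estimate(X, y, K))
        pred = ols_fit(X, y)                  # model trained on the full n points
        true_vals.append(np.mean((y_test - pred(X_test)) ** 2))
    # CV trains on roughly n(1 - 1/K) points, so its estimate is pessimistic;
    # the gap shrinks as K grows and the fold training sets approach size n.
    print(f"K={K:>2}  bias of CV ~ {np.mean(cv_vals) - np.mean(true_vals):.4f}")
```

If you rerun the sketch with a much larger $n$ (a flatter learning curve for this model), the gaps should become small for every $K$, which is the second case above.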
Variance of the CV estimator
The impact of $K$ on the variance of the CV estimator is even more subtle, as several different, and sometimes opposing, effects come into play.
- If cross-validation were averaging independent estimates: then with leave-one-out CV one should see relatively low variance between models, since we only shift one data point across folds and the training sets therefore overlap substantially.
- This is not true when training sets are highly correlated: correlation may increase with $K$, and this increase is responsible for the overall increase of variance in the second scenario (the experiment with outliers described below).
- In case of an unstable algorithm, leave-one-out CV may be blind to instabilities that exist but are not triggered by changing a single point in the training data, which makes the estimate highly sensitive to the particular realization of the training set.

(Figure) Left: an experiment with no outliers, where the variance falls with $K$. Right: an experiment with outliers, where the variance increases with $K$.
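A sketch of this kind of experiment is given below, under purely illustrative assumptions (OLS on one feature, $n = 40$, Student-$t$ noise to create occasional outliers, 500 repeated data sets). It is not the exact setup behind the plots, so the numbers will differ, but it shows how the variance of the CV estimate across data sets can be measured as a function of $K$.

```python
import numpy as np

# Sketch of the experiment described above: variance of the CV estimate across
# repeated data sets, as a function of K, with and without outliers. The setup
# (OLS on one feature, n = 40, heavy-tailed noise for the outlier case) is an
# illustrative assumption, not the exact configuration behind the original plots.
rng = np.random.default_rng(1)
n, n_reps = 40, 500

def ols_fit(x, y):
    X1 = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda xn: np.column_stack([np.ones(len(xn)), xn]) @ coef

def cv_estimate(x, y, K):
    folds = np.array_split(rng.permutation(n), K)
    errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        pred = ols_fit(x[train_idx], y[train_idx])
        errs.append(np.mean((y[test_idx] - pred(x[test_idx])) ** 2))
    return np.mean(errs)

def make_data(outliers):
    x = rng.normal(size=n)
    # Student-t noise with 2 degrees of freedom produces occasional large outliers
    noise = rng.standard_t(df=2, size=n) if outliers else rng.normal(size=n)
    return x, 2.0 * x + noise

for outliers in (False, True):
    print("outliers:", outliers)
    for K in (2, 5, 10, 20, n):               # K = n is leave-one-out
        cv_vals = [cv_estimate(*make_data(outliers), K) for _ in range(n_reps)]
        print(f"  K={K:>2}  var(CV) across data sets = {np.var(cv_vals):.3f}")
```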