I'm following a computer vision course and I have this exercise: write a program that, given a hand image, recognizes whether the hand is open, closed, in a punch, or in an "ok" posture, using only the techniques covered so far (4/8-connected pixels, connected regions, contour finding, hole finding, blob properties such as centroid, area, perimeter, eccentricity, and image moments, image transformations such as invert/power/log/gamma correction/contrast stretching, and histogram computation and equalization).
I have done it with some basic blob properties: a closed hand has low eccentricity, "ok" has a hole, and an open hand shows a large difference between the area of the ellipse inscribed in the blob and the blob area itself, together with low eccentricity. It seems to work, but the first test image is a bit problematic.
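
To make the question concrete, here is a simplified sketch of my current decision rules, written with OpenCV. The thresholds are ad hoc, the function name `classify_hand` is mine, I approximate the inscribed ellipse with `cv2.fitEllipse`, and the punch is just the fallback case:

```python
import cv2
import numpy as np

def classify_hand(binary):
    # binary: uint8 image, hand = white blob on black background
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    # largest top-level contour = the hand blob
    outer = [i for i in range(len(contours)) if hierarchy[0][i][3] == -1]
    blob_idx = max(outer, key=lambda i: cv2.contourArea(contours[i]))
    blob = contours[blob_idx]
    blob_area = cv2.contourArea(blob)

    # rule 1: a non-trivial hole inside the blob -> "ok"
    has_hole = any(hierarchy[0][i][3] == blob_idx
                   and cv2.contourArea(contours[i]) > 0.01 * blob_area
                   for i in range(len(contours)))
    if has_hole:
        return "ok"

    # eccentricity from the second-order central moments
    m = cv2.moments(blob)
    mu20, mu02, mu11 = m["mu20"], m["mu02"], m["mu11"]
    d = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + d) / 2   # major-axis eigenvalue
    lam2 = (mu20 + mu02 - d) / 2   # minor-axis eigenvalue
    ecc = np.sqrt(1 - lam2 / lam1)

    # rule 2: area mismatch between the fitted ellipse and the blob
    # (spread fingers make the ellipse overshoot the blob a lot)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(blob)
    ellipse_area = np.pi * (d1 / 2) * (d2 / 2)
    mismatch = abs(ellipse_area - blob_area) / blob_area

    if ecc < 0.85 and mismatch > 0.30:
        return "open"
    if ecc < 0.85:
        return "closed"
    return "punch"
```
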
I think there could be something more that would make the algorithm more robust. Maybe some kind of moment property? Could the blob's axes, orientation, or extreme points help?
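
For reference, this is how I understand those properties would be computed from what the course has covered: orientation from the second-order central moments, extreme points straight off the contour. A minimal sketch (function names are mine):

```python
import cv2
import numpy as np

def blob_orientation(contour):
    # angle of the major (principal) axis from second-order central moments:
    # theta = 0.5 * atan2(2*mu11, mu20 - mu02)
    m = cv2.moments(contour)
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return np.degrees(theta)

def extreme_points(contour):
    # leftmost/rightmost/topmost/bottommost contour points; on an open hand
    # these tend to land on fingertips or the wrist
    pts = contour.reshape(-1, 2)
    return (pts[pts[:, 0].argmin()], pts[pts[:, 0].argmax()],
            pts[pts[:, 1].argmin()], pts[pts[:, 1].argmax()])
```
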
PS test images:
[hand posture test images attached]