This is my first post and I am a beginner.
I am working on a dance recognition project, for which I collected skeletal data from one dancer performing 5 different gestures. My goal is to detect any pre-trained gesture that occurs during a long performance, so that I can instantly trigger the corresponding visual effect.
My training+validation data consists of 5 labels (gestures), performed 20 times per gesture: 100 trials in total. Each trial is composed of 25 columns/features (e.g. angles between bones, distances between joints, etc.), and each column is around 30 time samples long (each trial lasts around 1 second, and the Kinect provides data at 30 Hz). I am using the Continuous HMM module of GRT (Gesture Recognition Toolkit).
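To make the layout concrete, here is how I would represent the data in NumPy (the array names and random placeholder values are my own illustration, not anything exported by GRT):

```python
import numpy as np

# Stand-in for my dataset: 100 trials, each a 30-sample x 25-feature
# matrix of skeletal measurements, plus one gesture label per trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30, 25))   # (trials, time samples, features)
y = np.repeat(np.arange(5), 20)          # 5 gestures x 20 trials each

print(X.shape, y.shape)  # (100, 30, 25) (100,)
```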
My goal is to reduce dimensionality by discarding the less significant features. I managed to do a correlation analysis, but I would love to learn how to extract feature importances, in case of a much more complex future scenario.
Every example I have found is instance-based: each row is an instance/observation/sample that has a single value per feature and a class label. In my case, each observation (trial, as I called it above) consists of one class label and 25 time series of about 30 samples each.
Is there a similar example, or can anyone recommend an approach that would let me apply methods such as RFE, RandomForest, or XGBoost to retrieve feature importances?
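To make the question concrete, here is the kind of workaround I am imagining but unsure about: collapse each 30-sample series into a few summary statistics per feature, so every trial becomes one fixed-length row, then read the importances off a RandomForest and sum them back per original feature. This is only a sketch with random placeholder data standing in for my trials; the statistics chosen (mean, std, min, max) are my own guess, not from any example I found:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data in the shape of my dataset: (trials, time samples, features).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30, 25))
y = np.repeat(np.arange(5), 20)  # 5 gestures x 20 trials

# Summarize each 30-sample series into scalar statistics, collapsing
# the time axis so each trial becomes a single row.
stats = [X.mean(axis=1), X.std(axis=1), X.min(axis=1), X.max(axis=1)]
X_flat = np.concatenate(stats, axis=1)  # shape (100, 25 * 4)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_flat, y)

# Columns are ordered [mean_0..mean_24, std_0..std_24, ...], so summing
# the importances of the 4 statistics per feature recovers a per-feature score.
per_feature = clf.feature_importances_.reshape(len(stats), 25).sum(axis=0)
ranking = np.argsort(per_feature)[::-1]
print(ranking[:5])  # indices of the seemingly most important features
```

Would this kind of flattening be a sound basis for feature selection, or does it throw away too much of the temporal structure that the HMM relies on?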