I'm fairly new to machine learning, so please pardon me if I mess up terms. I'm working on a wearables project that involves hand gesture detection, and I'm looking for pointers on algorithms that might be applicable.
Basically there will be sensors that pick up tendon movement at the wrist to provide input, so for simplicity I'm assuming that means 5 features (one per finger), each with values ranging from, say, 0-100.
There are also 5 classes that the gesture could be classified as. The user would do each type of gesture several times to provide training data, and then for each gesture class I'd need an algorithm to determine the best model for it based on that training data.
After initial training, whenever the user does a gesture, the system would feed the sensor readings into each model to see which one is the closest match. Additionally, since this is a wearable, anything that optimizes for low memory and low processing power would be best.
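To make the scheme concrete, here's a rough sketch of the kind of thing I'm imagining: a nearest-centroid classifier, where each gesture's "model" is just the mean of its training samples, and classification picks the closest centroid. I believe this is about the cheapest option memory- and compute-wise (one vector per class). The gesture names and sensor values below are made up for illustration:

```python
import numpy as np

def train(samples_by_gesture):
    """Build one centroid ("model") per gesture class.

    samples_by_gesture: dict mapping gesture name -> list of feature vectors
    (here, 5 sensor readings per sample).
    """
    return {gesture: np.mean(np.asarray(samples, dtype=float), axis=0)
            for gesture, samples in samples_by_gesture.items()}

def classify(models, reading):
    """Return the gesture whose centroid is closest (Euclidean) to the reading."""
    reading = np.asarray(reading, dtype=float)
    return min(models, key=lambda g: np.linalg.norm(models[g] - reading))

# Made-up training data: two samples per gesture, 5 sensor values each (0-100).
training = {
    "fist":  [[90, 85, 88, 92, 87], [88, 90, 85, 89, 91]],
    "point": [[10, 95, 12, 8, 11],  [12, 92, 10, 9, 14]],
    "open":  [[5, 8, 6, 7, 5],      [7, 6, 9, 5, 8]],
}
models = train(training)
print(classify(models, [11, 93, 9, 10, 12]))  # → point
```

At inference this is just one distance computation per class, which seems like it should fit a microcontroller budget; a natural upgrade from here would be k-nearest-neighbors or a small Gaussian model per class if the centroids turn out not to separate the gestures well.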
If anyone has any input on either the type of models or the algorithm with which to generate them that I should look into I'd appreciate it!
Edit: Here is some sample data from 4 sensors during 3 different gestures. These are basically just pressure readings from sensors lying along tendons on the wrist.




