Comments
Drew
Summary
This paper focused primarily on determining whether hand postures can be used to identify the objects a user interacts with. A secondary goal was to measure how much hand postures for the same interaction vary across different users. The authors used the CyberGlove device, sampled at 10 readings per second. Eight users participated in the experiment, each performing 12 interactions 5 times. For each interaction, the values of each of the 22 sensors were averaged over time to form the input to the classifier, and a 1-nearest-neighbor (1-NN) algorithm was chosen as the classifier. In the user-independent system, they performed leave-one-out cross-validation across all users, training on seven users and testing on the remaining one. The average accuracy was 62.5%, ranging from 41.7% to 81.7%. In the user-dependent system, they trained and tested the classifier on data from a single user: they first chose one random example of each interaction to train the classifier and ran it on the remaining four examples, then repeated the process with two, three, and four training examples. Average accuracy rose from 78.9% with one training example to 94.2% with four. They concluded that the user-dependent system was better for recognizing user interactions in a natural, unconstrained manner.
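To make the pipeline concrete, here is a minimal sketch of the approach as I read it. The data layout, variable names, and the use of Euclidean distance for the nearest-neighbor step are my assumptions (the paper does not publish code); it is an illustration of averaged-sensor features with 1-NN under the two evaluation protocols, not the authors' implementation.

import numpy as np

# Assumed (hypothetical) data layout:
# raw:    (n_examples, n_readings, 22) CyberGlove streams at 10 readings/s
# labels: (n_examples,) interaction id (12 classes)
# users:  (n_examples,) id of the user who produced the example

def extract_features(raw):
    # Average each of the 22 sensors over an interaction's readings,
    # yielding one 22-dimensional posture vector per example.
    return raw.mean(axis=1)

def predict_1nn(train_X, train_y, query):
    # 1-NN: return the label of the closest training vector
    # (Euclidean distance assumed here).
    dists = np.linalg.norm(train_X - query, axis=1)
    return train_y[np.argmin(dists)]

def user_independent_accuracy(X, y, users):
    # Leave-one-user-out: classify each user's examples against
    # the pooled examples of all other users.
    accs = []
    for u in np.unique(users):
        test, train = users == u, users != u
        preds = [predict_1nn(X[train], y[train], q) for q in X[test]]
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

def user_dependent_accuracy(X, y, n_train, rng):
    # Single-user protocol: pick n_train random examples per
    # interaction for training, test on that user's remaining examples.
    train_idx = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        train_idx.extend(rng.choice(idx, n_train, replace=False))
    train = np.isin(np.arange(len(y)), train_idx)
    preds = [predict_1nn(X[train], y[train], q) for q in X[~train]]
    return float(np.mean(preds == y[~train]))

Running user_dependent_accuracy on one user's data with n_train swept from 1 to 4 (and averaging over users and random splits) would mirror the paper's 78.9%-to-94.2% progression, while user_independent_accuracy mirrors the 62.5%-average cross-user result.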
--------------------------------------------------------
Commentary
I think this paper achieves its goals of showing that hand postures can identify an interaction and of measuring how much postures for the same interaction vary across users. However, since the experiment used the CyberGlove, I don't see how this could be useful in practice; I doubt office workers would agree to wearing a glove throughout the workday. For most practical purposes (like the security applications mentioned in the related work), a vision-based system seems more useful.
Brandon Paulson and Tracy Hammond. Office Activity Recognition using Hand Posture Cues. The British Computer Society, 2007.