Tuesday, February 23, 2010

Computer Vision Based Gesture Recognition for an Augmented Reality Interface

Comments
Franck Norman
Drew

Summary
In this paper, the authors create a vision-based gesture interface for an Augmented Reality system. It can recognize a 3D pointing gesture, a click gesture, and five static gestures.

To define the gestures, they use only a closed fist and various numbers of outstretched fingers. They asked the users to perform the gestures in the same plane, which reduces the recognition problem to a 2D one. The recognition method relies on a pre-segmented image of the hand, using a color pixel-based approach to account for the hand's varying size and shape from image to image. After segmenting the hand pixels from the image, the next step is to detect how many fingers are outstretched, which it does by measuring the smallest and largest radii from the center of the palm at which non-finger pixels occur. Pointing is then recognized when only one finger is detected, and the click gesture is performed with the thumb.
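The write-up above glosses over how the finger count is actually obtained, so here is a minimal Python sketch of one plausible radial test over the segmented hand mask. The function name, the fixed sampling radius, and the 360-point circle are my own assumptions rather than details taken from the paper.

import numpy as np

def count_fingers(hand_mask, palm_center, radius):
    """Count outstretched fingers in a binary hand mask (1 = hand pixel).

    A rough sketch, not the authors' exact algorithm: sample points on a
    circle around the palm center (with a radius between the palm and the
    fingertips) and count the connected runs of hand pixels that the circle
    crosses; each run is taken to be one outstretched finger.
    """
    cy, cx = palm_center
    h, w = hand_mask.shape
    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

    # Sample the mask along the circle, clamping to the image borders.
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, h - 1)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, w - 1)
    samples = hand_mask[ys, xs] > 0

    # Count transitions from "not hand" to "hand"; each one starts a finger run.
    transitions = np.logical_and(samples, ~np.roll(samples, 1))
    return int(transitions.sum())

# Hypothetical usage: the mask, palm center, and radius would come from the
# color-based segmentation step described above.
# n = count_fingers(mask, palm_center=(120, 160), radius=55)
# gesture = "point" if n == 1 else "static gesture"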

The user study was done with several users, but the paper only says that the users adapted quickly.

------------------------------------
This paper wasn't that good because it didn't go into much detail about the results. We don't know what they meant by "several" users, and they didn't give any quantitative analysis of the results or mention what kind of tests the users went through. Overall, it was a good idea, but the results were lacking.

Monday, February 22, 2010

EyePoint: Practical Pointing and Selection Using Gaze and Keyboard

Comments
Drew
Manoj

Summary
The authors of this paper seek to create an alternative to mouse-and-keyboard interaction, using eye gaze tracking technology in place of the mouse.

They did an inquiry into how able-bodied users use the mouse and found several common uses:
  1. clicking on links on a webpage
  2. launching applications from the desktop or start menu
  3. navigating through folders
  4. minimizing, maximizing, and closing applications
  5. moving windows
  6. positioning the cursor when editing text
  7. opening context-sensitive menus
  8. hovering over buttons/regions to activate tooltips
They determined that any good gaze-based pointing technique must have single click, double click, right click, mouse-over, and click-and-drag capabilities.

EyePoint uses a two-step progressive refinement process stitched together in a "look-press-look-release" action. It requires a one-time calibration. The user looks at the desired point on the screen and presses a hotkey for the desired action. EyePoint then zooms in, and the user looks at the target again and releases the hotkey.
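To make the look-press-look-release flow concrete, here is a rough Python sketch of the control logic. Every API name in it (get_gaze_point, show_magnified_view, and so on) is a hypothetical stand-in, not EyePoint's actual implementation, and the zoom factor is just an assumed value.

ZOOM = 4  # assumed magnification factor for the second "look"

class TwoStepRefinement:
    def __init__(self, tracker, screen):
        self.tracker = tracker   # hypothetical gaze tracker interface
        self.screen = screen     # hypothetical screen/overlay interface
        self.action = None
        self.coarse = None

    def on_hotkey_press(self, action):
        # First look: grab the coarse gaze estimate and zoom in around it.
        self.action = action
        self.coarse = self.tracker.get_gaze_point()
        self.screen.show_magnified_view(center=self.coarse, zoom=ZOOM)

    def on_hotkey_release(self):
        # Second look: gaze inside the magnified view refines the target,
        # which is mapped back to screen coordinates and acted on.
        refined = self.tracker.get_gaze_point()
        target = self.screen.magnified_to_screen(refined, center=self.coarse, zoom=ZOOM)
        self.screen.hide_magnified_view()
        self.screen.perform(self.action, at=target)  # click, right click, drag...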

They tested EyePoint on 20 subjects who were experienced computer users. Six of them needed vision correction, either glasses or contact lenses. They evaluated three variants of EyePoint: with focus points, with a gaze marker, and without focus points. The first study was to click on a hyperlink highlighted in orange on a webpage. Next, the test subjects had to click on a red balloon, which moved every time it was clicked. Lastly, they had to click a target and then type a word.

In the first study, users took about 340 ms longer with EyePoint than with the mouse (1,915 ms vs. 1,576 ms), and EyePoint had an error rate of 13% versus the mouse's 3%. In the second study, EyePoint was about 100 ms slower than the mouse. In the last study, EyePoint was faster than the mouse. Overall, they determined that while EyePoint might match the mouse's performance, it was more prone to errors. Users were also split on whether the mouse or EyePoint was easier to use or faster.

-----------------
I think this is a good technique for people who have trouble using the mouse due to a physical injury (e.g., a broken hand). EyePoint also seems to use much more accurate eye tracking software than the other eye tracking papers we've read. I also like the tie-in with Fitts's law and the book Emotional Design, which I have read before.

However, one area of improvement would be to make the program more usable for people wearing glasses, especially the narrow frames that caused problems.

Sunday, February 21, 2010

Motion Editing with Data Glove

Commented on:
Drew Logsdon
Franck Norman

---------------------------------------
Summary

In this paper, the authors develop a new method to edit captured motion data using a data glove. They use the human fingers to simulate the motion of the legs of a human figure. In the case of walking, the animator would map the motion of the fingers to walking; after this, running could be generated by moving the fingers faster.

They used the P5 glove, which can detect the position and orientation of the wrist. The procedure for editing motion data is divided into a capturing stage and a reproduction stage. In the capturing stage, they generate the mapping function that defines the relationship between the motion of the fingers and the captured body motion. The algorithm gathers parameters such as the cycle of the motion, the minimum and maximum output values, the duration of the motion, and the range. After this is completed, there is a reproduction stage, where the animator performs a new motion with the hand; it must be similar to, but different from, the original motion.
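Here is a small Python sketch of the two stages, under my own simplifying assumption that the mapping is a per-channel linear rescaling from the finger's observed range to the joint's observed range; the paper's real mapping function also uses the motion cycle and duration, which I leave out.

import numpy as np

def capture_stage(finger_signal, joint_signal):
    """Record the parameters relating one finger channel to one body joint."""
    return {
        "finger_min": finger_signal.min(), "finger_max": finger_signal.max(),
        "joint_min": joint_signal.min(),   "joint_max": joint_signal.max(),
    }

def reproduction_stage(new_finger_signal, params):
    """Map a new (similar but different) finger motion onto the joint."""
    f_range = params["finger_max"] - params["finger_min"]
    j_range = params["joint_max"] - params["joint_min"]
    t = (new_finger_signal - params["finger_min"]) / f_range  # normalize to 0..1
    return params["joint_min"] + t * j_range                  # rescale to the joint

# Hypothetical usage: moving the fingers faster than during capture produces a
# faster joint motion, which is how a captured walk could be turned into a run.
# params = capture_stage(recorded_finger_flexion, recorded_hip_angle)
# running_hip = reproduction_stage(fast_finger_flexion, params)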

Since the body can move in more directions than the fingers, they must determine the proper animated motion from the finger motions. They fixed the matching in advance. For example, when a person walks, the right leg moves forward while the left arm swings back. They matched this motion by setting the middle finger to the left shoulder and the index finger to the right shoulder.
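As a toy illustration of fixing the matching in advance, the correspondence could be stored as a simple table that binds each finger channel to one joint, with a sign so that paired limbs swing in opposite directions. The concrete assignments and signs below are my assumptions, not the paper's full table.

# Hypothetical fixed correspondence; only the two pairings mentioned above.
FINGER_TO_JOINT = {
    "index":  ("right_shoulder", +1.0),
    "middle": ("left_shoulder",  -1.0),  # opposite sign: swings the other way
}

def drive_skeleton(finger_angles, skeleton):
    """Copy each finger angle onto its pre-assigned body joint."""
    for finger, (joint, sign) in FINGER_TO_JOINT.items():
        skeleton[joint] = sign * finger_angles[finger]
    return skeleton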

They tested this by having an animator go through the process. First, the normal walking motion was mapped. Then they tried a hopping motion and a walk along a zigzag path, which resulted in unnatural motion. They found many limitations of the mapping function, especially the requirement that the new motion must be similar to the old one.

---------------------------------------------------------------
I think it is a good preliminary study into the possibility of motion editing. However, I think the project would have benefited from a better glove. Also, the mapping function relies too much on the finger movements. There are a lot of factors involved in our walking motion, so they need to account for this in the simulation. A lot of these factors cannot be measured from finger motions alone.

Friday, February 12, 2010

Lab Days

Here are my thoughts on the lab days.

CyberTouch gloves
I liked the gloves because they were much easier to use than the head-mounted goggles and the eye tracker. They were rather comfortable, although the vibrating would get tiresome after a while. I followed up on Franck's code to try to have a button detect whether the hand was open or closed. When we clicked the button, the code updated the data received from the sensors for the index finger; we determined that was good enough for a preliminary attempt. We ran the code a few times to determine the dividing value that separates open from closed after adding the two sensor values for the index finger together. Once we had this value, we added a command to start vibrating when the hand is closed and stop when it is open (see the sketch below). If we had more time, we would remove the need to press the button every time to update the sensor values.
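Here is a small Python sketch of what our lab code did. The glove API names (read_index_sensors, set_vibration) are hypothetical stand-ins for the CyberTouch calls we actually used, and the threshold is just the kind of dividing value we tuned by running trials.

OPEN_CLOSE_THRESHOLD = 1.2  # dividing value found by running open/closed trials

def update_on_button_press(glove):
    """Re-read the index finger sensors and vibrate while the hand is closed."""
    proximal, distal = glove.read_index_sensors()  # the two bend sensors
    bend = proximal + distal                       # sum of the two values, as in our code
    closed = bend > OPEN_CLOSE_THRESHOLD
    glove.set_vibration(closed)                    # vibrate when closed, stop when open
    return "closed" if closed else "open"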

head mounted display
This was the first device I actually used. Josh had us do a user study while using it. After turning it on, our first task was a basic and an advanced walking test; I nearly bumped into the whiteboard on the basic test. Next, we had to stack books on a table in a specific order, first while sitting and then while standing up. Then we had to read and write on the board. Finally, we measured how limited our peripheral vision was by noting how close our hand had to be before it came into view.

I thought that using this device was okay, but prolonged usage would give me a headache. It took me a while to get accustomed to the goggles because everything seemed zoomed in. I couldn't see the periphery very well, and I wouldn't want to do any difficult movements while wearing them.

eye tracker
When I used the eye tracker, I just had to figure out how to put it on and calibrate it. I think the tracker had trouble tracking my eyes because of my glasses. Also, I don't think I calibrated it correctly, because the cursor did not "follow" my eyes. I don't know much about the eyes, but the cursor twitched a lot and quickly moved to one edge of the screen. I also couldn't "double click" by blinking very well. I got a headache trying to use this device, so I should probably limit my use of it.