Thursday, January 28, 2010

TIKL: Development of a Wearable Vibrotactile Feedback Suit for Improved Human Motor Learning

Commented on the following blogs
Franck Norman
Drew Logsdon


The authors investigate real-time tactile feedback delivered through a wearable robotic system, used alongside verbal and visual feedback from the teacher. In a nutshell, the teacher and student both wear a robotic feedback suit, and a motion capture system tracks the movement of both. The software computes the deviation between the teacher and the student and gives the student feedback through actuators placed on the skin. Joints that move in error receive feedback proportional to the amount of error; the software uses an equation to determine that amount.
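
To make the idea concrete, here is a rough Python sketch of how per-joint proportional feedback might work. This is just my interpretation: the tolerance and saturation values and the linear scaling are my own assumptions, not the authors' actual equation.

# A rough sketch of proportional vibrotactile feedback (my own
# simplification, not the authors' code). Assumes we get per-joint
# angles for teacher and student from the motion capture system, and
# that actuator intensity scales linearly with the error.

MAX_INTENSITY = 1.0     # full vibration
ERROR_TOLERANCE = 5.0   # degrees of error before feedback kicks in
ERROR_AT_MAX = 45.0     # degrees of error that saturates the actuator

def feedback_intensity(teacher_angle, student_angle):
    """Return vibration intensity in [0, 1] for one joint."""
    error = abs(teacher_angle - student_angle)
    if error <= ERROR_TOLERANCE:
        return 0.0
    # Scale linearly between the tolerance and the saturation point.
    scaled = (error - ERROR_TOLERANCE) / (ERROR_AT_MAX - ERROR_TOLERANCE)
    return min(MAX_INTENSITY, scaled)

# Example: the student's elbow lags the teacher's by 30 degrees.
print(feedback_intensity(teacher_angle=90.0, student_angle=60.0))  # 0.625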

The experiment was run on 40 individuals divided into 2 groups: one that received visual feedback only, and one that received both visual and tactile feedback. Both groups wore the suit to make sure the movements would be about the same. After 10 minutes of calibration, each user was shown a series of still images and told to mimic them. After that, 3-10 second motion videos were shown and the users were told to mimic those as well. To study learning over time, the videos were repeated 6 times. Afterwards, the users were given a questionnaire to fill out.

The questionnaire revealed that most users felt reasonably comfortable in the suit, but the tactile group needed to concentrate more. Over time, the tactile group said their ability to use the feedback improved. Some issues included discomfort with the seating and elbow positioning. Most agreed that this was a good way to teach motor skills. The authors went on to a more mathematical analysis of their results.

----------------------------------

I think this is one of the best papers we've read to date, since they did a good analysis of the experiment, results, and future work. They admit certain pitfalls with their project, such as the setup most likely being too expensive for everyday use. However, they used their results to support the assertion that vibrotactile learning is much better than visual or audio learning alone. They also did a good job of describing the aspects of their experiment, and they split the subjects into a test group and a control group. Overall, I think this is a good start to this line of research. I think it would be useful for reteaching motor skills to people recovering from injuries or who have lost the ability to perform certain motor skills (I'm sure some medical conditions can cause this).

Wednesday, January 27, 2010

3DM: A Three Dimensional Modeler Using a Head-Mounted Display

Blogs I commented on
Franck Norman
Drew Logsdon

The authors' goal was to design a 3D modelling program that uses the same techniques as other programs, but presents them in a more intuitive manner for beginning users. The program uses a head-mounted display, which places the user "in the modelling space." Placing an object in 3D requires 6 parameters (three for position and three for orientation). However, using a 2D mouse and a keyboard makes those spatial relationships unclear.

3DM uses a VPL EyePhone to display the image and trackers to follow the head and hand. The input device was a 6D 2-button mouse from UNC-CH, and image rendering was done by the Pixel-Planes 4 and 5 high-performance graphics engines. The user interface has a cursor and a toolbox. Some icons are tools, which change what the cursor does while selected; others are commands, which perform a single task. Toggles change global settings for 3DM. There is continuous feedback for the user through predictive highlighting.

3DM has multiple methods of creating surfaces: triangle creation tools and the extrusion tool, which draws a polyline (or takes an existing one) and stretches it out. Another tool allows for the creation of standard shapes like boxes, spheres, and cylinders. There are also ways to edit a surface by grasping and moving it, scaling, and cut and paste, and there is undo/redo as well. The grouping feature allows the user to change one copy of an object and have that change propagate to every other copy.
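
Here's a toy Python sketch of what extrusion boils down to as I understand it: sweep a polyline along an offset and triangulate the resulting strip. The function name and the two-triangles-per-segment approach are my own simplification, not 3DM's actual code.

# Toy sketch of the extrusion idea (my simplification, not 3DM's code):
# sweep a polyline along an offset vector and triangulate the resulting
# strip, two triangles per polyline segment.

def extrude(polyline, offset):
    """polyline: list of (x, y, z) points; offset: (dx, dy, dz).
    Returns a list of triangles, each a tuple of three points."""
    ox, oy, oz = offset
    swept = [(x + ox, y + oy, z + oz) for (x, y, z) in polyline]
    triangles = []
    for i in range(len(polyline) - 1):
        a, b = polyline[i], polyline[i + 1]
        a2, b2 = swept[i], swept[i + 1]
        triangles.append((a, b, a2))   # lower triangle of the quad
        triangles.append((b, b2, a2))  # upper triangle of the quad
    return triangles

wall = extrude([(0, 0, 0), (1, 0, 0), (1, 0, 1)], offset=(0, 1, 0))
print(len(wall))  # 4 triangles for 2 segments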

The results show that organic shapes are easily created in 3DM. Users feel in control because they can grasp something to change it. However, there are some weaknesses, such as difficulty keeping two shapes parallel.

------------------

I think this paper was good for something written in 1992. I liked the way they gave the user feedback on their actions. However, their results were lacking the quantitative aspect: they didn't give any statistics from the user study, like who was tested and what they were told to do, only a general analysis of the results. Nevertheless, I liked the research, and I wonder how it would be improved with current technology. I'm also fairly sure this paper was referenced by another paper we read earlier.

Tuesday, January 26, 2010

Wearable EOG Goggles: Eye-Based Interaction in Everyday Environments

Commented on the following blogs
Franck Norman
Drew Logsdon

This paper presented an embedded eye tracker for context-awareness and eye-based human-computer interaction. The authors designed goggles with dry electrodes integrated into the frame and a small microcontroller for signal processing. Eye gaze is a good way to express intention and attention covertly, which makes it a good input mode, though dwelling is still used for confirmation. The authors used EOG (electrooculography) as a substitute for other eye tracking methods because it is easily implemented in a lightweight system.

The goggles were designed to be wearable and lightweight, require low power, provide adaptive real-time signal processing for context-aware interaction, and compensate for EOG signal artefacts. The system detects and ignores blinks, detects eye movements, and maps each movement onto a basic direction. Eye movement is then encoded as a string of these directions, where certain combinations are recognized as gestures.
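
Here's a small Python sketch of how I picture the string-based gesture recognition working. The direction characters and the gesture-to-command table are made up by me, not taken from the paper.

# Sketch of the string-based gesture idea as I understand it (details
# and gesture names are my assumptions, not the paper's): eye movements
# are classified into basic directions, appended to a string, and the
# string is matched against known gesture patterns.

GESTURES = {
    "LRLR": "select",    # hypothetical gesture-to-command mapping
    "UDUD": "cancel",
}

def classify(dx, dy, blink):
    """Map one EOG movement sample to a direction character."""
    if blink:
        return ""                      # blinks are detected and ignored
    if abs(dx) >= abs(dy):
        return "R" if dx > 0 else "L"
    return "U" if dy > 0 else "D"

def recognize(movements):
    """movements: list of (dx, dy, blink) tuples -> command or None."""
    s = "".join(classify(dx, dy, blink) for dx, dy, blink in movements)
    for pattern, command in GESTURES.items():
        if pattern in s:
            return command
    return None

print(recognize([(-1, 0, False), (1, 0, False), (0, 0, True),
                 (-1, 0, False), (1, 0, False)]))  # "select"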

The trial consisted of a computer game with 8 levels where subjects had to perform a specific gesture; high scores were given to those who completed a level quickly. They found that EOG signals can be efficiently processed to recognize eye gestures. However, 30% of the subjects had trouble focusing.

Commentary

I think relative eye tracking is probably much better than exact eye tracking for future mouse-free interaction methods: the user can glance in the general direction of where they want the cursor to go.

The results were relatively lacking; there were no real statistics. 30% was given as the share of people who had trouble concentrating, but how many people were tested? What was the average time for each level? I would like to see where this research goes and whether the system gets much more thorough testing.

HoloSketch: A Virtual Reality Sketching / Animation Tool

Commented on the following blogs
Drew Logsdon
Franck Norman

Summary

HoloSketch is a tool designed for nonprogrammers to create and manipulate images in 3D. It uses a 20-inch stereo CRT with a 112.9 Hz refresh rate, and a new viewing matrix is calculated separately for each eye. There is a 3D mouse with a digitizer rod, which is used to control many of the different functions in HoloSketch. The menu is engaged by pressing and holding the right wand button; the menu then fades in, the user highlights an item by poking it, and releasing the right wand button selects that item. There are many other features in HoloSketch that I will not discuss here.
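
As an aside, that press-hold-release menu interaction is easy to sketch as a little state machine. This is my reconstruction of the behavior described above, not HoloSketch source, and the item names are made up.

# A tiny sketch of the press-hold-release menu interaction (my
# reconstruction, not HoloSketch source). The menu appears while the
# right wand button is held; whatever item the wand is poking when the
# button is released gets selected.

class FadeInMenu:
    def __init__(self, items):
        self.items = items
        self.visible = False
        self.highlighted = None

    def button_down(self):
        self.visible = True            # menu fades in at the wand

    def wand_poke(self, item):
        if self.visible and item in self.items:
            self.highlighted = item    # highlight the poked item

    def button_up(self):
        choice = self.highlighted if self.visible else None
        self.visible, self.highlighted = False, None
        return choice                  # the selection, or None

menu = FadeInMenu(["draw", "erase", "group"])
menu.button_down()
menu.wand_poke("erase")
print(menu.button_up())  # "erase"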

According to the results, it was very easy for first-time users to create complex 3D images. Most users, however, kept their heads stationary, so they didn't look around the object they were creating. They also had a real artist try it for a month. The artist started cold and did not get any documentation, but within a short time she was creating complex objects with ease.

HoloSketch was designed to be a general purpose 3D sketching and simple animation system.

Commentary

This is a very interesting paper, considering it was written in 1995. I did not even think 3D virtual imaging was around back then. I still wonder about some things in the paper. The author stated that the number of CPU instructions that can be executed per graphics primitive rendered is steadily going down, and I don't quite understand what this means. Does it mean the CPU is being used to render graphics, so it can't do anything else? Also, the author did not give numbers in the results, like how many novices were in the trial or the average time to complete a task. And did they give the users instructions on what to create? They never defined "simple creatures."

Thursday, January 21, 2010

Noise Tolerant Selection by Gaze-Controlled Pan and Zoom in 3D

This paper dealt with using eye tracking technology to type without a keyboard. Other methods in the past used dwell time, but that was deemed too slow.

They used StarGazer to do the panning and zooming, with a circular keyboard for the typing. Users can zoom in on specific areas of the keyboard and type using only their eyes.
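
Here's a rough Python sketch of how I imagine the pan-and-zoom selection loop working. The gains, the zoom threshold, and the overall structure are my invented numbers and assumptions, not StarGazer's actual parameters or implementation.

# My rough sketch of a gaze-driven pan-and-zoom loop: the view
# continuously pans toward the gaze point while zooming in, so noisy
# gaze estimates can still converge on the intended key as it grows
# larger on screen.

PAN_GAIN = 0.15        # fraction of the gaze offset corrected per step
ZOOM_RATE = 1.02       # zoom-in factor per step
SELECT_ZOOM = 8.0      # zoom level at which the centered key is typed

def step(center, zoom, gaze):
    """One update: pan the view center toward the gaze point, zoom in."""
    cx, cy = center
    gx, gy = gaze
    cx += PAN_GAIN * (gx - cx)
    cy += PAN_GAIN * (gy - cy)
    return (cx, cy), zoom * ZOOM_RATE

center, zoom = (0.0, 0.0), 1.0
while zoom < SELECT_ZOOM:
    # In the real system each gaze sample comes from the eye tracker;
    # here we pretend the user keeps looking at the key at (3, 2).
    center, zoom = step(center, zoom, gaze=(3.0, 2.0))
print(round(center[0], 2), round(center[1], 2), round(zoom, 2))
# -> 3.0 2.0 8.16: the view has homed in on the intended key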

They ran some tests to see if the system was intuitive and easy for novice users, and to see whether display size and noise would be factors. Noise slowed down words per minute, and smaller display sizes slowed typing as well. Interestingly, accuracy did not drop with noise, which showed that StarGazer is noise tolerant.

Commentary

It would be interesting to see how people can type without using a keyboard. At first I thought this could be like the Macbook Wheel from The Onion a while back, but this seems workable. Of course, more work needs to be done for people who have bad eyesight (but are not blind). Also, it seems prolonged usage could strain the eyes.

Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays

This paper deals with using hand motions to interact with a large screen from a long or short distance (up to touching the screen). The authors identified 5 characteristics of a device to be used with such a display:
1. accuracy
2. acquisition speed
3. pointing and selection speed
4. comfortable use
5. smooth transition between interaction distances

Then they discussed previous work with handheld indirect pointers, laser pointers, and eye, hand, and body tracking devices. They discussed the problems faced by each of these methods, and then implemented their own clicking and pointing techniques.

Clicking
1. AirTap
2. ThumbTrigger

Pointing
1. RayCasting
2. Relative pointing with clutching (see the sketch after this list)
3. Hybrid RayToRelative pointing
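
Of these, relative pointing with clutching is basically a mouse in the air. The Python below is my own illustration of the general idea, not the authors' implementation; the control-display gain is a made-up number.

# A small sketch of relative pointing with clutching (my illustration,
# not the authors' code): hand movement only moves the cursor while the
# clutch is engaged, so the user can "ratchet" across a wall-sized
# display in several strokes.

CD_GAIN = 2.5  # control-display gain: screen units per hand unit

class RelativePointer:
    def __init__(self):
        self.cursor = (0.0, 0.0)
        self.clutched = False
        self.last_hand = None

    def clutch(self, engaged, hand):
        self.clutched = engaged
        self.last_hand = hand if engaged else None

    def move(self, hand):
        if not self.clutched:
            return self.cursor          # declutched: hand moves freely
        dx = hand[0] - self.last_hand[0]
        dy = hand[1] - self.last_hand[1]
        self.last_hand = hand
        x, y = self.cursor
        self.cursor = (x + CD_GAIN * dx, y + CD_GAIN * dy)
        return self.cursor

p = RelativePointer()
p.clutch(True, hand=(0.0, 0.0))
print(p.move((1.0, 0.5)))   # (2.5, 1.25)
p.clutch(False, hand=(1.0, 0.5))
print(p.move((0.0, 0.0)))   # unchanged: (2.5, 1.25)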

They did some tests and determined that RayCasting was poor in accuracy and comfort for the subject. The actual experiment involved selecting various targets in sequence.

Commentary

This research would do well in finding ways to replace the mouse. Since computer screens are getting bigger, a mouse would not be feasible, especially if the screen takes up a whole wall. Obviously there is much more work to be done, like finding a balance between comfort for the user and the accuracy and reliability of the device and software. Also, unless they're using a Mac, they would need a second click (right click). I'm sure the other tracking systems mentioned in the previous work could also help out in this area.

Wednesday, January 20, 2010



Email: shran2009 at gmail dot com
Academic: 1st semester Master of Computer Science

I am from Houston, TX.

I am taking this course because this is one of the areas I wanted to study while I'm in grad school.

In 10 years, I have no idea where I am going to be. I hope the economy will be better so I can get a job and put my 6-7 total years of computer science education to work.

I think the next big improvement in computer science is more integrated touch screen technology (i.e., no more mouse and keyboard).

If I could meet anyone in history, I would talk to my grandfather before he went off to war against the Japanese in the Chinese army.

My favorite movie(s) would include the Star Trek movies and The Hunt for Red October.

An interesting fact: I am a Star Trek fan... although that is probably an understatement.