Microsoft Research, together with UCLA, has come up with a creative solution for distinguishing between multiple users' inputs on large-screen touch displays. A paper titled “Your Phone or Mine? Fusing Body, Touch, and Device Sensing for Multi-User Device-Display Interaction” outlines the new system, dubbed ShakeID, which uses smartphone accelerometers and a Kinect camera to tell users' interactions apart.
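The core idea is that a phone being held moves in lockstep with the hand holding it, so the system can score each phone against each Kinect-tracked hand and pair the phone with the best-matching user. Below is a minimal Python sketch of that matching step, assuming the accelerometer and hand-motion magnitudes are already time-aligned over a shared window; the function names and the Pearson-correlation scoring are illustrative, not taken from the paper.

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equally sampled motion signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_phones_to_users(phone_accel: dict, hand_motion: dict) -> dict:
    """Pair each phone with the Kinect-tracked user whose hand motion
    correlates best with that phone's accelerometer signal.

    phone_accel: phone_id -> 1-D array of accelerometer magnitudes
    hand_motion: user_id  -> 1-D array of hand-acceleration magnitudes,
                 sampled over the same time window as the phone data
    """
    pairing = {}
    for phone_id, accel in phone_accel.items():
        scores = {user: pearson(accel, motion)
                  for user, motion in hand_motion.items()}
        pairing[phone_id] = max(scores, key=scores.get)
    return pairing

# Toy example: user A shakes the phone while user B's hand stays still.
t = np.linspace(0, 2, 60)
shaking = np.abs(np.sin(8 * t))
print(match_phones_to_users(
    {"phone_1": shaking},
    {"user_A": shaking + 0.05 * np.random.rand(60),
     "user_B": 0.02 * np.random.rand(60)}))
# -> {'phone_1': 'user_A'}
```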
The paper states that the technology could be used for touchscreen displays in public areas or private offices, and even in gaming to tell players apart. One of the system's biggest goals is to let users interacting with a touchscreen display push personalized information from the larger screen onto their own smartphone, with the pairing ensuring the data reaches the right device.
During the user study, participants were tasked with drawing shapes from their smartphone displays onto the touchscreen in front of them, and vice versa. ShakeID attributed each touch to the correct user up to 94 percent of the time.
For now, the technology's biggest drawback is that it has trouble identifying a user whose phone-holding hand is stationary, since without motion there is no accelerometer data to match against. Microsoft Research states that the system is “easily learned,” and that once users understand the basic idea of how it works, the stationary problem can be mitigated by simple hand movements, “such as shaking,” to maintain their ID.
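Because the pairing signal disappears when the phone is motionless, a practical implementation would likely gate on motion energy before trusting or refreshing an identity. A hedged sketch, with an illustrative threshold that the paper does not specify:

```python
import numpy as np

MOTION_THRESHOLD = 0.05  # illustrative value; not taken from the paper

def can_maintain_id(accel_window: np.ndarray) -> bool:
    """Return True if the phone's accelerometer window shows real movement.

    A stationary hand produces a near-constant reading (gravity plus
    sensor noise), so correlating it against Kinect hand motion carries
    no information and the pairing cannot be confirmed.
    """
    return float(np.var(accel_window)) > MOTION_THRESHOLD
```

Under a scheme like this, a brief shake of the phone is enough to push the signal back over the threshold and refresh the pairing, which is consistent with the mitigation the researchers describe.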
Currently, due to Kinect SDK limitations, the system can track a maximum of two users. In the future, the researchers hope to track more users, and possibly to read smartphone movement from within a user's pocket.