
Microsoft Research mitigates privacy issues of beaming SurroundWeb into your living room

At the IEEE Symposium on Security and Privacy, Microsoft Research tackles the privacy challenges of using its 3D browser, SurroundWeb, across multiple surfaces in your living room.

Microsoft mitigates SurroundWeb privacy concerns
Credit: Microsoft Research

You may have dual or even triple monitors, but what if you could blend digital experiences with real life by having every flat surface in your living room act as a monitor for a 3D web browser? Even your smartphone and tablet could be "taken over" to interact with this 3D browser; your room would be "scanned" to build a "room skeleton" so that webpages could "spill out" and be beamed onto living room surfaces.

But a scan means letting a system see into the privacy of your living room, and controlling an immersive 3D browser with gestures means the user herself will be seen. A paper tackling those privacy challenges in SurroundWeb, a Microsoft Research project, will be presented today at the IEEE Symposium on Security and Privacy.

If the SurroundWeb concept sounds familiar, that may be because the same researchers worked on Microsoft's proof-of-concept system IllumiRoom; they later released a video showing off SurroundWeb and its least-privilege platform for creating immersive web rooms.

This time, Microsoft researchers David Molnar, Eyal Ofek, Chris Rossbach, Benjamin Livshits, Alexander Moshchuk, Helen J. Wang, and Ran Gal, along with John Vilk of the University of Massachusetts, are presenting "SurroundWeb: Mitigating Privacy Concerns in a 3D Web Browser." According to the abstract:

This paper presents SurroundWeb, the first 3D web browser, which provides the novel functionality of rendering web content onto a room while tackling many of the inherent privacy challenges. Following the principle of least privilege, we propose three abstractions for immersive rendering: 1) the room skeleton lets applications place content in response to the physical dimensions and locations of renderable surfaces in a room; 2) the detection sandbox lets applications declaratively place content near recognized objects in the room without revealing if the object is present; and 3) satellite screens let applications display content across devices registered with SurroundWeb.

The researchers, who built SurroundWeb on top of Internet Explorer, resolve the "tension between privacy and functionality" with the three abstractions introduced in the abstract; they also define the "notions of detection privacy, rendering privacy and interaction privacy as key properties for privacy in immersive applications, and show how SurroundWeb provides these properties."

The researchers said each of these privacy properties represents "an important goal for balancing privacy and functionality in immersive room experiences."

SurroundWeb apps running in living room

The room skeleton is created from a one-time scan of a static room, performed with a depth camera such as Microsoft's Kinect. But when a web app runs, "it receives only the set of screens in the room." Each screen has a resolution and a location relative to the other flat surfaces that can act as screens for displaying content. An app then uses this information to "dynamically adapt how it presents content" on the available surfaces, as sketched below. As in a regular web browser, web pages "are restricted to the interfaces furnished to them through JavaScript, HTML, and CSS."
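The paper describes the room skeleton as an abstraction rather than a published API, but the idea can be pictured with a minimal TypeScript sketch like the one below; the RoomScreen type and the render functions are assumed names for illustration, not SurroundWeb's actual interfaces.

```typescript
// Hypothetical sketch only: the type and function names below are
// illustrative assumptions, not SurroundWeb's actual API.

interface RoomScreen {
  id: string;
  widthPx: number;   // usable resolution of the renderable surface
  heightPx: number;
  offset: { x: number; y: number; z: number }; // position relative to a reference surface
}

// Placeholder render calls standing in for ordinary DOM rendering.
declare function renderMainContent(screenId: string): void;
declare function renderSidePanel(screenId: string): void;

// The page never sees depth data or imagery -- only this list of screens.
function layoutContent(screens: RoomScreen[]): void {
  // Put the primary content on the largest surface...
  const primary = screens.reduce((a, b) =>
    a.widthPx * a.heightPx >= b.widthPx * b.heightPx ? a : b);
  renderMainContent(primary.id);

  // ...and spill secondary panels onto the remaining surfaces.
  for (const screen of screens) {
    if (screen.id !== primary.id) {
      renderSidePanel(screen.id);
    }
  }
}
```

The point of the abstraction is that this list of screens, with their resolutions and relative positions, is all a page ever learns about the room.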

Both advertisements and a "calorie informer" app were listed as examples that require the detection sandbox. As an example of satellite screens, Microsoft said "Xbox SmartGlass can turn a smartphone or tablet into a second screen; a web page can use satellite screens to turn a smartphone or tablet into an additional screen."
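As a hedged illustration of the satellite-screen idea, a page might address registered devices roughly like this; SatelliteScreen and sendToSatellite are assumed names, since the article and paper do not spell out the interface.

```typescript
// Hypothetical sketch only: "SatelliteScreen" and "sendToSatellite" are
// assumed names for illustration, not SurroundWeb's actual API.

interface SatelliteScreen {
  deviceId: string;
  kind: 'phone' | 'tablet';
}

declare function sendToSatellite(deviceId: string, html: string): void;

// Keep the main video on the wall surfaces, and push a small control panel
// to every phone or tablet the user has registered with SurroundWeb.
function showRemoteControls(satellites: SatelliteScreen[]): void {
  for (const device of satellites) {
    sendToSatellite(device.deviceId, '<button>Play</button><button>Pause</button>');
  }
}
```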

Detection privacy was described as meaning "an application can customize its layout based on the presence of an object in the room, but the application never learns whether the object is present or not. Without detection privacy, applications could scan a room and look for items that reveal sensitive information about a user's lifestyle." The detection sandbox is what would provide detection privacy.
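A minimal sketch, assuming a declarative registration call that the paper does not name, shows why the sandbox leaks nothing: the page hands content to trusted SurroundWeb code up front and gets no answer back, so its behavior is identical whether or not the object is ever detected.

```typescript
// Hypothetical sketch only: "registerDetectionContent" is an assumed name.
// The page declares content for an object up front; trusted SurroundWeb code
// decides whether to render it. There is no callback or return value, so the
// page's code path is the same whether or not the object is in the room --
// which is the detection-privacy guarantee.

declare function registerDetectionContent(objectLabel: string, html: string): void;

// If a soda can is recognized, calorie information appears next to it;
// if not, nothing is shown -- and the page never learns which case occurred.
registerDetectionContent('soda-can', '<img src="calorie-info.png" alt="Calorie information">');
```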

The principle of least privilege is followed to provide rendering privacy, which "means that an application can render content into a room, but it learns no information about the room beyond an explicitly specified set of properties needed to render. Without rendering privacy, applications would have access to additional incidental information, which may be sensitive in nature." The paper lays out its threat model for rendering privacy:

Our threat model for rendering privacy is that applications are allowed to query the room skeleton to discover screens, their capabilities, and their relative locations, as we described above. Unlike with the detection sandbox, we explicitly allow the web server to learn the information in the room skeleton. The rendering privacy guarantee is different from the detection privacy guarantee, because in this case we explicitly leak a specific set of information to the application, while with detection privacy we leak no information about the presence or absence of objects.

The researchers said users surveyed found this approach to be "acceptable."

Microsoft Research user survey about privacy and SurroundWeb

Interaction privacy was described as meaning "that an application can receive natural user inputs from users, but it does not see other information such as the user's appearance or how many people are present. Interaction privacy is important because sensing interactions usually requires sensing people directly." The researchers wrote that without interaction privacy, an app that "uses gesture controls could potentially see a user while she is naked or see faces of people in the room."

Interaction privacy seems to be the most challenging to accomplish. The paper states:

We provide interaction privacy through a combination of two mechanisms. First, trusted code in SurroundWeb runs all natural user interaction detection code, such as gesture detection. Just as with the detection sandbox, applications never talk directly to gesture detection code. This means that applications cannot directly access sensitive information about the user.

Second, we map natural user gestures to existing UI events, such as mouse events. We perform this remapping to enable interactions with applications even if those applications have not been specifically enhanced for natural gesture interaction. These applications are never explicitly informed that they are interacting with a user through gesture detection, as opposed to through a mouse and keyboard. Our choice to focus on remapping gestures to existing UI events does limit applications.
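The remapping can be pictured with a short sketch; the gesture type and handler below are assumptions for illustration, but the dispatched event uses the standard DOM MouseEvent that any page already understands.

```typescript
// Sketch with assumed names: trusted SurroundWeb code (not the page)
// recognizes a "push" gesture and synthesizes an ordinary click event, so the
// page handles it like mouse input and never sees camera data or who
// performed the gesture.

function onGestureDetected(gesture: { kind: string; x: number; y: number }): void {
  if (gesture.kind === 'push') {
    const target = document.elementFromPoint(gesture.x, gesture.y);
    // Dispatch a standard MouseEvent; the page cannot tell it came from a gesture.
    target?.dispatchEvent(new MouseEvent('click', {
      bubbles: true,
      clientX: gesture.x,
      clientY: gesture.y,
    }));
  }
}
```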

The rest of the paper goes into more detail about SurroundWeb and about two surveys of random U.S. internet users before the researchers concluded, "We validated that our two abstractions reveal an acceptable amount of information to applications through user surveys, and demonstrated that our prototype implementation has acceptable performance."
