Biometric security is becoming more prevalent. More than 770 million biometric applications will be downloaded every year by 2019, Juniper Research predicted last year in figures reported by CSO. That's up from only 6 million identity-proving biometric apps in 2015. Biometrics, in other words, will be big.
When biometrics come up in the context of devices, though, what we usually think of is proving a person's identity, perhaps with fingerprints.
However, some scientists think there's another way to approach biometrics. They think it doesn't have to be geared solely toward identifying and verifying users. It can be used for security-related tracking too.
Behavioral researchers think that eye movement can be used to track where a user is looking on a computer screen. Analyzing those viewed spots, and how long the gaze lingers on them, could let software deliver messages specific to the content being viewed.
One use could be to warn computer users that they're about to give away sensitive personally identifiable information (PII) online, suggest professors at the University of Alabama in Huntsville. A kind of phishing-prevention tool, possibly.
Ironically, in this case the eye tracker isn't primarily for identifying the person, as it usually is in biometric security. Its purpose is to stop the person from being identified. It's the same equipment, though.
“Displaying warnings in a dynamic manner that is more readily perceived and less easily dismissed by the user” is the goal, says the university’s press release. By creating pop-ups that appear when a user looks at a field in a form, for example, the scientists think they can produce a more effective warning than something static in a text box. It’s less same-old-same-old.
“I need to know how long the user's eyes stay on the area and I need to use that input in my research,” says Mini Zeng, a computer science doctoral student who has been working on the project. The tracking calculates where the user's eyes are on the screen, and for how long.
If the user looks away from the PII-capturing form, the warning can be made to disappear. If the user looks back again, the warning flashes on the screen again and can stay there for a predetermined amount of time, to force the user to read it. The researchers think it's the unpredictability of the warning flashing on the screen that adds to its effectiveness.
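The show-on-gaze, hide-on-look-away behavior described above can be sketched in a few lines. This is not the researchers' actual implementation, just an illustrative model: the field bounding box, the `min_display` hold time, and the injectable clock are all assumptions made for the sketch.

```python
import time

class GazeWarning:
    """Illustrative sketch of a gaze-triggered warning (not the UAH code).

    field: (x, y, width, height) bounding box of a PII input field.
    min_display: seconds the warning stays up once triggered, so the
    user can't dismiss it instantly just by glancing away.
    """

    def __init__(self, field, min_display=2.0, clock=time.monotonic):
        self.field = field
        self.min_display = min_display
        self.clock = clock          # injectable clock, handy for testing
        self.visible = False
        self.shown_at = None

    def _in_field(self, x, y):
        fx, fy, fw, fh = self.field
        return fx <= x <= fx + fw and fy <= y <= fy + fh

    def update(self, gaze_x, gaze_y):
        """Feed one gaze sample; return whether the warning is visible."""
        now = self.clock()
        if self._in_field(gaze_x, gaze_y):
            if not self.visible:
                self.visible = True     # gaze landed on the field: flash it on
                self.shown_at = now
        elif self.visible and now - self.shown_at >= self.min_display:
            self.visible = False        # looked away after the hold time: hide
        return self.visible
```

In practice the gaze samples would come from the eye-tracking hardware at a fixed rate, and `update` would drive whether the pop-up is rendered on each frame.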
"If you get a warning every single time and it becomes annoying or habitual, you are going to ignore it," says Dr. Sandra Carpenter, a psychology professor in the press release.
Although the University of Alabama researchers don't say in their press release how they see the system being implemented, presumably any web-based form with dubious intent could be made to display the dynamic warning, perhaps through URL whitelist and blacklist lookups. The warning could be independent of the website publisher.
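A whitelist/blacklist lookup of that kind could be as simple as the sketch below. The list contents and the default-warn policy are assumptions for illustration; a real deployment would query a curated feed rather than hard-coded sets.

```python
from urllib.parse import urlparse

# Hypothetical example hosts only; a real tool would use a maintained feed.
WHITELIST = {"mybank.example.com"}                 # known-good sites
BLACKLIST = {"mybank-secure-login.example.net"}   # known-phishing sites

def should_warn(url):
    """Decide whether a form at this URL should trigger the dynamic warning."""
    host = urlparse(url).hostname
    if host in WHITELIST:
        return False      # trusted site: no warning
    if host in BLACKLIST:
        return True       # known-bad site: always warn
    return True           # unknown site asking for PII: warn by default
```

Defaulting to a warning for unknown hosts is the cautious choice here, at the cost of more pop-ups, which is exactly the habituation problem the unpredictable display is meant to offset.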
And if eye-recognition biometric sensor hardware gets added to devices anyway, perhaps it could help with kids' homework management. “Hey, you've been looking at that Instagram post a little too long. Get back to work,” the message might say.
This article is published as part of the IDG Contributor Network. Want to Join?