Scientists have long envisioned brain-sensing technology that can translate thoughts into digital commands, eliminating the need for computer-input devices like a keyboard and mouse. One company is preparing to ship its latest contribution to the effort: a $399 development package for a noninvasive, AI-based brain-computer interface.

The kit will let "users control anything in their digital world by using just their thoughts," claims NextMind, a commercial spinoff of a cognitive neuroscience lab, in a press release.

The company says its puck-like device slots into a cap or headband and rests on the back of the head. The dry-electrode receiver picks up the electrical signals generated by neural activity, and machine-learning algorithms convert that signal output into computer controls. The interaction could be with a computer, an augmented- or virtual-reality headset, or another module.

"Imagine taking your phone to send a text message without ever touching the screen, without using Siri, just by using the speed and power of your thoughts," said NextMind founder Sid Kouider in a video presentation at the Helsinki startup conference Slush in late 2019.

Advances in neuroscience are enabling real-time decoding of consciousness, without surgery or a doctor visit, according to Kouider.

One obstacle that has thwarted previous efforts is the human skull, which can act as a barrier to sensors. Scientists have struggled to separate signal from noise, and some past efforts could discern only basic states, such as whether a person is asleep or relaxed. New materials, better sensors, and more sophisticated algorithms and modeling have overcome some of those limitations.
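The pipeline the article describes — electrodes capture signals, and machine-learning algorithms map them to commands — can be sketched in miniature. Everything below is illustrative: the feature choices, the nearest-centroid classifier, and the synthetic data are assumptions for the sketch, not NextMind's actual (unpublished) algorithms.

```python
# Illustrative sketch: raw sample window -> summary features ->
# nearest-centroid classifier -> discrete command. All names and data
# are invented; NextMind's real pipeline is not public.
import math
import random

COMMANDS = ["select", "scroll", "back"]

def extract_features(window):
    """Reduce a raw sample window to simple summary features
    (mean amplitude and variance) - a stand-in for real spectral
    features such as per-band power."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var)

def train_centroids(labeled_windows):
    """Average the feature vectors per command label
    (a minimal nearest-centroid classifier)."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        f = extract_features(window)
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f[0]
        s[1] += f[1]
        counts[label] = counts.get(label, 0) + 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(window, centroids):
    """Map a new window to the command with the nearest centroid."""
    f = extract_features(window)
    return min(centroids, key=lambda c: math.dist(f, centroids[c]))

# Synthetic "neural" data: each command gets a distinct amplitude level.
random.seed(0)

def fake_window(level):
    return [random.gauss(level, 0.1) for _ in range(64)]

training = [(cmd, fake_window(i))
            for i, cmd in enumerate(COMMANDS)
            for _ in range(20)]
centroids = train_centroids(training)
print(classify(fake_window(0), centroids))  # -> "select"
```

Real systems would replace the toy features with spectral decompositions of multi-channel recordings and the centroid rule with a trained model, but the shape of the loop — window, featurize, classify, emit command — is the same.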
NextMind's noninvasive technology "translates the data in real time," Kouider says.

Essentially, the eyes project an image of what a person sees onto the visual cortex at the back of the head, a bit like a projector. The NextMind device decodes the neural activity created as an object is viewed and sends that information, via an SDK, back to a computer as an input. So, by fixing one's gaze on an object, one selects it; a user could select a screen icon simply by looking at it.

"The demos were by no means perfect, but there was no doubt in my mind that the technology worked," wrote VentureBeat writer Emil Protalinski, who tested a pre-release device in January.

Kouider has said it's the "intent" aspect of the technology that is most interesting: if a person focuses on one thing more than another, the technology can decode the neural signals to capture that user's intent.

"It really gives you a kind of sixth sense, where you can feel your brain in action, thanks to the feedback loop between your brain and a display," Kouider says in the Slush presentation.
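From an application developer's point of view, the gaze-selection loop the article describes amounts to: register on-screen objects, receive a stream of decoded "currently focused object" frames from the device, and fire a callback once focus is sustained. The class and method names below are invented for illustration; they are not NextMind's actual SDK API.

```python
class GazeSelector:
    """Invented stand-in for a BCI SDK: application code registers
    on-screen objects, the device stream reports which object the
    decoded visual-cortex activity matches each frame, and a callback
    fires after sustained focus (a dwell threshold)."""

    def __init__(self, dwell_frames=3):
        self.dwell_frames = dwell_frames  # consecutive frames of focus needed
        self.callbacks = {}
        self._focus = None
        self._streak = 0

    def register(self, object_id, on_select):
        """Associate an on-screen object with a selection callback."""
        self.callbacks[object_id] = on_select

    def feed(self, decoded_object_id):
        """Called once per decoded frame arriving from the headset."""
        if decoded_object_id == self._focus:
            self._streak += 1
        else:
            self._focus, self._streak = decoded_object_id, 1
        # Fire exactly once, the moment the dwell threshold is reached.
        if (self._streak == self.dwell_frames
                and decoded_object_id in self.callbacks):
            self.callbacks[decoded_object_id]()


selected = []
ui = GazeSelector(dwell_frames=3)
ui.register("send_icon", lambda: selected.append("send_icon"))
for frame in ["send_icon", "send_icon", "send_icon"]:
    ui.feed(frame)
print(selected)  # -> ['send_icon']
```

The dwell threshold is the interesting design choice: it is one simple way to encode the "intent" Kouider highlights, distinguishing a deliberate, sustained gaze from a passing glance.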