
Neurable receives $2 million to explore brain-computer interfaces

One of the principal challenges of computing is how best to interface human beings with our machines. Keyboards, mice, and touchpads have become second nature to most of us, and we’re starting to get accustomed to voice commands and even haptic feedback.

But every computer interface invented so far, even more exotic concepts like eye tracking, shares a single weakness: each is an external device that needs some kind of artificial middleman to translate our intentions. One way around that limitation is to interface directly with the brain, which is something that brain-computer interface (BCI) developer Neurable recently received $2 million in seed money to explore.


Neurable has developed patent-pending technology that monitors a user’s brain activity in real time to determine their intent, allowing connected devices to be controlled directly by the human brain. The company is creating a software development kit (SDK) to enable developers to integrate its technology into virtual and augmented reality headsets and content.
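The article doesn’t describe what Neurable’s SDK actually looks like, but a rough sketch can illustrate the developer-facing idea: an application polls the headset for decoded intent events and acts once the model is confident enough. Every name below (IntentEvent, MockBCISession, poll_intent) is hypothetical, with a mock session standing in for real hardware.

# Hypothetical sketch of how a VR app might consume intent events from a
# BCI SDK like Neurable's. The class and method names below are invented
# for illustration; the real SDK's API is not described in the article.
import random
import time
from dataclasses import dataclass

@dataclass
class IntentEvent:
    """A single decoded user intention with a model confidence score."""
    target_id: str   # e.g., the ID of a menu item the user focused on
    confidence: float

class MockBCISession:
    """Stand-in for a headset session; emits random intent events."""
    TARGETS = ["menu_play", "menu_settings", "menu_quit"]

    def poll_intent(self) -> IntentEvent:
        return IntentEvent(random.choice(self.TARGETS), random.random())

def run_menu_loop(session: MockBCISession, threshold: float = 0.9) -> str:
    """Select the first menu item decoded above the confidence threshold."""
    while True:
        event = session.poll_intent()
        if event.confidence >= threshold:
            return event.target_id
        time.sleep(0.05)  # ~20 Hz polling, roughly a headset frame budget

if __name__ == "__main__":
    chosen = run_menu_loop(MockBCISession())
    print(f"User selected: {chosen}")

The confidence threshold is the key design knob in any interface like this: set it too low and the app triggers actions the user never intended, too high and selection feels sluggish.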


As Ramses Alcaide, Neurable co-founder and CEO, puts it, “Our goal is to build a new platform for human-computer interaction. Our investors share our vision for the broad potential of our technology and for creating a world without limitations. We appreciate their confidence.”

The initial seed round of funding was led by Brian Shin via Accomplice’s Boston Syndicate, along with Point Judith Capital, Loup Ventures, the Kraft Group, and others. Shin was effusive about Neurable’s technology, saying, “The team at Neurable believe that they can enable people to easily control devices and objects with their minds. The implications would be enormous. They have a chance to completely alter the way humans interact with technology which is something that I had to be a part of.”

Neurable’s technology is derived from research conducted at the University of Michigan’s Direct Brain Interface Laboratory. While working on his Ph.D., Alcaide studied under Dr. Jane Huggins, a prominent researcher in the BCI field. By combining new findings on how brainwaves function with machine learning techniques for complex data analysis, Alcaide hopes to improve the speed and accuracy with which user intent can be decoded.
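To make that concrete, the standard approach in this field is to record short EEG epochs and train a classifier to distinguish the brain’s response to an intended target from everything else. The sketch below illustrates that general idea on synthetic data with a simple linear model; it is a textbook illustration of BCI intent classification, not Neurable’s actual pipeline.

# Minimal sketch of the general BCI approach the article describes:
# classify EEG epochs as "intended target" vs. "non-target" with machine
# learning. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_epochs, n_channels, n_samples = 400, 8, 64  # epochs x electrodes x time
labels = rng.integers(0, 2, n_epochs)          # 1 = user attended the target

# Background EEG noise, with a small evoked deflection added to targets.
X = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_samples))
evoked = np.exp(-0.5 * ((np.arange(n_samples) - 20) / 5.0) ** 2)  # ERP-like bump
X[labels == 1] += 0.4 * evoked  # broadcast the bump across all channels

# Flatten each epoch into a feature vector and fit a linear classifier.
X_flat = X.reshape(n_epochs, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")

The “speed and accuracy” trade-off Alcaide mentions falls directly out of this setup: averaging more epochs per decision raises accuracy but slows the interface down.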

One of the principal applications of BCI is in VR and AR, where it would enable completely hands-free interaction and avoid the limitations of voice commands and eye tracking. Users won’t need to worry about having holes drilled into their skulls to implant electrodes, however: Neurable’s technology is wireless and non-invasive, using dry electrodes to sense brainwaves.


VR and AR companies are expected to be the primary customers of Neurable’s technology and SDK, which is platform-agnostic and will work with the Oculus Rift, HTC Vive, Microsoft HoloLens, and more. The SDK will be released in the second half of 2017, and while there is no specific timeline for marketable products, it shouldn’t be too long before we need only think about what we want our computers to do.
