I met Samsung’s artificial humans, and they showed me the future of A.I.

Andy Boxall/DigitalTrends

This story is part of our continuing coverage of CES 2020, including tech and gadgets from the showroom floor.

What is Neon? Shrouded in mystery leading up to CES 2020, all we knew was that Neon had something to do with artificial intelligence. Was it a Google Assistant competitor? A robot? Something more?

“It’s a preview of a wonderful technology we have, and a wonderful future we can create together,” Neon’s CEO Pranav Mistry said at the start of his keynote presentation.

So what is it? It’s not hyperbole, for a start. Neon is a step closer to living with a digital creation that not only understands and emotes with us in a meaningful and relatable way, but is also able to create valuable memories with us and truly share our lives.

Four months of work

Neon CEO Pranav Mistry Andy Boxall/DigitalTrends

Explaining exactly what Neon is, how it works, and the incredible depth of technology underlying it is a considerable challenge and one that Neon itself isn’t quite sure how to tackle. To help introduce Neon, Mistry started out by saying he wants to change the way we interact with machines, and no longer say just, “Stop,” “Next song,” or even, “Hey Google, Bixby, or Siri,” because it’s not how we talk to humans.

Mistry said he wants to “push the boundaries so machines understand more about us. Whether we are tired or happy, our expressions, and our emotions.”

In turn, the more machines understand us, the more we will be able to connect with them on a deeper, human level. He believes the path to this means machines need to look and act more like us, and this is where Neon’s journey really began.

The CES demonstration came just four months after the project started. Mistry and the team began by creating a digital version of a friend, which closely emulated his facial movements during conversation. This evolved into larger, grander tests until eventually, the digital version began to do things on its own. It would make expressions the real person had not. It had “learned,” and become something individual.

The Neon booth in Central Hall at CES is covered in large screens showing people, all moving, smiling, laughing, or silently mouthing words to the audience. Except these aren’t videos. These are Neons. They are digital creations born from real people, and although they visually represent the model on which they’re based, the movements, expressions, and “emotions” are entirely automatically generated.

Once you understood this, it was surreal walking around the booth, looking at the Neons who in turn were looking back at you, and knowing the movements they made were of their own doing, not a repeating video or animation. So what was powering the Neons, and what did Mistry have in mind for their future?

Core R3

A Neon yoga teacher Andy Boxall/DigitalTrends

The Neons are generated by the company’s own reality engine, called Core R3. The R3 name refers to the principles on which the system is based — reality, real time, and responsiveness — and it’s the combination of all three that brings a Neon to life. It’s not an intelligent system, says Mistry, because it does not have the ability to learn or remember. Instead, it’s equal parts behavioral neural network and computational reality, independently generating the Neon’s “personality” by training it to emulate human behavior on a visual level — how your head moves when you’re happy, or what your mouth does when you’re surprised, for example.

Core R3 does not continually run a Neon after creating it. It generates the Neon initially; from then on, the Neon relies on its own information to react to its interactions with the real world. However, it doesn’t know you or remember you. It uses a combination of the Core R3-generated model, cameras, and other sensors to interact with us in the moment — but once that moment is over, everything is forgotten. In the near future, the company has big plans to change that.

Coming to life


Despite only four months of work, Neon gave a live demonstration of what the technology can do now. Neons currently have two “states”: an auto mode, where a Neon does what it wants — thinking, responding, idling, or greeting you — and a “live” mode, where the Neon can be controlled remotely.

The Neon has multiple ways to respond and can choose how to do so, even when instructed to perform a particular action. Tell it to smile and be happy, and it does so, but it chooses how it will look when it does. The level of granular control is impressive, right down to eyebrow movement and the closing of eyes, along with head movements and both visual and verbal responses. This all happens with a response time of 20 milliseconds (the real-time aspect of R3), which removes the barrier between human and machine even further during any interaction. Neon does not yet produce its own speech; in the demo, the voice came from third-party APIs, the same kind of technology that gives voice to assistants and chatbots everywhere.

The Neon is “domain independent,” Mistry said. A Neon could teach you yoga or it could help bridge language gaps around the world, for example. Potential uses for a Neon in business are obvious, such as in hotels, at the airport, or in public spaces. The Neon is an evolution of the clunky robots or lifeless video screens seen in these places around the world at the moment. But that’s not really very exciting, and certainly not the part of the Neon that’s truly groundbreaking.

Spectra


Right now, a Neon cannot know who you are or remember you. Once your interaction is over, your relationship with it is lost to the digital ether. However, over the next year, the Neon team will work on the next version of Core R3, along with a project called Spectra that will add these important traits to Neon, and arguably bring it to life.

“Spectra will provide memory and learning,” Mistry told us, revealing the true direction of Neon.

By adding memory and the ability to learn, along with the advanced human-like visuals, a Neon has the potential to become a true digital companion. When we spoke to Mistry after the presentation, his eyes lit up as he talked about the characters he loved as a child, and how the connection he had with them was not affected by the fact they were not “real.” A fully fledged Neon could bring similar joy to people, in a stronger and even more personal way.

What Neon showed at CES 2020 is very much the beginning, but there’s clearly a massive amount of investment, belief, and talent involved. Not many companies would have the guts to come to Las Vegas and show off a four-month-old demo after a few weeks of hyping it up. Mistry has worked with Microsoft on the Xbox, and with Samsung on the Gear VR in the past. He’s soft-spoken and charismatic, and everyone we spoke to at Neon had a similarly strong belief in what the company is doing.

It was contagious, especially if you’ve had sci-fi dreams about artificial humans and digital companions all your life.

A long way to go

However, there is a lot to consider before you’re picking out a name for your first Neon pal. How will the Neon come to life for you and me? Mistry, in true visionary fashion, was not concerned by such things. In his presentation, when talking about the importance of thinking big to do something amazing, he said:

“We don’t understand what’s the business model of something, or how we will bring something to market, let’s figure that out later.”

A Neon team member talked to us about how the company intends to “create” Neons in the future. It won’t use real people as models, and will instead generate unique looks for each Neon. Think about that for a moment: an entirely artificial digital human, with its own unique appearance, and the ability to speak, emote, learn, and remember. It’s so exciting it gives me a shiver.

Given the pace with which Core R3 has evolved already, it’s no surprise to hear Mistry intends to show the first beta version of a Neon, as well as a preview of Spectra, sometime in the next 12 months at an as-yet-undefined event called Neon 2020. What Neon showed at CES is a huge leap forward in avoiding the uncanny valley, changing the way we should think about digital humans. It’s a major step toward giving life to something that naturally does not have any. There’s a long, long way to go before the Neon reaches its potential, but the very fact the journey has started at all is thrilling.

Andy Boxall
Senior Mobile Writer
Andy is a Senior Writer at Digital Trends, where he concentrates on mobile technology, a subject he has written about for…