
Playing God: Meet the man who built the most lifelike android ever

Dr. David Hanson: Android designer

Next month, leaders in the world of robotics, neuroscience, and artificial intelligence will converge on New York City for the second annual Global Future 2045 Congress, an event devoted entirely to the quest toward “neohumanism” – the next evolution of humankind. GF2045 is the brainchild of Russian billionaire Dmitry Itskov, who’s made it his life’s goal to transpose human consciousness into a machine, thus giving us the power of immortality. (Really.)

Among those presenting during the two-day GF2045 conference is renowned roboticist Dr. David Hanson, who will unveil the world’s most lifelike humanoid android, designed in the likeness of Itskov. Founder of Hanson Robotics, Hanson is a true Renaissance Man, with a background ranging from poetry to sculpting for Disney to the creation of humanlike androids that are said to possess the inklings of human intelligence and even emotion. As we edge closer to GF2045, which takes place June 15 and 16, we chatted with Dr. Hanson over Google+ Hangouts to get his insight on mankind’s march toward the future.


Ed. note: This interview has been edited for length.

Digital Trends: You will be unveiling the world’s most realistic humanlike android at GF2045. What can we expect to see?

Dr. Hanson: What we’re doing with Dmitry is making a telepresence robot. It is basically a remote-controlled version of Dmitry. So we’re bringing together the technologies. There’s some risk associated with this because the time schedule is fantastically short for this particular project. I mean, we received this commission a little less than two months ago. So we’re talking about a very, very short time frame for redesigning the human-scale technology to improve it, customizing it to be Dmitry, and then bringing together the best of our available technologies to achieve a remote-controlled version of Dmitry.

“Well, I think there’s good reason to be afraid. We’re creating alien minds, one way or another.”

That said, things are looking pretty good. We think we’re going to have a very nice remote-controlled face that will be under Dmitry’s command and will say what Dmitry says. It will look around under Dmitry’s control – so Dmitry can see through its eyes – and control its expression, so it’s able to express his intentions and emotions. So it becomes a very high-resolution representative. With enough sensory information going back and forth, it basically becomes like one of these sci-fi scenarios, where you have this hologram or a whole presentation of a person – like the movie Surrogates, or Avatar – where you have a robot identity that makes it look like you’re really there. Somewhere between a cellphone and a Star Trek-style teleportation device.

What’s the user interface like for that?

There will be a screen, and on the screen, the remote user will see what the eye of the robot is seeing. There will also be a wide-angle presentation of the whole scene, so the user can see what’s outside the robot’s direct field of view, giving the user an impression of the robot’s peripheral vision. The user will then have the ability to control where the robot looks. The user will speak in a natural manner, and the robot will reproduce what the user is saying with lip motions, so you’ll have the lip sync.
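Hanson Robotics hasn’t published the software behind this system, but the loop he describes – video streaming from the robot to the operator, with gaze, expression, and lip-synced speech commands flowing back – can be sketched roughly. The sketch below is a hypothetical illustration in Python; the names (GazeCommand, ExpressionCommand, TelepresenceSession, and so on) are invented for clarity and are not part of any real Hanson Robotics API.

```python
# A minimal, hypothetical sketch of the telepresence loop described above:
# camera frames flow from the robot to the operator, while gaze, expression,
# and speech commands flow back. All names are illustrative, not a real API.
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class GazeCommand:
    yaw_deg: float    # where the operator wants the robot to look (left/right)
    pitch_deg: float  # up/down


@dataclass
class ExpressionCommand:
    # crude emotional parameters driving the face actuators, 0.0-1.0
    smile: float = 0.0
    brow_raise: float = 0.0


@dataclass
class SpeechChunk:
    audio: bytes  # the operator's own voice, passed through unchanged
    visemes: list = field(default_factory=list)  # mouth shapes for lip sync


class TelepresenceSession:
    """Carries operator commands to the robot and video frames back."""

    def __init__(self):
        self.to_robot = Queue()     # gaze / expression / speech commands
        self.to_operator = Queue()  # eye-camera and wide-angle frames

    def send(self, command):
        self.to_robot.put(command)

    def receive_frame(self):
        return self.to_operator.get()


# Example: the operator looks left, smiles, and speaks.
if __name__ == "__main__":
    session = TelepresenceSession()
    session.send(GazeCommand(yaw_deg=-20.0, pitch_deg=0.0))
    session.send(ExpressionCommand(smile=0.7))
    session.send(SpeechChunk(audio=b"...", visemes=["AA", "M"]))
```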

Will it be the actual person’s voice?

It will be Dmitry’s voice actually coming through. That’s the difference. Previously, when we’ve created portraits of people, we created fully automated portraits. They weren’t remote-controlled. There was face-tracking and face-detection software, and speech-recognition software, and then some artificial intelligence that would generate a reply. And then the robot would speak that reply and generate expressions that would be approximately appropriate, based on a very, very simple emotional model.

Hanson’s Philip K. Dick android

So in that way, you were talking with a ‘techno-ghost’ of the person. In this case, the ghost is still inside the human body. And you are simply remotely connecting that ghost through these information infrastructures.

How do you see people using these kinds of robots in everyday life?

… These kinds of telepresence robots could be applied in real-world scenarios, like tele-tourism. You could explore the markets of some exotic faraway destination. … You’d basically have this teleportation kind of experience. Maybe it’s better for a board meeting, where you’re controlling the android, and it’s really reflecting the full 3D – or 4D, if you include time – nuance of your face and its expressions, as though you’re there. You can really look into somebody’s eyes much more effectively than you could with any other kind of 3D display technology.

You said you believe robots with human emotions could transform civilization. Is it even possible to put human emotions in a machine? And, if so, how do you see that being transformative?

I would like to point out that some of what we are talking about is speculative. We’re talking about the stuff of dreams. And so some of these propositions come under heavy fire from critics, because they say it’s ridiculous and that it can distort people’s expectations. And there’s no proof that you can do mind uploading, or have a true science of mind, or achieve artificial general intelligence. I would just like to say that I am a dreamer in these areas. There’s no proof that it is not possible, right? But that doesn’t mean it’s proven that it is possible. And yet, we need dreamers. We need to dream big. Because all major surges in development, all major discoveries and acts of creativity, come from this node of uncertainty that is best investigated with some hard practicality combined with far-reaching dreaming.

“Artists hack the mind. They hack perception. They create this shortcut to perceptual phenomena that are not understood by neuroscientists yet.”

That said, okay, I believe that if we achieve these things – and there’s an ‘if’ there – if we achieve self-redesigning machines with human-level or greater-than-human-level intelligence, that they will spiral towards unimaginable levels of super-intelligence, what we might call transcendental intelligence; intelligence that just gets smarter, and we just can’t predict what it’s going to do or be capable of. That will then solve problems, and identify problems and opportunities that we can’t really perceive. And it will open up opportunities for us as people that are unimaginable. And that will be absolutely transformative.

Do you see Itskov’s goals of mind uploading having similar effects, if achieved?

Mind uploading – what Dmitry is proposing – would be transformative in a separate way, because it would ‘cure’ death. The Global Future 2045 objective of achieving immortality for all of humanity by 2045 would radically transform what it means to be human, because you could live in this virtual domain, or you could occupy a robot body. If computing continues [to advance] exponentially – if Moore’s Law carries on, whether it’s optical computing or nanotube computing, or something – well, if it does continue, then it will be more efficient. You’ll be able to pack all of the human mind into this kind of computing space that would be potentially much less impactful on the natural environment, so you’d be able to re-stabilize the natural ecosystems of the world. These would be potential consequences.

How do you respond to critics who are afraid and pessimistic about AI and transhumanism? Some people are afraid of what this technology could unleash.

Well, I think there’s good reason to be afraid. We’re creating alien minds, one way or another. And most of the research doesn’t focus on social AI, or capacity for compassion, or for getting along well with people. Most of the funded research – by some estimates, the majority of what’s coming from research institutions – is AI for military applications. And there’s not anything inherently wrong with that; it’s just that you could imagine, in the short term anyway, that a conscience would get in the way of the efficacy of these kinds of devices. In the long term, a conscience would be really essential. The ability to see and understand potential outcomes, their consequences for what motivates people and what’s good for society in the long term – all of that would be really great in those kinds of machines. But if you look at the dollars that are going toward that kind of research, they are negligible. Social robots, on the other hand, and theory of mind – that can lead toward machines with consciousness and conscience at the same time.

Is public opinion changing about robots?

The public’s expectations about robots are shifting. When we acculturate to robots, when we get used to them, then we open up to them. And then we expect them. Our expectations keep ratcheting higher. It’s like … the automobile companies roll out automation in self-driving cars piecewise, feature by feature. Now your car can parallel park. Next year, maybe it will pull around to the front of the house and be waiting for you. Maybe in 10 or 15 years, you’ll get into a living room-like space, and it’ll just drive you to work while you google the whole way.

But if you introduced a completely self-driving vehicle today, people’s wonder and fear would result in too much disruption. You’ve got to ease into these things. So the human reaction can’t be fully predicted, and it’s something I believe developers and marketing teams evaluate as you inch along down this path.

Your background includes sculpture, painting, drawing, and poetry. How do artists fit into the equation of transhumanism and advanced robotics?

Being an artist, you can introduce something that is more startling and disruptive. You don’t have to worry about those incremental steps. You can introduce something that really stirs things up and see what happens. By putting the technology together in this form, it may be startling, even though the technology itself is only incrementally advancing. … With the robots, we put together these dialogue systems with today’s AI, but we do it in an artistic way that can then seem like there’s somebody in there. And arguably, it’s just these ghost-like shreds of who that person is. There’s not really a mind in these machines, like a human mind. But you can convey an amazing impression there.

The technology itself – there are some advances. But we have not unlocked the Holy Grail of artificial intelligence with these humanlike robots yet. What we have done is put this burning idea in people’s minds. When the robots work well, people start to say, ‘Wow, we could do that. Should we do that? What could it be good for? Wow, it could be good for all kinds of things! How could it be dangerous?’ People start to think about these questions, and it inspires developers to think of these questions as well, as we go forward.

Dr. David Hanson’s Einstein android

In this way, I think of artists as a sort of advance guard, a reconnaissance team for the world of robots. The ‘canary in the coal mine’ is how Kurt Vonnegut characterized the artist. So, I believe that artistry is undervalued in technology development and robotics. Robotics, to me, seems like the greatest artistic medium. It is the marble of our age. And it’s a little surprising that you don’t have more artists leaping in and trying to transform robotics in these spectacular and disruptive ways.

I use the marble metaphor quite specifically, because robotics as a figurative medium is so underdeveloped at this point. There’s so much opportunity for exploring it. And in the process, what you’re doing is injecting humanity into the technology. You’re getting the technology to do things that are beyond the understanding of science and engineering, because artists hack the mind. They hack perception. They create this shortcut to perceptual phenomena that are not understood by neuroscientists yet.

What do you recommend for people who might want to get into robotics and carry on what you will do in your lifetime?

I would recommend that people get foundation skills. And that means, learn how to draw. Learn your math. And learn how to play. These are kind of the fundamental skills. If you play with robots, and tinker, and just get into tinkering with things, then see where it leads you, then you will learn all kinds of other things. You will have the incentive to learn all these other disciplines. If you just pick up a textbook, well, you know, that can be interesting in its own way. But if you are picking up a textbook because you want to build something cool, or you want to discover something – you’re on the trail of something fascinating – then that playful spirit gives you something to hang all this knowledge on. It provides a skeleton for the flesh of all those skills.

Andrew Couts
Former Digital Trends Contributor