Dream up the future lab.

Envision the future
with Nomura Research Institute

Kevin Kelly, founding Executive Editor of WIRED magazine
Takeshi Mori, Future Society & Economy Research Department Manager, NRI’s Center for Strategic Management & Innovation


In December 2025, NRI invited Kevin Kelly, founding Executive Editor of WIRED magazine, to give a special lecture entitled “The Nature of AI: What We Have Seen So Far.” In this Part 2, we present the special talk held after the lecture between Kelly and Takeshi Mori of NRI’s Center for Strategic Management & Innovation, who served as moderator. What is the “appropriate distance” between people and AI? How do we build “trust” in it? Their talk gets to the heart of the matter by addressing such questions, adding greater depth to how we think about this issue.

Back: A Compass for the AI Era ― Part 1: Visionary Kevin Kelly on Four AI Uncertainties, and Preparing for the Future

What is the “appropriate distance” between us and AI? ― The Pitfalls and Possibilities of Alignment

With regard to the uncertainties about AI, Mori raised the issue of how we should think about the distance between humans and AI, or in other words, about alignment (i.e., aligning AI with human values). “Developers are trying to make AI more human in an attempt to control it. However, AIs that aren’t aligned with human values might make the world a richer and more interesting place.”

Kelly brought up two considerations about alignment. “The first is getting AI to be more consistent with human values. It doesn’t have to think like a human being, but we want it to behave in accordance with human values. Embedding such values within AI as code won’t be so difficult.”

The second is precisely how we should define “human values,” Kelly pointed out. “Our values as human beings aren’t all consistent, and sometimes they don’t run all that deep. One example would be the famous trolley problem. Should the bystander prioritize the life of one person, or the lives of several? We’ll have to decide on the answer in order to teach our AI, but there’s no consensus in human society on that. We’re demanding an ethical standard from AI when we ourselves can’t even provide an answer.”

Mori responded to this observation by citing an example. “Some researchers asked ChatGPT to answer the questions in the World Values Survey, and supposedly the answers it gave tracked perfectly with Western values. Another interesting thing was that when they told ChatGPT to ‘answer like a Japanese person,’ it gave a Japanese-like response.” This only further attested to the difficulty and importance of alignment.
*The trolley problem: a thought experiment that asks what one would do when a runaway trolley is speeding down a track toward five people; if one flips a switch to divert it onto another track, it will kill one person instead.

How Can We Best Guarantee AI’s Reliability?

Mori probed further into one of AI’s most cutting-edge fields: AI agents that can act autonomously. “The issue is whether that kind of personal AI agent really has your best interests in mind. For all you know, it could be an advocate for the interests of some company. It would be hard for a human being to tell which AIs can be trusted,” he pointed out.

“That’s a great mystery, and this is a frontier where we don’t have a good answer yet,” Kelly replied. “There are various problems. Who owns the AI? Where should trust be placed? Is it within some system that has no owner, like the Internet, or with an individual? And ultimately, whom should we trust – governments, corporations, or even a decentralized commons like an NPO (non-profit organization)?”

It is important that we keep thinking about these problems rather than giving hasty answers, Kelly explained. “There are difficulties, but there’s also a great chance there, and I’m sure that all of you at your companies and startups and such will consider these things and find the answers through innovation.”

Believing in the Future and Moving Forward Little By Little ― The Origin of a Protopian

Lastly, Mori asked Kelly how he has managed to remain a “protopian” (one who believes the world is slowly marching toward incremental improvement) despite being in San Francisco, a place where extreme views coexist: those of AI boomers, who believe that AI will bring about a utopia, and AI doomers, who predict a dystopia instead.
 
Kelly said there were two catalysts involved. “The first was my experience traveling throughout Asia in my younger days. Since then, I’ve seen how different countries and regions like Taiwan, India, Pakistan, and Afghanistan have grown from poverty to wealth in 50 years, and it led me to believe from the bottom of my heart in people’s ability to make the impossible possible. The other would be the changes I’ve seen in the digital world. The arrival of the Internet allowed people to connect with each other, and this made me realize how it could unleash the ‘forces of good.’”
 
Kelly then addressed the audience directly. “In the early days of the Internet, the experts said that ‘no one’s going to buy food or cars online.’ Now, many people are saying the same kinds of things about AI. But I don’t think so. You have to expand your thinking a bit, use your imagination to envision the best-case scenario, and then move in that direction little by little. It’s important to have the ability to believe that nothing is impossible.”
 
“Mr. Kelly, you really are putting knowledge into action. Instead of being pessimistic about the future, I hope that today, we can start moving forward to prepare for the best future, armed with our imaginations,” Mori said, concluding the talk.


Profile

  • Takeshi Mori

    Future Society & Economy Research Department Manager, NRI’s Center for Strategic Management & Innovation

    Mr. Mori joined Nomura Research Institute (NRI) in 1995 after graduating from the Faculty of Economics at Keio University. Following his roles as a researcher and consultant, he served as a Program Director for executive education at the Nomura School of Advanced Management from 2012 to 2019. Since 2019, he has been a researcher at the Center for Strategic Management and Innovation, NRI’s think tank division.
    He holds an MSc in Economics from the London School of Economics (LSE, 2005) and a Ph.D. in Management from Hitotsubashi Business School (2024).

* Organization names and job titles may differ from the current version.