Kevin Ashton is a British technologist widely regarded as the originator of the term “Internet of Things”. He coined the expression in 1999 as a young executive at Procter & Gamble. While working there, he became preoccupied with the fact that the company’s products often failed to reach store shelves even while sitting in abundance in storerooms. In an interview with Steve Paikin in June 2015, Ashton explained: “I wanted to understand what was causing this problem. It wasn’t that we didn’t make enough product … The stores knew that they kind of had it in the warehouse, but they didn’t know if the individual product level was on the shelf.”
Ashton decided to track the merchandise using microchips that broadcast a radio signal. It was one of the first times radio-frequency identification (RFID) technology was used to manage retail inventory. Having to explain the idea to “a bunch of old male Procter & Gamble executives”, many of whom didn’t know how to use a computer or what the Internet was, Ashton put together a PowerPoint presentation and titled it “Internet of Things”.
Ashton then moved to the Massachusetts Institute of Technology, where he continued working on the concept and co-founded the Auto-ID Center (now Auto-ID Labs), which became the leading global research network of academic laboratories in the field of the Internet of Things. Among its other achievements, the Auto-ID Center developed a global standard system for RFID.
In a 2009 article in RFID Journal, Ashton argued: “We’re physical, and so is our environment. Our economy, society, and survival aren’t based on ideas or information—they’re based on things. If we had computers that knew everything there is to know about things—using data they gathered without any help from us—we would be able to count everything, and greatly reduce waste, loss and cost.” In 2015, this statement is more relevant than ever, as tech companies around the world are actively experimenting with, testing, and propagating the Internet of Things.
Ashton believes that our future is largely predictable and makes some very specific forecasts. According to him, within 15 years very few new cars will have a steering wheel; within about five years many computing devices won’t have batteries; and our grandchildren will enjoy three-digit life expectancies. In Ashton’s words, there is no reason to be pessimistic about our future, because “pessimism is a cheap and intellectually lacking way of sounding clever.”
More than a decade and a half after coming up with the original IoT concept, Ashton remains a prominent figure in the public discussion of ongoing computerization, the ever-growing use of sensors, and humanity’s relationship with technology.
He has written about these and other topics for The Atlantic, Politico, and Quartz, to name but a few. In 2015, Ashton published a book, “How to Fly a Horse”, in which he examines human creativity and the many myths people tend to associate with it.
We reached out to Ashton to find out what he thinks about the growing #IoT hype. Below is the interview.
It’s been nearly two decades since you first coined the term “IoT”. Now it is one of the most popular concepts in the technology industry worldwide. Back in the 1990s, did you foresee this happening? Did you anticipate the popularity it would attain?
No, not at all. It was a cool name for a concept, at a time when we had to come up with names for everything, because everything was new—or at least not widely known. The term spread a little in the first few years, became somewhat common between 2005 and 2010, and then really leapt to prominence in the last five years. Twitter may have helped. The hashtag #IoT is really short and distinctive, and very widely used.
Did IoT develop as you thought it would? Which aspects were, in your opinion, really foreseeable? Was there anything along the way that truly surprised you, something no one thought IoT could bring about?
I think we got some of the big things right. That networked sensors would be far more valuable than standalone devices, for example; that prices would fall rapidly while performance improved massively; that the Internet would be transformational; that wireless would be everywhere. All of those things seem obvious now, but they were highly controversial in the late 1990s. I got a lot of things wrong too: the rapid advance of ad hoc technologies, like machine vision, for example; or where the big application areas would be. I would have expected RFID throughout the global supply chain long before self-driving cars, for example; it’s clearly going to be the other way around.
[pullquote align="left" cite="Kevin Ashton" link="" color="#581047" class="" size=""]"Some of the concerns I hear about are blown out of proportion. Others are simply ridiculous."[/pullquote]
What is your biggest fear in the context of IoT?
I don’t really have any fears. It’s an inevitable, and important, technology. Some of the concerns I hear about, like privacy and security, while real and significant, are blown out of proportion; others, like artificially intelligent robots rising up and enslaving humanity, are simply ridiculous. I’m an optimist. If you have big problems to solve, define them clearly, then treat them calmly. Fear has no place in the algorithm.
What is the current biggest challenge to bringing out the full potential of IoT? What does this field need that is not currently there?
Machine learning is the wild frontier right now. We need to get really good at automatically making meaning out of vast streams of disparate data, and we need to do it with high confidence in real time. That’s a really fascinating problem in math, statistical analysis, and computer science. There is some excellent work being done, and it is really exciting to watch.
[pullquote align="right" cite="Kevin Ashton" link="" color="#581047" class="" size=""]"The equation of the Internet of Things Age is simple: more information equals less bullshit."[/pullquote]
How do we, as humans, cope with the increasing pervasiveness of the digital sphere into our lives? Given the onslaught of sensors, and hence wireless communications and tracking, how can we hold onto our individuality and individual intelligence? Should we?
Those things are complementary, not opposite. Digital information enables individuality and intelligence. The more we know, the more we become. You don’t hear this often, but the rise of LGBT rights and marriage equality, for example—led from the Netherlands, of course—is in part a consequence of the rise of pervasive digital intelligence. It helped people organize, find one another, and discover the truth about themselves and others. We are seeing the same thing now with the exposure of police brutality in the United States; that is, in large part, a result of pervasive digital cameras, and the means to disseminate video and images. The equation of the Internet of Things age—in fact of every age since at least the Enlightenment—is simple: More information equals less bullshit. It’s a zero-sum game—one that information, and therefore intelligence and individuality, is winning.
TEXT AND INTERVIEW: Mariia Stolyga