(This post was originally published here.)
As we ponder the world of innovation and specifically FinTech innovation I suggest we start with making sure we are asking the right question(s). It’s my view that questioning is the root of all learning, all progress and all successful innovation, technological or otherwise.
More specifically, are we asking the right questions to increase the chances of short-term innovation success while reducing the risk of downstream unintended negative consequences?
Based on the following stats, the answer is emphatically NO. 75% of all startups fail, and my guess is that number is under-reported. 84% of all digital transformations fail. And according to a Harvard Business Review study, 95% of corporate innovation fails. This one blows me away. These are the big guys, with the big guns, the big money and the big brains, and even they can't get it right.
And even if the innovation sticks, consider this other set of statistics:
· 25% of college students in America take psychotropic drugs for anxiety, depression or both.
· 40% of Americans are obese.
· 1 in 4 of us claim we have no one to talk to. To confide in.
· And as of today, loneliness reduces average life expectancy as much as obesity or smoking. Loneliness.
Let me throw out some other not-so-positive outcomes:
The divide in our country. The growing division in the world. The push for nationalism when it’s clear as day that the only way our planet survives is by working together. I don’t believe globalism is a choice. Do you?
Now I am not suggesting that those horrifying statistics are solely a function of innovation, and specifically technological innovation. Contrary to the title and focus of this talk, I am a big believer in technological innovation. It's what I do for a living. But I do think there's a correlation between some of the not-so-positive stuff going on in our society today and technology. And it's happening because we're not asking the right questions, and specifically this question:
Do we understand humanity, really?
It's my belief that the big failing of technological innovation, both short and long term, is that we simply don't understand the user, the customer, the participant as a human. We think they're a statistic, a demographic, a function. We don't acknowledge behaviors, predilections, or biases. In our quest to innovate, to succeed, to make millions, we delude ourselves into thinking that our customer is simple, when in actual fact he or she is decidedly not. The real problem in solving problems is that humans are complicated. Really complicated.
I don't actually think the classic clip-art image of the evolution of man is accurate. It implies that, biologically speaking, the creature on the right is far more rational than the creature on the left. Are we sure about that? Consider this:
Why we do what we do is complicated. Why we don't do what we should do is complicated. Why we do what we should not do is complicated. And to make it more complicated, we are animals. We are primal. We are driven by basic needs. For those of you who don't know Maslow's Hierarchy of Needs, you should. He was a psychologist in the 50s who came up with this simple construct that captures the fundamental truth of mankind. We live at the base level of need. Our subconscious, perhaps reptilian focus is always on shelter, warmth, food, and safety. We fear the woolly mammoth, the saber-toothed tiger, we fear strangers. We fear each other.
In fact, let’s prove it. Turn to the person behind you and introduce yourself.
The reason why some of you choked up is that you are afraid this stranger might hurt you, or worse, reject you. What if you put your hand out and they don't shake it? Horrors.
The irony is that as much as we're afraid of strangers, we desperately, unconsciously seek validation from strangers. Sean Parker, Facebook's founding president, admitted that they knew Facebook was going to be a $1 billion-plus business when they realized it was all about social validation.
Here's another scary proof point: 24% of all teenagers in the United States have engaged in sexting. I repeat: engaged in sexting.
The fundamental truth is that the complexity of technology is superseded by the complexity of humanity. All the coding in the world won’t work if you don’t understand the human code.
Recognize that a human's decision to buy or not buy is not purely rational; it is determined by a fundamental and yet complicated formula, and it is this: the adoption of any innovation requires the benefits to far exceed the costs. Sounds simple, right? Well, it's not, because the operative, complicating factor is the letter s. We assume value equations are single-variate: one benefit to one cost. When I buy a cup of coffee from Starbucks, the benefit is a cup of coffee and the cost is $3.50. Straightforward, right? Wrong. There are all sorts of benefits and costs involved.
Sure, there is money. Money as a cost, money as a benefit. That’s called investing.
There’s time as a cost, time as a benefit. That’s called a vacation.
And there are need-based costs that create need-based benefits. That's called dieting so you end up looking better.
So, the equation is complicated, and it’s made more complicated by another fundamental truth: all major decisions are made emotionally.
How many of you did rigorous cost/benefit analysis on whether to come to this conference? Or on where you went to university. Or on who you married? Where you live? We are all driven by basic needs and powerful emotions. If you want to understand humanity, start there.
Think about Uber. Is Uber valued at $130 billion because it's cheaper than a cab? Uber hits every one of the buttons: money, time, function, psychological, physiological. AND you know which ones matter most: the last two. People adopted Uber, and car sharing generally, because it makes them feel in control.
So, there's a graveyard of failed innovations, many of them tech innovations. Smart people who ended up doing dumb things. And there will be a fintech graveyard. I hate to say it, but to the startup folks in the room: you're not all going to make it unless you embrace the full importance of this one word:
Innovation is not about building a function, it's about effecting adoption of the function. Adoption means to "take by choice into a relationship," and that's a really high bar for any product or service. And it applies as much to B2B as it does to B2C scenarios. Because even the procurement guy is a human.
We’re not talking about buying, purchasing, licensing, consuming, we’re talking about adoption.
Do you think most people walk the earth looking to adopt? Of course not.
To motivate your customer to adopt you must know exactly who they are. You must get intimate.
This is about getting up close and personal, working really, really hard to understand the human, and to understand both the problem and the solution.
So, if we want the innovation failure rate numbers to improve, we need to get a whole lot better at understanding humanity, otherwise known as people. And we need to look at the situation through both a short- and long-term lens.
The disconnect from the truth of humanity can cause both short-term failure and long-term unintended negative consequences.
There are three kinds of unintended consequences: you get more positive benefits than you planned for; the whole thing backfires and you get the opposite of what you planned for; or, the third, you get unimagined negative outcomes. And the big one is that unintended consequences can be, and some would argue are, killing us.
In 2009, Air France flight 447 took off from Rio de Janeiro, Brazil headed to Paris, France with 228 passengers and crew on board. The Airbus A330 was at the time one of the more technically advanced planes in the air. The plane hit bad weather, icing caused the airspeed sensors to fail and the autopilot to disconnect, and the crew reacted incorrectly, causing the aircraft to enter an aerodynamic stall from which it did not recover. The plane crashed into the Atlantic Ocean. There were no survivors.
It is theorized that because of their reliance on the technology the pilots did not really know how to fly the plane.
An unintended, really negative consequence.
Just because consequences are unintended does not mean we can walk away from the responsibility for them.
In fact, I spoke at a conference at Harvard several weeks ago with Secretary Ash Carter, the Secretary of Defense under Barack Obama. We were co-hosting an event called Technology Innovation and Public Purpose, part of Secretary Carter’s efforts at Harvard to draw more attention to the importance of our society, of all of us, stepping up to try to mitigate or avoid some of the downstream problems being created by technology. One of his most compelling statements from the evening:
"The pace of technological change cannot be slowed, but it can, in fact it must, be steered."
And he went on to say that it must be steered with an ethical, moral and directly considerate lens on what people will do with some of these disruptive technologies. We need to match technological innovation with ethical innovation. Not ethical innovation from public policy alone, but from us as the collective: the public sector with the private sector. We cannot rely on government to lead the way here. We all have to lead the way; we all have to care.
Humans uniting to help humans not kill themselves.
The funny thing about ethics is that we hold it up as an essential component of a functioning, healthy society. And yet in the United States we don't talk about it or even really teach it. When I went to Duke to get my MBA, we had two marketing courses and one 45-minute lecture on ethics. That was it. I don't think it's even taught in most high schools or colleges. I think every country in the world should have an Ethics Czar: an individual charged with making sure every facet of society a) understands what we mean by ethics and b) is held accountable to it.
Ethics needs to be matched with an understanding of what these new technologies are and how they feed off humankind's vulnerabilities, or Maslovian needs. An example was the Zuckerberg hearings back in April, after the Cambridge Analytica data scandal at Facebook. Terrible on both sides. The senators showed a profound ignorance of what Facebook even is, and Zuckerberg showed a remarkable distance from the ethical and moral issues at hand. But most importantly, we learned how ill-equipped Congress is to weigh nuanced questions surrounding technology, data and privacy.
So, in preparing for this talk I read up on the underbanked, the almost 2 billion people on the planet that don’t have access to financial services at a reasonable cost. Thanks to technologies like blockchain and collaborations like APIX that you’re going to hear more about later this morning, there is a real chance for greater financial inclusion and the potential for more economic equality. The caveat is that the admirable quest for financial inclusion must be accompanied by a quest for greater financial literacy. We need governments, the private sector, educators, everybody to come together to make sure that when we put these powerful tools in consumers’ hands, they know what to do with them. Otherwise you end up with situations like the sub-prime mortgage debacle and the ensuing global economic crash.
Again, the technology code must be integrated with the human code. And because it’s often not, we seem to be losing our capacity or desire to do basic things.
Intellectually, we are losing the ability to think critically, to know beyond what is fed to us. I believe America is becoming more stupid, or at least less thoughtful. By the way, the lexicon of the average American has dropped from 15,000 words to 5,000 in the last 50 years.
Physically, we don’t move anymore. Playing today means staring at a screen.
Emotionally, we have a really hard time with intimacy. We were never that good at it but we’re getting worse. Maybe in part because we’re staring at those screens and not each other.
Economically, we are less and less able to create unique value in any form. If you stop thinking critically, you lose the ability to create distinct opinion and perspective. In a way, we have become the machine.
Spiritually, we have no grounding anymore. We are increasingly desperate for community.
We need to get some of these basic capacities back. Or at least turn the curve around. And that could be an opportunity for technology to help us, to create positive and intended societal consequences.
The good news is there is a growing movement. Silicon Valley parents are uniting around the idea of reducing screen time among their kids until a certain age. Books are being written like Heartificial Intelligence, heralding the importance of bringing our humanity into the center of the AI development process. And tech industry leaders are even stepping forward, as Tim Cook, CEO of Apple, did several weeks ago at a European Union conference on data privacy. He talked about the importance of bringing humanity into the center of the technology development equation. Not as an afterthought but as a guiding light.
He talked about the role of technology as a democratization engine, a freedom fighter, and enabler of the best of people, not the worst. But that does not happen without our intention. Otherwise we end up cleaning things up on the back end. In fact, Tim went on to talk about the need for regulations regarding online privacy.
He outlined four basic human rights associated with data privacy:
1) Right to minimize personal data
2) Right to knowledge of what data is collected and why
3) Right to access and manage your own data
4) Right to data security
He talked about it all being about building a foundation of trust for the people we serve.
I view this as a beta example of how we have to start thinking about building ethical and moral foundations for technology. We have to try to get ahead of this stuff before we’re so far behind we can’t catch up.
And my favorite line from Tim:
“Our responsibility is to infuse the devices (and technologies) we make with the humanity that makes us.”
To infuse. As in inject, intertwine, meld. Whatever the verb, we're talking about bringing innovation and humanity together. When they pull apart or disassociate, that is the beginning of the end. Let me introduce you to Sir John Glubb. He wrote a piece back in the 1970s titled "The Fate of Empires." He makes a compelling argument around this observation: every empire since the beginning of time has lasted around 250 years or so. And they all eventually die because they become disconnected from their humanity. It is the loss of human-centered innovation that kills empires. It kills countries, companies and cities. Hell, it might kill us.
Technology, on its own, is dead. But I believe in technology. I really do. If any of you have read Steven Pinker's latest book, Enlightenment Now, it pretty much proves that thanks to technology many more people on this planet are much better off. What makes technology alive is when it gets connected to the human. What makes it of value is when it meets the needs and behavioral states of the human, today and tomorrow. When the inanimate is made animate by connecting to the intimate truth of the human. And that is our responsibility.
Whether your question is how to technologically innovate for the short term or long term the answer is to get closer to the human.
And the final question is not actually a question, it’s a request.
If you're sitting in the audience as a FinTech startup, my ask is that you stop working on the solution and start understanding your customer.
If you’re a country, my ask is that you start working on creating a framework for how you establish ethical standards and hold your citizens to them.
If you’re an educator, look for ways to integrate ethical education into whatever you teach.
If you’re a company, my ask is that you look beyond your financial goals and embrace your responsibility to create things that serve customers AND mankind.
And if you’re a citizen, join the movement. Step forward to help make sure that what we are creating with technology is right for us all and future generations.
Thanks very much.