“We didn’t build the interstate system to connect New York to Los Angeles because the West Coast was a priority. No, we webbed the highways so people can go to multiple places and invent ways of doing things not thought of by the persons building the roads.”—Neil deGrasse Tyson
On the topic of virtual currencies and peer-to-peer payment protocols, a common question among casual observers and skeptics alike emerges: Why does any of this matter?
It is, at first glance, a valid question. While Silicon Valley nerds might revel in the novel solution to the Byzantine Generals’ problem, and anti-establishment hackers will prize a fresh opportunity to undermine authority, why should the rest of us care—beyond the dizzying rise and fall of prices that invariably catches our eye, if only because it tickles our deepest desires to get rich quick?
For your typical college-educated Millennial, the symbolic next generation, the practical attraction of Bitcoin is minimal. Ignoring the crypto-movement entirely would hardly leave them worse for wear—a testament to how well our system works (at least for this particular demographic).
We can buy things in person or online with little fuss—often with the simple click of a button. A growing number of apps allow us to split the check at dinner or pay back a friend with the ease of Liking a photo on Facebook. Whatever fees exist are invisible, quietly absorbed by service providers and merchants behind the scenes.
Though these costs are implicitly included in the price we pay, we readily perceive these services as free—as we do with services like Gmail or Twitter. And while sending large sums of money, especially internationally, can be daunting and expensive, the majority of us face such a task as frequently as we move to a new apartment or switch mobile providers, infrequently enough that we can’t be bothered.
So if the existing system is already good enough, what, then, is the point? If we’re fine as is, aren’t cryptocurrencies trying to solve a problem that doesn’t materially exist?
The underlying issue, of course, is that our concept of “good enough” is confined to what we know—which really boils down to what we’ve grown accustomed to and the expectations that come with familiarity. The answer to this question, however, is defined not by our sense of normality but rather by what we don’t know and, more aptly, what we can’t know.
As Thomas Sargent—described as one of the smartest economists alive—noted in a list of economic tenets shared with the graduating class of 2007 at Berkeley (which recently went viral): “In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse.” Just as our bodies tend toward homeostasis, so too does the status quo.
Exact details of tomorrow will forever elude us. Confounded by chaos, we can hardly predict next week’s weather, let alone time the next financial crisis or presciently dissect the implications of invention over the next decade. Even as our methodology and processing power advance, so too does the complexity of the systems we create (our robo-fueled stock market, for instance). Obscured by a web of incalculable outcomes, there are limits to our foresight.
This myopia is reflected in our society and our institutions, where we struggle to see past the next viral meme, the next quarterly report, or the next election cycle. Bound by systematic short-termism, we eagerly succumb to the natural seduction of instant gratification and disregard the long game.
All of which is to say that the question posed is inherently flawed. Neil deGrasse Tyson puts it best in a chat with Stephen Colbert from 2011 (Editor’s note: Tyson is speaking from the perspective of US policy and the government’s investment in the sciences, but his point holds for technology and the world at large.):
Half of what gets thrown on YouTube are talks I’ve given where I’m trying to convince people—not only the public, but lawmakers and people in power—that investing in the frontier of science, however remote it may seem in its relevance to what you’re doing today, is a way of stockpiling the seed corn of future harvests of this nation. Advancing a frontier, history has shown, has advanced a culture, ever since the Industrial Revolution got under way.
And we can speak more hegemonistically about it. Anyone who has embraced the powers of technology has enjoyed economic wealth the likes of which the world has never seen, attendant with strength, strength of security.
And so I see science and technology—and creative investments in it—as the most significant infusion to our economy that can possibly be conceived. The problem is that it’s not going to boost the economy next quarter. It’s got a time horizon longer than most people have the patience for and most politicians have the re-election cycle to be tolerant of. So what we need is a longer view on those investments.
The need is to be able to have the foresight necessary to make investments on the frontier of science, even if at the time you make those investments, you cannot figure out how that might make you rich tomorrow.
In other words, the impact of our discoveries is rarely apparent in the near term. Granted, Tyson is an astrophysicist and the concept of smarter payments might lack the sexiness of say, inventing electricity or sending a rover to Mars. But trade is also a fundamental tenet of human civilization and increasing trade spurs economic growth. A consistent byproduct from this persistent interchange of value is an improvement in the quality of life.
“As history shows over and over, when transaction costs fall, trade expands,” St. Louis Fed economist David Andolfatto recently told us in an interview. “This was true with the development of canals, railways, etc. I expect the new wave of technologies to promote trade in the same way.”
Because this is what we do know: while our current system is relatively convenient, that convenience is derived not from a well-oiled machine but from a patchwork of band-aids applied to an underlying apparatus devised in a bygone era.
While credit cards admirably mimic a payment solution for the digital age, their ubiquity for internet purchases is a daily reminder of our financial obsolescence. Is it logical that we sign paper receipts for no apparent reason? Is it secure that we regularly surrender the keys to the castle every time we dine out or shop online—likely without a second thought? And why is there an app for everything except paying for my morning coffee?
At a time when our smartphones have assimilated every conceivable personal accessory, our wallets still sit stubbornly in our back pocket, dumb as ever. When it comes to money, we’re using the equivalent of CD players.
It’s not just credit cards—invented in the 1950s, half a century before the rise of the personal computer and the World Wide Web—but the entire antiquated system, a mishmash of dissonant, fragmented networks. Countries are divided less by physical barriers than by the financial intermediaries waiting at every border to take their cut.
Even in the US, the richest country in the world, the transfer of funds is regularly outpaced by the Postal Service, where people can shop on Thanksgiving yet are unable to bank on Sunday. We can’t help but grimace when Chrome takes an extra half-second to load but won’t bat an eye when it takes a week to withdraw our PayPal balance.
The page impression you just made, by the way, may have earned that website half a penny in ad revenue. In a perfect world, that payment would process as it happens. In our world, micropayments aren’t economically feasible. And speaking of micro, Wall Street can now trade in millionths of a second. Consumer finance, meanwhile, is still stuck at three business days.
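To see why micropayments break down under today’s rails, here is a minimal sketch. The fee schedule is an illustrative card-style assumption (2.9 percent plus a 30-cent fixed charge per transaction), not a quote from any particular provider:

```python
# Illustrative only: an assumed card-style fee schedule
# (2.9% + $0.30 per transaction), not any provider's actual rates.

def net_received(amount, percent_fee=0.029, fixed_fee=0.30):
    """Amount the payee actually keeps after processing fees."""
    return amount - (amount * percent_fee + fixed_fee)

# A half-penny ad-impression payment: the fixed fee alone
# exceeds the payment, so the payee ends up in the red.
print(net_received(0.005))

# Even a $1.00 payment surrenders roughly a third to fees.
print(net_received(1.00))
```

The fixed per-transaction charge is the killer: it dwarfs any sub-cent payment regardless of the percentage rate, which is why payments below a certain floor simply aren’t processed today.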
All of which we accept not because it’s “good enough” but because we’ve been conditioned to have nominal expectations.
Which is exactly why we should care—why decentralized payment protocols matter—to challenge those expectations and shine a light on the technological stagnation of an industry and the engine for our global economy.
Far from its misunderstood characterization as an ideological revolution to usurp institutions or a subversive vehicle for the dark arts, the cryptocurrency movement is about advancing the frontier, the very kind of progress Tyson espouses. It’s the embodiment of the human spirit and our collective refusal to settle. When all else fails, ingenuity drives us forward.
What happens when we can send each other money cheaply, instantly, and anywhere in the world? What happens when decentralized networks and smart contracts eliminate scores of middlemen, reduce the cost of doing business, and bridge regional markets? What happens when the very idea of money, value, and payments is flipped on its head and left to the devices and imaginations of enterprising entrepreneurs, community leaders, and artists—suddenly empowered by a platform that’s open, agnostic, and inclusive? What happens when the overlooked finally have a chance to participate?
What happens when our financial system is designed from the ground up within the parameters of our digital reality, composed by the language of ones and zeros? What does that world look like? The honest answer is that we don’t know—we can’t know. What we do know is that a better connected, more efficient tomorrow is a step forward from today—and those steps forward, no matter how insignificant they might seem, ultimately benefit everyone.
But the real beauty is that we don’t have to wait until tomorrow to make a difference—if not for well-off 20-somethings, then for those in sub-Saharan Africa, who, on average, pay a 12 percent fee on a $200 remittance, or for the billions worldwide who remain under- or unbanked, effectively excluded from opportunity’s machine.
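The remittance arithmetic is worth making concrete. A short sketch using only the figures cited above (a 12 percent average fee on a $200 transfer):

```python
# Remittance cost at the average fee cited in the text.
principal = 200.00
fee_rate = 0.12          # 12 percent, per the figure quoted above

fee_paid = principal * fee_rate      # $24 lost to intermediaries
received = principal - fee_paid      # $176 actually delivered

print(f"Fee: ${fee_paid:.2f}, delivered: ${received:.2f}")
```

In other words, nearly a month’s worth of groceries in many recipient countries evaporates in transit fees on a single transfer—the kind of friction cheaper rails are positioned to remove.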
And they’re not the only ones worried about the world passing them by.
In contemplating society’s potential failure to properly embrace scientific and technological innovation, Tyson had only this to say: “I don’t want to be left behind.”