Reporting Fives—Week Five
I trust this finds you in pursuit of wisdom,
Monday, 27 February 2017
Humanity in the fourth Industrial Revolution
We Sapiens possess an apparently unquenchable desire for innovation. The techno-optimists among us would say that innovation will likely solve any problem we are bound to face. But often technological interventions only change the nature of a problem, or, at worst, create an entirely new problem. This is not to say we should not pursue new technology as it comes into reach, but rather that we should not be so naive as to think that new solutions won't present unforeseen negative outcomes.
As new technologies change our world, our place within it also changes. This change is in equal measure exciting, empowering, and frightening. Just as the Industrial Revolution of the 18th and 19th centuries, and the Agricultural Revolution before that, changed how we lived and worked, the fourth Industrial Revolution has heralded a new paradigm for us to grapple with.
This revolution manifests in the deeply embedded nature of technology within our society. It is marked by emergent artificial intelligence, additive manufacturing (3D printing), the Internet of Things, and advanced biotechnology. Some speculate that, for us Sapiens, this could be the end of our species. But at a minimum, the vast proliferation of automation could mean a future without work.
One solution to this problem, favoured by Silicon Valley, is not a technological one, but rather an economic one. In a piece for the New York Times Magazine, Annie Lowrey describes this solution, sometimes called basic or guaranteed income, as »... a curious piece of intellectual flotsam that has washed ashore several times in the past half-millennium...«. It's true: this is not a new idea. In fact, Lowrey goes on to explain that the concept appeared in Thomas More's Utopia in 1516 and again in Agrarian Justice published in 1797 by Thomas Paine. But in recent years it has graduated from mere musings to working prototypes led by Finland and the Netherlands, with lesser experiments run in Canada, India and Namibia. But it is a new solution in Kenya supported by GiveDirectly that is creating the most recent stir.
The great power of new technologies is their ability to leapfrog existing paradigms. In Rwanda, drones now deliver medicine more efficiently than traditional infrastructure could ever have hoped to. The sunk cost of road and rail prevents countries with these adequate albeit 20th-century modes of delivery from fully adopting the more efficient 21st-century solution. Meanwhile, in Kenya, M-Pesa is disrupting the classic late-20th-century model of funds transfer by piggybacking on SMS technology. Lowrey explains that:
> In 2007, Vodafone and the British Department for International Development together built a system, called M-Pesa, for Kenyans to transfer actual shillings from cellphone to cellphone. An estimated 96 percent of Kenyan households use the system today.
Though this system has since spread from Afghanistan to India and Eastern Europe, it is most at home in Kenya, where GiveDirectly is distributing »$24 million in donations for its basic-income effort, including money from founders of Facebook, Instagram, eBay and a number of other Silicon Valley companies«.
But what of the future of our developed economies? Bill Gates presents a novel idea, suggesting that by taxing robot labour we may be able to support the transition of factory workers into aged and child care—areas that are still highly underserved and are well suited to the innate empathy of humans.
Whether we adopt a universal income or new, innovative taxes, it is clear that we must consider the impact of our technocratic future.
Disclosure: Björn personally supports GiveDirectly's basic income trial with monthly donations.
Inspired by: The New York Times Magazine
Tuesday, 28 February 2017
Some 70,000 years ago we Sapiens enjoyed our Cognitive Revolution, although it is somewhat unclear exactly why. During this time we began to behave in new and ingenious ways. It was at this point in history that our ancestors began spreading to all corners of the globe. This was followed by the Agricultural Revolution roughly 11,000 years ago, which many believe to have been a great triumph for humanity. But historian Yuval Noah Harari, author of the international bestseller Sapiens: A Brief History of Humankind, takes a different view.
> Scholars once proclaimed that the Agricultural Revolution was a great leap forward for humanity. They told a tale of progress fuelled by human brain power. Evolution gradually produced ever more intelligent people. Eventually, people were so smart that they were able to decipher nature's secrets, enabling them to tame sheep and cultivate wheat. As soon as this happened, they cheerfully abandoned the gruelling, dangerous and often Spartan life of hunter-gatherers, settling down to enjoy the pleasantly satiated life of farmers. That tale is a fantasy. There is no evidence that people became more intelligent with time.
In chapter three of the same book, Harari acknowledges that we Sapiens, as a group, are far more knowledgeable now than at any other point in history, but argues that we have also become reliant on one another to a degree few of us would comfortably acknowledge.
> The human collective knows far more today than did the ancient bands. But at the individual level, ancient foragers were the most knowledgeable and skilful people in history. There is some evidence that the size of the average Sapiens brain has actually decreased since the age of foraging. Survival in that era required superb mental abilities from everyone. When agriculture and industry came along, people could increasingly rely on the skills of others for survival, and new niches for imbeciles were opened up. You could survive and pass your unremarkable genes to the next generation by working as a water carrier or an assembly line worker.
Beyond this industrial-scale codependency we've cultivated, Harari also suggests that, contrary to popular belief, the Agricultural Revolution resulted in a much harder existence for farmers compared with hunter-gatherers.
> Rather than heralding a new era of easy living, the Agricultural Revolution left farmers with lives generally more difficult and less satisfying than those of foragers. Hunter-gatherers spent their time in more stimulating and varied ways, and were less in danger of starvation and disease. The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather it translated into population explosion and pampered elites. The average farmer worked harder than the average forager and got a worse diet in return. The Agricultural Revolution was history's biggest fraud.
Whether or not Harari is right, we should be cautious not to invest so heavily in the next revolution that we have no means of turning back.
Inspired by: Sapiens: A Brief History of Humankind
Wednesday, 1 March 2017
In the past, before the nuclear family and the suburban sprawl it occupies, small communities could be seen as kin, pulling together for the common good. By contrast, the small post-modern family units of today live in suburbs far from work and amenity. Even if we enjoy a well-amenitied suburb, our increased mobility means we often still travel great distances to a particular store or service provider. As such, we are increasingly detached from our immediate surroundings and our neighbours. So many of us find community based not on physical proximity but on personal values, which in many cases are mutually exclusive.
But this is not always the case; some communities remain incredibly tight-knit. One such group is Orthodox and Conservative Jews who observe the mitzvot. For these groups, operating a motor vehicle on Shabbat violates halakha. As such, Orthodox and Conservative Jews walk to synagogue services on that day. For obvious practical reasons, this simple rule means that everyone in the religious community lives within walking distance of the temple and therefore one another.
For those of us living outside the bounds of conservative Judaism, our communities are defined by the range of our cars, bicycles and to a lesser degree, our mass transit. This means where we live and socialise are often separated by large distances, even if we do not perceive them to be.
Perhaps if we were to bind our lives to smaller spaces, we would build stronger communities. If we all lived, worked, shopped, learned and socialised in one place, we might build better, more democratic neighbourhoods. By the same virtue, we might develop neighbourhoods with localised food production and the wide range of basic amenities so many of our suburbs presently lack. It might facilitate a departure from centralised big box stores and cultural homogenisation. It would certainly be more environmentally sound, but then again, it might just create insular and guarded tribalism.
Inspired by: HIP V. HYPE Collective Exchange
Thursday, 2 March 2017
Two years ago, Radiolab presented a story on a relatively new gene-editing technology called CRISPR. Since then, CRISPR—which is short for Clustered Regularly Interspaced Short Palindromic Repeats—has enjoyed a great deal of attention. According to Heidi Ledford in an article for Nature, it had appeared in some 600 research papers by the end of 2014, about six months before Radiolab released their story. For those who understand the possibilities of this technology, it is easy to understand why it would attract such frenzied attention. With this technology, researchers claim we could fight cancer, reduce the risk of Alzheimer's and maybe even bring animals back from extinction.
Recently Radiolab updated their story, in which they share an anecdote featuring Kevin Esvelt from the MIT Media Lab as he walked through the Emerald Necklace in Boston wondering »what if we could encode CRISPR in the genome, what if we programmed the genome to do genome editing on its own?«. With this question, the CRISPR/Cas9 endonuclease gene-drive was born.
The implications of a self-editing genome are vast. Unlike a regular CRISPR intervention, which follows the Mendelian inheritance model, a gene drive replicates itself indefinitely. At first glance, this seems an amazing force for good if we imagine it in the context of a malaria-resistant mosquito that has a 100% chance of passing on its resistance to 100% of its offspring. But it takes just a moment to understand how dangerously uncontrollable the effects of such an intervention might be.
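To see why a drive spreads so relentlessly, consider a toy simulation (hypothetical parameters, written in Python purely for illustration): under Mendelian inheritance a carrier parent transmits an allele to roughly half its offspring, so a rare neutral allele merely drifts, while a gene drive copies itself onto the partner chromosome, pushing transmission towards 100% so the allele roughly doubles in frequency each generation until it fixes.

```python
import random

random.seed(42)  # for a reproducible run of this toy model

def carrier_fraction(generations, drive, f0=0.01, pop=10_000):
    """Fraction of a population carrying an engineered allele over time.

    Toy model with hypothetical numbers: each offspring draws two
    parents at random; a carrier parent transmits the allele with
    probability 0.5 under Mendelian inheritance, but ~1.0 under a
    gene drive, which copies itself onto the partner chromosome.
    """
    transmit = 1.0 if drive else 0.5
    f = f0
    history = [f]
    for _ in range(generations):
        carriers = 0
        for _ in range(pop):
            # the offspring inherits the allele if either randomly
            # drawn parent carries it and transmission succeeds
            inherited = any(
                random.random() < f and random.random() < transmit
                for _ in range(2)
            )
            carriers += inherited
        f = carriers / pop
        history.append(f)
    return history

mendelian = carrier_fraction(10, drive=False)
driven = carrier_fraction(10, drive=True)
# The Mendelian allele hovers near its starting frequency of 1%,
# while the driven allele approaches fixation within ten generations.
```

Real population-genetic models of gene drives must also account for fitness costs, resistance alleles and mating structure; this sketch captures only the headline difference in transmission probability.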
In fact, in 2016 the National Academies of Sciences, Engineering, and Medicine released a report which stated:
> Gene-drive modified organisms hold promise for addressing difficult-to-solve challenges, such as the eradication of insect-borne infectious diseases and the conservation of threatened and endangered species. However, proof-of-concept in a few laboratory studies to date is not sufficient to support a decision to release gene-drive modified organisms into the environment. The potential for gene drives to cause irreversible effects on organisms and ecosystems calls for a robust method to assess risks. A phased approach to testing, engagement of stakeholders and publics, and clarified regulatory oversight can facilitate a precautionary, step-by-step approach to research on gene drives without hindering the development of new knowledge.
As it stands, researchers like Kevin Esvelt are experimenting with gene drives that include fail-safes to prevent them from persisting beyond just a few generations. But even so, he warns that this technology means that it is now »... at least theoretically possible for one person to decide to change the local or possibly the global environment«.
However worrisome that is, the bigger issue is the ethical implications of one person making, without consent, a genetic change to not just one organism but to all future generations of that organism.
Inspired by: Radiolab: Update: CRISPR
Friday, 3 March 2017
Practical quantum computers
Long before Intel gave us the Pentium, released with a 60 MHz clock speed in 1993, researchers were imagining processors unconstrained by the binary nature of transistors. In fact, as early as 1980, Yuri Manin described the mathematical foundations of quantum computing in his paper entitled Computable and Uncomputable. But until now it has been notoriously difficult to fabricate stable qubits able to achieve both superposition and entanglement.
Thanks to the work of Leo Kouwenhoven and his team at Delft, the MIT Technology Review has listed quantum computers among its '10 Breakthrough Technologies' for 2017 noting that: »... a raft of previously theoretical designs are actually being built«. The article goes on to indicate that the growing corporate interest in quantum computing is playing a significant role in the viability of the technology as those corporations are offering financial backing to both the research and development of assorted adjacent technologies.
It's worth noting that IBM first demonstrated a working quantum computer in 2000 and has been making an iteration of this available to the public via a cloud-based service since last year. Google too has been working on its own quantum computer, while NASA, Lockheed Martin and the Los Alamos National Laboratory are working with the Canadian company D-Wave Systems. However, Wired warned that:
> ... today’s quantum computers still aren’t practical for most real-world applications. Qubits are fragile and can be easily knocked out of the superposition state. Meanwhile, quantum computers are extremely difficult to program today because they require highly specialized knowledge.
Russ Juskalian of the MIT Technology Review offers a further warning aimed at D-Wave's quantum annealing technology by claiming that:
> The approach, skeptics say, is at best applicable to a very constrained set of computations and might offer no speed advantage over classical systems.
But by Juskalian's own reporting, that missing speed advantage might soon materialise if Hartmut Neven, the head of Google’s quantum computing effort, delivers a 49-qubit system in the coming year as promised. This is an important milestone, as Juskalian describes:
> The target of around 50 qubits isn’t an arbitrary one. It’s a threshold, known as quantum supremacy, beyond which no classical supercomputer would be capable of handling the exponential growth in memory and communications bandwidth needed to simulate its quantum counterpart. In other words, the top supercomputer systems can currently do all the same things that five- to 20-qubit quantum computers can, but at around 50 qubits this becomes physically impossible.
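The physical impossibility Juskalian describes is easy to sanity-check: simulating an n-qubit pure state on a classical machine means storing 2^n complex amplitudes, so the memory requirement doubles with every qubit added. A quick back-of-the-envelope calculation in Python:

```python
def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed to hold an n-qubit state vector: 2**n complex
    amplitudes at 16 bytes each (double-precision complex)."""
    return 16 * 2 ** n_qubits

for n in (20, 30, 40, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:,.4f} TiB")
# 20 qubits fit in ~16 MiB and 30 in ~16 GiB, but 40 demand ~16 TiB
# and 50 roughly 16 PiB, beyond the RAM of any classical machine.
```

This counts memory alone; a faithful simulation must also apply matrix operations across that entire vector at every gate, which is why the 50-qubit mark is treated as a practical threshold rather than a precise law.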
Despite all this, quantum computing is still largely confined to the lab. But we should be mindful of how it might impact our lives. For instance, this technology could render state-of-the-art asymmetric encryption based on the RSA model (the history of which was covered in a recent episode of 50 Things That Made the Modern Economy) obsolete, since one of the greatest strengths of a quantum computer is factoring large numbers. While RSA will be replaced by quantum-resistant cryptography in time, it's fair to expect a period in which only universities, wealthy corporations and governments will wield this power.
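The threat to RSA can be made concrete with the standard textbook example (deliberately tiny, insecure primes; real keys use 2048-bit moduli): the private key falls out trivially once the public modulus is factored, and efficient factoring is precisely what Shor's algorithm promises a quantum computer.

```python
# Toy RSA with tiny textbook primes. Security rests entirely on
# the difficulty of recovering p and q from the public modulus n.
p, q, e = 61, 53, 17
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120, computable only by factoring n
d = pow(e, -1, phi)            # private exponent: 2753

message = 65
cipher = pow(message, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == message  # decrypt with the private key

def factor(n):
    """Naive trial division: exponential in the bit-length of n,
    which is what keeps classical attackers out."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# An attacker who can factor n recovers the private key the same way
# the key holder built it; a quantum computer running Shor's
# algorithm would do this step in polynomial time.
fp, fq = factor(n)
stolen_d = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, stolen_d, n) == message
```

Trial division is hopeless at real key sizes, which is the entire point: the scheme is safe only for as long as no machine, classical or quantum, can factor the modulus in reasonable time.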
Inspired by: MIT Technology Review & Wired