Intelligence Series

When a grain becomes a heap: Our responsibility as humans in an age of artificial intelligence

Imagine you have a grain of sand. Even by a vast stretch of the imagination, that is not a heap. What if you add one more grain? Still not a heap. If you accept the two premises above – that a grain of sand is not a heap, and that adding a grain to a collection that is not a heap does not make a heap – then it stands to reason that you can never get a heap of sand starting from a single grain.

Thus goes a variation of the Sorites Paradox – a concept that has intrigued philosophers since it was first formulated by Eubulides in the 4th century BC. The paradox illustrates that when changes are incremental – when each change is minor compared to the one before it – we, as humans, find it difficult to grasp the magnitude of the cumulative change.
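The argument sketched above is, formally, a mathematical induction – which is why it is so hard to dismiss. Writing $H(n)$ for “$n$ grains of sand form a heap”, the two premises and the conclusion read:

```latex
\neg H(1) \quad &\text{(one grain is not a heap)} \\
\forall n,\ \neg H(n) \rightarrow \neg H(n+1) \quad &\text{(one more grain never makes a heap)} \\
\therefore\ \forall n,\ \neg H(n) \quad &\text{(by induction, no amount of sand is ever a heap)}
```

The conclusion is plainly false – heaps exist – which is what forces us either to question one of the premises or to admit that our intuitions handle vague, incremental boundaries badly.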

Artificial Intelligence (AI) has had a tumultuous history – progress has been, at best, linear, with significant setbacks. Humans started augmenting intelligence as early as 1877 with the first calculators (if you don’t go all the way back to cave paintings, which you could consider an augmentation of memory through external tools). We had the golden age of artificial intelligence, from the mid 1950s to the 1970s, when scientists across disciplines were excited about the possibilities of machine intelligence. But that was not to last – the “AI winter” set in in the early 1970s and lasted most of the decade; technology just wasn’t advanced enough to bring the vision of the imaginative scientists to fruition. The next boom lasted about seven years, from 1980 until 1987, at which point it lost steam again. Now we are once more at a heady period in AI history. It wouldn’t be unreasonable to ask: is this really it? Why should we pay particular attention this time? Why do we now think that the accumulated grains of sand finally make a heap?

Because of an unusual juxtaposition of three phenomena, none of which is a fad, and none of which is going away.

  1. Advances in AI algorithms – deep learning has seen much progress in recent years.
  2. Huge compute power in the cloud, available to individuals and organizations like never before.
  3. Large sets of data generated by people and machines at unprecedented rates.

These three forces come together in a self-reinforcing virtuous circle, creating an unprecedented context for AI growth.

My intent is not to predict the future of AI, but to say that we have to sit up and pay attention – not just us technologists, but all of us as human beings. It is important to acknowledge and realize the unique moment in history that we are in. Why? Because now, more than ever, is our opportunity to design the AI of the future, in so far as we can.

Parents, as much as they would like to believe otherwise, do not have much control over their children’s lives or futures. Scientists are divided on the nature vs. nurture question. Even so, every well-meaning parent around the world does what they can to ensure that their children grow up with a sense of right and wrong – a foundation of values that will help the children find their own “right” path. Our responsibility towards AI is not all that different.

Applying the AI version of nature and nurture, an AI’s “growth” (again to borrow a human term) is determined by:

  • the technological capability (akin to our genes) – the ability to learn. This is what excites most engineers, and it is where technology companies and research organizations have made huge leaps over the past several years.
  • the learning material (akin to a child’s family and school environment) – this determines what the AI will learn. AI needs vast amounts of data, and fortunately, going back to point 3 above, we have humongous amounts of data. But is it the right kind of data?

If we let AI learn from the past, what we will get is a machine-run version of the past – a past rife with inequality, bias, prejudice and violence. I am no pessimist, and I don’t deny the beauty of the world we live in, but I am also not blind (and neither should you be) to the deep-rooted issues that have plagued mankind for centuries.
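To make the point concrete, here is a deliberately tiny sketch in Python. The data is entirely fabricated for illustration (the group names, counts and outcomes are all invented), and the “model” is nothing more than memorized historical outcome rates – yet that is enough for it to faithfully replay whatever bias the history encodes.

```python
from collections import Counter

# Fabricated historical records: group_a was approved 4-to-1 in the
# past, group_b was denied 4-to-1. The skew is the whole point.
history = ([("group_a", "approved")] * 80 + [("group_a", "denied")] * 20 +
           [("group_b", "approved")] * 20 + [("group_b", "denied")] * 80)

def predict(group):
    """A toy 'model': predict the majority historical outcome for a group."""
    outcomes = Counter(outcome for g, outcome in history if g == group)
    return outcomes.most_common(1)[0][0]

print(predict("group_a"))  # "approved" -- the past, replayed
print(predict("group_b"))  # "denied"  -- the past bias, replayed too
```

Real machine-learning systems are vastly more sophisticated, but the failure mode is the same: a model trained only on biased history will reproduce that history.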

What if – just imagine, what if – we are at that unique moment in history where we can design a better future? An AI future, as I imagine it, regardless of the connotations the phrase might conjure in most minds, is not a future of glitzy gadgets and technological pizzazz. It is a world where, with the aid of machines, we overcome our own limitations and augment ourselves so that we are able to create an inclusive, peaceful, empowered world – a world where it is not merely the winner who survives, but everyone who thrives.

What if, in our analogy of parenting AI, our child is no longer a toddler, but is also not yet a rebellious teenager? What if they are in that wonderful window of age where they listen, absorb and are willing to learn from their parents? Whether we like it or not, whether we feel ready for the responsibility or not, we are the generation that gets to be accountable for the generations of machines to come.

Again, my intent here is not to predict the future; our world is far too complex to predict a single outcome. But it is a world where actions matter. If the flutter of a butterfly’s wing can cause a typhoon (the butterfly effect is the idea that small changes can have non-linear impacts on a complex system), it stands to reason that our actions have impact – and collective, coordinated actions have massive impact.

Technology, like any other tool, is neutral. Whether it helps or harms depends on the human – on every single human. What we need to do, and can do, as individuals, organizations and society is what we will explore in the next posts in this series.

For now, let me end with a quote from TED curator Chris Anderson in the book “What to Think About Machines That Think” by John Brockman (a recommended read).

Intelligence doesn’t reach its full power in small units. Every additional connection and resource can help expand its power. A person can be smart, but a society can be smarter still…

By that logic, intelligent machines of the future wouldn’t destroy humans. Instead, they would tap into the unique contributions that humans make. The future would be one of ever richer intermingling of human and machine capabilities. I’ll take that route. It’s the best of those available.
Together we’re semiunconsciously creating a hive mind of vastly greater power than this planet has ever seen — and vastly less power than it will soon see.

“Us versus the machines” is the wrong mental model. There’s only one machine that really counts. Like it or not, we’re all — us and our machines — becoming part of it: an immense connected brain. Once we had neurons. Now we’re becoming the neurons.