
Crossing the Creepy Line

In my last post, which dealt with the drive towards the merging of humankind with machines, I quoted Rudolf Steiner to the effect that this was something that could not and should not be stopped but was bound to happen:

“These things will not fail to appear; they will come. What we are concerned with is whether, in the course of world history, they are entrusted to people who are familiar in a selfless way with the great aims of earthly evolution and who structure these things for the health of human beings or whether they are enacted by groups of human beings who exploit these things in an egotistical or in a group-egotistical sense. That is what matters”.

(from Lecture 12 of the series The Reappearance of Christ in the Etheric, 1910)

So the real issue with artificial intelligence (AI) and the mechanisation of human beings is not whether this should happen (because it inevitably will), but whether it can be done in a way that will benefit humanity without being exploitative or dangerous. It is fundamentally a question of morality. In a lecture given in 1923, Steiner said:

“(…) we can be sure that no moral impulse intervenes in the mechanism of a machine. There is no direct connection between the moral world order and machines. Consequently, when the human organism is presented as a kind of machine, as happens more and more often in the modern scientific outlook, the same then applies to us, and moral impulses are only an illusion. At best, we can hope that some being, made known to us through revelation, will intervene in the moral world order, reward the good, and punish the evil people. But we cannot see a connection between moral impulses and physical processes inherent in the order of the world”.

(from Earthly Knowledge and Heavenly Wisdom, 9 lectures given in Dornach in February 1923, GA221)

If there is no connection between moral impulses and the physical processes inherent in the order of the world, then it is clear that human morality becomes of overwhelming importance if we are to ensure that AI does not become a disaster for humankind. But as we know, in situations where there is money to be made and power to be exercised, morality can be the first casualty.

In a famous interview in 2010, Eric Schmidt, who until this year was Google’s chairman, said that Google’s policy in relation to AI was to get right up to the “creepy line” but not to cross it. His view at the time was that to implant electrodes in somebody’s brain would be to cross that creepy line, “at least until the technology gets better”.

In an earlier post, I quoted the tech showman and entrepreneur extraordinaire Elon Musk’s warning that AI is more dangerous than the threat posed by dictator Kim Jong-un’s regime in North Korea. Mr Musk took to Twitter to say: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” He posted the comment along with an image of an anti-gambling addiction poster with the slogan: “In the end the machines will win.” He added: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too”. Mr Musk has warned in the past that AI poses an “existential threat” to human civilisation and should be regulated accordingly, and has compared developers creating AI to people summoning demons they cannot control.


Elon Musk (photo via Fortune)

Elon Musk knows whereof he speaks, because as well as running Tesla (electric cars) and SpaceX (launching its Starlink broadband satellites into orbit and working towards the ultimate aim of building a self-sustaining city on Mars), he is also the founder of Neuralink, a company whose aim is to facilitate direct communication between people and machines. Naturally, as with all these companies, the claim is that they are doing this to help people with severely damaged brains or nervous systems. But in a glitzy presentation in San Francisco earlier this month, Mr Musk spoke of more futuristic plans that could give humans “the option of merging with artificial intelligence” by exchanging thoughts with a computer, so as to augment the mental capacity of healthy people.

There are literally hundreds of companies and academic labs working in the expanding field of neurotechnology, all seeking to develop different types of interface between brains and computers for medical and recreational purposes. Neuralink, however, is the only one that aims for “symbiosis with AI” as a business goal. Its researchers have published a 12-page scientific paper detailing a prototype device which has been implanted in the brains of rats, and it seems likely, from hints dropped at the San Francisco presentation, that tests are already being carried out on monkeys. The company is hoping to get permission from the US Food and Drug Administration to begin a clinical trial in patients with brain damage next year.

Neuralink’s first experiments inserted 3,000 electrodes into the brains of rats, but it plans to raise this to 10,000 in an early clinical version. The company has devised a surgical robot to insert the electrodes through small holes in the skull and then weave them through the brain in flexible threads, each thinner than a human hair. The robot has a vision system designed to avoid blood vessels and place the electrodes in specific brain regions. The version for humans will exchange neuronal data between the electrodes and an external computer via a processor with a wireless transmitter implanted behind the ear. Eric Schmidt’s “creepy line” is now being crossed.

Where Elon Musk seeks to score over competitors is by providing large numbers of electrodes in the brain, so that there can be a fast flow of information between the brain and the computer. “The thing that will ultimately constrain our ability to be symbiotic with AI is bandwidth”, he says. Mr Musk added: “after solving a bunch of brain-related diseases, (the point is) mitigating the existential threat of AI”.
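To put Mr Musk’s point about bandwidth into perspective, here is a rough back-of-the-envelope sketch in Python of the raw data rate such an implant would generate. The electrode counts (3,000 and 10,000) come from the figures above; the sampling rate and bit depth are illustrative assumptions typical of neural recording hardware, not published Neuralink specifications.

```python
# Rough estimate of the raw data rate of a neural implant.
# Electrode counts are taken from the text above; the sampling rate and
# bit depth are illustrative assumptions (typical values for neural
# recording hardware), not published Neuralink specifications.

SAMPLE_RATE_HZ = 20_000   # assumed samples per second per electrode
BITS_PER_SAMPLE = 10      # assumed ADC resolution in bits

def raw_data_rate_mbit_s(num_electrodes: int) -> float:
    """Uncompressed data rate in megabits per second."""
    return num_electrodes * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1_000_000

for electrodes in (3_000, 10_000):
    print(f"{electrodes:>6,} electrodes -> "
          f"{raw_data_rate_mbit_s(electrodes):,.0f} Mbit/s raw")

# Expected output:
#  3,000 electrodes -> 600 Mbit/s raw
# 10,000 electrodes -> 2,000 Mbit/s raw
```

Even on these assumptions the raw stream runs to hundreds or thousands of megabits per second, far more than a small transmitter behind the ear could comfortably send, which is why devices of this kind typically detect and compress the neural “spikes” on-chip rather than streaming raw samples. Either way, electrode count, and with it bandwidth, is exactly the constraint Mr Musk describes.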

There is no doubt that Neuralink’s research into brain-computer interfaces will offer hope of restoring neurological function for people with spinal cord injury, stroke, traumatic brain injury or other diseases and injuries of the nervous system. But as Mr Musk has implied, AI also offers more offensive, anti-life possibilities. As artificial intelligence puts its best public face forward in terms of helping those with brain or nervous system disorders, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programmes and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.


Lethal Autonomous Weapons (photo via Leading Edge)

It’s clear that we are now in the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivises speed over safety and ethics in the development of new technologies, and as these technologies proliferate they offer no long-term advantage to any one player. You might think that governments would have learned these lessons from past mistakes, but this is apparently not so; the utter futility and immorality of these programmes make one despair. The United States, China, Russia, the United Kingdom, France, Israel, and South Korea are all known to be developing AI for military purposes. Further information can be found in State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by the Dutch peace organisation PAX.

Is all this as scary as it sounds? In his new book Novacene: The Coming Age of Hyperintelligence, the veteran scientist James Lovelock reflects on the future of life on Earth and the prospect of superintelligent machines. Lovelock made his name with his “Gaia hypothesis”, the idea that the Earth can be understood as a single, complex, self-regulating system, much like an organism. Now in his 100th year, Lovelock says that the machines of the future “will have designed and built themselves from the AI systems we have already constructed. These will soon become thousands, then millions of times more intelligent than us”. But Lovelock gives two reasons why he does not think that we should see this as the apocalypse.


James Lovelock (photo via Wikipedia)

The first reason is that the machines will need us, because they too will be threatened by global warming: “by remarkable chance, it happens that the upper temperature for both organic and electronic life on the ocean planet Earth are almost identical and close to 50C”. Therefore both machines and humans have an interest in ensuring a cool planet and the machines will join us in finding new ways to undo the damage we have done and re-engineer the planet back to climate equilibrium. (Though why the machines will need human beings rather than trees to do this is less clear.)

The second reason is Lovelock’s view that understanding the universe is the real purpose of life. The Earth gave rise to humans as the first stage of this process, but it will be our hyperintelligent machines “that will lead the cosmos to self-knowledge”. My own view of Lovelock’s planetary perspective, in which the rise of the machines is an evolutionary inevitability, is somewhat coloured by his support for nuclear power; he regards as “auto-genocide” our reluctance to embrace nuclear power in order to stop fossil-fuel-induced global warming. There is, nevertheless, some reason to believe that Lovelock’s thinking, and in particular his Gaia hypothesis, has been influenced by Rudolf Steiner, whose work was brought to Lovelock’s attention by his friend William Golding, the novelist and author of Lord of the Flies. There is a fascinating account of this in an article by Michael Ruse, published online in the Southern Cross Review.

Rudolf Steiner, who a century ago foresaw the present drive towards merging humankind with machines, was also the person who told us what would need to happen if this was not to end in disaster for the human race. Here are two quotations on this theme:

“A driving force which can only be moral, that is the idea of the future; a most important force, with which culture must be inoculated, if it is not to fall back on itself. The mechanical and the moral must interpenetrate each other, because the mechanical is nothing without the moral. Today we stand hard on this frontier. In the future, machines will be driven not only by water and steam but by spiritual force, by spiritual morality. This power is symbolized by the Tau sign and was indeed poetically symbolized by the image of the Holy Grail.”

(from The Temple Legend, The Royal Art in a New Form, Berlin, Lecture 20, January 2, 1906)

“Humanity must learn to deal with nature as the gods themselves have done: not building machines in an indifferent way, but doing everything as an act of divine service and bringing the sacramental into everything. The real demons have to be really driven out by treating the handling of machinery as something sacred.”

(from The Karma of Vocation, Dornach, April 27, 1916)
