
Crossing the Creepy Line

In my last post, which dealt with the drive towards the merging of humankind with machines, I quoted Rudolf Steiner to the effect that this was something that could not and should not be stopped but was bound to happen:

“These things will not fail to appear; they will come. What we are concerned with is whether, in the course of world history, they are entrusted to people who are familiar in a selfless way with the great aims of earthly evolution and who structure these things for the health of human beings or whether they are enacted by groups of human beings who exploit these things in an egotistical or in a group-egotistical sense. That is what matters”.

(from Lecture 12 of the series The Reappearance of Christ in the Etheric, 1910)

So the real issue with artificial intelligence (AI) and the mechanisation of human beings is not whether this should happen (because it inevitably will) but whether it can be done in a way that will be of benefit to humanity without being exploitative or dangerous. It is fundamentally a question of morality. In a lecture given in 1923, Steiner said:

“(…) we can be sure that no moral impulse intervenes in the mechanism of a machine. There is no direct connection between the moral world order and machines. Consequently, when the human organism is presented as a kind of machine, as happens more and more often in the modern scientific outlook, the same then applies to us, and moral impulses are only an illusion. At best, we can hope that some being, made known to us through revelation, will intervene in the moral world order, reward the good, and punish the evil people. But we cannot see a connection between moral impulses and physical processes inherent in the order of the world”.

(from Earthly Knowledge and Heavenly Wisdom, 9 lectures given in Dornach in February 1923, GA221)

If there is no connection between moral impulses and physical processes inherent in the order of the world, then it is clear that, if we are to ensure that AI does not become a disaster for humankind, human morality becomes of overwhelming importance. But as we know, in situations where there is money to be made and power to be exercised, morality can become the first casualty.

In a famous interview in 2010, Eric Schmidt, who up until this year was Google’s chairman, said in relation to AI that Google’s policy is to get right up to the “creepy line” but not to cross it. His view at the time was that to implant electrodes in somebody’s brain would be to cross that creepy line, “at least until the technology gets better”.

In an earlier post, I quoted the tech showman and entrepreneur extraordinaire Elon Musk’s warning that AI is more dangerous than the threat posed by dictator Kim Jong-un’s regime in North Korea. Mr Musk took to Twitter to say: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” He posted the comment along with an image of the anti-gambling addiction poster with the slogan: “In the end the machines will win.” Mr Musk added: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too”. Mr Musk has warned in the past that AI should be better regulated since it poses an “existential threat” to human civilisation. He has also compared developers creating AI to people summoning demons they cannot control.


Elon Musk (photo via Fortune)

Elon Musk knows whereof he speaks, because as well as running Tesla (electric cars) and SpaceX (putting satellites for 5G into space and working towards the ultimate aim of building a self-sustaining city on Mars), he is also the founder of Neuralink, a company whose aim is to facilitate direct communication between people and machines. Naturally, as with all these companies, the claim is that they are doing this to help people with severely damaged brains or nervous systems. But in a glitzy presentation in San Francisco earlier this month, Mr Musk spoke of more futuristic plans to give humans “the option of merging with artificial intelligence” by exchanging thoughts with a computer, so as to augment the mental capacity of healthy people.

There are literally hundreds of companies and academic labs working in the expanding field of neurotechnology. They are all seeking to develop different types of interface between brains and computers for medical and recreational purposes. Neuralink, however, is the only one that aims for “symbiosis with AI” as a business goal. Its researchers have issued a 12-page scientific paper with details of a prototype device which has been implanted in the brains of rats, and it seems likely, from hints dropped at the San Francisco presentation, that tests are already being carried out on monkeys. The company is hoping to get permission from the US Food and Drug Administration to begin a clinical trial in patients with brain damage next year.

Neuralink’s first experiments inserted 3,000 electrodes into the brains of rats but it plans to raise this to 10,000 in an early clinical version. The company has devised a surgical robot to insert the electrodes through small holes in the skull and then weave them through the brain in flexible threads, each thinner than a human hair. The robot has a vision system designed to avoid blood vessels and place the electrodes in specific brain regions. The version for humans will exchange neuronal data between the electrodes and an external computer via a processor with a wireless transmitter implanted behind the ear. Eric Schmidt’s “creepy line” is now being crossed.

Where Elon Musk seeks to score over competitors is by providing large numbers of electrodes in the brain, so that there can be a fast flow of information between the brain and the computer. “The thing that will ultimately constrain our ability to be symbiotic with AI is bandwidth”, he says. Mr Musk added: “after solving a bunch of brain-related diseases, [the point is] mitigating the existential threat of AI”.

There is no doubt that Neuralink’s research into brain/computer interfaces will offer hope of restoring neurological function to people with spinal cord injury, stroke, traumatic brain injury or other diseases or injuries of the nervous system. But as Mr Musk has implied, AI also offers more offensive, anti-life possibilities. As artificial intelligence puts its best public face forward in terms of helping those with brain or nervous system disorders, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programmes and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.


Lethal Autonomous Weapons (photo via Leading Edge)

It’s clear that we are now in the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivises speed over safety and ethics in the development of new technologies, and as these technologies proliferate they offer no long-term advantage to any one player. You might think that governments would have learned these lessons from past mistakes but this is apparently not so; the utter futility and immorality of these programmes make one despair. The United States, China, Russia, the United Kingdom, France, Israel, and South Korea are all known to be developing AI for military purposes. Further information can be found in State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by PAX.

Is all this as scary as it sounds? In his new book Novacene: The Coming Age of Hyperintelligence, the veteran scientist James Lovelock reflects on the future of life on Earth and the prospect of superintelligent machines. Lovelock made his name with his “Gaia hypothesis,” the idea that the Earth can be understood as a single, complex, self-regulating system, much like an organism. Now in his 100th year, Lovelock says that the machines of the future “will have designed and built themselves from the AI systems we have already constructed. These will soon become thousands, then millions of times more intelligent than us”. But Lovelock gives two reasons why he does not think that we should see this as the apocalypse.


James Lovelock (photo via Wikipedia)

The first reason is that the machines will need us, because they too will be threatened by global warming: “by remarkable chance, it happens that the upper temperature for both organic and electronic life on the ocean planet Earth are almost identical and close to 50C”. Therefore both machines and humans have an interest in ensuring a cool planet and the machines will join us in finding new ways to undo the damage we have done and re-engineer the planet back to climate equilibrium. (Though why the machines will need human beings rather than trees to do this is less clear.)

The second reason is Lovelock’s view that understanding the universe is the real purpose of life. The Earth gave rise to humans as the first stage of this process but it will be our hyperintelligent machines “that will lead the cosmos to self-knowledge”. My own view of Lovelock’s planetary perspective, in which the rise of the machines is an evolutionary inevitability, is somewhat coloured by his support for nuclear power; he regards as “auto-genocide” our reluctance to embrace nuclear power in order to stop fossil-fuel-induced global warming. There is, nevertheless, some reason to believe that Lovelock’s thinking, and in particular his Gaia hypothesis, has been influenced by Rudolf Steiner, whose work was brought to Lovelock’s attention by his friend William Golding, the novelist and author of Lord of the Flies. There is a fascinating account of this in an article by Michael Ruse, published online in the Southern Cross Review.

Rudolf Steiner, who a century ago foresaw the present drive towards merging humankind with machines, was also the person who told us what would need to happen if this was not to end in disaster for the human race. Here are two quotations on this theme:

“A driving force which can only be moral, that is the idea of the future; a most important force, with which culture must be inoculated, if it is not to fall back on itself. The mechanical and the moral must interpenetrate each other, because the mechanical is nothing without the moral. Today we stand hard on this frontier. In the future, machines will be driven not only by water and steam but by spiritual force, by spiritual morality. This power is symbolized by the Tau sign and was indeed poetically symbolized by the image of the Holy Grail.”

(from The Temple Legend, The Royal Art in a New Form, Berlin, Lecture 20, January 2, 1906)

“Humanity must learn to deal with nature as the gods themselves have done: not building machines in an indifferent way, but doing everything as an act of divine service and bringing the sacramental into everything. The real demons have to be really driven out by treating the handling of machinery as something sacred.”

(from The Karma of Vocation, Dornach, April 27, 1916)


Don’t be evil

I don’t suppose that Rudolf Steiner’s lectures use the word “fun” very often. The words “serious” and “earnest” are much more common. So I was struck by the following passage in which he did use that word – but not in a cheerful context (this is Steiner, after all). Here is the passage, from Lecture Three of the series entitled The Influences of Lucifer and Ahriman:

“The fact that — to use a colloquialism — people in the future are not going to get much fun out of developments on the physical plane will bring home to them that further evolution must proceed from spiritual forces.”

Steiner’s view was that the time when human progress was possible through purely physical means is now over. Human progress will be possible in the future but only through development on a higher level than that of the processes of the physical plane. Speaking just after the First World War, he went on to say: “This can be understood only by surveying a lengthy period of evolution and applying what is discovered to experiences that will become more and more general in the future. The trend of forces that will manifest in the well-nigh rhythmical onset of war and destruction — processes of which the present catastrophe (ie the First World War) is but the beginning — will become only too evident. It is childish to believe that anything connected with this war can bring about a permanent era of peace for humanity on the physical plane. That will not be so. What must come about on the earth is spiritual development…”

In the one hundred years since he spoke, Steiner has certainly been proved right about the impossibility of a permanent era of peace for humanity – the intervening century has been the most terrible in the history of the world. What else did he foresee?

“Just as once in the East there was a Lucifer incarnation, and then, at the midpoint, as it were, of world evolution, the incarnation of Christ, so in the West there will be an incarnation of Ahriman. …This ahrimanic incarnation cannot be averted; it is inevitable, for humanity must confront Ahriman face to face. He will be the individuality by whom it will be made clear what indescribable cleverness can be developed if they call to their help all that earthly forces can do to enhance cleverness and ingenuity. In the catastrophes that will befall humanity in the near future, people will become extremely inventive… Humanity has no knowledge of these things as yet; but not only will they be striven for, they will be the inevitable outcome of catastrophes looming in the near future. And certain secret societies — where preparations are already in train — will apply these things in such a way that the necessary conditions can be established for an actual incarnation of Ahriman on the earth. This incarnation cannot be averted, for people must realise during the time of the earth’s existence just how much can proceed from purely material processes! We must learn to bring under our control those spiritual or unspiritual currents which are leading to Ahriman.”


Yeshayahu Ben Aharon (photo via the antroposofie en apocalypse blog)

When is this incarnation likely to occur? Tempting though it is to point to many of today’s phenomena as indicators that the incarnation has already happened, my current sense is that things are going to get quite a bit worse yet before Ahriman himself appears in physical form. I recently read an article called Empty Hearts and Technological Singularity by Yeshayahu Ben Aharon (available for download to subscribers to the Academia website) in which he describes the coming merger of the human being with infinitely intelligent machines, as predicted by Ray Kurzweil, Google’s senior futurist.

“In esoteric terms, this means that the human’s free etheric body from the head to the heart will be totally taken over by ahrimanic, infinitely brilliant, wise and powerful intelligences. I would recommend Kurzweil’s book, The Singularity is Near, to everybody interested in the coming future, as an introduction to open your mind and eyes to see where we are going from the ahrimanic point of view. I call this the building of the kindergarten of Ahriman, in preparation for his school that he will build in the 23rd century in America. He will build a school that will work with the etheric forces taken from the rest of the body as well: the head, the heart and the whole thing. This is Ahriman’s kindergarten, the forerunner of his mature school.

The singularity people promise a new kind of immortality. Human beings will be identified with Infinite Intelligence through super computers and so on, and they will experience a sort of immortality for their earthly consciousness, an indefinite life. Your whole soul life, including everything you were thinking and remembering, which you already invested externally in the infinite virtual reality, will be preserved forever. Even if you die physically, it will be preserved and it will continue to evolve and develop through Infinite Intelligence. The idea is that people will not die physically or at least live hundreds of years, since the new technology will overcome the illnesses that medicine could not conquer. But after long years, if they still die at all, all their life will remain as a virtual personality in a tech reality, continuing as it were, a second life. But this will really become the primary life, the life of this individuality as his avatar in virtual reality. If you don’t know this world very well, it will be hard for you to create a picture of it. Look into it; the children already know all about it, they are born into the Matrix, as their parents merge them with the internet immediately. This is the ahrimanic side, because everything is accomplished through Infinite Intelligence, working through virtual reality. In the future, a huge intelligent machine will have been merged into the human body, and humans and infinitely smart, powerful, and all-knowing and all entertaining AI (Artificial Intelligence) will become one and the same…”

It is worth reading the Wikipedia entry about Ray Kurzweil, including his transhumanism (ie the belief that human beings may eventually be able to transform themselves through technology into different beings with abilities so greatly expanded from the natural condition as to merit the label of “post-human beings”) and his predictions for the future. Like Yuval Noah Harari, whom I wrote about here, Kurzweil, for all his high intelligence, is clearly one of the Useful Idiots preparing the way for Ahriman.


Ray Kurzweil (photo via Wikipedia)

As part-evidence for my statement that Kurzweil is an idiot, it should be noted that he has joined the Alcor Life Extension Foundation, a cryonics company. In the event of his declared death, Kurzweil plans to be perfused with cryoprotectants, vitrified in liquid nitrogen, and stored at an Alcor facility in the hope that future medical technology will be able to repair his tissues and revive him. I have written more about the absurd beliefs and practices of cryonics and Alcor here.

It should also be noted that Kurzweil was personally hired by Google co-founder Larry Page. “Don’t be evil” was the motto of Google’s corporate code of conduct, first introduced around 2000. In Google’s IPO (initial public offering of shares) in 2004, a letter from Google’s founders included the following: “Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains.”

However, surprise, surprise – following Google’s corporate restructuring in October 2015, the motto was dropped and replaced in the new corporate code of conduct by the phrase “Do the right thing”. So is AI the right thing for human beings? Is that the way we should be going?


Elon Musk (photo via Wikipedia)

If the world won’t listen to Rudolf Steiner or to anthroposophists, perhaps it will pay more attention to a billionaire entrepreneur: Elon Musk has warned that AI is more dangerous than the threat posed by dictator Kim Jong-un’s regime in North Korea. Mr Musk, chief executive of Tesla and SpaceX, took to Twitter to say: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

He posted the comment along with an image of the anti-gambling addiction poster with the slogan: “In the end the machines will win.” Mr Musk added: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”

Mr Musk has warned in the past that AI should be better regulated since it poses an “existential threat” to human civilisation. He has also compared developers creating AI to people summoning demons they cannot control. Exactly so.
