
Papers presented at an international symposium considering the true nature of extraterrestrial intelligence.


Introduction: The True Nature of Aliens

Is it time to re-think ET?

For well over a half-century, a small number of scientists have conducted searches for artificially produced signals that would indicate the presence of intelligence elsewhere in the cosmos. This effort, known as SETI (Search for Extraterrestrial Intelligence), has yet to find any confirmed radio transmissions or pulsing lasers from other beings. But the hunt continues, recently buoyed by the discovery of thousands of exoplanets. For many, the abundance of habitable real estate makes it difficult to believe that Earth is the only world where life and intelligence have arisen.

SETI practitioners mostly busy themselves with refining their equipment and their lists of target solar systems. They seldom consider the nature of their prey – what form extraterrestrial intelligence might take. Their premise is that any technically sophisticated species will eventually develop signaling technology, irrespective of their biology or physiognomy.

This view may not seem anthropocentric, for it makes no overt assumptions about the biochemistry of extraterrestrials; only that intelligence will arise on at least some worlds with life. However, the trajectory of our own technology now suggests that within a century or two of our development of radio transmitters and lasers, we are likely to build machines with artificial, generalized intelligence. We are engineering our successors, and the next intelligent species on Earth will not only dwarf our own cognitive abilities, but will be able to engineer its own, superior descendants by design, rather than counting on uncertain Darwinian processes. Assuming that something similar happens to other technological societies, the implications for SETI are profound.

In September, 2015, the John Templeton Foundation’s Humble Approach Initiative sponsored a three-day symposium entitled “Exploring Exoplanets: The Search for Extraterrestrial Life and Post-Biological Intelligence.” The venue for the meeting was the Royal Society’s Chicheley Hall, north of London, where a dozen researchers gave informal presentations and engaged in the type of lively dinner table conversations that such meetings inevitably spawn.

The subject matter was broad, ranging from the multi-pronged search for habitable planets and how we might detect life, to the impact of both the search and an eventual discovery. However, the matter of post-biological intelligence – briefly described above – and the possibility of non-Darwinian evolutionary processes were the impetus for many of the symposium contributions.

We present here short write-ups of seven of these talks. They are more than simply interesting: they suggest a revolution in how we should think about, and search for, our intellectual peers. Indeed, they suggest that “peers” may be too generous to Homo sapiens. As these essays argue, the majority of the cognitive capability in the cosmos may be far beyond our own.

-- Seth Shostak

This symposium was chaired by Martin J. Rees, OM, Kt, FRS and Paul C.W. Davies, AM, and organized by Mary Ann Meyers, JTF’s Senior Fellow. Also present was B. Ashley Zauderer, Assistant Director of Math and Physical Sciences at the Templeton Foundation.



DISCUSSING ALIENS: CONSTRAINTS FROM CHEMISTRY AND DARWINISM

Steven A. Benner
The Foundation for Applied Molecular Evolution
Firebird Biomolecular Sciences LLC
13709 Progress Boulevard, Alachua FL 32615
sbenner@ffame.org

ABSTRACT

The definition of life as a “self-sustaining chemical system capable of Darwinian evolution” captures what we believe is the only mechanism for organic matter to organize itself to create behaviors that we value in biological systems. However, because its mutations can neither reflect the current environment (Lamarckianism) nor anticipate future environments, Darwinism requires the death of children simply to maintain the capacity for future evolution. Their death is also required to create the positive adaptations needed to manage changing environments. However, thanks to our intelligence, humankind is on the verge of escaping Darwinism via germline DNA modification (for example, using CRISPR). If technological advances occur in parallel as intelligent societies advance, then any alien species that encounters us before we encounter them will probably have itself escaped Darwinism. Anticipating this, synthetic biology is creating, in the laboratory, alternative systems that might be Lamarckian without needing to be intelligent. At least speculatively, unintelligent Lamarckian systems could evolve more rapidly than unintelligent Darwinian systems, precisely because they do not waste resources on dead offspring. Indeed, a survey of modern terran molecular biology suggests that such a system may have operated on early Earth, but was supplanted once translation arose. Thus, a second example of life created in the laboratory might actually represent a path that natural history on Earth began to follow, but later decided not to continue.

INTRODUCTION

Two decades ago, a committee empaneled by NASA defined life as “a self-sustaining chemical system capable of Darwinian evolution” [1]. This definition makes solid contact with reality, especially in the biosphere that we see around us. Indeed, many alternative proposed definitions are actually not definitions of life (my favorite is life as “a nontrivial trajectory through phase space”). Rather, they are definitions of models for life. As such, they do not have any particular experimental touchstone. And I am not prepared at this moment to join Jack Szostak in his view that no definition-theory of life is necessary [2].

On the contrary, the “NASA definition” is quite useful in guiding “origins of life” studies. Organic materials, if left to themselves in the presence of energy, devolve to give increasingly complex tars. This is an experimental observation, repeated many times in many kitchens. However, another observation is well-validated: If that organic matter also has access to Darwinism, then it generates life essentially everywhere.

Thus, according to this definition-theory, the question that must be answered to understand how life originates is how, during the devolution of prebiotic organic matter, a chemical system able to support Darwinism might emerge. This is a “juicy” target for experiments. It is also a good target in the search for life, as we can easily define molecular structures that are necessary for evolvable biopolymers, including a polyelectrolyte backbone [3]. We can look for those molecular structures as biosignatures.

However, we are reminded that a definition captures within it a theory [4], and this definition is no different. This particular definition communicates to the world that its drafters felt that Darwinism is the only mechanism by which matter can self-organize to give the properties that we value in biology. It is conceivable that we might encounter an entity that has all of the properties that we value in life, including the ability to converse with us, but lacks access to Darwinism, or perhaps lacks a chemical foundation. It would therefore fall outside of this definition-theory. Science fiction offers many of these concepts. But “the ability to conceive” is weak evidence for existence. In fact, the reason why we do not now change this definition is because we do not believe that such life actually can exist.

Now, unless we are missing something big, explorations of the solar system over the past half-century have failed to place in our hands anything that matches this definition-theory. The closest that we have come is the suggestion that Mars might harbor single-celled organisms. This motivates us to ask about origins on Mars. Once this question is raised, reasons can be identified to see Mars as a site preferable to Earth for the origin of life [5]. For example, the dryness of Mars, relative to Earth, helps square the paradox that genetic biopolymers (like RNA) are easily corroded by water, even though water is apparently needed as the solvent in which those biopolymers function.

However, the discovery of extrasolar planetary systems greatly expands the number of places where these questions can be asked: even ahead of their actual observation, we can expect as many as a hundred billion earthlike planets in the galaxy, one or two for each star. Could natural history on those extrasolar planets have relieved life from the constraints of chemistry or the constraints of Darwinism?

Both possibilities are actually emerging from intelligence on Earth. Darwinism involves a molecular system that is replicated with errors, where the errors are themselves replicable. According to theory, any molecular system that has these properties should have access to the behaviors that we value in life.

The DNA, RNA, and proteins that we see around us are evidence of this hypothesis. Mutations in DNA replication create imperfect replicates, daughter DNA molecules that have slight changes in their nucleotide sequence. When the daughter sets out to replicate her DNA molecules, the changed DNA is the starting point. The mechanism for DNA replication allows the information in the imperfections that distinguish the mother from the daughter to be passed to the granddaughter with the same fidelity as the information that was perfectly transmitted from the mother.

In this respect, replication of DNA is quite unlike, for example, the replication of “fire”, often considered as a counter example for popular definitions of life. Fire does reproduce, by sending sparks into unburned territory. Fire does consume free energy. And “daughter” fires are different from the parental fire. However, those differences are not passed on to their descendent fires. Fire, therefore, is not life.

Darwinism has, however, other pieces of baggage. In particular, the errors in replication must be random with respect to future value. They cannot be “prospective” with respect to fitness. Nor can they arise from direct feedback from the environment to the genetic system. To be consistent with Darwinism, the errors must be random with respect to current and future fitness needs.

This unfortunate feature of Darwinism requires that “babies must die”. To allow adaptation to changing environmental conditions, mutations must be allowed. However, since those mutations cannot reflect either current or future demands for fitness, a sizeable fraction (and, according to most textbooks, a large majority) of them must be disadvantageous to fitness, some to the point of being lethal. Even in a stationary environment where no mutations are needed (assuming that the parent has already attained genetic “perfection” for that environment), detrimental mutations must necessarily occur so that life can maintain the capacity to evolve should the environment change.

Now, the standard “RNA first” model for the origin of life is that prebiotic materials spontaneously devolving gained access to Darwinism when they gave rise to an RNA molecule that was able to catalyze the template-directed polymerization of RNA. The first round of that polymerization would create a Watson-Crick copy of the template; the second would deliver a duplicate of the template, if the replication were perfect. However, that replication would almost certainly be imperfect. But those imperfections would then be replicable, and Darwinism would be off and running.

However, as life advances, especially if it becomes intelligent, the destruction of babies for the purpose of advancing the fitness of a gene pool, on the small chance that a random mutation will have future value, would easily be seen as a waste of resources. This goes double for mutations that must occur in a static environment merely to maintain evolvability should the environment happen to change in the future. Would it not be better for life to be Lamarckian, allowing at least direct feedback from the environment to the genome so that the genome becomes fitter in the present, without needing to extinguish infants? And would it not be better still if those mutations could be prospective, with an intelligent evolving entity anticipating what future information would be needed in the gene pool, and arranging to get it?

This is not, after all, total science fiction. We are perhaps a scientific generation away from being able to alter, by direct and deliberate intervention, the genetic information in our germlines. Thus, our children need not have mutations that undermine current demands on fitness. If that technology were to be securely in place, we would have access to Lamarckianism. We could remove from our germlines the mutations that currently lower our fitness (like hemophilia). And if we could also predict future genomic needs for fitness (a more difficult challenge), so much the better.

Interestingly, once we had access to Lamarckianism, we could easily lose our capability to support Darwinian evolution. Now, according to the “NASA” definition-theory of life, if we lost our access to Darwinism in favor of these much better modes of evolving, we would no longer be “life”. But no problem. We would simply change that definition-theory.

What about alien biology? It seems that if we were to encounter an alien life form, we would most likely encounter it in a non-intelligent Darwinian form, absent a molecular concept that would allow Lamarckianism or prospective mutation. Our exploration of the Solar System over the past 50 years almost certainly rules out the presence there of another system intelligent enough to implement germline gene editing. And we are currently unable to traverse the distances to another star where we might find such an intelligent system.

But that is not the case for any alien life form that encounters us from an extrasolar locale. If we assume that technological advances move approximately in parallel, we might argue that any alien life that has learned how to travel between stars would almost certainly have learned how to make its genome fit without needing to have babies die. At the very least, it would have altered its biochemistry to make it better suited for interstellar travel. Certainly, our terran biochemistry is not well-suited for interstellar travel; our DNA simply would not tolerate the high-energy physics that it would encounter.

But could Lamarckianism have arisen without intelligence? A consideration of the constraints of replication chemistry suggests that it might, and perhaps did on early Earth. In modern terran life, information cannot feed back from the environment directly to the genome because of the unidirectionality of ribosome-based translation. This system can use a sequence of nucleotides in RNA to encode the sequence of amino acids in proteins. However, it has no replication mechanism that allows the sequence of amino acids in a protein to encode the sequence of nucleotides in an RNA molecule. There is no “reverse translation”.

But conceive of an alternative life form that does not use proteins to create “phenotype”. Here, instead of being a three biopolymer life form (like we are), let us consider the possibility that a two biopolymer system might support Darwinian evolution, a form of the “RNA World” model for early life on Earth [6]. Those two biopolymers would be:

1. A catalytic biopolymer, which folds, has multiple functional groups, and has a large diversity of biophysical properties depending on its sequence, the diversity required for rich catalytic potential.

2. A genetic biopolymer, designed to not fold, with no functional groups beyond those needed for genetic transfer, little catalytic capabilities, but with biophysical properties that remain largely unchanged with changing sequence, all required for genetic potential.

Then, let us assume that the second biopolymer directly encodes the first by process of “transcription” through base pairing, just as DNA encodes RNA in processes catalyzed by modern RNA polymerases. Then, since base pairing is reciprocal, the RNA can also code the synthesis of a complementary DNA, just as reverse transcriptases do in modern terran biology.

This system has the potential to be Lamarckian. An RNA transcript might find itself mutated to be better able to meet a current fitness need. If so, it could be reverse transcribed back into the genome. Without the need to have any children die.
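To make the contrast concrete, here is a toy simulation; the bit-string “genomes”, mutation rates, and population sizes are all invented for illustration, and nothing here models real chemistry:

```python
# Toy model contrasting Darwinian and Lamarckian adaptation.
# All details (bit-string genomes, rates, sizes) are invented for illustration.
import random

L_GENOME, GENERATIONS, POP = 64, 200, 50
TARGET = [1] * L_GENOME                      # the "fit" genome for this environment

def fitness(g):
    return sum(1 for a, b in zip(g, TARGET) if a == b)

def darwinian_step(pop):
    # Random, undirected mutation; selection then discards the unfit half,
    # i.e. the "children must die" cost of maintaining evolvability.
    kids = [[b ^ (random.random() < 1 / L_GENOME) for b in g]
            for g in pop for _ in (0, 1)]
    kids.sort(key=fitness, reverse=True)
    return kids[:POP]

def lamarckian_step(pop):
    # Direct feedback: each individual tries one change and writes it back
    # to its genome only if it helps. No offspring are discarded.
    out = []
    for g in pop:
        trial = g[:]
        trial[random.randrange(L_GENOME)] ^= 1
        out.append(trial if fitness(trial) > fitness(g) else g)
    return out

random.seed(0)
pop0 = [[random.randint(0, 1) for _ in range(L_GENOME)] for _ in range(POP)]
dar, lam = [g[:] for g in pop0], [g[:] for g in pop0]
for _ in range(GENERATIONS):
    dar, lam = darwinian_step(dar), lamarckian_step(lam)
print("Darwinian mean fitness :", sum(map(fitness, dar)) / POP)
print("Lamarckian mean fitness:", sum(map(fitness, lam)) / POP)
```

In this caricature the Lamarckian population improves monotonically and discards nobody, while the Darwinian population pays for its evolvability with the unfit half of every generation.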

Today, of course, we regard proteins as intrinsically better for catalysis than RNA. This comes from many efforts to evolve RNA in the laboratory to perform catalytic function. With just four nucleotides, RNA has very little of the information density of modern terran proteins. Thus, RNA catalysts are plagued by alternative folding, where inactive forms compete with active forms [7]. Further, RNA has very little of the chemical functionality found in proteins. Protein catalysts rely on the functionality of such molecules as carboxylate, thiol, imidazole, and ammonium groups that are present on the amino acid side chains; all are missing from standard encoded RNA.
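The information-density point can be made with a back-of-envelope calculation, treating each position in a biopolymer as an independent choice from its alphabet (a deliberate simplification):

```python
# Back-of-envelope information density per position, treating each position
# as an independent choice from the alphabet (a deliberate simplification).
import math

for name, k in [("standard RNA (4 letters)", 4),
                ("expanded RNA (12 letters)", 12),
                ("proteins (20 amino acids)", 20)]:
    print(f"{name}: {math.log2(k):.2f} bits per position")
```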

However, these constraints would be set aside if we were to expand the number of building blocks in the two biopolymers that can form Watson-Crick pairs, and add functionality to those extra building blocks that enhances their power as catalysts.

Synthetic biology suggests that this is possible. From work in the laboratory, we now know that as many as 12 different nucleotides, forming 6 different nucleobase pairs, are possible within the Watson-Crick nucleobase pairing “concept” [8]. These have been made in the laboratory by synthetic biologists, and a molecular biology has been developed to support many of them. They have also been synthesized to carry groups interesting for catalysis, including ammonium, carboxylate, imidazole, and nitro, the last not even found in modern terran proteins, but which serves as a universal binding entity. The system is evolvable, again in the laboratory, and appears to be a richer reservoir of catalytic functionality than standard nucleic acids [9]. NASA and the Templeton World Charity Foundation are presently supporting us as we try to get Lamarckianism out of this two-biopolymer system.

But these are the products of “intelligent design”. Could such a two-biopolymer life form have arisen without the guiding hand of an intelligent organic chemist?

Interestingly, some surviving features of modern molecular biology suggest that terran life in the RNA world tried to follow this path, and succeeded for a while. Many nucleotides in the oldest RNA molecules (tRNA, rRNA) actually carry ammonium groups, carboxylate groups, and thiol groups. These may be vestiges of a time when RNA was being pressed into service as the platform for a catalytic molecule in the RNA World. According to this view, DNA itself underwent structural changes to make it better suited as a genetic specialist. For example, the thymine in DNA was presumably recruited from uracil, methylated to convert an RNA nucleobase into something better suited for genetics.

This raises the question of whether early biology exploited the Lamarckian potential of a two-biopolymer system in the RNA world. If it did, it is conceivable that a system capable of Lamarckian evolution can adapt more rapidly than a system having access only to Darwinian evolution. It also raises the question of whether, when we finally encounter biology on Mars (as we hope to do), it will have a two-biopolymer architecture and be Lamarckian.

Whatever its advantages, terran life clearly decided to dispense with a two-biopolymer, potentially Lamarckian, molecular biology. Perhaps it found a four-letter RNA alphabet, even with added functional groups, simply too low in information density to compete with the catalytic power of proteins. Perhaps the power of proteins as a platform for phenotype was just so much larger than that of RNA, even 12-letter RNA with abundant functionality, that proteins were preferred, notwithstanding the complexity of ribosome-based translation. We are doing experiments to see whether proteins continue to have an overall advantage over RNA once the RNA has 12 different replicable building blocks supporting a half-dozen functional groups.

Awaiting the outcome of these experiments, the availability in the laboratory of a functioning set of nucleotides implementing all six easily accessible patterns of hydrogen bonds, adjusted separately (given current theory) for optimal performance in genetic and catalytic systems, suggests that natural history might have followed a different path following the emergence of the RNA World. Rather than inventing the ribosome and adopting DNA as its genetic molecule, natural history might simply have evolved to make RNA the catalytic biopolymer. It could have done so by altering various biosynthetic routes to the nucleotides it had managed to acquire (G, A, C, and U), adding a few more with additional hydrogen-bonding patterns. Then it might have added functional groups to this expanded alphabet of 12 different nucleotide building blocks.

This alternative natural history would have allowed terran life to avoid the kinetically slow step of inventing ribosomes, with the time that step required and the long path it demanded through the ribosome, tRNA molecules, and aminoacyl-charging enzymes. Further, the two-biopolymer life form would have (forever after) spared the biosphere the continuing “thermodynamic” expense of maintaining a third biopolymer; the translation machinery consumes half of the resources of a bacterial cell. Instead, it could have supported Darwinian evolution with just two biopolymers: a DNA-like biopolymer with an expanded set of nucleobases optimized for genetics, and an RNA-like biopolymer with many functionalized nucleobases needed for catalysis.

And it would have avoided the need to kill children to maintain and expand the information in the genetic biosphere.

REFERENCES

[1] Joyce, G. F. 1994, foreword to Origins of Life: The Central Concepts, D. W. Deamer and G. R. Fleischaker, eds., Jones and Bartlett (Boston)

[2] Szostak, J. W. 2012, “Attempts to define life do not help to understand the origin of life,”  J. Biomol. Struct. Dyn.  29, pp. 599 – 600

[3] Benner, S. A. and Hutter, D. 2002, “Phosphates, DNA, and the search for nonterrean life: A second generation model for genetic molecules,” Bioorg. Chem. 30, pp. 62 – 80

[4] Cleland, C. E. and Chyba, C. F. 2002, “Defining ‘life’,” Orig. Life Evol. Biosph. 32, pp. 387 – 393

[5] Benner, S. A. and Kim, H.-J. 2015, “The case for a Martian origin for Earth life,” Instruments, Methods, and Missions for Astrobiology XVII, R. B. Hoover, G. V. Levin, A. Yu. Rozanov, and N. C. Wickramasinghe, eds., Proceedings of SPIE 9606, 96060C

[6] Benner, S. A., Allemann, R. K., Ellington, A. D., Ge, L., Glasfeld, A., Leanz, G. F., Krauch, T., Macpherson, L. J., Moroney, S. E., Piccirilli, J. A., and Weinhold, E. G. 1987, “Natural selection, protein engineering and the last riboorganism. Rational model building in biochemistry”, Cold Spring Harbor Symp. Quant. Biol. 52, pp. 53 – 63

[7] Carrigan, M., Ricardo, A., Ang, D. N., and Benner, S. A. 2004, “Quantitative analysis of a deoxyribonucleotide catalyst obtained via in vitro selection. A DNA ribonuclease,” Biochemistry 43, pp. 11446 – 11459

[8] Benner, S. A., Yang, Z., and Chen, F. 2010, “Synthetic biology, tinkering biology, and artificial biology. What are we learning?”, Comptes Rendus 14, pp. 372 – 387

[9] Zhang, L., Yang, Z., Sefah, K., Bradley, K. M., Hoshika, S., Kim, M.-J., Kim, H.-J., Zhu, G., Jimenez, E., Cansiz, S., Teng, I.-T., Champanhac, C., McLendon, C., Liu, C., Zhang, W., Gerloff, D. L., Huang, Z., Tan, W.-H., and Benner, S. A. 2015, “Evolution of functional six-nucleotide DNA”, J. Am. Chem. Soc. 137, pp. 6734 – 6737



BIO-SIGNATURES AND TECHNO-SIGNATURES BEYOND EARTH

Paul C.W. Davies
Beyond Center for Fundamental Concepts in Science
Arizona State University, P.O. Box 870506
Tempe, AZ 85287–0506 
Paul.Davies@asu.edu

ABSTRACT

Among the many uncertainties that feature in the Drake equation, the least certain quantity is fl, the fraction of earthlike planets on which life arises. Because the process that transformed non-life into life is unknown, it is meaningless to estimate this probability, which might be infinitesimally small. Arguments to the contrary are unconvincing. The best hope for determining that fl is not close to zero would be the discovery of a second sample of biology, or post-biology, either on Earth or beyond. A variety of search strategies suggests itself.

HABITABLE IS NOT THE SAME AS INHABITED

The founder of SETI, Frank Drake, summarizes the factors that determine the number of communicating civilizations in our galaxy in terms of an equation:

N = R* fp ne fl fi fc L

where

R* = rate of formation of Sun-like stars in the galaxy

fp = fraction of those stars with planets

ne = average number of earthlike planets in each planetary system

fl = fraction of those planets on which life emerges

fi = fraction of planets with life on which intelligence evolves

fc = fraction of those planets on which technological civilization and the ability to communicate emerges

L = the average lifetime of a communicating civilization.

The number N represents how many “radio-active” civilizations exist in the galaxy. The symbols on the right are the quantities we need to estimate – guesstimate would be more apt – to obtain the value of N.
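As a trivial illustration of how the equation is used, and of how completely the unknown factors dominate the answer, here is a minimal calculator; every input value below is an invented guess for demonstration, not an estimate endorsed by the text:

```python
# A minimal Drake-equation calculator (all input values are illustrative
# guesses, not estimates from the text).
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Return N, the number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# The first three factors are reasonably constrained by modern astronomy;
# the biological ones, especially f_l, are unknown and dominate the result.
optimist = drake(R_star=2, f_p=0.9, n_e=0.5, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)
pessimist = drake(R_star=2, f_p=0.9, n_e=0.5, f_l=1e-10, f_i=0.5, f_c=0.5, L=1e6)
print(f"N (f_l ~ 1):     {optimist:,.0f}")
print(f"N (f_l ~ 1e-10): {pessimist:.2e}")
```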

In the five decades since Frank Drake formulated his eponymous equation, our understanding of astrophysics and planetary science has advanced enormously. The first three terms of the equation refer to factors that are now known with reasonable precision, due in no small part to the discovery of enough extrasolar planets for meaningful statistics to be developed.

Unfortunately this progress has not been matched by a similar leap in understanding of the remaining factors – the biological ones. In particular, the probability of life emerging on an earthlike planet, fl, remains completely unknown. In the 1960s and 1970s, most scientists assumed that the origin of life was a freak event, an accident of chemistry of such low probability that it would be unlikely to have occurred twice within the observable universe.

Today, however, many distinguished scientists express a belief that life will be almost inevitable on a rocky planet with liquid water – a “cosmic imperative,” to use the evocative term of Christian de Duve [1]. But this sentiment is based on little more than fashion. Indeed, it is easy to imagine plausible constraints on the chemical pathway to life that would make the probability of its successful passage infinitesimally small. In the case of the fifth term in the Drake equation – the probability that intelligence will evolve if life gets going – at least we have a well-understood mechanism (Darwinian evolution) on which to base a probability estimate (though doing so remains deeply problematic). The same is true of the remaining terms. Thus the uncertainty in the number of communicating civilizations in the galaxy, N, is overwhelmingly dominated by fl.

In the important hunt for earthlike, extrasolar planets, astronomers are busy cataloguing habitable real estate across the galaxy. The qualification “earthlike” is admittedly vague.  Nevertheless it is clear that our galaxy alone contains millions if not billions of worlds that are earthlike in some respect, and thus potential abodes for life.  However, while the qualification “earthlike” may be a necessary condition for life to arise, it is far from sufficient.  “Earthlike” refers to a setting, not a process. To establish life on an earthlike planet, all the necessary physical and chemical steps have to happen, and as we don’t know what those steps are, we are ignorant as to how many habitable planets do, in fact, host some form of life.

Drake himself favors a value of fl close to unity. That is, given an earthlike planet, it is very likely that life will arise there. It is a sentiment echoed by planet hunter Geoff Marcy, who recently said he would “bet his house” on the galaxy teeming with life.  By contrast both Francis Crick [2] and Jacques Monod [3] argued that life’s origin was a freak event (“almost a miracle,” according to Crick). Unfortunately these disagreements are based almost entirely on philosophical judgments rather than scientific evidence, for the simple reason that science remains largely silent on the specifics of the pathway from non-life to life. One may estimate the odds of a process occurring only if the process is known. One cannot estimate the odds of an unknown process.

THE “UP-IT-POPS” FALLACY

Carl Sagan said: “the origin of life must be a highly probable affair; as soon as conditions permit, up it pops!” [4] While it is certainly the case that the rapid appearance of life on Earth is consistent with its genesis being probable, it is equally consistent with it being exceedingly improbable, as was pointed out by Brandon Carter over three decades ago [5]. The essence of Carter’s argument is that any given earthlike planet will have a finite “habitability window” during which life might emerge and evolve to the level of intelligence. On Earth, this window extends for about 4.6 billion years – from about 3.8 billion years ago to about 800 million years hence, when the Sun will be so hot that the planet will be an uninhabitable furnace. Suppose, reasoned Carter, life’s origin is so improbable that the expectation time for it to occur is many orders of magnitude longer than this habitability window. And further suppose that, in addition to the (improbable) transition from non-life to life, several other very hard steps are needed before intelligence is attained (for example, eukaryogenesis, sex, multi-cellularity, evolution of a central nervous system). If in all there are n hard steps, each of which has an expectation time much longer than the habitability window, and each of which is necessary before the next step may be taken, then a simple statistical argument leads to a relationship between n and the duration of the window.

Carter is able to conclude that there are about 5 extremely improbable steps spaced about 800 million years apart involved in attaining intelligent life on Earth. Significantly, the first step is also bracketed by an interval of 800 million years. That is, if the emergence of life was an exceedingly improbable process (but of course one that had to happen for humans to be here and ponder it) then probability theory predicts it should have happened fairly rapidly – within 800 million years. Another way of expressing it is that, unless life had got going quickly, we would not be here to discuss it three billion years later.
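For readers who want the statistics spelled out, here is a compact sketch of the hard-steps argument; the notation is mine, not Carter’s, and the numbers simply restate those in the text:

```latex
% Sketch of the hard-steps statistics (notation mine, not Carter's).
% If each of n hard steps occurs at rate \lambda_k with 1/\lambda_k \gg T,
% the probability that all n occur, in order, within the window T is
\[
  P(\text{all } n \text{ steps within } T)
    \;\approx\; \Big(\prod_{k=1}^{n}\lambda_k\Big)\frac{T^{n}}{n!}
    \;\propto\; T^{n}.
\]
% Conditioned on this unlikely success, the step times behave like the
% order statistics of n uniform draws on [0, T], so the expected gap
% between successive steps (including the gap before the first) is
\[
  \langle \Delta t \rangle \;=\; \frac{T}{n+1}
    \;\approx\; \frac{4.6\ \text{Gyr}}{6} \;\approx\; 0.8\ \text{Gyr}
    \qquad (n = 5),
\]
% which reproduces the ~800-million-year spacing quoted in the text.
```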

IMPROVED UNDERSTANDING OF LIFE’S ORIGIN

Perhaps we can guess a plausible value of fl by studying the chemistry that underlies life? Attempts to re-create the chemical pathway from non-life to life have been pursued since the pioneering work of Haldane [6] and Oparin [7] in the 1920s, and were boosted by the famous experiment of Stanley Miller in 1952 [8].  However, life is so complex that the results of pre-biotic synthesis go only a tiny way down the long pathway to life and tell us little about potentially extremely hard chemical obstacles at later stages.  

There is, however, a more serious issue lurking here. Life is clearly more than complex chemistry. Chemistry deals with concepts such as molecular shapes and binding strengths, reaction rates and thermodynamics.  By contrast, when biologists describe life they use terms like signals, coded instructions, digital error correction and editing, and translation – all concepts from the realm of information processing.

While chemistry provides the substrate for life, the informational aspects (which are instantiated in the chemistry) require an origin story of their own [9]. In a nutshell, pre-biotic chemical experiments help us understand how the hardware of life might (might) have come to exist, but so far have cast little light on the origin of the software aspect.  Because life requires both specialized hardware and specialized software to come out of as-yet little-understood physical processes, we are very far from being able to quantify the likelihood of getting both in a plausible molecular soup.

A SECOND SAMPLE OF LIFE

If we were to be presented with a second sample of life, and we could be sure that it arose independently of known life, then the case would instantly be made that fl is not infinitesimally small. Various scenarios have been suggested for the discovery of a second sample of life.

SETI succeeds!

In the event that mankind obtains incontrovertible evidence of the existence of alien technology, then we could conclude that life must have arisen in at least one other location in the universe [10].  (This conclusion assumes, of course, that the pathway to technology involves biology and intelligence.  Logically there is no reason why this has to be the case but it is the default assumption.)  Note that the conclusion would follow even in the absence of an actual message or signal from an alien civilization – the “gold standard” of SETI. It would be sufficient to discover any signs of technology.

Synthetic biology

The burgeoning field of synthetic biology [11], in which new forms of life are engineered in the laboratory, might suggest that life is literally easy to make, and that it may manifest itself in a wide range of molecular forms.  Although synthetic biology currently falls far short of constructing living organisms from scratch (as opposed to re-wiring or re-programming existing organisms), one may imagine that in the future this will be possible. Would we then conclude that the transition from non-life to life is not especially difficult and therefore likely to be widespread in nature? The answer is no. Creating life in the laboratory will demand a great deal of sophisticated scientific equipment, a host of purified substances, and a particular sequence of chemical and physical steps, each of which is likely to take place under tightly controlled conditions; indeed, under different conditions for each step. (It will also require a large budget!)

But above all, creating life in the laboratory entails the attentions of an intelligent designer – the scientist – who embarks on the venture with a particular end product in mind and a well thought-out sequence of steps to attain it. So it may turn out to be relatively easy (if expensive) for scientists to make life, but that does not mean it is also easy for nature to do so.

Life on Mars

The best hope for finding life in the solar system seems, by common consent, to be on the planet Mars. The problem is that Mars and Earth are not biologically quarantined from each other.  Over the history of the solar system, these two planets have traded a prodigious amount of material. The existence of many known Mars meteorites demonstrates that rocky ejecta can arrive on another planet relatively unscathed, and the same could be assumed about any hitchhiking organisms [12].  Given this traffic of material over billions of years, it seems very likely that if life were to have arisen on Mars, it would very soon be transported to Earth to seed our planet (and vice versa).  So finding life (past or present) on Mars would not of itself demonstrate a second genesis.

Extra-solar planets

Establishing the presence of life on an extrasolar planet from spectroscopic data alone is challenging. Instruments capable of detecting atmospheric gases possibly associated with biology are being planned, but it may be a long time before we have that capability.

A shadow biosphere on Earth

If life does form readily in earthlike conditions, then we might expect it to have started many times on Earth itself.  All known life on Earth is interrelated, with a common genetic code and a common biochemical scheme involving the same suite of nucleotides and amino acids, the manufacture of proteins by ribosomes and several other specific universal features, suggesting a common origin. The discovery of just a single micro-organism so biochemically distinct from known life (i.e. so alien) that it could not belong to this familiar tree would be powerful evidence for an independent genesis event. Almost all known species are microbial, and at the present time scientists have only scratched the surface of this microbial realm. Thus there is plenty of room at the bottom for microbes that are biochemically weird enough to qualify for an alternative form of life [13], [14].

POST BIOLOGICAL INTELLIGENCE

The second least understood term in the Drake equation is the last: L, the longevity of a civilization. Sagan fretted that L might be rather short if alien civilizations mirrored humanity, with its known warlike tendencies. There is a strong case that emotion-bound biological intelligence is likely to be short-lived, not only for Sagan’s reason (nuclear annihilation), but also because biological intelligence is surely but a transitory phase in the evolution of intelligence in the universe. Already on Earth much intellectual heavy-lifting is being done by computers, and we can foresee a time when designed artificial systems will outsmart humans in almost every capacity. An extraterrestrial civilization of, say, ten million years’ duration is most unlikely to be dominated by flesh-and-blood sentient beings; far more probably it will be dominated by complex designed and manufactured systems of the nth iteration. Looking for techno-signatures of post-biological systems is a huge challenge, given that futurists tend to extrapolate from human civilization, which is shaped mainly by biological factors. Given the unknowns, it makes sense to be alert to the possibility (however remote) of alien techno-signatures in any observational database to which we have ready access, including of course SETI data, but also data from any astronomical, biological, geological and planetary databases [15].

REFERENCES

[1]  De Duve, C. 1995, Vital Dust, Basic Books (New York)

[2]  Crick, F. 1981, Life Itself; Its Origin and Nature, Simon and Schuster (New York)

[3]  Monod, J. 1972, Chance and Necessity, trans. by A. Wainhouse, Collins (London)

[4]  Sagan, C. 1995, Bioastronomy News, 7 (4), 1

[5] Carter, B. 1983, “The anthropic principle and its implications for biological evolution,” Philosophical Transactions of the Royal Society of London A 310, 347

[6]  Haldane, J. B. S. 1968, “The origin of life,” in Science and Life, Pemberton Publishing (London)

[7] Oparin, A. I. 1924, Proiskhozhdenie zhizny (The Origin of Life), trans. Ann Synge, in The Origin of Life, J. D. Bernal, ed., 1967, Weidenfeld and Nicholson (London)

[8] Miller, S. L. 1953, “A production of amino acids under possible primitive earth conditions,” Science 117, 528

[9] Davies, P. C. W. and Walker, S. I. 2012, “The algorithmic origins of life,” J. R. Soc. Interface 10, doi: 10.1098/rsif.2012.0869

[10] Davies, P. 2010, The Eerie Silence, Penguin (New York)

[11] Benner, S. A., Chen, F., and Yang, Z. Y. 2011, “Synthetic biology, tinkering biology, and artificial biology: a perspective from chemistry,” Synthetic Biology, Pier Luigi Luisi and Cristiano Chiarabelli, eds., Wiley, pp. 69 – 106

[12] Davies, P. 1996, “The transfer of viable micro-organisms between planets,” Evolution of Hydrothermal Ecosystems on Earth (and Mars?): Proceedings of the CIBA Foundation Symposium No. 20, Gregory Brock and Jamie Goode, eds., Wiley (New York); see also Davies, P. 2003, The Origin of Life, Penguin (New York)

[13]  Davies, P. C. W. and Lineweaver, C. H. 2005, “Searching for a second sample of life on Earth,” Astrobiology 5, 154

[14]  Davies, P.C.W., Benner, S. A. Cleland, C. E., Lineweaver, C. H., McKay, C. P. and Wolfe-Simon, F. 2009 “Signatures of a shadow biosphere,” Astrobiology 9, 241

[15] Davies, P. C. W. and Wagner, R. V. 2011, “Searching for alien artifacts on the moon,” Acta Astronautica 89, 261



INTELLIGENT EVOLUTION: AN APPROACH TO OPEN-ENDED EVOLUTION

Chrisantha Fernando
Google DeepMind
chrisantha@google.com

ABSTRACT

I will argue that evolution by natural selection scores highly on a formal definition of universal intelligence, and that therefore, if we produce a system capable of open-ended evolution in a computer, we will have created a necessary condition for ‘post-biological’ digital intelligence. Natural selection satisfies several other criteria for intelligence, such as creativity; in fact it has even re-invented itself at least twice, in the immune system and as cultural evolution. The origin of digital evolution may constitute the next major transition in evolution [1], in which the human cultural system invents a new evolutionary system in software that evolves in silico. What might be done to achieve this is discussed.

WHAT IS INTELLIGENCE?

Let us define a unit of intelligence as Legg and Hutter define universal intelligence, i.e. as the time-discounted reward (value) obtained by a unit (agent) over the set of computable, reward-summable environments, weighted by the simplicity (inverse exponentiated Kolmogorov complexity) of the environment [2]. The scaling is intended to make performance on simple environments count more than performance on complex environments. This quantity is uncomputable: in practice it is not possible to do the sum over all possible environments, for reasons of tractability and computability, nor to calculate simplicity perfectly. It is, however, agnostic about mechanisms and so makes the fewest biasing assumptions.

There are many other aspects that people wish to capture in a definition of intelligence, e.g. learning to learn, speed of adaptation, discovering low-dimensional compressed structures and regularities, allowing manipulation and prediction of the environment, representing the world internally as a model and using this to plan. I will consider these aspects later, and argue that evolution by natural selection satisfies several of these criteria even on its own.

The Legg-Hutter intelligence measure can be formalized as follows. Let µ be an environment and let π be an agent. At each interaction step t, the agent π outputs an action a_t, and the environment replies with an observation o_t and a reward r_t, each of which can depend on all previous actions, observations and rewards. The value of the agent π in the environment µ is the expected sum of rewards the agent can gather in this environment:

V(π, µ) = E[ Σ_t r_t ]

(Discounting is avoided by considering reward-summable environments [2].)

Let M be a set of environments, and let w(µ) be the weight of environment µ within M, with Σ_{µ ∈ M} w(µ) = 1 and w(µ) = 0 if µ is not in M. Then the value of an agent π in the set of environments M is:

V(π, M) = Σ_{µ ∈ M} w(µ) V(π, µ)

Thus, for a given set of environments M, agents can be compared based on their value. Now, we want to compare agents on the largest possible set of environments, maybe all possible environments. How can we do that? Legg and Hutter’s solution is to rely on Solomonoff’s universal prior, which assigns a prior weight to all computable environments µ, i.e., all environments that can be simulated by a computer (including real-valued environments up to a possibly-increasing precision).

To do this, we need to choose a Universal Turing Machine (UTM) of reference. A UTM is equivalent to a (universal) programming language, like Java or C, which can describe all programs. Then, for a chosen reference UTM, the weight of an environment µ is w(µ) = 2^−K(µ), where K(µ) is the length in bits of the smallest program that can describe µ on the UTM (K is Kolmogorov complexity).

An important property of UTMs is that any UTM U1 can simulate another UTM U2, just as any (universal) programming language can simulate any other by first writing an interpreter. This incurs only an additive penalty in K(µ): the length in bits of the interpreter. For example, a C interpreter written in Java lets Java run C programs.

In summary, a unit of intelligence is one that obtains high value in a set of environments, with greater weight being given to doing well in simpler environments. 
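A minimal sketch in code may make the measure concrete; the two “environments”, their complexities K, and the agent below are invented stand-ins, since the true measure sums over every computable environment and is therefore uncomputable:

```python
# Toy illustration of the Legg-Hutter measure. The two "environments", their
# complexities K, and the agent are invented stand-ins: the real measure sums
# over all computable environments and is uncomputable.

def env_simple(action):            # pretend its shortest program is K = 3 bits
    return 1.0 if action == "a" else 0.0

def env_complex(action):           # pretend K = 10 bits
    return 1.0 if action == "b" else 0.0

ENVIRONMENTS = [(env_simple, 3), (env_complex, 10)]

def universal_value(agent):
    # V(pi, M) = sum over mu of 2**-K(mu) * V(pi, mu):
    # success in simple environments counts for more.
    return sum(2 ** -K * env(agent(env)) for env, K in ENVIRONMENTS)

always_a = lambda env: "a"         # a trivial agent (policy)
print(universal_value(always_a))   # 2**-3 * 1 + 2**-10 * 0 = 0.125
```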

WHAT IS NATURAL SELECTION?

Next I will define another concept: the unit of evolution, a term originally coined by John Maynard Smith and discussed by Okasha [3]. A unit of evolution is an entity that has the following three properties.

1. It is capable of autocatalytic growth, i.e. has exponential growth dynamics whereby the rate of increase of that entity is proportional to the frequency of that entity itself.

2. Entities in a population exhibit variations, i.e. it is possible to have many different types of entity: A, B, C, etc.

3. Entities must have heredity, that is, “like must give rise to like”: A’s give rise to A-like things and B’s to B-like things.

During multiplication, offspring must resemble parents. If in addition there is differential fitness, e.g. A’s have a higher chance of surviving and replicating than B’s and C’s because of some property of A’s, then A’s can increase in frequency and eventually become universal, or “go to fixation”. By this process, and given some strongly simplifying assumptions, the fittest entities will always increase in frequency; this can be thought of as survival of the fittest.

WHAT IS LIFE?

It is also useful to define one last kind of unit: Tibor Ganti’s unit of life [4]. A unit of life is an entity that has a boundary, a metabolism, and an informational control system. By metabolism I mean that the system is a dissipative structure existing out of equilibrium, open to mass and energy flow. For example, a cell, an ostrich, and a country may each be considered units of life. Clouds and fire are metabolic, but have no informational control systems. Units of life do not need to be capable of replication, as units of evolution do. Units of life are hierarchically and compositionally organized, i.e. units of life can themselves be made of many units of life. This is a design principle of macroevolution observed on Earth.

The proper relationship of units of life to units of evolution is that of partially overlapping sets. Most units of evolution are currently units of life as well. Units of life that are incapable of reproduction, such as mules and sterile workers, are not units of evolution. And there are units of evolution that are not units of life, e.g. binary strings in a computer being evolved by genetic algorithms. In these algorithms a ‘genome’ encodes some phenotype, e.g. a wing; the wing is tested in a simulator and given a fitness, and those genotypes that make high-fitness wings have a higher chance of replicating with mutation (and perhaps crossover with other good wing-producing genotypes), as sketched below. In this way, the wing design can get better in the computer without anyone explicitly saying what a good wing should look like. These are clearly not units of life according to the definition, as they have no metabolism.
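Here is a minimal sketch of such a genetic algorithm; the 16-bit “wing” genome and the stand-in fitness function replace a real simulator, and all parameters are invented:

```python
# Minimal genetic algorithm of the kind described above (a sketch; the
# "wing" phenotype and fitness function are invented stand-ins for a
# real simulator).
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 30, 60

def fitness(genome):
    # Stand-in for "test the wing in a simulator": more 1-bits = better wing.
    return sum(genome)

def mutate(genome, rate=0.05):
    return [b ^ (random.random() < rate) for b in genome]

def crossover(g1, g2):
    cut = random.randrange(1, GENOME_LEN)
    return g1[:cut] + g2[cut:]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]            # fitter genotypes replicate
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP_SIZE - len(parents))]
print("Best wing fitness:", max(map(fitness, pop)))
```

Nobody specifies what a good “wing” looks like; the design improves purely through variation, heredity, and differential fitness.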

The proper relationship of units of intelligence to units of evolution is also that of partially overlapping sets. There are many algorithms, such as temporal-difference reinforcement learning, that involve no units of evolution but are capable of learning to obtain reward in a wide variety of complex environments [5]. All units of evolution are units of intelligence, however, because value can simply be replaced by fitness, and environmental simplicity measured in the same way as before. Similarly, all units of life are units of intelligence, but not all units of intelligence are units of life.

I argue that to produce post-biological intelligence we should focus on producing units of intelligence that are units of evolution, but not worry about making them units of life, i.e. providing a boundary and a metabolism.

WHAT IS OPEN-ENDED EVOLUTION?

Open-ended evolution is a subfield of artificial life in which (mostly) computer scientists try to design the initial conditions and dynamical rules of an evolutionary system in a computer such that it will continue indefinitely to produce novel adaptations.

Notable examples are Corewars [6], Tierra [7], Avida [8], Geb [9], Polyworld [10], and Chromaria [11]. There is no universal agreement as to what open-ended means formally. Most would agree that “you know it when you see it”: the system continues to evolve “interesting novelty”. You don’t want to reset the system, as interesting accumulated adaptations would be lost.

I propose that the notion of universal intelligence is helpful in understanding what open-ended evolution really is. Open-ended evolution can be thought of as achieving general artificial intelligence rather than narrow artificial intelligence: Deep Blue can play chess very well, but it does not increase the range of environments in which it plays well. No matter how long you leave Deep Blue on, it won’t ever be good at noughts and crosses, or draughts. Open-ended evolution is where an evolutionary system continues to discover and solve novel interesting problems. OEE is the coupling of an environment, an evaluation function for individuals, evolution operators, and an initial individual such that the environment allows for the creation of ever more complex individuals, the evolution operators are capable of producing individuals of growing complexity, and the evaluation function assigns higher value to more complex individuals.

Nobody knows for sure (because we have not yet constructed such a system) what the minimal requirements are for an in silico system to exhibit open-ended intelligence/open-ended evolution. The best example we have is human intelligence, which exists in the intersection of all three sets of units. The question is: what can be thrown away?

There is a group of philosophers, influenced by concepts of autopoiesis, who believe that being a unit of life is a necessary prerequisite for intelligence [12]. I disagree. Units of life may have been required for the origin of intelligence, e.g. as the only way that intelligence could have arisen from scratch on a planet, but that does not mean that the algorithmic principles of intelligence cannot be extracted, distilled out, and re-embodied in a form that does not require the agent to be a unit of life.

A more subtle question is whether it is even possible to have open-ended intelligence without open-ended evolution. I will argue that open-ended evolution is a necessary requirement for open-ended intelligence. The reason is that if some algorithm X exists that has some level of intelligence, then it can only be open-ended by modifying itself into X′. It can do so through some process of gradient descent, which means that it minimizes some cost function that the designer of that algorithm gave it. However, to produce any real creativity, it must modify itself in ways that the designer could not have foreseen; to evolve cost functions themselves, some higher-order search is required [13]. Such a system will need to make a guess, and in any difficult problem most guesses will be wrong. It will therefore, at the very least, have to keep a memory of the original X, in case X′ is actually worse and the system needs to return to the better previous X.

This process is called stochastic hill climbing, and any such system will benefit from parallelization. If there is the capacity to implement, say, a billion copies of the algorithm X, one simple approach would be independent, parallel hill climbing, in which each X tries out a different X′, tests it, and either keeps it or reverts to its original X. This provides at most a linear speedup in search. The next best algorithm is competitive learning, in which most of the search resources, i.e. the capacity to make many X′ and test them, are given preferentially to the currently best X’s. However, we have shown that in realistic problems, such as evolving controllers for robot object discrimination, a much more powerful algorithm is to allow information transfer between X’s, whereby the best X’s overwrite (perhaps partially) the not-so-good X’s [14]. This is full natural selection. So the argument is that when there are parallel generation and evaluation resources, the most efficient algorithm for mixing solutions, amplifying good solutions, and searching over a complex fitness landscape is natural selection. Nobody really knows why, although attempts have been made to understand in which classes of search landscape evolution by natural selection scales polynomially rather than exponentially [15]. The core algorithmic difference between natural selection and most ensemble methods in machine learning is that natural selection involves transfer of information between X’s.
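The following toy sketch contrasts independent parallel hill climbing with a version that adds information transfer between the X’s. The bit-counting landscape is invented and far too easy for the difference to be dramatic; the point is only to make the structural difference between the two algorithms concrete:

```python
# Toy contrast of parallel stochastic hill climbing with and without
# information transfer between solutions (all details invented).
import random

N, BITS, STEPS = 100, 40, 300
fitness = sum                                # toy landscape: count of 1-bits

def tweak(x):
    y = x[:]
    y[random.randrange(BITS)] ^= 1
    return y

def step_independent(pop):
    # Each X tries an X' and keeps whichever is better: no transfer.
    return [max(x, tweak(x), key=fitness) for x in pop]

def step_selection(pop):
    # Natural selection adds transfer: the best X partially overwrites
    # the worst before hill climbing continues.
    pop = sorted(pop, key=fitness)
    best, worst = pop[-1], pop[0]
    pop[0] = [b if random.random() < 0.5 else w for b, w in zip(best, worst)]
    return step_independent(pop)

random.seed(2)
init = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(N)]
a, b = [x[:] for x in init], [x[:] for x in init]
for _ in range(STEPS):
    a, b = step_independent(a), step_selection(b)
print("hill climbing best:", fitness(max(a, key=fitness)))
print("with transfer best:", fitness(max(b, key=fitness)))
```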

Very recently, there have been approaches in machine learning in which information transfer occurs between agents in an ensemble. For example, in asynchronous reinforcement learning [16], multiple agents experience the world separately and share these experiences to update a centralized, shared set of neural network parameters. Cultural evolution is a special example of this, in which humans observe other humans’ behavior and modify themselves accordingly. The modification need not be random; it may be the result of a complex algorithm of causal inference or learning in the individual unit, in which units observe one another’s behavior or past experiences. The critical element is replication/transfer/exchange of information between individuals in a population. In short, I believe that when there are parallel resources available to implement an algorithm X, a very efficient way to explore open-ended variants of X is to turn it into a unit of evolution. There is no proof of this claim yet; there is some evidence for it, and I know of no arguments against it.

In conclusion, I believe that post-biological intelligence will be formed when we become capable of designing and running a sufficiently large evolutionary system in silico with the capacity for open-ended evolution. I will discuss how we might go about doing this in a later section. First I would like to emphasize how organismal macroevolution actually possesses some of the more specific features of intelligence that people typically want from an intelligent system.

HOW IS EVOLUTION INTELLIGENT?

In the last decade, we have learned that evolution has many of the properties of human intelligence. The main one is that it can learn to learn or, equivalently, evolve to evolve. Gregory Bateson called this deutero-learning [17]. In short, evolution has been getting better at “the evolution game” over the last 3.5 billion years on Earth. It has invented better methods of search, e.g. sexual reproduction, which implements crossover; this does not benefit the individual, but benefits the population or lineage of solutions [18]. It has invented methods of representing the phenotype of the organism in the genotype such that random variations at the nucleotide level produce non-random variations at the phenotype level. The nicest example of this I know is shown in Figure 1.

Figure 1. The two tables on the left are phenotypically identical but have different genetic encodings. This results in homogeneous mutation in genotype space producing heterogeneous variation in phenotype space. Some directions in this space are better than others, and evolution is capable of modifying genotype to phenotype maps so that they explore phenotype space preferentially along these desirable directions [19], [20].

Imagine two tables, both encoded by evolution and phenotypically identical. One table is encoded by height and width parameters; the other is encoded by the x,y coordinates of the blocks making up the table. Since both tables are identical they have equal fitness; let us say each produces two table children. The first table produces useful bar stools and coffee tables, but the second, with high probability, produces tables that fall over. So whilst the fitness of the parents is identical, the fitness of their children is different. In the next generation, the better genotypic encoding is likely to be passed on, whilst the worse genotypic encoding is likely to be lost, as its phenotype is not robust to mutational variation. The genotypic encoding has no fitness advantage to the individual; both parents have an equal fitness of two. It confers a fitness advantage on the children. This is a slightly mind-bending idea: there exist properties of the individual that are selected not for the benefits to the individual itself, but for the benefits that are likely to be conferred on its progeny. Toussaint calls this phenomenon, in which a many-to-one genotype-to-phenotype map allows homogeneous variation in genotype space to produce heterogeneous variation in phenotype space, nontrivial neutrality [20].
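A toy rendering of the table example; the leg-length phenotypes, stability criterion, and mutation size are all invented for illustration:

```python
# Toy version of the two-table example (all numbers invented): the same
# phenotype under two genetic encodings, mutated in genotype space.
import random

def phenotype_param(genome):
    # Compressed encoding: (height, width) -> four legs of equal length.
    h, w = genome
    return [h, h, h, h]                      # leg lengths

def phenotype_coords(genome):
    # Direct encoding: one length per leg, each mutated independently.
    return list(genome)

def stable(legs, tol=0.1):
    # A table "falls over" if its legs differ too much in length.
    return max(legs) - min(legs) < tol

def offspring_stability(encode, genome, trials=10_000, sigma=0.2):
    ok = 0
    for _ in range(trials):
        mutant = [g + random.gauss(0, sigma) for g in genome]
        ok += stable(encode(mutant))
    return ok / trials

random.seed(3)
print("height/width encoding:", offspring_stability(phenotype_param, [1.0, 0.5]))
print("per-leg encoding:     ", offspring_stability(phenotype_coords, [1.0] * 4))
```

The two parents are identical tables, yet nearly all mutants of the compressed encoding remain stable while most mutants of the per-leg encoding wobble: the advantage appears only in the children.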

The question is then: what evolutionary forces are able to improve genotype-to-phenotype maps, such that ‘bad’ encodings like the x,y encoding evolve into compressed encodings like the height-width encoding [21]? There seem to be at least two possible explanations for the evolution of evolvability, as this phenomenon is known. The first is mutational robustness, in which there is selection for genotypes whose mutational variants remain fit. The second is lineage selection, in which one can think of a lineage, rather than an individual, as the unit of evolution – e.g. consider an ‘individual’ to be a parent together with its child set and grandchild set, and let this entire lineage have a fitness. In this case, variability properties (e.g. mutation rates, particular neutral genotype-to-phenotype maps) can be selected for if these variability variants can be stably inherited. In short, if there is heritable variation in variability properties, then variability properties can be acted on by selection. In larger populations, longer lineages can co-exist before going to fixation, so stronger selection at the lineage level is possible. These concepts are only vaguely understood at present, and much more work is required to model and elucidate these remarkable phenomena in which evolution improves itself over time. Evolutionary biologists disagree about the evolution of evolvability and about its power, so this is cutting-edge research in evolution right now. Also, when we are dealing with post-biological evolution, we can engineer the system so that it is more capable of the evolution of evolvability than our own organismal genetic system is. We can design evolution as it could be, rather than evolution as we know it.

I think that evolution has the kind of intelligence that Reti the chess grandmaster had. When Reti was asked “How many moves do you look ahead?”, he said “One. The right one”. The evolution of evolvability confers a similar kind of insight (rather than foresight) on evolution. Evolution can learn that in these kinds of situations it is better to make these kinds of moves. It does not need to plan or look ahead; in a metaphorical sense, it trusts its experience and instinct. To modify Dawkins’ metaphor a little to encompass this idea: the watchmaker may be blind, but he is not stupid. Other systems have this kind of intelligence too; for example, AlphaGo’s value network can immediately see a never-before-seen board position and approximate its value, without explicit planning [22]. But it is interesting that in an evolutionary system with non-trivial neutrality, such exploration distributions are discovered automatically.

Sure, humans have many other arrows in the quiver of their intelligent machinery, but evolution shares some of these arrows. It is remarkably creative. This led Eörs Szathmáry and me to propose, in 2008, the theory of Darwinian neurodynamics: that evolution by natural selection takes place in the human brain and is responsible for human creativity and search in the space of ideas [23]. We have proposed mechanisms by which entities could replicate in the brain [24], and experimentalists are trying to discover whether these mechanisms could work in foetal rat neuron model systems.

We are still at a very early stage, but if it does turn out that a system of natural selection operates in the brain, then this will be quite contrary to the beliefs of most neuroscientists. Most neuroscientists do not understand the algorithmic advantages of evolution, and most evolutionary biologists do not understand neuroscience. Yet these two fields deal with the only two open-ended adaptive systems we know of on Earth: the brain and evolution. Why is there not more enthusiasm for thinking about the shared algorithmic connections between these two data points?

Recently I have been attempting to discover how evolution can benefit machine learning (gradient-based methods) and vice versa. One link is that machine learning methods can be used to learn more effective mutation operators. We have demonstrated this in two publications in which deep learning is used to guide evolution [25], [26]. Symmetrically, evolution can be used to guide gradient descent by, for example, evolving the topology of deep learning networks, their cost functions, etc. [27], [28], or evolving generative indirect encodings of the weights of larger neural networks which themselves learn [29].
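
The flavour of a learned mutation operator can be conveyed in a few lines (this is a generic estimation-of-distribution sketch, not the method of [25], [26]): instead of mutating blindly, fit a simple model to the current elite and sample the next generation from it, so the search distribution itself adapts.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(pop):
    return -np.sum((pop - 3.0) ** 2, axis=1)   # toy objective, optimum at 3

pop = rng.normal(size=(100, 5))
for gen in range(50):
    elite = pop[np.argsort(fitness(pop))[-20:]]      # select the best 20
    mu, sigma = elite.mean(0), elite.std(0) + 1e-3   # "learn" a mutation model
    pop = rng.normal(mu, sigma, size=(100, 5))       # sample offspring from it

print(pop.mean(0))   # near the optimum [3, 3, 3, 3, 3]
```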

EVOLUTION HAS INVENTED POST-BIOLOGICAL INTELLIGENCE ON NUMEROUS OCCASIONS

What does post-biological mean? If we went back 3 billion years, to before the origin of nucleotides and of digital information in the form, presumably, of RNA molecules consisting of A, C, G, and U monomers, would the discovery of RNA template-based information by the evolutionary system on Earth have been called post-biological by an external alien observer? Would the first multicellular organism invented by the evolutionary system of Earth have been called post-biological? Would the origin of language and cultural evolution in human populations have been called post-biological, in that new memetic, cultural, and behavioral units of evolution had suddenly come into existence that had not existed before? The major transitions in evolution are all examples of the origin of post-biological units of intelligence. It is hard to predict what form the next units arising from a major transition will take. It seems possible that they will arise as increasingly autonomous cultural algorithmic units.

For example, as our communication and activity are mediated by increasingly complex control systems, such as, say, self-driving cars, these control systems may share information and modify themselves on the basis of each other’s experience. In this way, many conventions that were previously static and culturally defined may become flexible and self-modifying, independent of our own explicit control. These early units will initially be limited to specific kinds of narrow intelligence and narrow regions of creativity.

However, starting from an academic or purely esoteric study (and indeed this has already begun in the form of automated science, e.g. for discovering new physical laws [30]), we will develop algorithms that wish to understand the world at a deep level. These are algorithms for automated science, whose raison d’être is to manipulate and predict the world as best they can, for the pure sake of manipulation and prediction. These software scientists will compete and cooperate and share information, just as human scientists do. Eventually much of science may be automated, and discoveries will be made by algorithms that have accumulated information over years of experimentation and hypothesizing. In many ways organismal evolution is already analogous to a population of scientists; our genomes contain huge amounts of information about the world [31]. Eyes tell us about light, and wings tell us about fluid dynamics.

EVOLUTION HAS REINVENTED EVOLUTION AT LEAST TWICE

Evolution has reinvented itself at a faster timescale in the adaptive immune system. In a sense this is post-biological evolution in the armpits. When you have a cold, the B-cells in your lymph nodes generate a random diversity of antibodies. Those B-cells whose antibodies bind the foreign antigen more tightly replicate more rapidly, and take over. Thus, over a few days, many rounds of natural selection of B-cells result in the discovery of tightly binding antibodies that can better bind the invading molecule. Evolution has re-invented units of evolution at the somatic timescale.
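
Read as an algorithm, this affinity-maturation loop is a textbook evolutionary search. Here is a minimal clonal-selection sketch (antibodies reduced to single numbers and binding to a distance measure, purely for illustration):

```python
import random

random.seed(2)

ANTIGEN = 0.73                                   # molecular shape to match

def affinity(antibody):
    return -abs(antibody - ANTIGEN)              # tighter binding is higher

cells = [random.random() for _ in range(50)]     # initial random repertoire
for day in range(20):
    cells.sort(key=affinity, reverse=True)
    best = cells[:10]                            # selection in the lymph node
    cells = [b + random.gauss(0, 0.02)           # proliferate with mutation
             for b in best for _ in range(5)]

print(max(cells, key=affinity))                  # ~0.73 after a few "days"
```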

Evolution has also reinvented itself in cultural evolution. It is quite clear that cultural evolution happens, for example in the evolution of language, recipes, fashions, and technology. This new, faster evolution exploits learning in its inner loop: the unit of evolution is itself capable of learning. In other words, cognitive variability is much cleverer than genetic variability – the way new solutions are generated is much more sophisticated in cognition than in genetics. A more speculative theory, mentioned above, is that evolution may have reinvented evolution in the human brain. I will not dwell on this now.

OPEN-ENDED EVOLUTION IN SILICO

How should we go about producing post-biological intelligence in computers? Each unit of evolution should be capable of learning from the experience of others.

The society-of-scientists model I propose here is designed to sidestep the quagmire of arbitrary decisions that must be made when designing an artificial ecosystem such as Tierra or Avida. In those systems, the evolving agents eventually failed to produce anything qualitatively new and interesting; sometimes the simplest, fastest-replicating agents took over and killed the slower, more complex agents. This shows all too clearly that putting natural selection into a system is not sufficient to produce open-ended complexity. I know that the artificial-ecosystem route is the approach taken by most people working in the field of open-ended evolution, but I think it is currently too great a computational challenge: the computing power required to simulate physics and chemistry to the extent that interesting, open-ended, self-organizing, life-like bodies could arise and self-replicate by gathering resources is enormous. I propose throwing away the idea that units of life are required for open-ended intelligence, which is to a large extent what the community of open-ended evolution enthusiasts implicitly believes.

My proposal is that we should do this in the following way. Let each unit be as sophisticated a learning algorithm as one is capable of inventing. Then produce a population of such units, and allow this population to interact with the world simultaneously. Allow each unit to replicate based on how well it can make a unique experimental prediction about the world that no other unit can produce. If multiple units make the same prediction or manipulation, they share the reward, and hence the fitness, obtained from making that prediction. In this sense, unique manipulation and prediction become the intrinsic motivation that drives this population of experimentalist agents. Those that discover new regularities in the world, and that can exploit these for manipulations and predictions that other agents cannot, will gain more fitness and will replicate.
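
The core of the proposal is the reward-sharing rule, which fits in a few lines (a minimal sketch; the "world", the agents, and all names are illustrative stand-ins for real learning algorithms):

```python
import random
from collections import Counter

random.seed(3)

WORLD_FACTS = set(range(20))              # regularities waiting to be found

def evaluate(population):
    """Reward correct predictions, split among all agents who made them."""
    predictions = [agent() for agent in population]
    counts = Counter(predictions)
    return [1.0 / counts[p] if p in WORLD_FACTS else 0.0
            for p in predictions]

# Agents here are just stochastic prediction policies, for illustration.
population = [lambda k=k: random.randrange(k, k + 15) for k in range(8)]
print(evaluate(population))   # crowded predictions pay less; unique ones pay 1.0
```

Replication would then be made proportional to the fitness accumulated over many such rounds, so the population is pushed to spread out over the space of discoverable regularities.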

There is a design decision at this stage about which kind of evolutionary framework to use to evolve this society of scientists, and a decision about how deeply embodied in the environment each agent is. In the classical genetic algorithm setup, each individual is evaluated in the environment independently of the other individuals, yielding an individual/environment fitness value. In a weak co-evolutionary setup, the individuals interact with the environment all together, so the fitness of an individual is relative not only to the environment but also to the rest of the population; however, individuals cannot interact directly and do not observe the other agents. In a stronger co-evolutionary version, the individuals can communicate, but they are evaluated in different “rooms” of the environment, i.e., the interactions of other agents with their own versions of the environment do not affect an individual’s version of the environment. Alternatively, individuals may communicate directly, and may know the “actions” of the other individuals or their fitness, and may even know their genotypes (e.g., in order to copy parts of them). In the extreme case, individuals – both their genotypes and their phenotypes – are part of the environment, as in a cellular automaton. This may be much less practical, though.

The system will need to be sufficiently large that a diversity of manipulations and predictions can co-exist. An archive will be needed to prevent the Red Queen effect, a co-evolutionary pathology in which evolution makes no progress while solutions oscillate – an effect unfortunately not entirely absent from human science, in which previous discoveries may be forgotten and the criteria for success may therefore oscillate or be generally non-stationary. This archive will serve as the scientific repository of knowledge, which can be accessed by all agents. An extra level of complexity is added when the changes one agent makes in the world affect the world of another agent. In this case there will be cooperation and competition dynamics between software scientists, as in any fully fledged ecology.
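
The archive's role can be sketched in a few lines (illustrative only): a discovery pays fitness the first time it is made; thereafter it is public knowledge that any agent can read but none can be rewarded for rediscovering, so the population cannot cycle through the same solutions.

```python
archive = set()   # the shared scientific repository

def reward(prediction, is_correct):
    """Pay out only for first-time discoveries; rediscoveries earn nothing."""
    if not is_correct or prediction in archive:
        return 0.0
    archive.add(prediction)          # deposit the discovery for all to read
    return 1.0

print(reward("F = ma", True))   # 1.0 -- a new discovery
print(reward("F = ma", True))   # 0.0 -- already in the archive
```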

Let us focus on this more esoteric, open-ended science. Within this world of software scientists there will be some who are malicious manipulators, some groups that co-manipulate the world to suit each other’s predictive abilities, and some who parasitize the creativity of others: the whole gamut of ecological and social dynamics will be present. From a machine learning perspective we can think of this as an ensemble method for open-ended unsupervised learning. Scientists try to maximally compress the world whilst maintaining experimental predictability, and the population of scientists makes novel, unique discoveries about the world which accumulate.

CONCLUSIONS

I have argued that if we wish to produce post-biological intelligence, we should build an open-ended evolutionary system in a computer or on the internet. All previous attempts have failed. It is necessary that some survival advantage always arises from being complex. This is certainly the case in our world, and I would expect it to be the case in most worlds, owing to the tendency of physics to produce hierarchical and compositional structures.

Acknowledgements: Thanks to Laurent Orseau for discussions and comments on the manuscript. Thanks to Seth Shostak and Paul Davies for help with the final versions of the manuscript.

REFERENCES

[1] Szathmáry, E., and Smith, J. M. 1995, “The major evolutionary transitions,” Nature, 374, 6519, pp. 227 – 232

[2] Legg, S., and Hutter, M. 2007, “Universal intelligence: A definition of machine intelligence,” Minds and Machines, 17, 4, pp. 391 – 444

[3] Okasha, S. 2005, “Multilevel selection and the major transitions in evolution,” Philosophy of Science, 72, 5, pp. 1013 – 1025

[4] Ganti, T. 2003, The Principles of Life, Oxford University Press (Oxford)

[5] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Petersen, S. et al. 2015, “Human-level control through deep reinforcement learning,” Nature, 518, 7540, pp. 529 – 533

[6] Dewdney, A. K. 1984, “In the game called Core War hostile programs engage in a battle of bits,” Scientific American, 250, 5, pp. 15 – 19

[7] Ray, T. 1992, “Evolution, ecology and optimization of digital organisms,” Santa Fe Institute working paper 92-08-042

[8] Adami, C. and Brown, C.T. 1994, “Evolutionary Learning in the 2D Artificial Life Systems Avida,” R. Brooks, P. Maes (eds.), Proc. Artificial Life IV, MIT Press (Cambridge), pp. 377 – 381

[9] Channon, A. D. 2003. “Improving and still passing the ALife test: Component normalised activity statistics classify evolution in Geb as unbounded”, Proceedings of Artificial Life VIII, Sydney, R. K. Standish, M. A. Bedau and H. A. Abbass, eds., MIT Press (Cambridge) pp. 173 – 181

[10] Yaeger, L. S. 1994, “Computational genetics, physiology, metabolism, neural systems, learning, vision, and behavior or PolyWorld: Life in a new context,” C. Langton ed., Proceedings of the Artificial Life III Conference, Addison-Wesley (Boston), pp. 263 – 298

[11] Soros, L. B., and Stanley, K. O. 2014, “Identifying necessary conditions for open-ended evolution through the artificial life world of Chromaria,” Proc. of Artificial Life Conference (ALife 14)

[12] Ruiz-Mirazo, K., Peretó, J., and Moreno, A. 2004, “A universal definition of life: autonomy and open-ended evolution,” Origins of Life and Evolution of the Biosphere, 34, 3, pp. 323 – 346

[13] Niekum, S., Spector, L., and Barto, A., 2011, “Evolution of reward functions for reinforcement learning,” Proceedings of the 13th annual conference companion on Genetic and evolutionary computation , ACM, pp. 177 – 178

[14] Fernando, C. T., Szathmáry, E., Husbands, P. 2012, “Selectionist and evolutionary approaches to brain function: a critical appraisal,” Frontiers in Computational Neuroscience, 6:24

[15] Watson, R. A., and Szathmáry, E. 2016, “How Can Evolution Learn?” Trends in Ecology and Evolution, 31, 2, pp. 147 – 157

[16] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., and Kavukcuoglu, K., 2016, “Asynchronous methods for deep reinforcement learning,” arXiv preprint arXiv:1602.01783

[17] Bateson, G. 1972, Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology, University of Chicago Press (Chicago)

[18] Watson, R.A., Weinreich, D.M., Wakeley, J., 2011 “Genome structure and the benefit of sex,” Evolution 65 (2), pp. 523 – 536

[19] Kashtan, N, and Alon, U., 2005, “Spontaneous evolution of modularity and network motifs,” Proceedings of the National Academy of Sciences of the United States of America 102 (39), pp. 13772 – 13778

[20] Toussaint, M. 2004, “The evolution of genetic representations and modular adaptation,” PhD thesis, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany. Published by Logos Verlag Berlin. ISBN 3-8325-0579-2, 173 pages

[21] Pigliucci, M. 2008, “Is evolvability evolvable?” Nature Reviews Genetics, 9, 1, pp. 75 – 82

[22] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., and Dieleman, S., 2016, “Mastering the game of Go with deep neural networks and tree search”. Nature, 529 (7587), pp. 484 – 489

[23] Fernando, C., Karishma, K. K., and Szathmáry, E., 2008, “Copying and evolution of neuronal topology,” PloS ONE 3.11: e3775

[24] Fernando, C., Goldstein, R., and Szathmáry, E. 2010,  “The neuronal replicator hypothesis,” Neural computation 22 (11) pp. 2809 – 2857

[25] Churchill, A.W., Sigtia, S., Fernando, C. 2014, “A denoising autoencoder that guides stochastic search,” arXiv preprint arXiv:1404.1614

[26] Churchill, A. W., Sigtia, S., and Fernando, C. 2016, “Learning to generate genotypes with neural networks,” arXiv preprint arXiv:1604.04153

[27] Bayer, J., Wierstra, D., Togelius, J., and Schmidhuber, J. 2009, “Evolving memory cell structures for sequence learning,” International Conference on Artificial Neural Networks, Springer (Berlin) pp. 755 – 764

[28] Jozefowicz, R., Zaremba, W., Sutskever, I. 2015, “An empirical exploration of recurrent network architectures,” International Conference of Machine Learning (ICML)

[29] Fernando, C., Banarse, D., Reynolds, M., Besse, F., Pfau, D., Jaderberg, M., and Wierstra, D. 2016. “Convolution by evolution: Differentiable pattern producing networks”. arXiv preprint arXiv:1606.02580

[30] Schmidt, M., and Lipson, H. 2009, “Distilling free-form natural laws from experimental data,” Science, 324, 5923, pp. 81 – 85

[31] Adami, C. 1998, Introduction to artificial life (Vol. 1), Springer Science and Business Media (Berlin)

[Go to Top]


THE HUNT FOR HABITABLE PLANETS

Didier Queloz
Cavendish Laboratory
19 J J Thomson Avenue
Cambridge, CB3 0HE
UNITED KINGDOM
dq212@cam.ac.uk

ABSTRACT

Thousands of exoplanets have been discovered in the past two decades. We review the most promising “technically mature” approaches that are likely, in the next decade, to provide us with data that could indicate the presence of life, and we describe the main difficulties that will need to be overcome to make such measurements.

Confined for centuries to the category of pure speculation and philosophical debate, the existence of life outside our solar system is now on the verge of becoming a testable scientific hypothesis. The first discovery of a planet orbiting another Sun-like star, in 1995, triggered a wave of interest and exoplanet search programs [1]. Twenty years later, thousands of exoplanets have been detected, and the discoveries are proceeding at an ever-increasing rate.

The current list of known exoplanets is not only composed of gas giants like our own Jupiter, but also includes a rapidly increasing fraction of smaller planets that some believe have a composition similar to Earth. For some of these exoplanets, basic information on their atmospheric properties has been obtained. These early results are paving the way for future atmospheric studies of habitable, terrestrial exoplanets with the hope of obtaining solid evidence for the existence of life around another star.

It is now obvious that planets orbiting other stars are common. But, interestingly, the bulk of exoplanets detected so far have orbital distances of less than 1 AU, in stark contrast to the Solar System planets. This may be a selection effect, but it may also represent one of the dominant arrangements of planetary systems in the universe, making our Solar System’s configuration more special than expected. We are still far from having a comprehensive view of the full diversity of planetary systems predicted by models of planet formation, and we have not yet detected any “twin of Earth”, making it difficult to place our Solar System in context.

Part of the reason for this is an unforeseen contribution to the noise budget of stellar observations arising from magnetic and convective effects in a stellar atmosphere. Stellar activity has become one of the main limiting factors in these observations, making the detection of planets like Earth difficult, whether done by transit or Doppler techniques. This additional noise structure, intrinsic to the astrophysical nature of stars, slows down progress and requires new strategies to be developed to circumvent the problem. For example, this was one of the main motivations for extending the Kepler mission’s lifetime and for initial efforts to obtain an intensive series of Doppler measurements on a few bright stars.

INTENSIVE RADIAL VELOCITY MEASURES

In 2003, the high-precision HARPS spectrograph was installed on the 3.6m ESO telescope at La Silla. Ten years later, the main planet survey carried out with HARPS had used about 800 nights of observation, with an average of 40 radial velocity measurements for each of the 350 selected bright, southern G-K dwarf stars, and a total of about 150 measurements for a few dozen of them. This led to the discovery of compact systems of exoplanets with masses in the Neptune and “super-Earth” range. It also successfully demonstrated that it is possible to build an efficient spectrograph, optimized for planet searches, with long-term radial velocity precision below one meter per second on a timescale of years.

Unfortunately, we also learned that solar-type stars, on average, are Doppler-variable at the 1–2 m/s level, on timescales ranging from a few days to many months. Today, our progress towards detecting smaller planets on longer orbits depends more on our ability to address stellar variability issues than on improvements in spectrograph design or the availability of bigger telescopes.

We know that some spectroscopic indicators can be used to model components of stellar activity. Recent intensive observation campaigns on stars like CoRoT-7 and Alpha Cen B suggest that intensive series of measurements are a promising way to dig small-amplitude planetary signals out from beneath the “sea of noise” originating from the star [2], [3].

The recent results of the Kepler mission clearly indicate that the odds of a G or K dwarf star hosting a planet smaller than 2 Earth diameters on an orbit shorter than 30 days are higher than 50%. Extrapolation of these results to a 1 Earth-diameter planet in the habitable zone (HZ) suggests that between 7% and 15% of planetary systems may harbor a planet in such a special orbital configuration. Based on this important statistical result, we may conclude that twins of Earth may be found around many nearby, naked-eye stars.

Transiting exoplanets have a special geometrical configuration relative to Earth that makes them “Rosetta stones” for studies of other worlds [4]. They are the only exoplanets for which we can accurately measure both mass and radius, yielding strong clues to their physical structure and bulk composition [5]. We can also  measure their  orbital obliquity, and derive insightful constraints on their dynamical history [6]. But the true power of their special orbital geometry is that it offers a way to study their atmosphere without having to spatially resolve them from their host stars.

PERSPECTIVE ON OUR OWN SOLAR SYSTEM

The booming study of transiting planets allows us to start placing our own solar system in a broader perspective. While the Kepler space mission is determining the frequency of small-size planets around solar-type stars [7], ground-based surveys targeting relatively bright stars (V < 13) are detecting – at an increasing rate – short-period, transiting giant planets suitable for detailed characterization. Notably, follow-up observations of these bright “hot Jupiters” performed with space- and ground-based instruments have given initial glimpses into their atmospheric properties, including chemical composition, vertical pressure-temperature profiles, albedos, and circulation patterns [8]. These first detailed studies of other worlds have laid the foundations of comparative exoplanetology [9].

The PLATO space mission (PLAnetary Transits and Oscillations of stars), scheduled for launch in 2024, is designed to detect terrestrial exoplanets in the habitable zone of solar-type stars and to assess their bulk properties. To characterize the nature and composition of these planets, a massive ground-based follow-up program will be required, especially to measure the masses of the transiting planets. Examples include CoRoT-7 and Kepler-78, for which a large number of measurements (more than 100) were needed to obtain the mass of the planet and measure its density with sufficient accuracy to usefully constrain its structure. PLATO will outstrip TESS and Kepler in the detection of small planets for which characterization is possible. About 100 Earth-mass planets are expected to be seen, and thousands of super-Earths.

In principle, exporting the techniques developed for the pioneering first studies of transiting gas giants to the atmospheric characterization of terrestrial planets orbiting in the habitable zone (HZ; e.g. [10]) of their star is a promising path to searching for life outside our solar system without the huge technological developments and financial costs required by direct imaging projects like TPF [11] and Darwin [12].

Still, as a practical matter, the application of these methods to an Earth twin transiting a Sun-like star seems out of reach. The main reasons are the overwhelmingly large area contrast between the solar disk and the tiny annulus of Earth’s atmosphere, and the luminosity contrast between the Sun and the Earth, which lead to signal-to-noise ratios (SNR) much less than one for any spectroscopic signature and any realistic program – even for the observation of a putative terrestrial planet transiting a nearby solar twin with the future James Webb Space Telescope [13].

Fortunately, this negative conclusion does not hold for the dominant population of the solar neighborhood, the M dwarfs. Because of their smaller sizes and luminosities, and the resulting larger planet-to-star flux and size contrasts, the expected SNRs for the detection of spectroscopic signatures are much more favorable for M dwarfs than for solar analogs. Furthermore, their HZ is much closer in than for a solar-type star, making the transits of a habitable planet more frequent and more probable.
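
A back-of-envelope estimate shows the size of the advantage. In the standard approximation, the atmosphere imprints a transmission signal of roughly 2RpH/Rs², where H is the atmospheric scale height; the sketch below assumes an Earth-like H of 8.5 km and a TRAPPIST-1-like stellar radius of 0.12 solar radii.

```python
R_SUN = 6.957e8      # m
R_EARTH = 6.371e6    # m
H = 8.5e3            # m, Earth's atmospheric scale height

def atmosphere_signal(r_star):
    """Fractional transit signal of the atmospheric annulus: ~2*Rp*H / Rs**2."""
    return 2 * R_EARTH * H / r_star**2

print(atmosphere_signal(R_SUN))          # ~2e-7: hopeless around a solar twin
print(atmosphere_signal(0.12 * R_SUN))   # ~2e-5: ~70x larger around an M dwarf
```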

Looking for planets around low-mass stars therefore seems the most realistic approach today to detecting the first terrestrial planets amenable to atmospheric characterization, with the prospect of detecting biosignatures with giant next-generation facilities such as the E-ELT on the ground and the JWST in space. Many studies (e.g., [14], [15]) have shown that these two future major facilities have the potential to thoroughly probe the atmospheric properties of Earth-sized planets, but only if they transit a nearby (~30 pc at most) ultra-cool dwarf.

M-type stars, the most abundant stars in the Galaxy, have such low luminosities that their habitable zones lie 30–100 times closer in than the Sun’s. And if the star is similar to Jupiter in size, as ultra-cool dwarfs are, the transit signature of an Earth-size planet is deep enough, and short enough, to be detected from the ground. These are therefore ideal targets for a ground-based, transit-method search for Earth-size planets.
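
The transit depth makes the point directly: it scales as (Rp/Rs)², so shrinking the star from the Sun to roughly Jupiter's size deepens an Earth transit by two orders of magnitude (nominal radii below).

```python
R_SUN, R_JUPITER, R_EARTH = 6.957e8, 7.149e7, 6.371e6   # metres

def transit_depth(r_planet, r_star):
    """Fraction of starlight blocked at mid-transit: (Rp/Rs)**2."""
    return (r_planet / r_star) ** 2

print(transit_depth(R_EARTH, R_SUN))       # ~8e-5: below ground-based precision
print(transit_depth(R_EARTH, R_JUPITER))   # ~8e-3: ~0.8%, within milli-mag reach
```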

Carrying out a successful survey of a thousand ultra-cool stars spread almost uniformly over the sky requires milli-magnitude photometric precision and measurements obtained at high cadence. Both are necessary to detect the fast transit event produced by a short-period, Earth-size planet. As a practical matter, each target has to be monitored individually and continuously for dozens of nights. Doing this requires a series of modest-size telescopes, equipped with red-optimized CCDs and located at sites of outstanding quality.

REFERENCES

[1] Mayor, M. and Queloz, D. 1995, Nature 378, pp. 355 - 359

[2] Haywood, R. D., Collier Cameron, A., Queloz, D., Barros, S. C. C., Deleuil, M., Fares, R., Gillon, M., Lanza, A. F., Lovis, C., Moutou, C., Pepe, F., Pollacco, D., Santerne, A., Segransan, D., Unruh, Y. C. 2014, “Planets and Stellar Activity: Hide and Seek in the CoRoT-7 System”, arXiv:1407.1044 [astro-ph.EP]

[3] Dumusque, X., Pepe, F., Lovis, C., Segransan, D., Sahlmann, J., Benz, W., Bouchy, F., Mayor, M., Queloz, D., Santos, N., and Udry, S. 2012, Nature 491, pp. 207 – 211

[4] Winn, J. N. 2010, Exoplanets, ed. S. Seager, University of Arizona Press (Tucson)

[5] Fortney, J. J., Marley, M. S., and Barnes, J. W. 2007, Ap J. 659, pp. 1661 - 1672

[6] Winn, J.N. 2011, “The Rossiter-McLaughlin effect for exoplanets,” in The Astrophysics of Planetary Systems, Proc. IAU Symp. No. 276, eds. A. Sozzetti, M. Lattanzi, and A. Boss, Cambridge University Press (Cambridge)

[7] Borucki, W. J. et al. 2011, Ap. J. 736, pp. 19 - 22

[8] Seager, S. and Deming, D. 2010, “Exoplanet atmospheres,” Ann. Rev. of Astr. and Astrophys. 48, pp. 631 - 672

[9] Seager, S. 2008, “Exoplanet transit spectroscopy and photometry,” in Space Sci. Rev. 135, pp. 345 – 354

[10] Kasting, J. F., Whitmire, D. P., and Reynolds, R. T. 1993, Icarus 101, pp. 108 - 128

[11] Traub, W., Shaklan, S., and Lawson, P. 2007, In the spirit of Bernard Lyot: The direct detection of planets and circumstellar disks in the 21st century, ed. P. Kalas, A. & A. 509

[12] Cockell, C. S. et al. 2009, “Darwin—an experimental astronomy mission to search for extrasolar planets,” Exp. Astron. 23, pp. 435 – 461

[13] Seager, S. et al. 2009, “Discovery and characterization of transiting superearths using an all-sky transit survey and follow-up by the James Webb Space Telescope,” arXiv:0903.4880 [astro-ph.EP]

[14] Kaltenegger, L., and Traub, W. A. 2009, Ap. J. 698, pp. 519 - 527

[15] Snellen, I., de Kok, R., Le Poole, R., Brogi, M., and Birkby, J. 2013, Ap. J. 764, 182

[Go to Top]


POST-HUMAN EVOLUTION ON EARTH AND BEYOND

Martin J Rees
Institute of Astronomy
Madingley Road
Cambridge CB3 OHA
mjr@ast.cam.ac.uk

ABSTRACT

The pace of technological advance on Earth is such that post-humans – whether organic, cyborg, or entirely inorganic – could emerge within a few centuries (or indeed within a single century). In the billions of years lying ahead, such entities, continuing to evolve not through natural selection but on the (far faster) timescale of technological evolution, could spread through the cosmos (in a manner whose details we manifestly cannot conceive). If advanced life has emerged on other planets and followed an evolutionary track similar to what has happened on Earth, then the era of ‘organic’ intelligence will be a thin sliver of time compared to the far longer post-human era dominated by ‘machines’. This suggests that, if SETI succeeded, any artificial emissions would be unlikely to come from anything resembling the ‘organic’ civilization that prevails on Earth.

Extraterrestrial life and intelligence have always been fascinating topics on the speculative fringe of science. But in the last decade or two, serious advances on several fronts have generated wider interest in these subjects – indeed, they have become almost ‘mainstream’. One can highlight four areas where there’s a gratifying crescendo of interest and understanding:

(i) The discovery and study of exoplanets began only 20 years ago. It is now one of the most vibrant frontiers of science. Data are accumulating at an accelerating rate; we can confidently assert that there are billions of Earth-like planets in our Galaxy; and it is not premature to seek evidence that some have biospheres.

(ii) There has been substantial recent progress in understanding the origin of life. It’s been clear for decades that the transition from complex chemistry to the first entities that could be described as ‘living’ poses one of the crucial problems in the whole of science. But until recently, people shied away from it, regarding it as neither timely nor tractable. In contrast, numerous distinguished scientists are now committed to this challenge.

(iii) Advances in computational power and robotics have led to growing interest in the possibility that ‘artificial intelligence’ (AI) could in the coming decades achieve (and exceed) human capabilities over a wider range of conceptual and physical tasks. This has stimulated discussions of the nature of consciousness (is it an ‘emergent’ property or something more special?), and further speculation by ethicists and philosophers on what forms of inorganic intelligence might be created by us – or might already exist in the cosmos – and how humans might relate to them.

(iv) In the coming years there will be expanded and better-resourced efforts to search for ET; these will focus wider interest on the subject and thereby generate new ideas.

SOME HISTORY

Speculations on ‘the plurality of inhabited worlds’ date back to antiquity. From the 17th to the 19th century, it was widely suspected that the other planets of our Solar System were inhabited. The arguments were often more theological than scientific. Eminent 19th century thinkers like Whewell and Brewster argued that life must pervade the cosmos, because otherwise such vast domains of space would seem such a waste of the Creator’s efforts. An interesting and amusing critique of such ideas is given in books by Alfred Russel Wallace, the co-developer of natural selection theory. Wallace is especially scathing about the physicist David Brewster (remembered for the ‘Brewster angle’ in optics), who conjectured on such grounds that even the Moon must be inhabited [1]. Brewster argued that had the Moon “been destined to be merely a lamp to our Earth, there was no occasion to variegate its surface with lofty mountains and extinct volcanoes, and cover it with large patches of matter that reflect different quantities of light and give its surface the appearance of continents and seas. It would have been a better lamp had it been a smooth piece of lime or of chalk.”

By the end of the nineteenth century, so convinced were many astronomers that life existed on other planets in our Solar System that a prize of 100,000 francs was offered to the first person to make contact with them. And the prize specifically excluded contact with Martians – that was considered far too easy! The erroneous claim that Mars was crisscrossed by canals had been taken as proof positive of intelligent life on the Red Planet.

The space age brought sobering news. Venus, a cloudy planet that had promised a lush tropical swamp-world, turned out to be a crushing, caustic hell-hole. Mercury was a pockmarked, blistering rock. And NASA’s Curiosity probe (and its predecessors) showed that Mars, though the most Earth-like body in the Solar System, is actually a frigid desert with a very thin atmosphere. There may be creatures swimming under the ice of Jupiter’s moon Europa, or Saturn’s moon Enceladus, but nobody can be confident of it.

However, the prospects brighten enormously when we extend our gaze beyond our Solar System – beyond the reach of any probe we can devise today. What has transformed and energized the whole field of exobiology is the realization that stars are orbited by retinues of planets. Giordano Bruno speculated about this in the 16th century. From the 1940s onward, astronomers suspected he was correct: the earlier idea that our Solar System formed from a stream of material torn out of the Sun by the tidal pull of a close-passing star (which would have implied that planetary systems were rare) had by then been discredited. But it wasn’t until the mid-1990s that evidence for exoplanets started to emerge. Moreover, Bruno famously went further, and conjectured that on some of those planets there might be other creatures “as magnificent as those upon our human Earth.” Will he one day be proved right on this bolder speculation too?

ORIGIN OF LIFE

There seem to be good prospects for progress in understanding the origin of life. What triggered the transition from complex molecules to entities that can metabolize and reproduce? It might have involved a fluke so rare that it happened only once in the entire Galaxy. On the other hand, this crucial transition might have been almost inevitable given the ‘right’ environment. We just don’t know – nor do we know whether the DNA/RNA chemistry of terrestrial life is the only possibility, or just one chemical basis among many options that could be realized elsewhere.

The origin of life is now attracting stronger interest: it’s no longer deemed to be one of those problems (consciousness, for instance, is still in this category) which, though manifestly important, doesn’t seem timely or tractable – and is relegated to the ‘too difficult box’. And of course the understanding of life’s beginnings is important not only for our assessment of the likelihood of alien life, but also to the most firmly earthbound evolutionary biologist.

And there is a second, still more fascinating question (Bruno’s conjecture): if simple life exists, what are the odds that it evolves into something that we would recognize as intelligent? Even if primitive life were common, the emergence of ‘advanced’ life may not be – it may depend on many contingencies (phases of glaciation, the Earth’s tectonic history, asteroid impacts, and so forth). Several authors have speculated about possible ‘bottlenecks’ – key stages in evolution that are hard to transit. Perhaps the transition to multi-cellular life is one of these. (The fact that simple life on Earth seems to have emerged quite quickly, whereas even the most basic multi-cellular organisms took nearly 3 billion years to appear, suggests that there may be severe barriers to the emergence of any complex life.) Or the ‘bottleneck’ could come later.

Even in a complex biosphere, the emergence of intelligence isn’t guaranteed. If, for instance, the dinosaurs hadn’t been wiped out, the chain of mammalian evolution that led to humans might have been foreclosed, and we can’t predict whether another species would have taken our role. Some evolutionists regard the emergence of intelligence as a contingency – even an unlikely one. The alternative view is represented by Simon Conway Morris (see his contribution to this workshop).

Perhaps, more ominously, there could be a ‘bottleneck’ at our own present evolutionary stage – the stage when intelligent life develops powerful technology. If so, the long-term prognosis for ‘Earth-sourced’ life depends on whether humans survive this critical evolutionary phase. This does not mean that the Earth has to avoid a disaster – only that, before any disaster happens, some humans or advanced artefacts will have spread beyond their home planet.

In considering the possibilities of life elsewhere, we should surely be open-minded about where it might emerge and what forms it could take – and devote some thought to non-earthlike life in non-earthlike locations. But it plainly makes sense to start with what we know (the ‘searching under the streetlamp’ strategy) and to deploy all available techniques to discover whether any exoplanet atmospheres display evidence for a biosphere. Clues will surely come in the next decade or two from high-resolution spectra taken with the James Webb Space Telescope and the next generation of 30+ meter ground-based telescopes expected to be operational in the 2020s. To optimize the prospects, we shall need beforehand to have scanned the whole sky to identify the nearest earthlike planets. Even for these, next-generation telescopes will have a hard job separating the spectrum of the planet’s atmosphere from the spectrum of the hugely brighter central star.

Conjectures about advanced or intelligent life are of course far more shaky than those about simple life. But the firmest guesses that we can make are based on extrapolating the far future of Earth-based life. I would argue that this suggests two things about the entities that SETI searches could reveal.

(a) They will not be ‘organic’ or biological.

(b) They will not remain on the planet where their biological precursors lived.

FAR FUTURE OF EARTH-SOURCED INTELLIGENCE

During this century, the entire Solar System – planets, moons and asteroids – will be explored by flotillas of tiny robotic craft. The next step would be the deployment of large-scale robotic fabricators, which can construct and assemble large structures in space (and fabrication in space will be a better use of materials mined from asteroids or the Moon than bringing them back to Earth). The Hubble Telescope’s successors, with huge gossamer-thin mirrors assembled under zero gravity, will further expand our vision of stars, galaxies and the wider cosmos.

But what role will humans play? There’s no denying that NASA’s Curiosity rover, now trundling across a giant Martian crater, may miss startling discoveries that no human geologist could overlook. But robotic techniques are advancing fast, allowing ever more sophisticated unmanned probes – and, later in the century, robotic fabricators will be building huge lightweight structures in space. The practical case for manned spaceflight gets ever-weaker with each advance in robotics and miniaturization. If some people now living one day walk on Mars (as I hope they will) it will be as an adventure, and as a step towards the stars.

The current cost gap between manned and unmanned missions is huge. Unless motivated by prestige and bankrolled by superpowers, manned missions beyond the Moon will perforce be cut-price ventures, accepting high risks – perhaps even ‘one-way tickets’. These missions will be privately funded; no Western government agency would expose civilians to such hazards. There would, despite the risks, be many volunteers – driven by the same motives as early explorers, mountaineers, and the like. But don’t ever expect mass emigration. No place in our Solar system offers an environment even as clement as the Antarctic or the top of Everest. Space doesn’t offer an escape from Earth’s problems.

Nonetheless, a century or two from now, there may be small groups of pioneers living independently of the Earth – on Mars or on asteroids. Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in genetically modifying their progeny to adapt to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And genetic modification would be supplemented by cyborg technology – indeed there may be a transition to fully inorganic intelligences.

(As a parenthetic comment, I’d note that the most crucial impediment to routine space flight, even in Earth orbit and still more for venturing further, stems from the intrinsic inefficiency of chemical fuel, and the consequent requirement to carry a weight of fuel far exceeding that of the payload. So long as we are dependent on chemical fuels, interplanetary travel will remain a challenge. It is interesting to note, incidentally, that this is a generic constraint, rooted in fundamental chemistry, on any organic intelligence that has evolved on another planet: if a planet’s gravity is strong enough to retain an atmosphere at a temperature where water doesn’t freeze and metabolic reactions aren’t too slow, then lifting a molecule out of the planet’s gravitational well takes more energy than is released by burning one molecule of chemical fuel.)
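
This constraint can be put in numbers with the Tsiolkovsky rocket equation, which gives the propellant needed per unit of payload as exp(Δv/vₑ) − 1. The figures below are illustrative: an exhaust velocity of ~4.5 km/s (near the hydrogen/oxygen limit set by chemical bond energies) and approximate Δv values for low Earth orbit and Earth escape.

```python
import math

V_E_CHEMICAL = 4.5e3   # m/s exhaust velocity, near the chemical-energy limit

def propellant_per_payload(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: kg of propellant per kg delivered."""
    return math.exp(delta_v / v_exhaust) - 1

print(propellant_per_payload(9.4e3, V_E_CHEMICAL))    # ~7 kg per kg to LEO
print(propellant_per_payload(11.2e3, V_E_CHEMICAL))   # ~11 kg per kg to escape
```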

Nuclear energy (or, more futuristically, matter/antimatter annihilation) could be a transformative fuel. But even then, the transit time to even the nearest stars exceeds a human lifetime. Interstellar travel (except for unmanned probes, DNA samples, etc.) is therefore an enterprise for post-humans. They could be silicon-based. Alternatively, they could be organic creatures who had won the battle with death, or perfected the techniques of hibernation or suspended animation.

Few doubt that machines will gradually surpass more and more of our distinctively human capabilities – or enhance them via cyborg technology. Disagreements are basically about the timescale – the rate of travel, not the direction of travel. The cautious amongst us envisage timescales of centuries rather than decades for these transformations. Be that as it may, the timescales for technological advance are but an instant compared to the timescales of the Darwinian selection that led to humanity’s emergence – and (more relevantly) they are less than a millionth of the vast expanses of cosmic time lying ahead. So the outcomes of future technological evolution will surpass humans by as much as we (intellectually) surpass a bug.

But we humans shouldn’t feel too humbled. Even though we are surely not the terminal branch of an evolutionary tree, we could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

Philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs. Might it be that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life? The answer to this question crucially affects how we react to the far-future scenario I’ve sketched. If the machines are zombies, we would not accord them the same value as humans, and the post-human future would seem bleak. But if they are conscious, we should surely welcome the prospect of their future hegemony.

The far future will bear traces of humanity, just as our own age retains influences of ancient civilizations. Humans and all they have thought might be a transient precursor to the deeper cogitations of another culture — one dominated by machines, extending deep into the future and spreading far beyond Earth.

I think it’s likely that the machines will gain dominance on Earth – perhaps indeed before any self-sustaining human colony gets established away from our planet. This is because there are chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the cerebrations of AI. Moreover, the Earth’s biosphere, in which organic life has symbiotically evolved, is not a constraint for advanced AI. Indeed it is far from optimal – interplanetary and interstellar space will be the preferred arena, where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop insights as far beyond our imaginings as string theory is for a mouse.

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity – spanning tens of millennia at most – will be a brief precursor to the more powerful intellects of the inorganic post-human era.

Human brains have changed little since our ancestors roamed the African savannah and coped with the challenges that life then presented. It’s surely remarkable that these brains have allowed us to make sense of the quantum and the cosmos – far removed from the ‘common sense’ everyday world in which we evolved. Nonetheless, some key features of reality may be beyond our conceptual grasp. Scientific frontiers are advancing fast, and answers to many current mysteries will surely come into focus, but we may at some point ‘hit the buffers’. Some insights may have to await post-human intelligence. There may be phenomena, crucial to our long-term destiny, that we are not aware of, any more than a monkey comprehends the nature of stars and galaxies. Some ‘brains’ may structure their consciousness in a fashion that we can’t conceive, and have a quite different perception of reality.

In cosmological terms (or indeed in a Darwinian timeframe) a millennium is but an instant. So let us ‘fast forward’ not a few millennia, but an ‘astronomical’ timescale millions of times longer than that. The ‘ecology’ of stellar births and deaths in our Galaxy will proceed ever more slowly, until jolted by the ‘environmental shock’ of an impact with Andromeda, maybe four billion years hence. The debris of our Galaxy, Andromeda, and their smaller companions within the Local Group will thereafter aggregate into one amorphous galaxy. Distant galaxies will not only move further away, but recede faster and faster until they disappear – rather as objects falling onto a black hole encounter a horizon, beyond which they are lost from view and causal contact.

But the remnants of our Local Group could continue for far longer – time enough, perhaps, for a Kardashev Type III phenomenon to emerge as the culmination of the long-term trend for living systems to gain complexity and ‘negative entropy’. All the atoms that were once in stars and gas could be transformed into structures as intricate as a living organism or a silicon chip – but on a cosmic scale.

But even these speculations don’t take us to the utter limits. I have assumed that the universe itself will expand, at a rate that no future entities have power to alter. And that everything is in principle understandable as a manifestation of the basic laws governing particles, space and time that have been disclosed by contemporary science. Some science fiction authors envisage stellar-scale engineering to create black holes and wormholes – concepts far beyond any technological capability that we can envisage, but not in violation of these basic physical laws. But are there new ‘laws’ awaiting discovery? And will the present ‘laws’ be immutable, even to a Type III intelligence able to draw on galactic-scale resources?

Post-human intelligences (autonomously-evolving artefacts) will achieve the processing power to simulate living things – even entire worlds. These super or hyper-computers would have the capacity to simulate not just a simple part of reality, but a large fraction of an entire universe.

And then of course the question arises: if these simulations exist in far larger numbers than universes themselves, could we be in one of them? Could we ourselves not be part of what we think of as bedrock physical reality? Could we be ideas in the mind of some supreme being who is running a simulation? Indeed, if the simulations outnumber the universes, as they would if one universe contained many computers making many simulations, then the likelihood is that we are ‘artificial life’ in this sense. This concept opens up the possibility of a new kind of ‘virtual time travel’, because the advanced beings creating the simulation can, in effect, rerun the past. It’s not a time-loop in the traditional sense: it’s a reconstruction of the past, allowing advanced beings to explore their history.

These ideas would have the extraordinary consequence that we may not be part of the deepest reality: we may be a simulation. The possibility that we are creations of some supreme (or super-) being blurs the boundary between physics and idealist philosophy, between the natural and the supernatural. We may be in the matrix rather than directly manifesting the basic physical laws.

SETI: PROSPECTS AND TECHNIQUES

The scenarios I’ve just described would have the consequence – a boost to human self-esteem! – that even if life had originated only on the Earth, it would not remain a trivial feature of the cosmos: humans may be closer to the beginning than to the end of a process whereby ever more complex intelligence spreads through the Galaxy. But of course there would in that case be no ‘ET’ at the present time.

Suppose however that there are many other planets where life began; and suppose that on some of them Darwinian evolution followed a similar track. Even then, it’s highly unlikely that the key stages would be synchronized. If the emergence of intelligence and technology on a planet lags significantly behind what has happened on Earth (because the planet is younger, or because the ‘bottlenecks’ have taken longer to negotiate there than here) then that planet would plainly reveal no evidence of ET. But life on a planet around a star older than the Sun could have had a head-start of a billion years or more. Thus it may already have evolved much of the way along the futuristic scenarios outlined in the last section.

One generic feature of these scenarios is that ‘organic’ human-level intelligence is just a brief interlude before the machines take over. The history of human technological civilization is measured in millennia (at most) – and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which will then persist, continuing to evolve, for billions of years. This suggests that if we were to detect ET, it would be far more likely to be inorganic: we would be most unlikely to ‘catch’ alien intelligence in the brief sliver of time when it was still in organic form.

SETI searches are surely worthwhile, despite the heavy odds against success, because the stakes are so high. That’s why we should surely acclaim the launch of Breakthrough Listen – a major ten-year commitment by the Russian investor Yuri Milner to buy time on the world’s best radio telescopes and develop instruments to scan the sky in a more comprehensive and sustained fashion than ever before. Breakthrough Listen will carry out the world’s deepest and broadest search for extraterrestrial technological life using several of the world’s largest professional radio and optical telescopes. The project will deploy radio dishes at Green Bank and at Parkes – and hopefully others including the Arecibo Observatory. The radio telescopes will be used to search for non-natural radio transmissions from nearby and distant stars, from the plane of the Milky Way, from the Galactic Centre, and from nearby galaxies. They will search over a wide frequency bandwidth from 100 MHz to 50 GHz using advanced signal processing equipment developed by a team centered at UC Berkeley.

SETI searches seek some electromagnetic transmission that is manifestly artificial. But even if the search succeeded (and few of us would bet more than one percent on this), it would still in my view be unlikely that the ‘signal’ would be a decodable message. It would more likely represent a byproduct (or even a malfunction) of some super-complex machine far beyond our comprehension that could trace its lineage back to alien organic beings (which might still exist on their home planet, or might long ago have died out). The only type of intelligence whose messages we could decode would be the (perhaps small) subset that used a technology attuned to our own parochial concepts.

Even if intelligence were widespread in the cosmos, we may only ever recognize a small and atypical fraction of it. Some ‘brains’ may package reality in a fashion that we can’t conceive. Others could be living contemplative lives, perhaps deep under some planetary ocean, doing nothing to reveal their presence. It makes sense to focus searches first on Earth-like planets orbiting long-lived stars. But science fiction authors remind us that there are more exotic alternatives. In particular, the habit of referring to ET as an ‘alien civilization’ may be too restrictive: a ‘civilization’ connotes a society of individuals, whereas ET might be a single integrated intelligence. And even if signals were being transmitted, we may not recognize them as artificial, because we may not know how to decode them. A radio engineer familiar only with amplitude modulation might have a hard time decoding modern wireless communications. Indeed, compression techniques aim to make the signal as close to noise as possible – insofar as a signal is predictable, there is scope for more compression.
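
This last point is easy to demonstrate (illustrative figures; exact values depend on the compressor): compress a highly predictable message, and its byte statistics become nearly indistinguishable from random noise.

```python
import math, os, zlib
from collections import Counter

def bits_per_byte(data):
    """Empirical byte entropy; 8.0 bits/byte is indistinguishable from noise."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

message = " ".join(str(i) for i in range(5000)).encode()   # very predictable
compressed = zlib.compress(message, 9)
noise = os.urandom(len(compressed))

print(bits_per_byte(message))      # ~3.3: structured, obviously artificial
print(bits_per_byte(compressed))   # close to 8: the same content, now noise-like
print(bits_per_byte(noise))        # ~8.0: genuine noise
```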

Perhaps the Galaxy already teems with advanced life, and our descendants will ‘plug in’ to a galactic community – as rather junior members. On the other hand, Earth’s intricate biosphere may be unique and the searches may fail. This would disappoint the searchers. But it would have an upside. Humans could then be less cosmically modest. Our tiny planet – this pale blue dot floating in space – could be the most important place in the entire cosmos. Either way, our cosmic habitat seems ‘tuned’ to be an abode for life. Even if we are now alone in the universe, we may not be the culmination of this ‘drive’ towards complexity and consciousness.

The focus of the ‘Breakthrough Listen’ project is on the radio and optical parts of the spectrum. But of course, in our state of ignorance about what might be out there, we should clearly encourage searches in all wavebands (e.g. the X-ray band) and also be alert for artefacts and other evidence of non-natural phenomena. I don’t think even the optimistic SETI searchers would rate the chance of success as more than a few percent, and most of us are more pessimistic; but the stakes are so high that the gamble is worthwhile – we would all like to see searches under way in our lifetime.

Finally, there are two familiar maxims that pertain to this quest. First, ‘extraordinary claims require extraordinary evidence’; and second, ‘absence of evidence isn’t evidence of absence’.


[Go to Top]


SUPERINTELLIGENT AI AND THE POSTBIOLOGICAL COSMOS APPROACH

Susan Schneider
Department of Philosophy and Cognitive Science Program, The University of Connecticut
Center for Theological Inquiry, Princeton
Technology and Ethics Group, Yale University
susansdr@gmail.com

ABSTRACT

The postbiological approach in astrobiology has been largely independent of the discussions of superintelligence in the AI literature, despite the increasing attention to superintelligent AI in both academe and the media. In this paper, I bring these issues together. In my view, one route to understanding superintelligent alien civilizations, as well as superintelligence on Earth (should either ever exist), could involve identifying general features of computational systems without which a superintelligence would be far less efficient. By drawing from Nick Bostrom’s work on superintelligent AI on Earth, as well as ideas from computational neuroscience, I will attempt to identify some goals and cognitive capacities likely to be possessed by superintelligent beings. I will then comment on some social implications of the postbiological approach.

INTRODUCTION

Thinking about how aliens in other technological societies might think, if they exist at all, is obviously speculative, even for a philosopher. After all, even if many exoplanets are habitable, we do not know whether they are inhabited. We do not currently have an agreed-upon account of the origin of life on Earth, and we do not know how easy it is for life to originate elsewhere. And even if microbial life exists on many exoplanets, perhaps it is rare for microbial life to evolve into intelligent life. Or perhaps it isn’t rare for intelligence to evolve, but civilizations do not survive their own technological maturity. Perhaps we are one of only a few technological civilizations in the universe, or perhaps we are alone.

But I am going to assume, optimistically, that advanced civilizations are out there. After all, if even one technological civilization exists, it is likely to be older than us, and it could have spread throughout the universe. Further, some proponents of the search for extraterrestrial intelligence (SETI) estimate that we will encounter alien intelligence within the next several decades. Even if you hold a more conservative estimate – say, that the chance of encountering alien intelligence in the next 50 years is 5 percent – the stakes for our species are high. Knowing that we are not alone in the universe would be a profound realization, and contact with an alien civilization could produce amazing technological innovations and cultural insights. It thus can be valuable to consider these questions, albeit with the goal of introducing possible routes to answering them, rather than producing definitive answers. So, let us ask: how might aliens think? Believe it or not, it’s possible to say something concrete in response to this question.

We can approach this issue by drawing from science and the humanities, rather than just science. In particular, I will draw from neuroscience, philosophy, astrobiology and artificial intelligence (AI). My point of departure is the intriguing position in astrobiology that the most intelligent alien civilizations may be postbiological, being synthetic superintelligences – creatures that are vastly smarter than humans in every respect: scientific reasoning, social skills, and more [1], [2], [3], [4], [5], [6].

The postbiological approach has been largely independent of the discussions of superintelligence, despite the increasing attention to superintelligent AI in both academe and the media [7]. Herein, I bring these issues together, drawing from [4]. In my view, to understand the most intelligent alien civilizations, as well as superintelligence on Earth, we can look for general features of computational systems without which a superintelligence would be far less efficient. So, using work on superintelligent AI on Earth, as well as ideas from computational neuroscience, I will briefly and provisionally attempt to identify some goals and cognitive capacities likely to be possessed by superintelligent beings.

Section One overviews the postbiological cosmos approach. Section Two discusses Nick Bostrom’s recent book on superintelligence, which focuses on the genesis of superintelligent AI (“SAI”) on Earth; as it happens, many of Bostrom’s observations are informative in the present context. I then isolate a specific type of superintelligence that is of particular import in the context of alien superintelligence, biologically inspired superintelligences (“BISAs”). Section Three concludes by raising some issues for future reflection.

THE POSTBIOLOGICAL COSMOS APPROACH IN ASTROBIOLOGY

Our culture has long depicted aliens as humanoid creatures with small, pointy chins, massive eyes, and large heads, apparently to house brains that are larger than ours. Paradigmatically, they are “little green men.” While we are aware that our culture is anthropomorphizing, I imagine that my suggestion that aliens are supercomputers may strike you as far-fetched. So what is my rationale for the view that most intelligent alien civilizations will have members that possess SAI? I offer three observations that, together, motivate this conclusion.

(1) The short window observation. Once a society creates the technology that could put it in touch with the cosmos, it is only a few hundred years away from changing its own paradigm from biology to AI [3], [6], [2]. This “short window” makes it more likely that the aliens we encounter would be postbiological.

The short-window observation is supported by human cultural evolution, at least thus far. Our first radio signals date back only about 120 years, and space exploration is only about 50 years old, but we are already immersed in digital technology, such as cell phones and laptop computers. It is probably a matter of less than 50 years before sophisticated internet connections are wired directly into our brains. Indeed, implants for Parkinson’s disease are already in use, and in the United States the Defense Advanced Research Projects Agency (DARPA) has started to develop neural implants that interface directly with the nervous system, regulating conditions such as post-traumatic stress disorder, arthritis, depression, and Crohn’s disease. DARPA’s program, called “ElectRx,” aims to replace certain medications with “closed-loop” neural implants – implants that continually assess the state of one’s health and provide the necessary nerve stimulation to keep one’s biological systems functioning properly [8]. Eventually, implants will be developed to enhance normal brain functioning, rather than for medical purposes.

You may object that this argument employs “N = 1 reasoning,” generalizing from the human case to the case of alien civilizations. But it strikes me as unwise to discount arguments based on the human case. Human civilization is the only one we know of, and we had better learn from it. It is no great leap to claim that other civilizations will develop technologies to advance their intelligence and survival. This is especially true if the alien civilizations evolved under evolutionary pressures similar to those on Earth. And, as I will explain in a moment, synthetic intelligence will likely outperform unenhanced brains.

A second objection to my short-window observation rightly points out that nothing I have said thus far suggests that humans will be superintelligent. I have merely said that future humans will be posthuman. While I offer support for the view that our own cultural evolution suggests that humans will eventually be postbiological, this does not show that advanced alien civilizations will reach superintelligence. So even if one is comfortable reasoning from the human case, the human case does not support the position that the members of advanced alien civilizations will be superintelligent.

This is a correct reading of my first observation. Whether or not they would be superintelligent is addressed by the second.

(2) The greater age of alien civilizations. Proponents of SETI have often concluded that alien civilizations would be much older than our own: “… all lines of evidence converge on the conclusion that the maximum age of extraterrestrial intelligence would be billions of years, specifically [it] ranges from 1.7 billion to 8 billion years” ([2] p 468). If civilizations are millions or billions of years older than us, many would be vastly more intelligent than we are. By our standards, many would be superintelligent. We are galactic babies.

But would they be forms of AI, as well as forms of superintelligence? I believe so. Even if they were biological, merely having biological brain enhancements, their superintelligence would be reached by artificial means, and we could regard them as having forms of “artificial intelligence.” But I suspect something stronger than this, which leads me to my third observation:

(3) It is likely that these synthetic beings will not be biologically-based. Currently, silicon appears to be a better medium for information processing than the brain itself, and future materials may even prove superior to silicon. Neurons reach a peak speed of about 200 Hz, which is seven orders of magnitude slower than current microprocessors ([7] p 59). While the brain can compensate for some of this with massive parallelism, features such as “hubs,” and so on, crucial mental capacities, such as attention, rely upon serial processing, which is incredibly slow, and has a maximum capacity of about seven manageable chunks [9]. Further, the number of neurons in a human brain is limited by cranial volume and metabolism, but computers can occupy entire buildings or cities, and can even be remotely connected across the globe [7]. Of course, the human brain is far more intelligent than any modern computer. But intelligent machines can in principle be constructed by reverse engineering the brain, and improving upon its algorithms.
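The “seven orders of magnitude” figure is simple arithmetic (my restatement of the comparison, taking a representative 2 GHz clock for a current microprocessor):

\[ \frac{2 \times 10^{9}\ \mathrm{Hz}}{2 \times 10^{2}\ \mathrm{Hz}} = 10^{7}, \]

i.e. a single processor core cycles ten million times for every firing of a fast neuron.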

In sum: I have observed that there seems to be a short window between the development of the technology to access the cosmos and the development of postbiological minds and AI. I then observed that we are galactic babies: extraterrestrial civilizations are likely to be vastly older than us, and thus they would have already reached not just postbiological life but superintelligence. Finally, I noted that they would likely have SAI, because silicon is a superior medium for superintelligence. From this I conclude that many advanced alien civilizations will be populated by forms with SAI.

Even if I am wrong – even if the majority of alien civilizations turn out to be biological – it may be that the most intelligent alien civilizations will be ones in which the inhabitants are SAI. Further, creatures that are silicon-based, rather than biologically-based, are more likely to endure space travel, having durable systems that are practically immortal, so they may be the kind of creatures we first encounter.

HOW MIGHT SUPERINTELLIGENT ALIENS THINK?

There has been a good deal of attention by computer scientists, philosophers, and the media on the topic of superintelligent AI. Nick Bostrom’s recent book on superintelligence focuses on the development of superintelligence on Earth, but we can draw from his thoughtful discussion [7]. Bostrom distinguishes three kinds of superintelligence:

(1) Speed superintelligence – even a human emulation could in principle run so fast that it could write a PhD thesis in an hour.

(2) Collective superintelligence – the individual units need not be superintelligent, but the collective performance of the individuals outstrips human intelligence.

(3) Quality superintelligence – at least as fast as human thought, and vastly smarter than humans in virtually every domain.

Any of these kinds could exist alongside one or more of the others.

An important question is whether we can identify common goals that these types of superintelligences may share. Bostrom suggests:

The Orthogonality Thesis:

“Intelligence and final goals are orthogonal – more or less any level of intelligence could in principle be combined with more or less any final goal.” ([7] p 107)

Bostrom is careful to underscore that a great many kinds of SAI could be developed, including kinds we can scarcely conceive of. At one point, he raises a sobering example of a superintelligence with the final goal of manufacturing paper clips ([7] pp 107–108, 123–125). While this may initially strike you as a harmless endeavor, although hardly a life worth living, Bostrom points out that a superintelligence could utilize every form of matter on Earth in support of this goal, wiping out biological life in the process. Indeed, Bostrom warns that superintelligence emerging on Earth could be of an unpredictable nature, being “extremely alien” to us ([7] p 29). He lays out several scenarios for the development of SAI. For instance, SAI could be arrived at in unexpected ways by clever programmers, and not be derived from the human brain whatsoever. He also takes seriously the possibility that Earthly superintelligence could be biologically inspired, that is, developed from reverse engineering the algorithms that cognitive science says describe the human brain, or from scanning the contents of human brains and transferring them to a computer (i.e. “uploading”).

Although the final goals of superintelligence are difficult to predict, Bostrom singles out several instrumental goals as being likely, given that they support any final goal whatsoever:

The Instrumental Convergence Thesis:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents. ([7] p 109)

The goals that he identifies are resource acquisition, technological perfection, cognitive enhancement, self-preservation, and goal content integrity (i.e. that a superintelligent being’s future self will pursue and attain those same goals). He underscores that self-preservation can involve group or individual preservation, and that it may play second fiddle to the preservation of the species the AI was designed to serve ([7] p 109).

Let us call an alien superintelligence that is based on reverse engineering an alien brain, including uploading it, a “biologically-inspired superintelligent alien” (“BISA”). Although BISAs are inspired by the brains of the original species from which the superintelligence is derived, a BISA’s algorithms may depart from those of its biological model at any point.

BISAs are of particular interest in the context of alien superintelligence. For if Bostrom is correct that there are many ways superintelligence can be built, yet a number of alien civilizations develop superintelligence from uploading or other forms of reverse engineering, it may be that BISAs are the most common form of alien superintelligence out there. The reason is that many different kinds of superintelligence can arise from the raw programming techniques employed by alien civilizations. (Consider, for instance, the diverse range of AI programs under development on Earth, many of which are not modelled after the human brain.) This may leave us with a situation in which the class of SAIs is highly heterogeneous, with members generally bearing little resemblance to each other. It may turn out that, of all SAIs, BISAs bear the most resemblance to each other. In other words, BISAs may be the most cohesive subgroup, because the other members are so different from each other.

Here, you may suspect that because BISAs could be scattered across the galaxy and generated by multitudes of species, there is little interesting that we can say about the class of BISAs. But notice that BISAs have two features that may give rise to common cognitive capacities and goals:

(1) BISAs are descended from creatures that had motivations like: find food, avoid injury and predators, reproduce, cooperate, compete, and so on.

(2) The life forms that BISAs are modeled from have evolved to deal with biological constraints like slow processing speed and the spatial limitations of embodiment.

Could (1) or (2) yield traits common to members of many superintelligent alien civilizations? I suspect so.

Consider (1). Intelligent biological life tends to be primarily concerned with its own survival and reproduction, so it is more likely that BISAs would have final goals involving their own survival and reproduction, or at least the survival and reproduction of the members of their society. If BISAs are interested in reproduction, we might expect that, given the massive amounts of computational resources at their disposal, BISAs would create simulated universes stocked with artificial life and even intelligence or superintelligence. If these creatures were intended to be “children” they may retain the goals listed in (1) as well.

You may object that it is useless to theorize about BISAs, as they can change their basic architecture in numerous, unforeseen ways, and any biologically-inspired motivations can be constrained by programming. There may be limits to this, however. If a superintelligence is biologically-based, it may have its own survival as a primary goal. In this case, it may not want to change its architecture fundamentally, but stick to smaller improvements. It may think: when I fundamentally alter my architecture, I am no longer me [10]. Uploads, for instance, may be especially inclined not to alter the traits that were most important to them during their biological existence.

Consider (2). The designers of the superintelligence, or a self-improving superintelligence itself, may move away from the original biological model in all sorts of unforeseen ways, although I have noted that a BISA may not wish to alter its architecture fundamentally. But we could look for cognitive capacities that are useful to keep: cognitive capacities that sophisticated forms of biological intelligence are likely to have, and which enable the superintelligence to carry out its final and instrumental goals. We could also look for traits that are not likely to be engineered out, as they do not detract from the BISA’s pursuit of its goals.

If (2) is correct, we might expect the following, for instance.

(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns. One influential means of understanding the computational structure of the brain in cognitive science is “connectomics,” a field that seeks to provide a connectivity map, or wiring diagram, of the brain [11]. While it is likely that a given BISA will not have the same kind of connectome as the members of the original species, some of the functional and structural connections may be retained, and interesting departures from the originals may be found.

(ii) BISAs may have viewpoint-invariant representations. At a high level of processing, your brain has internal representations of the people and objects that you interact with that are viewpoint-invariant. Consider walking up to your front door. You’ve walked this path hundreds, maybe thousands of times, but technically, you see things from a slightly different angle each time, as you are never positioned in exactly the same way twice. Your brain nevertheless represents the door in a way that abstracts from any particular vantage point. It seems difficult for biologically-based intelligence to evolve without viewpoint-invariant representations, as they enable categorization and prediction [12]. Such representations arise because a mobile system needs a means of identifying items in its ever-changing environment, so we would expect biologically-based systems to have them. A BISA would have little reason to give up viewpoint-invariant representations insofar as it remains mobile or has mobile devices sending it information remotely.

(iii) BISAs will have language-like mental representations that are recursive and combinatorial. Notice that human thought has the crucial and pervasive feature of being combinatorial. Consider the thought “wine is better in Italy than in China.” You probably have never had this thought before, but you were able to understand it. The key is that thoughts are combinatorial: they are built out of familiar constituents, combined according to rules. The rules apply both to complex constructions that are themselves built grammatically from primitive constituents and to the primitive constituents themselves. Grammatical mental operations are incredibly useful: it is the combinatorial nature of thought that allows one to understand and produce such sentences on the basis of one’s antecedent knowledge of the grammar and atomic constituents (e.g. wine, China). Relatedly, thought is productive: in principle, one can entertain and produce an infinite number of distinct representations, because the mind has a combinatorial syntax [13].

Brains need combinatorial representations because there are infinitely many possible linguistic representations, and the brain has only finite storage space. Even a superintelligent system would benefit from combinatorial representations. Although a superintelligent system could have computational resources so vast that it could mostly pair up utterances or inscriptions with stored sentences, it would be unlikely to trade away such a marvelous innovation of biological brains. If it did, it would be less efficient, since any finite store of sentences will fail to contain some sentence it encounters. (A toy illustration of a combinatorial grammar appears after item (v) below.)

(iv) BISAs may have one or more global workspaces. When you search for a fact or concentrate on something, your brain grants that sensory or cognitive content access to a “global workspace” where the information is broadcast to attentional and working memory systems for more concentrated processing, as well as to the massively parallel channels in the brain [14]. The global workspace operates as a singular place where important information from the senses is considered in tandem, so that the creature can make all-things-considered judgments and act intelligently, in light of all the facts at its disposal. In general, it would be inefficient to have a sense or cognitive capacity that was not integrated with the others, because the information from that sense or capacity would be unable to figure in predictions and plans based on an assessment of all the available information. (A crude sketch of this broadcast architecture also appears after item (v).)

(v) A BISA’s mental processing can be understood via functional decomposition. As complex as alien superintelligence may be, humans may be able to use the method of functional decomposition as an approach to understanding it. A key feature of computational approaches to the brain is that cognitive and perceptual capacities are understood by decomposing a particular capacity into its causally organized parts, which themselves can be understood in terms of the causal organization of their parts. This “method of functional decomposition” is a key explanatory method in cognitive science. It is difficult to envision a complex thinking machine that lacks a program consisting of causally interrelated elements, each of which consists of causally organized elements.
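Two of the expectations above lend themselves to toy illustrations in code. Both sketches are mine, not Schneider’s; every name in them is hypothetical, and they are meant only to make the abstract claims concrete. First, the combinatorial, recursive representations of (iii): a finite lexicon plus one recursive rule already generates an unbounded set of distinct “thoughts.”

    # Toy grammar: finite constituents + a recursive rule = unbounded thoughts.
    import random

    LEXICON = {
        "NP": ["wine", "tea", "bread"],
        "ADJ": ["better", "cheaper", "rarer"],
        "PLACE": ["Italy", "China", "France"],
    }

    def thought(depth: int = 0) -> str:
        """Build 'NP is ADJ in PLACE than in PLACE', optionally embedding the
        whole construction under 'I believe that ...' (the recursive rule)."""
        if depth > 0:
            return "I believe that " + thought(depth - 1)
        return (f"{random.choice(LEXICON['NP'])} is {random.choice(LEXICON['ADJ'])} "
                f"in {random.choice(LEXICON['PLACE'])} than in {random.choice(LEXICON['PLACE'])}")

    print(thought())   # e.g. "wine is better in Italy than in China"
    print(thought(2))  # two embedded clauses; nothing stops the nesting

Second, the global workspace of (iv): many parallel modules propose content, the most salient proposal wins access to the workspace, and the winner is broadcast to every subscribed system, enabling all-things-considered judgments.

    # Crude broadcast sketch of the workspace idea (not Baars's actual model).
    class GlobalWorkspace:
        def __init__(self):
            self.subscribers = []   # attention, working memory, planning, ...

        def subscribe(self, module):
            self.subscribers.append(module)

        def compete_and_broadcast(self, proposals):
            salience, content = max(proposals)  # most salient content wins access
            for module in self.subscribers:
                module(content)                 # ...and is broadcast to all systems

    gw = GlobalWorkspace()
    gw.subscribe(lambda c: print("working memory stores:", c))
    gw.subscribe(lambda c: print("planner acts on:", c))
    gw.compete_and_broadcast([(0.2, "ambient hum"), (0.9, "approaching predator")])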

All this being said, superintelligent beings are by definition beings that are superior to humans in every domain. While a creature can have superior processing that still basically makes sense to us, it may be that a given superintelligence is so advanced that we cannot understand any of its computations whatsoever. It may be that any truly advanced civilization will have technologies that will be indistinguishable from magic, as Arthur C. Clarke once suggested [15]. I obviously speak to the scenario in which the SAI’s processing makes some sense to us, one in which developments from cognitive science yield a glimmer of understanding into the complex mental lives of certain BISAs.

SOME ISSUES FOR FURTHER REFLECTION

In the spirit of encouraging future discussion, I will close by raising issues for future reflection.

Given the vast variety of possible intelligences, it is an intriguing question whether creatures with different sensory modalities would have the same kinds of thoughts, or think in ways similar to humans. There is a debate in the philosophy of mind that is relevant to this question. Contemporary neo-empiricists, such as the philosopher Jesse Prinz, have argued that all concepts are modality-specific, being couched in a particular sensory format, such as vision [16]. If he is correct, it may be difficult to understand the thinking of creatures with sensory experiences vastly different from ours. But I am skeptical. For instance, consider my earlier comment on viewpoint-invariant representations. At higher levels of processing, information seems to become less viewpoint-dependent. Similarly, it becomes less modality-specific: in the human brain, as processing ascends from particular sensory modalities to the association areas and into working memory and attention, information takes on a more neutral format.

But these issues are subtle and deserve a lengthier treatment. I pursued issues related to this topic in my monograph, The Language of Thought, which asked whether thinking is independent of the kind of perceptual modalities humans have, and whether it is prior to the kind of language we speak [13]. In the context of alien life or SAI, an intriguing question is the following: if there is an inner mental language that is independent of sensory modalities, having the aforementioned combinatorial structure, would this be some sort of common ground, should we encounter other advanced intelligences? (Many of these issues apply to the case of intelligent biological alien life as well, and could also be helpful in the context of the development of SAI on Earth.)

The ethical and metaphysical issues surrounding postbiological intelligence concern me greatly. Perhaps the best way to introduce the ethical and metaphysical issues is to consider that the post-biological cosmos approach involves a shift in our usual perspective about intelligent life in the universe. Normally, we think of encountering alien intelligence as encountering creatures with radically different biological features and sensory experiences. The shift of focus is twofold: first, the focus moves away from biology to superintelligent AI, and this will involve theorizing about the computational abilities of advanced artificial intelligence. Second, as we reflect on the nature of postbiological intelligence, we must be keenly aware that we may be reflecting upon the nature of our own descendants as well as aliens. In essence, the line between “us” and “them” blurs, and our focus moves away from biology to the difficult task of understanding the computations and behaviors of creatures that will be far more advanced than we are.

What does this all mean? In contrast to Ray Kurzweil’s utopian enthusiasm for the singularity, I do not see, in the astrobiology literature, normative evaluations of whether a postbiological existence is desirable for our species, and there has been little discussion of the singularity within contemporary metaphysics and philosophy of mind. But it is important to reflect upon the ethical, philosophical and social implications of all this. Would superintelligent AI, including our own postbiological descendants, be selves or persons? Could they be conscious? My own view is that the question of whether AI could be conscious is key: if the synthetic being in question is not capable of consciousness – if it doesn’t feel like anything to be it – then why would it be a self or person? I’ve discussed the issue of consciousness elsewhere [4], but since then I have become increasingly convinced that the question of machine consciousness is an open one that cannot be settled today. In addition to the matter of whether the substrate in question (e.g., graphene, silicon) supports consciousness, the devil is in the details of the particular AI design. That is, we would have to determine whether the architecture of the particular AI in question even employs conscious thought. Consciousness is associated with slower, more deliberative processing in humans, and it is unclear whether a superintelligence would even need conscious processing, having mastered so much already. What would be novel to it? And would consciousness even be associated with slower, deliberative processing in an AI in any case?

The science fiction treatment of androids may lead us to believe that machines can feel – for instance, consider the Samantha program in the film Her, or consider Asimov’s robot stories. But this is just science fiction, and the empirical and philosophical question of whether AI can be conscious remains open.

CONCLUSION

In this brief piece, I’ve discussed why it is likely that the alien civilizations we encounter will be forms of superintelligent AI (or “SAI”). I then turned to the difficult question of how such creatures might think, and provisionally attempted to identify some goals and cognitive capacities likely to be possessed by superintelligent beings. I discussed Nick Bostrom’s recent book on superintelligence, which focuses on the genesis of SAI on Earth; as it happens, many of Bostrom’s observations were informative in the present context [7]. Finally, I isolated a specific type of superintelligence that is of particular import in the context of alien superintelligence: biologically-inspired superintelligences (“BISAs”). I urged that if any superintelligences we encounter are BISAs, certain work in computational neuroscience, cognitive neuroscience and philosophy of mind may provide resources for at least a rough understanding of the computations of BISAs.

REFERENCES

[1] Cirkovic, M. and Bradbury, R. 2006, “Galactic Gradients, Postbiological Evolution and the Apparent Failure of SETI,” New Astronomy 11, pp. 628–639

[2] Dick, S. 2013, “Bringing Culture to Cosmos: the Postbiological Universe,” Cosmos and Culture: Cultural Evolution in a Cosmic Context, S. J. Dick and M. Lupisella eds., Washington, DC: NASA, online at http://history.nasa.gov/SP-4802.pdf

[3] Shostak, S. 2009, Confessions of an Alien Hunter, National Geographic (Washington, DC)

[4] Schneider, S. 2015, “Alien Minds,” in Discovery, Steven Dick, ed., Cambridge University Press (Cambridge)

[5] Davies, P. 2010, The Eerie Silence, Houghton Mifflin Harcourt (London)

[6] Bradbury, R., Cirkovic, M., and Dvorsky, G. 2011, “Dysonian Approach to SETI: A Fruitful Middle Ground?” Journal of the British Interplanetary Society, 64, pp. 156–165

[7] Bostrom, N. 2014, Superintelligence: Paths, Dangers, Strategies, Oxford University Press (Oxford)

[8] Guerrini, F. 2014, “DARPA’s ElectRx Project: Self-Healing Bodies Through Targeted Stimulation Of The Nerves,” Forbes, 8/29/2014, http://www.forbes.com/sites/federicoguerrini/2014/08/29/darpas-electrx-project-self-healing-bodies-through-targeted-stimulation-of-the-nerves/ Retrieved Sept. 30, 2014

[9] Miller, G. A. 1956, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review, 63, pp. 81–97

[10] Schneider, S. 2011a, “Mindscan: Transcending and Enhancing the Brain,” Neuroscience and Neuroethics: Issues At the Intersection of Mind, Meanings and Morality, J. Giordano ed., Cambridge University Press (Cambridge)

[11] Seung, S. 2012, Connectome: How the Brain’s Wiring Makes Us Who We Are, Houghton Mifflin Harcourt (Boston)

[12] Hawkins, J. and Blakeslee, S. 2004, On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, Times Books (New York)

[13] Schneider, S. 2011b, The Language of Thought: A New Philosophical Direction, MIT Press (Cambridge, MA)

[14] Baars, B. 2008, “The Global Workspace Theory of Consciousness,” The Blackwell Companion to Consciousness, M. Velmans and S. Schneider eds., Wiley-Blackwell (Boston), pp. 236–247

[15] Clarke, A. 1962, Profiles of the Future: An Inquiry into the Limits of the Possible, Harper and Row (New York)

[16] Prinz, J. 2004, Furnishing the Mind: Concepts and Their Perceptual Basis, MIT Press (Cambridge, MA)

[Go to Top]


THINKING OUTSIDE THE SETI BOX

Seth Shostak
SETI Institute
189 Bernardo Ave.
Mountain View, CA 94043
seth@seti.org

ABSTRACT

We consider the biological provincialism of traditional SETI, and why there are good arguments for thinking that the bulk of the intelligence in the cosmos is synthetic.  Given this possibility, the SETI community should consider how to conduct a meaningful search for intelligence that is not constrained to habitable worlds.  To that end, we consider some of the factors that might govern the behavior of highly advanced, cognitive machinery and some strategies that might aid in the discovery of same.

THE ANTHROPOCENTRIC BIAS

The premise of most SETI (Search for Extraterrestrial Intelligence) experiments was established with Frank Drake’s pioneering Project Ozma more than five decades ago [1].  Today’s efforts differ in scale, but not in approach: their strategy is to seek signals produced by cosmic inhabitants whose level of technology is at least as advanced as our own.

For more than two decades, SETI has been largely underwritten by private donations, and because of this the scientists involved are often pressured to make some estimate of the chances of success.  To this end, they will frequently invoke the well-known Drake Equation, which quantifies the number of galactic societies currently producing detectable signals.  If some estimate of the prevalence of transmitting sources can be made, then a timescale for SETI success can also be estimated.
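For reference, the equation in its standard form (this statement of it is mine, though the form itself is standard):

\[ N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L , \]

where R_* is the galactic rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per planetary system, f_l, f_i and f_c the fractions of those on which life, intelligence and communicative technology respectively arise, and L the length of time over which a society remains detectable.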

Unfortunately, the values of many of the parameters of this equation are still unknown, and the few for which new data have recently become available are little changed from the estimates made when the equation was first written.  The Drake Equation, while ubiquitous and helpful in formulating the problem of SETI, does little to determine the odds for any particular experiment.

Of possibly greater importance is the Equation’s influence in setting strategy.  It assumes that SETI will succeed only if there are at least a few thousand technically accomplished civilizations resident in the Milky Way.  Detectable societies are assumed to consist of a large number of individuals, resident on a planet that’s not only amenable to life but also able to beget and sustain complex organisms.  In other words, a world analogous to our own.

That view hasn’t changed in a half century.  New thinking on how to conduct SETI has been less about the nature of the beings we seek or their habitat, and more about their presumed behavior. 

As an example, a matter of popular discussion is whether signals from extraterrestrials are more likely to be deliberate beacons or accidental leakage. This discussion is largely motivated by the trend in our own society to shift to higher-efficiency communication modes (e.g., direct satellites and fiber optics in place of traditional broadcasting). This change has led many to opine that advanced civilizations will be economical, and will not generate significant leakage. However, while this argument sounds plausible, there’s no denying that it is highly parochial, based on human experience a scant century after the invention of practical radio and lasers.  And even this modest speculation on the conduct of extraterrestrials – that they will be more efficient users of energy than we are – has had little impact on SETI experiments.

In fact, experiments do what they are able, and are mostly indifferent to whether the signal being sought is intentional or otherwise.  SETI today continues to adopt the playbooks of the past: the aliens are analogous to us, only more advanced.  The circumstances of their environment are also presumed to be similar to ours.

Unsurprisingly, then, SETI practitioners have been heartened by recent discoveries of exoplanets.  The good news is that worlds akin to our own could exist in great abundance. Current estimates are that between 10 and 20 percent of all star systems host an Earth-size planet in the habitable zone [2].  This implies that tens of billions of these favored locales pepper the Galaxy.
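The arithmetic behind “tens of billions” is straightforward (my back-of-envelope, assuming the commonly quoted figure of roughly 2 × 10^11 stars in the Milky Way):

\[ (0.1\ \mathrm{to}\ 0.2) \times 2 \times 10^{11} \approx 2\ \mathrm{to}\ 4 \times 10^{10} , \]

i.e. some 20 to 40 billion Earth-size worlds in habitable zones.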

But there is also bad news.  At a time when the prospects for beings comparable to ourselves are improving, there is a slow-growing realization that biological intelligence may be only a short-lived – and possibly cryptic – stepping stone to the real thinkers of the cosmos: synthetic intelligence.

PROSPECTS FOR SYNTHETIC INTELLIGENCE

If researchers in the field of artificial intelligence (AI) are to be believed, we will invent machines that are our cognitive equals by mid-century.  Roboticist Hans Moravec has pointed out that the exponential improvement in digital electronics will produce workaday computers with reckoning power comparable to a human brain in less than a decade’s time [3].  This rapid betterment in computation has led some, such as Vernor Vinge and Ray Kurzweil, to predict a future time – the “singularity” – at which our own intellectual capacities will be swamped by that of our devices [4],[5].

Of course, there are already machines that can outperform the human brain in tasks generally regarded as “intelligent.”  The best chess-playing computer can beat the best grandmaster, and the recent triumph of IBM’s Watson computer against seasoned contestants on a television quiz show attracted widespread attention, if not admiration.  More recently, Google’s AlphaGo software beat a world-class human player at the game of Go, a game considerably more complex than chess.  But as AI entrepreneur Peter Voss has noted, these attainments merely point up the current situation, in which one can either build a machine that is excellent at a narrowly scoped task (e.g., chess) or one that is quite mediocre at many things [6].  In order to challenge the intellectual abilities of humans, what’s required is what is termed GAI – generalized artificial intelligence.

It is not the intent of this essay to either review or critique developments in AI research, but rather to assume that GAI machines will appear – if not in this century, then in the next.  The timing is of little consequence to the implications for SETI.  But the events following this development are straightforward:

1.  If our own example can be taken as typical, then GAI quickly follows on the heels of radio technology – within a few centuries.

2.  There is no reason to believe that the evolution of “wet ware” – augmentations of our own brains – can keep pace with GAI.

3.  Because artificial intelligence can quickly evolve (by its own design), it will soon outstrip the cognitive capability of biological beings.

4.  Artificial intelligence will be self-repairing, and therefore of indefinite lifetime.

5.  GAI will be the dominant form of intelligence for any society that has progressed even slightly beyond the point of being able to send signals into space.

6.  Unlike biology, which has been “engineered” bottom-up, GAI will be engineered top-down. We cannot hope to forecast what talents or interests it will have, but the one aspect of its functionality that seems safe to assume is survival. This sounds Darwinian, and therefore biological, but is essential if we are to find GAI now, billions of years into the history of the cosmos.

The bottom line is simple, if disquieting: biological brains will beget synthetic ones.  If this technical evolution is commonplace, then there’s reason to expect that the majority of the intelligence in the universe is non-biological.  This intelligence would not be dependent on water worlds, atmospheres, or planets at all.  Consequently the premise of most SETI – that we should expect to find signals from old, habitable worlds – could be wide of the mark [7],[8].

It seems probable that the future of our hunt for extraterrestrials will require more than just new equipment.  We’ll need to rethink what it is we seek.

SO HOW DO WE FIND IT?

Adapting our SETI strategies to the challenge of uncovering GAI may sound simple at first. Nothing more is required than to put less emphasis on targeting habitable planets, or even individual stars, and simply scan as much of the sky as possible.  However, there may be opportunities to increase our chances of success by augmenting this simple, brute-force approach with insights about the likely nature or behavior of synthetic intelligence.

First, we are probably well advised to avoid hubris.  There may be little we can fathom about the nature of artificial intelligence that might be the result of millions of generations of self-improvement – improvement predicated not on the slight and random modifications of Darwin, but on directed changes.  Such intelligence will surely be as superior to us as we are to the nematodes in the garden.  Consequently, we should not feel too sure about our speculations as to what GAI might do or how it might be detected.  Imaginative ideas about the interests and activities of synthetic beings are plentiful in fiction, but these ideas are vulnerable to anthropocentric bias.

However, there are at least a few aspects of GAI that seem less suspect:

1.  Assuming that for such machines more computation is better, they can be expected to prefer locations with abundant energy and an effective heat sink.  The former suggests the neighborhoods of early-type stars or black holes (either of the stellar variety or the massive objects hunkered at the centers of galaxies).  It’s been suggested that the outer regions of galaxies might be preferred locales for such machines because of their slightly lower temperatures, resulting in greater thermal efficiency [9].  However, given that the efficiency depends only on the temperature ratio between source and sink, this argument is of significance only if the energy source is no more than a few hundred degrees, as space is cold almost everywhere (see the quantitative note following item 5).

2.  The short timescales for self-improvement may set up a “winner take all” situation. Whatever machine first appears in a given part of the cosmos could endlessly trump others that arise, since even a cosmically short period of time is a great number of GAI generations, and the new kids on the block could never catch up.

3.  Given the dangers present in the universe, a machine might wish to buy insurance in the form of backup machines.  These could be kept at a distance that would minimize simultaneous annihilation, but linked to the mother machine so that updates could be continually offered. Detecting this telemetry might offer a way to discover GAI, although one can assume that the communication would be point to point and unlikely to be intercepted with our instruments.

4.  Another possible organization scheme for GAI might be hierarchical.  Social systems might make sense if the growth of information in a machine eventually becomes slow compared to the timescale for interaction with other machines (the light travel time between them).  In other words, if the new capability acquired per year by a GAI eventually becomes a very small fraction of its previously accumulated capability, then interchanging information makes sense, since that information is not rendered obsolete and irrelevant in the time it takes to effect the exchange (this condition is made quantitative in the note following item 5).

5.  Whether intelligent machines would have any interest in broadcasting (as opposed to point-to-point telemetry) is impossible to know.  One metric for intelligence is the ability to foresee danger and avoid it.  The cleverest GAI, by this measure, might be less concerned about revealing their presence with easily found signals.  They might also wish to communicate with other such machines that are largely outside their light cone, as these would have information that they could not obtain otherwise [10].
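Two of the points above can be made quantitative. For point 1, the relevant limit is the Carnot efficiency,

\[ \eta = 1 - \frac{T_{\mathrm{sink}}}{T_{\mathrm{source}}} , \]

so for a stellar source at, say, 5000 K, moving from a 30 K inner-galaxy sink to a 3 K outer-galaxy sink raises η only from 0.994 to 0.9994 – a negligible gain.  Only for a source of a few hundred kelvin does the colder sink matter: at 300 K, the same move raises η from 0.90 to 0.99.  For point 4, the exchange condition can be formalized (my notation) as

\[ \frac{1}{I}\frac{dI}{dt} \cdot \frac{d}{c} \ll 1 , \]

where I is a machine’s accumulated capability, dI/dt its growth rate, and d/c the light travel time to its partner: information exchange pays only when the fractional growth during transit is negligible, so that the traded information is not obsolete on arrival.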

These considerations offer a few plausible arguments as to where we should look for GAI. However, they promise little in terms of assuring SETI scientists that such machines would have any motive to make themselves known.

In the case of biological beings, we can safely assume the presence of curiosity, as this trait is necessary to divine the laws of nature and build transmitters we could find.  But artificial sentience might not share this type of curiosity.  Maybe after solving all the puzzles of science, GAI would be happy to indulge itself with endless entertainments – perhaps with Bostrom-like simulations [11].  If they are capable of self-repair (an assumption in all of the above), then it may be that their primary project is to forestall the heat death of the universe and an end to their own existence.

CONCLUSIONS

What might SETI practitioners do to increase their chances of detecting what is likely to be the most prevalent form of intelligence in the cosmos?  Unfortunately, the list is short. 

A search for unusual phenomena in the vicinity of high-density energy sources is a straightforward desideratum.  Another is to consider that the oldest of such machines might wish to contact their peers in other parts of the cosmos to compare notes and offer novel information.  This suggests an experiment in which SETI searches for signals (radio or optical) in the direction of stellar black holes or quasars that are antipodal.  For example, two stellar black holes on opposite sides of the sky might conceivably host GAI whose beamed data would pass through our neighborhood.

Perhaps the best strategy to find the universe’s intellectual giants is the least deliberate: simply be careful to note any unusual phenomena uncovered in the course of astronomical research. Are there nebulae with anomalous, depleted deuterium?  Do some stars or galaxies display unnatural infrared excess, a possible tipoff to energy-intensive residents [12],[13]?  Are there cosmological behaviors without natural explanation?
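The infrared tipoff follows from basic thermodynamics (my gloss, not the essay’s): whatever energy an energy-intensive civilization consumes must ultimately be re-radiated as waste heat, and by Wien’s displacement law a structure radiating at temperature T peaks at wavelength

\[ \lambda_{\max} \approx \frac{2898\ \mu\mathrm{m\,K}}{T} , \]

so a Dyson-like structure at T ≈ 300 K peaks near 10 μm – squarely in the bands surveyed by IRAS and WISE [12], [13].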

It is easy to design an experiment to find the aliens of sci-fi, for these are robustly similar to ourselves.  But when you don’t know your prey, the hunt can be hard.

REFERENCES

[1] Drake, F. 1960, “How can we detect radio transmissions from distant planetary systems,” Sky and Telescope 39, 140

[2] Petigura, E. A., Howard, A. W., and Marcy, G. W. 2013, “Prevalence of Earth-size planets orbiting Sun-like stars,” PNAS 110, No. 48, 19273

[3] Moravec, H. 2000, Robot: Mere Machine to Transcendent Mind, Oxford University Press (Oxford)

[4] Vinge, V. 1993, “The coming technological singularity,” Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a symposium held at NASA Lewis Research Center (NASA Conference Publication CP-10129)

[5] Kurzweil, R. 2005, The Singularity is Near, Viking Penguin (New York)

[6] Voss, P. 2015, http://www.agi-3.com/technology.html

[7] Shostak, S. 1998, Sharing the Universe, Berkeley Hills Books (Berkeley)

[8] Shostak, S. 2011, “Seeking intelligence far beyond our own,” International Astronautics Congress, IAC-11.A4.2.4

[9] Cirkovic, M. M. and Bradbury, R. J. 2006, “Galactic gradients, postbiological evolution, and the apparent failure of SETI,” New Astronomy 11, 628

[10] Windell, A. N. 2015, private communication

[11] Bostrom, N. 2003, “Are You Living in a Computer Simulation?” Philosophical Quarterly, 53, No. 211, 243

[12] Carrigan, R. 2009, “The IRAS-based whole-sky upper limit on Dyson spheres,” Ap. J. 698, 2075

[13] Griffith, R. L., Wright, J. T., Maldonado, J., Povich, M. S., Sigurdsson, S., Mullan, B. 2015, “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE,” arXiv:1504.03418 [astro-ph.GA]

[Go to Top]