In my previous post, The First Information Age Part I, I showed that chance alone is not enough to produce a complex, functional biomolecule such as DNA. In this post, I will explore the other option for its production: necessity.
Let’s begin with our good friend Richard Dawkins, since his example, producing “METHINKS IT IS LIKE A WEASEL” from a combination of letters and spaces, is so compelling and easy to understand. In his book “The Blind Watchmaker”, he showed that “single-step selection of random variation”, or chance, is extremely unlikely to produce the target phrase. But he then goes on to show that if you use what he calls “cumulative selection”, the production of the exact phrase “METHINKS IT IS LIKE A WEASEL” is not only possible but highly probable on a very short time scale. Here is how it works: first, as in the original experiment, a single random combination of letters and spaces is produced, 28 characters in length. This single phrase then “breeds”: it copies itself, with a certain probability of random error (or mutation) in each copied generation. After a certain number of progeny have been produced, the computer chooses the progeny closest to the target sequence, “METHINKS IT IS LIKE A WEASEL”, and repeats the procedure with that progeny as the new starting sequence (allowing it to “reproduce” with a chance of mutation). After only 41 “generations” the target sequence is produced! Only 41 generations! So it seems the problem is solved: so long as the environment facilitates the enhanced retention and reproduction of the “good” sequences, the production of the specified, complex product is not only possible but highly probable, and on a very short timescale!
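The procedure is simple enough to sketch in a few lines of code. Here is a minimal version; the population size (100 offspring per generation) and the 5% per-character mutation rate are my own assumptions, since Dawkins does not report his exact parameters:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space

def score(candidate):
    # Count the positions that already match the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate):
    # Copy the parent; each character has probability `rate` of being
    # replaced by a random character (the "copying error").
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(pop_size=100, rate=0.05):
    # Start from a single random 28-character string, then apply
    # cumulative selection: breed, keep the closest copy, repeat.
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent, rate) for _ in range(pop_size)]
        parent = max(offspring + [parent], key=score)  # keep the best
        generations += 1
    return generations

print(weasel())  # prints the number of generations needed to hit the target
```

Note the contrast with single-step selection: nothing here ever has to guess all 28 characters at once; each generation only has to improve slightly on its parent, which is why the run finishes so quickly.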
Let’s take a step back, however, and think about what Dawkins has done. Although his simulation shows the power of cumulative selection toward a predetermined target, it says little about the undirected evolution of functional biomolecules. He acknowledges this at the end of this section of his book, noting that evolution does not work with a goal in mind; rather, it is driven by short-term environmental factors (i.e. necessity). The problem is that this simulation is now famous, and many forget that it contributes nothing to actual routes toward the emergence of function within a random pool of non-functional molecules on the early Earth. So far, we are no closer to finding a plausible route toward producing a functional DNA molecule through strictly abiotic processes…
I learned how much of a problem this truly is for origin-of-life scientists when I attended a conference this past spring whose sole focus was the origin of life on Earth. I realized that there are two “camps” of scientific research. One camp, composed almost entirely of biologists, assumes the first functioning RNA molecule (the likely predecessor of DNA, as mentioned in the previous post) was already in existence and tries to determine how to get from one RNA to a functioning “protocell”. The other works on the systems and reactions that would have been important well before even the first oligomers formed on the early Earth, asking how the first molecules themselves arose. There is little research being done in the region between the two, and this is precisely where the origin of RNA (and subsequently DNA) fits. It turns out that the leading researchers either ignore the problem or simply admit that it is a hard one and continue with their own research.
There are a few, however, who do worry about these things. One such person, whom I heard speak at this conference and who has recently published a paper on the topic, is Dr. Irene Chen. In her research, she runs computer simulations (complemented by a few experiments) in which she tries to find plausible prebiotic scenarios for the emergence of a functional RNA molecule. In her recent paper (http://nar.oxfordjournals.org/content/40/10/4711.full-text-lowres.pdf), her group takes short random sequences of nucleic acids and shows that, through a process called “template-directed ligation”, longer and more compositionally diverse oligomers are formed. The idea is that the longer and more compositionally diverse the oligomers, the greater the chance that one of them will be functional.
Here’s basically how it works: they take a pool of short nucleic acid sequences of varying length and add a catalyst (cyanogen bromide). If an oligomer is six monomers or longer, it can act as a template. Two other oligomers (acting as substrates), each with three bases complementary to one end of the template, can then attach to the template, allowing the catalyzed reaction between the two substrate oligomers (“ligation”). Template-directed ligation thus produces longer oligomers by joining smaller ones, quite an advantage over building an oligomer monomer by monomer (a process which, by the way, is unfavorable in the bulk ocean for numerous reasons, although that is not the topic of this post). At the end of their simulations, they did indeed find that this mechanism yields longer and more compositionally diverse oligomers, leading them to propose the following general scheme (image from the above-linked paper, DOI: 10.1093/nar/gks065):
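To make the mechanism concrete, here is a toy version of template-directed ligation in code. This is my own cartoon model, not Chen’s actual simulation: it ignores antiparallel strand orientation, reaction kinetics, and substrate consumption, and pinning the two binding sites to the template’s first six bases is an arbitrary simplification.

```python
import random

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}  # RNA base pairs

def complement(site):
    # Base-by-base complement; strand orientation is ignored in this toy.
    return "".join(COMP[b] for b in site)

def try_ligation(template, pool, rng):
    """Attempt one template-directed ligation event: two substrates from
    the pool bind adjacent 3-base sites on the template (one by its tail,
    one by its head); if both are found, they are joined end to end."""
    if len(template) < 6:          # only 6-mers and longer can template
        return None
    left_site, right_site = template[:3], template[3:6]
    lefts = [s for s in pool if s.endswith(complement(left_site))]
    rights = [s for s in pool if s.startswith(complement(right_site))]
    if not lefts or not rights:
        return None
    return rng.choice(lefts) + rng.choice(rights)  # the ligated product

def simulate(pool, steps=500, seed=0):
    # Repeatedly pick a random template and attempt a ligation, adding
    # any product back to the pool (substrates are not consumed here).
    rng = random.Random(seed)
    pool = list(pool)
    for _ in range(steps):
        product = try_ligation(rng.choice(pool), pool, rng)
        if product is not None:
            pool.append(product)
    return pool
```

Even in this stripped-down form, the key feature survives: every ligated product is longer than either of its substrates and can itself serve as a template, so the length distribution of the pool ratchets upward over time.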
Again, as with Dawkins’s work, the corresponding random process (ligation in the absence of any template direction) actually results in a decrease in compositional diversity (bad, when you need to increase diversity for a better chance at producing a functional molecule). Template-directed ligation could therefore be a plausible route toward the production of a functional, information-bearing molecule.
But, as with Dawkins’s work, we must take a step back and analyze what has actually been done here. First, it is important to note that no functional biomolecules were formed in the work by Chen’s group. Although their mechanism is the best and most prebiotically relevant I have come across, they still do not show a continuous mechanism from a pool of non-functioning oligomers to an information-bearing molecule. Second, they assume the presence of a diverse pool of oligomers to begin with. Although a 6-mer is not difficult to imagine, producing one of even this small length, without an enzyme and in the bulk-ocean environment of the early Earth, is not as easy as it sounds. In fact, it is still an active field of research.
So what do we take from all of this – should we throw in the towel like many of the Intelligent Design proponents have done and say that science will NEVER be able to explain the emergence of function? It is a difficult problem to be sure, but what great fundamental problems in science aren’t? It is intriguing that there has been such difficulty with finding a solution to these very fundamental origin of life questions. We have made giant leaps in technological advancements (the so-called current Information Age) in recent years, but have still failed to explain the emergence of the First Information Age. Is this beyond the realm of science? I think not. It may, however, be necessary to alter the way we think about life and its origins in general – the current reductionistic paradigm of science may be unable to explain life’s origins. Instead, we may need a new paradigm – emergence. Stay tuned, as this will be the topic of my next post!
As always, comments and questions are welcome!