Great new SciAm blog post on Stanley Miller and the origin of life

Great SciAm post today which is directly relevant to two of my previous posts: “Urey-Miller Experiment – A Dead End?” and “Historical vs. Operational Science”.  Related to the latter, the following statements from Horgan’s SciAm article, from an interview with Stanley Miller, are especially relevant:

Miller acknowledged that scientists may never know precisely where and when life emerged. “We’re trying to discuss an historical event, which is very different from the usual kind of science, and so criteria and methods are very different,” he remarked. But when I suggested that Miller sounded pessimistic about the prospects for discovering life’s secret, he looked appalled. Pessimistic? Certainly not! He was optimistic!

The great Stanley Miller puts this perfectly.  Yes, historical science is different from operational science, and that difference is a challenge for origin of life scientists.  But should we give up on striving to understand the origin of life?  Of course not!


The First Information Age: The Origin of DNA – Part 2

In my previous post, The First Information Age Part I, I showed that chance alone is not enough to produce a complex, functional biomolecule such as DNA.  In this post, I will explore the other option for its production: necessity.

Let’s begin with our good friend Richard Dawkins, since his example, producing “METHINKS IT IS LIKE A WEASEL” from a combination of letters and spaces, is so compelling and easy to understand.  He showed in his book “The Blind Watchmaker” that using “single-step selection of random variation”, or chance, the target phrase is unlikely ever to be produced.  But he then goes on to show that if you use what he calls “cumulative selection”, the production of the exact phrase “METHINKS IT IS LIKE A WEASEL” becomes not only possible, but highly probable on a very short time scale.  Here is how it works.  First, as in the previous experiment, a single random 28-character combination of letters and spaces is produced.  This phrase then “breeds”: it copies itself, with a certain probability of random error (mutation) in each copied generation.  After a certain number of progeny have been produced, the computer chooses the offspring closest to the target sequence, “METHINKS IT IS LIKE A WEASEL”, and repeats the procedure with that sequence as the new starting point (allowing it to “reproduce” with a chance of mutation).  After only 41 “generations” the target sequence is produced!  Only 41 generations!  So it seems the problem is solved: so long as the environment facilitates the retention and reproduction of the “good” sequences, the production of the specified, complex product is not only possible but highly probable – and on a very short timescale!
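The procedure is simple enough to sketch in a few lines of code.  Dawkins does not publish his program, so the brood size and mutation rate below are my own guesses; the exact generation count will therefore differ from his 41, but the qualitative result (convergence in a modest number of generations rather than eons) holds.

```python
# A minimal sketch of cumulative selection; brood_size and rate are assumed.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus a space

def mutate(parent, rate):
    # Copy the parent, replacing each character with a random one
    # with probability `rate` (the per-character mutation rate).
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(phrase):
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(brood_size=100, rate=0.05, seed=42):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while current != TARGET:
        generations += 1
        # Breed a brood of mutant copies and keep the best (the parent
        # competes too, so the score never regresses).
        brood = [mutate(current, rate) for _ in range(brood_size)]
        current = max(brood + [current], key=score)
    return generations

print(weasel())
```

Compare this with single-step selection, where every trial starts from scratch: retaining partial matches between generations is what collapses the search from 10^40 trials to a short walk.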

Let’s take a step back, however, and think about what Dawkins has done.  Although his simulation shows the power of mutation combined with selection toward a fixed target, it says very little about the undirected evolution of functional biomolecules.  He acknowledges this at the end of this section of his book: evolution does not work with a goal in mind; rather, it is driven by short-term environmental factors (i.e. necessity).  The problem is that this simulation is now famous, and many forget to acknowledge that it has no real relevance to actual routes toward the emergence of function from a random pool of non-functional molecules on the early Earth.  So far, we are no closer to finding a plausible route toward producing a functional DNA molecule through strictly abiotic processes…

I learned how much of a problem this truly is for origin of life scientists when I attended a conference this past spring whose sole focus was the origin of life on Earth.  I realized that the research falls into two “camps”.  Either you belong to the camp composed almost entirely of biologists, who assume the first functioning RNA molecule (the likely predecessor of DNA, as mentioned in the previous post) was already in existence and try to determine how you get from one RNA to a functioning “protocell”; or you are doing research on systems and reactions that would have been important well before even the first oligomers formed on the early Earth, determining how the first molecules themselves arose.  There is little research being done in the region between the two – and this is where the origin of RNA (and subsequently DNA) would fit in.  It turns out that the leading researchers either ignore the problem or simply admit that it is a hard one and continue with their own research.

There are a few, however, who do worry about these things.  One such person, whom I heard speak at this conference and who has recently published a paper on the topic, is Dr. Irene Chen.  In her research she runs computer simulations (complemented by a few experiments) in search of plausible prebiotic scenarios for the emergence of a functional RNA molecule.  In her recent paper, her group takes short random sequences of nucleic acids and shows that through a process called “template-directed ligation”, longer and more compositionally diverse oligomers are formed.  The idea is that the longer and more compositionally diverse the oligomers formed, the greater the chance that one of them will be functional.

Here’s basically how it works: they take a pool of short nucleic acid sequences of varying lengths and then add a catalyst (cyanogen bromide).  If an oligomer is six monomers long or longer, it can act as a template.  Two other oligomers (acting as substrates), each with three bases complementary to one end of the template, can then attach to the template, allowing a catalyzed reaction between the two substrate oligomers (“ligation”).  This template-directed ligation process allows longer oligomers to be produced by joining smaller ones – quite an advantage over building an oligomer monomer by monomer (a process which, by the way, is unfavorable in the bulk ocean for numerous reasons – although that is not the topic of this post).  At the end of their simulations, they did indeed find that through this mechanism the resultant oligomers were longer and more compositionally diverse, leading them to propose a general scheme of short oligomers feeding template-directed growth (see the figure in the paper, DOI: 10.1093/nar/gks065).
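To make the mechanism concrete, here is a deliberately cartoonish sketch of a single ligation event – my own toy illustration, not the model from the paper – which ignores strand orientation and reaction thermodynamics entirely.  A template of at least six bases recruits two substrates whose end trimers pair with its two halves, and the substrates are joined:

```python
# Toy illustration of template-directed ligation; all details are simplified.
import random

PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}  # Watson-Crick pairing (RNA)

def complement(seq):
    return "".join(PAIR[b] for b in seq)

def ligate_once(pool, rng):
    """Pick a template of >= 6 bases; if two substrates pair with its two
    3-base halves, splice them into one longer oligomer."""
    templates = [s for s in pool if len(s) >= 6]
    if not templates:
        return None
    template = rng.choice(templates)
    left_site, right_site = template[:3], template[3:6]
    # Substrates: one ending in the complement of the left site,
    # one beginning with the complement of the right site.
    lefts = [s for s in pool if s.endswith(complement(left_site))]
    rights = [s for s in pool if s.startswith(complement(right_site))]
    if lefts and rights:
        a, b = rng.choice(lefts), rng.choice(rights)
        return a + b  # the ligation product: longer than either substrate
    return None

rng = random.Random(1)
pool = ["".join(rng.choice("ACGU") for _ in range(rng.randint(3, 8)))
        for _ in range(2000)]
print(ligate_once(pool, rng))
```

Iterating this step is what lengthens the pool: each product is itself a candidate template for the next round, which is why growth by ligation outpaces growth monomer by monomer.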


Again, as in Dawkins’s work, the purely random counterpart (ligation in the absence of any template direction) actually results in a decrease in compositional diversity (BAD when you need to increase diversity for a better chance at producing a functional molecule).  Therefore, template-directed ligation could be a plausible route toward the production of a functional, information-bearing molecule.

But, as with Dawkins’s work, we must take a step back and analyze what has actually been done here.  First, it is important to note that no functional biomolecules were formed in the work done by Chen’s group.  Although their mechanism is the best and most prebiotically relevant that I have come across, they still do not show a continuous mechanism from a pool of non-functioning oligomers to an information-bearing molecule.  Second, they assume the presence of a diverse pool of oligomers to begin with.  Although a 6-mer is not difficult to imagine, producing one of even this small length is not as easy as it sounds in the absence of an enzyme and in the bulk-ocean environment of the early Earth.  In fact, it is still an active field of research.

So what do we take from all of this – should we throw in the towel like many of the Intelligent Design proponents have done and say that science will NEVER be able to explain the emergence of function?  It is a difficult problem to be sure, but what great fundamental problems in science aren’t?  It is intriguing that there has been such difficulty with finding a solution to these very fundamental origin of life questions.  We have made giant leaps in technological advancements (the so-called current Information Age) in recent years, but have still failed to explain the emergence of the First Information Age.  Is this beyond the realm of science?  I think not.  It may, however, be necessary to alter the way we think about life and its origins in general – the current reductionistic paradigm of science may be unable to explain life’s origins.  Instead, we may need a new paradigm – emergence.  Stay tuned, as this will be the topic of my next post!

As always, comments and questions are welcome!

The Higgs Boson – The End of Reductionism?

The above article is another fantastic one by Ashutosh Jogalekar on his SciAm blog “The Curious Wavefunction”.  In this post, he discusses the limits of the current paradigm under which science operates: reductionism.  Further, he presents the vast evidence for the role of emergence in science, especially when it comes to origins (a point which I, of course, look upon with great interest).

I wrote a response to this blog post, and have copied it below:

“As an origin of life scientist, I completely agree that one of the areas where reductionism fails to provide a complete picture is when trying to describe origins, but this is not something that is widely accepted amongst scientists.  Reductionism, as you have described here, is the tried and true paradigm under which science has successfully operated for many years now.  Thus, any new paradigm is difficult to introduce without causing a little dissension in the ranks.

It doesn’t help that emergence once had strong ties to vitalism, the once popular (but now mostly dormant) theory that there was a vital force which separates life from non-life – essentially proposing that living things weren’t even composed of the same “stuff” as non-living entities.  British Emergentism (as described by Brian McLaughlin) unfortunately resembled vitalism in that it proposed the existence of configurational forces, which were an attempt at quantifying emergent properties, but required new laws of physics (a new fundamental force for aggregates). 

The emergence you describe here is not the same emergence as was proposed originally by the British Emergentists – and yet the bias still remains in some circles.  Emergence is as yet poorly defined in terms of practical applications, and thus to the working scientist it is more or less useless.  So, the question I pose to you (and which I will also post to my own blog) is: how is emergence useful to the everyday, practicing scientist?  We all understand how to operate under the reductionist paradigm – we constantly strive to break down every phenomenon into its most fundamental parts – but how would this change if we all acknowledged the existence of emergence in science?

Please do not misunderstand me – I fully believe that emergence is essential to a full understanding of scientific phenomena – most especially when we are talking about origins.  And yet, something that has bothered me is whether or not thinking of things such as emergence is merely a task for the more philosophically minded people, or whether there is some application for the everyday scientist…”

So, what do you think?  Is emergence useful for the everyday practicing scientist??

I will write a more extended post on this topic in the coming weeks (first, I must finish my series on the origin of DNA…but it is on the list) – comment with any ideas you may have!

The First Information Age: The Origin of DNA – Part I


If you are reading this right now you are riding on the train of what some call “The Information Age”.  The internet, cell phones, computers, etc. were all made in the burst of technological advancements made in the very recent past – but information itself is much older.  In fact, information is as old as life itself.  Many argue that the biomolecule most essential to life is DNA.  DNA actually encodes information through its chemical sequence, and can then transmit that information.  Hence, the arrival of DNA on early Earth constitutes the first “Information Age” – the first time when a system could not only carry information, but use that information to perform a function.  How did such a molecule, with its intricate structure and specific sequences necessary to store information, arise on early Earth through undirected natural processes?  This is the topic of this series of posts.

Function is also a word we are all familiar with.  Your car is said to “function” when you go out to the garage, put the key in the ignition, and the car starts.  Our bodies are well-oiled machines, again described as functioning, with each component pulling its weight by performing its own specific function for the machine as a whole.  In the origin of life, a question of paramount importance is how, from a bath of molecules of varying complexity, some of which may resemble the biomolecules necessary for life (DNA, RNA, or proteins), the even more complex, very specific biomolecules that compose life as we know it were selected.  Function is one of the necessary components separating “life” from “non-life”.  It is also essential for natural selection to act: you must have a certain degree of functional diversity, i.e. enough different things exhibiting functions, some of which may be advantageous in the environment provided.  Both evolutionists and their critics question whether chance encounters alone are enough to explain the origin of function, and this will primarily be the topic of Part I of this series of posts.  DNA (or what is thought to be its predecessor, RNA, differing only in the sugar used in its chemical make-up) is commonly used as the key example, since it is essential to life through its information-bearing properties.

There are essentially two routes to the production of specific molecules on the early Earth: chance or necessity (an idea first applied to the molecules of the origin of life by Aleksandr Oparin, but essentially just an extension of Darwin’s original ideas).  The first, chance, is essentially what it sounds like: complex molecules arose literally through random, chance interactions, with no external driving forces.  Many have taken this mechanism and calculated the probability of a complex biomolecule having arisen on the early Earth.  One such calculation is presented in Stephen C. Meyer’s 2009 book “Signature in the Cell”.  It is important to note that Meyer is an intelligent design proponent, but, as I will show later, calculations such as these are performed by both evolutionists and their skeptics.  And yes, as a little disclaimer, I do read both intelligent design literature and evolutionary literature – I am a firm believer in being fully educated from the primary sources on all sides of a debate.  Anyway, back to the issue at hand: in his book Meyer presents a few different calculations (using varying assumptions) of the odds of producing any functioning 150-amino-acid protein sequence from chance alone.  All of these calculations result in a final number of about 1 in 10^164 – a number which, to most people, is unfathomable.  To put it in perspective, Meyer compares it to the chance of finding one marked proton in the universe (1 in 10^80) and to the number of events since the beginning of the universe (10^140)…so in conclusion, it is literally impossible (according to Meyer and his numbers, at least) to form even one functional 150-amino-acid protein from chance alone.

As I asserted earlier, the improbability of chance alone is also acknowledged by evolutionists.  Take one of the most celebrated figures in popular science circles concerning evolution: Richard Dawkins.  In his book “The Blind Watchmaker”, he also shows that chance alone is unlikely to account for the complexity of life seen today.  He uses the now famous example of assembling the phrase “METHINKS IT IS LIKE A WEASEL” from Hamlet using a random combination of letters and spaces.  Using simple statistics, the probability of getting the first letter in the sequence, “M”, is 1 in 27.  The entire phrase is 28 characters long; therefore, the probability of randomly generating the entire sequence is (1/27)^28 (1/27 multiplied by itself 28 times), which comes to “about 1 in 10,000 million million million million million million” (Dawkins, 1996).  These odds are so long that producing the exact phrase from Hamlet through “single-step selection of random variation” (i.e. chance), as Dawkins calls it, is essentially hopeless.  So, Dawkins comes to the same conclusion as Meyer – getting even a simple phrase from Hamlet, much less a complex, information-bearing biomolecule such as DNA, is very improbable by chance alone.  Therefore, we must move on to Oparin’s other option: necessity.
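Dawkins’s arithmetic is easy to verify directly:

```python
from math import log10

# 27 possible characters (26 letters plus a space) at each of 28 positions.
chars, length = 27, 28
sequence_space = chars ** length  # number of equally likely 28-character strings

# The order of magnitude: about 10^40 possibilities, i.e. odds of 1 in 10^40,
# matching Dawkins's "1 in 10,000 million million million million million million"
# (10^4 x 10^36 = 10^40).
print(round(log10(sequence_space)))  # 40
```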

As you can imagine, most scientists have given up on chance alone being enough for the origin of life on Earth – which is not to say they have turned to supernatural sources.  Rather, they now search for environments in which the chemical reactions needed to form these molecules are more favorable.  Although chance will always comprise a portion of the physical and chemical processes leading up to the production of a biomolecule such as DNA, these processes can be driven by environmental factors as well – the influence of necessity.  If the environment favors one reaction over another, that reaction will be enhanced, resulting in the production of certain products over the distribution that would result from chance alone.  In effect, the environment skews the odds for a particular reaction.  This sounds great in theory, but is there any evidence that this could be the case?  Is there scientific research being done in this area, or is it merely invoked to overcome the challenges posed by the existence of such a complex, essential molecule as DNA?  This will be explored further in my next post – in fact, this is currently an area of intense interest to origin of life scientists, and there are those attempting to tackle it (including Dawkins himself – could you imagine him leaving the issue as stated above??).
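The idea of an environment skewing product distributions can be shown with a toy simulation – the reactions and weights below are entirely made up, purely to illustrate the principle.  Under pure chance every competing reaction is equally likely; an environment that favors one pathway weights the draw, and the product mix shifts accordingly:

```python
# Toy "chance vs. necessity" sketch with invented reactions and weights.
import random
from collections import Counter

reactions = ["A+B -> useful polymer", "A+A -> tar", "B+B -> tar"]

def product_mix(weights, trials=10_000, seed=0):
    # Draw `trials` reaction events with the given relative rates.
    rng = random.Random(seed)
    return Counter(rng.choices(reactions, weights=weights, k=trials))

print(product_mix([1, 1, 1]))  # pure chance: roughly a third each
print(product_mix([8, 1, 1]))  # favorable environment: the polymer dominates
```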

Urey-Miller Experiment – A Dead End?


Among the many issues surrounding the origin of life on Earth that must be solved is the origin of the small molecules needed to build the more complex molecules (like proteins or DNA) necessary to living systems.  The first to really attack this problem experimentally with success were Stanley Miller and Harold Urey back in the 1950s.  At that time, Miller was a graduate student of Urey’s at the University of Chicago.  They were operating under the fundamental assumption that the early Earth’s atmosphere was reducing – meaning the atmosphere was full of hydrogen (H2) and lacking in oxygen (O2).  In his ground-breaking experiment, Miller simulated lightning (using a spark discharge) in such a reducing atmosphere, composed of hydrogen, methane (CH4), and ammonia (NH3), circulated over water.  To his (and the rest of the scientific community’s) surprise, among the products were some of the amino acids essential for life (amino acids are the building blocks of proteins, one of the three necessary biomolecules for life along with RNA and DNA).  This was a momentous result!  For many years thereafter, scientists performed further spark-discharge experiments (simulating lightning), searching for – and finding – other necessary building blocks for life.

But is a reducing atmosphere plausible?  Most now think not.  Rather, the early Earth atmosphere is thought to have been controlled by volcanic outgassing similar to today’s volcanic emissions – resulting in an atmosphere composed primarily of carbon dioxide with some nitrogen and water, but only small amounts of hydrogen.  Unfortunately, this “neutral” atmosphere, similar to the current composition of the atmospheres of Mars and Venus, is essentially a dead end for spark-discharge experiments, with essentially no useful molecules being produced.

So – is the Urey-Miller experiment useless in terms of the production of the first building blocks for life?  Not necessarily, according to O. B. Toon and coworkers at the University of Colorado.  They are reviving the feasibility of a reducing atmosphere by arguing that although the hydrogen levels coming from early Earth volcanoes would have been lower, the escape rate of hydrogen to space would also have been slower, retaining a significant concentration of hydrogen in the atmosphere (see their article published in Science in 2005 for the real science).  This would re-validate the Urey-Miller experiments!

So, what is the conclusion??  The short answer is that there still is no consensus within the scientific community as to whether or not spark discharge was a feasible way to make the building blocks needed for life on early Earth.  It is difficult to determine the exact composition of the early atmosphere, and thus scientists are still working on the problem.

What if spark discharge experiments are a dead end?  Is there no hope for the production of these necessary molecules on early Earth?  Should we just give up?  Of course not – there are two other plausible theories on the origin of these molecules: synthesis in hydrothermal vents (spots on the ocean floor where heated water from volcanic activity spews out) and transport from space to Earth by meteors and/or comets (there were many MANY more impacts from these on early Earth).

So, going back to the hype surrounding all origin of life theories: many strictly naturalistic origin of life proponents still maintain the validity of the original Urey-Miller experiment in public settings (like the Museum of Natural History in Washington D.C., at least as of a few years ago when I last visited…), without communicating the challenges that this original experiment now faces within the scientific community.  On the other hand, critics of naturalistic origins completely discount spark-discharge experiments in the origin of life, claiming that there are NO plausible naturalistic routes to the production of the simple molecules needed in larger biomolecules (like DNA).  In reality, the scientific community has presented results which lie somewhere between these two extreme positions.  It is important that both sides of this argument recognize that scientists haven’t quite figured this one out, so taking a strong stance on either side of the fence is probably a little premature…