What is life? Do we need to know to understand its origin?

What is life? This is a question which has plagued philosophers and scientists alike for centuries. Fundamentally, this is a question about the nature of life – what most basic property or function must a system have to be considered “living”? Over 2000 years ago, Aristotle proposed an answer to this question. He described life as having two essential abilities: self-nutrition and self-reproduction. In more modern terms, self-nutrition is essentially metabolism – the ability of a system to sustain or feed itself. Self-reproduction is just what it sounds like – the ability of a system to reproduce, thereby making more generations of itself. Since the time of Aristotle, many have similarly tried to define life, with efforts ranging from the scientific to the supernatural. Today, there is still no consensus as to what makes something living, but many in the astrobiological community go with the definition decided upon by NASA.

According to NASA, “Life is a self-sustained chemical system capable of undergoing Darwinian evolution”. Now, I think an entire book could be written dissecting this definition in gory detail and analyzing whether it is actually useful as a working definition, but that is not the goal here. Rather, I would like to point out a few things. First, this definition encompasses two essential features of life: life must be a self-sustained chemical system and it must be able to undergo Darwinian evolution. Being a self-sustaining system means, by definition, that the system can feed/sustain itself, i.e. has some sort of metabolism. Having the ability to undergo Darwinian evolution implies a genetic code coupled with some form of physical expression upon which natural selection can act (a genotype/phenotype distinction) and hence the ability to pass on that genetic system to future progeny, i.e. the ability to reproduce. Isn’t it interesting that over 2000 years later we are still defining life, albeit with some new terminology, in the same way Aristotle did: as requiring both metabolism and reproduction?

So, we now know that both metabolism and reproduction are common properties of life – but is one more fundamental than the other? You may be wondering why we would even attempt to separate the two most fundamental properties of life as we know it. The problem is that many definitions of life do just this – they focus on one property or the other in an attempt to whittle away at what we know of life until we reach its most fundamental core – always striving to answer the question of what makes living things different from non-living ones. Now, it is important to note that there is little scientific data to suggest that this is warranted. As far as we know, by observing the examples of life that we have on Earth, BOTH reproduction and metabolism are essential to life. And yet, the question is still being asked – which is MORE fundamental? Asking this question pushes what we know of the nature of life to its limits. Can you have a living system composed of only a genetic code? Would a metabolic network be considered alive?

Oddly enough, these questions are not being pursued by those who study the nature of life, but rather by those who study its origins. Look at the two leading origin of life theories: the RNA world and the small molecule world. The RNA world theory asserts that the key molecule at life’s origins was a self-catalytic RNA molecule, and that this molecule spontaneously formed on early Earth with the ability to replicate itself. The RNA world presupposes that the most essential feature of life is the ability to reproduce, i.e. the possession of a genetic code. The small molecule world, on the other hand, asserts that autocatalytic networks of chemical reactions emerged from a random assemblage of small molecules, and that this transition marks the jump from non-life to life. The small molecule world thus presupposes that metabolism is more essential to life. These two “camps” are bitter rivals in the scientific community, with each side being 100% sure that they have the origin of life figured out. Isn’t it interesting, though, that the lines drawn in the sand in theories on the origin of life represent the same two things concluded to be essential to the nature of life? Are the nature of life and the origin of life necessarily intertwined? Must we understand one to understand the other?

Strictly speaking, even if we fully understand the nature of something, we cannot infer its origin. Take a simple example: a chair. Let’s assume it is a simple wooden chair. First, we can determine from examining its composition that the chair is indeed made of wood. Then, we could identify the type of wood and hence the type of tree that the wood came from – possibly even the area of the world from which it originated. But, can we determine exactly how it originated? Can we know from only its composition today how its seed found the ground? Or what tree its seed came from? Or how far the seed travelled before finding its home in the ground? Or, from only a study of the nature of the chair itself, can we even know it was formed from a seed at all? This simple, rather naïve example illustrates that although we can study something through careful examination of the specimen in front of us and determine its nature, we are unable to infer its origin.

Now, following along this same example, what if we wanted to find out the tree’s origin? How would we go about doing this? First, we must know as much as possible about the nature of the tree that was used to fashion the chair. Then, using our background knowledge of how other trees have been formed (which we are able to directly observe), we would assert that the tree of interest was formed in a similar way (this brings back the idea of historical vs. operational science, the subject of one of my previous posts). From there, we can continue to reach deeper and deeper to extrapolate the process by which our particular tree came into existence. So, although a thorough knowledge of the nature of something does not imply an understanding of its origin, it is an essential part of the process. Understanding the nature of life therefore gives origin of life researchers a goal upon which to base their research.

Let’s return to the two conflicting origin of life camps: RNA world vs. small molecule world. Why are scientists asking an origins question drawing lines in the sand on a nature question? This should not be the case. There has been no scientific conclusion showing that either metabolism or reproduction is MORE essential to life, so there should not be any origins theories claiming one or the other as the end goal of their research. We know that BOTH are essential to life. Therefore, a legitimate origin of life theory at this point in time, aiming to find an origin for life as we know it on Earth today, should include both a metabolic and genetic (reproduction) component. Rather than fighting with each other about whose theory is more valid, we should be attempting to soften the lines between these theories to create one which incorporates both of the components we know to be essential to life on Earth.

The Higgs Boson – Was it worth the money?

There has been a lot of talk lately about the recent discovery of the Higgs Boson – a particle whose existence was predicted by the Standard Model of Particle Physics, and is thought to be what gives everything mass. Its discovery is quite a big deal in the physics community with even its predicted existence causing much controversy in recent years. But, what does the proven existence of the Higgs Boson do for the everyday person? This is an important question, especially considering the total cost of its discovery – $13.25 billion
according to Forbes (http://www.forbes.com/sites/alexknapp/2012/07/05/how-much-does-it-cost-to-find-a-higgs-boson/)! To be fair, the Higgs certainly is not the only information gained by the Large Hadron Collider (LHC) at CERN, but so far, it is the most ground-breaking discovery made – and one of the most anticipated in all of physics.

So taking that all into account, let’s ask the question again – Is a particle accelerator such as the LHC, which has the potential to make a discovery such as the Higgs, worth $13.25 billion? Almost 20 years ago, the US government answered this question – and their answer was a resounding no.

In 1991, the US began construction on its own particle accelerator in Texas called the Superconducting Supercollider (SSC) – one which would top all others, even the LHC (in size, energy capabilities, etc.). Congress originally approved its construction with a budget of $4.4 billion. By 1993, it was very apparent that this budget would nowhere near cover the costs, and thus, with a new projected cost of around $12 billion, Congress decided the money was not worth the outcome. To come to these decisions, scientists were called upon to speak in Congressional hearings, where both those in support and those in opposition were allowed to attempt to sway Congress in their own direction. One of those to speak in the original Congressional hearing in 1986 (before any funding was allocated for the construction of the SSC) was the famous theoretical physicist Steven Weinberg. Following his Congressional testimony, a paper appeared in Nature (http://www.nature.com/nature/journal/v330/n6147/abs/330433a0.html) transcribing a talk Weinberg gave at Cambridge regarding the controversy over the accelerator’s construction. In his talk, Weinberg asserts that arguably the most important reason the SSC is worth the money is that particle physics is the “most fundamental” of all sciences. He claims that all science has a “sense of direction”, with “arrows” that ultimately point to a common source. This source, according to Weinberg, lies at the level of the very small, and the smallest of the small are discovered in particle accelerators such as the SSC or the LHC. Thus, since all of science naturally points toward these more and more fundamental entities, their discovery is essential to the furthering of all other scientific endeavors (even if rather indirectly).

Weinberg quotes Richard Feynman at the beginning of the article, who famously said, “The philosophy of science is just about as useful to scientists as ornithology is to birds”. This may be true in many areas of everyday scientific research, but underneath the arguments for and against spending billions of dollars on particle physics research is ultimately a philosophical issue (one which is brought up directly by Weinberg): Is all of science reducible? First, I must define what I mean by “reducible”. Reduction implicitly assumes that science exists in a hierarchy: biology is reducible to chemistry, chemistry to molecular physics, and molecular physics to particle physics. This is directly analogous to Weinberg’s “arrows” – if all of science is reducible, then in principle all scientific disciplines would be fully describable using particle physics. Now, this takes nothing away from the importance of maintaining different levels of analysis for everyday research. It is essential in all modes of science to take bite-sized chunks of the problem at hand and analyze just one bite before taking another and trying to connect the two. For example, the techniques of chemistry are still essential to a better understanding of chemical reaction mechanisms, but at their core, these chemical reactions are fully describable by electrostatic interactions and quantum mechanics – fundamental physics.

Weinberg, although denying being an uncompromising reductionist, expresses views similar to those I have just described in his Nature article. One quote from that paper along those lines is:

“No one thinks that the phenomena of phase transitions and chaos…could have been understood on the basis of atomic physics without creative new scientific ideas, but does anyone doubt that real materials exhibit these phenomena because of the properties of the particles of which the materials are composed?”

He then goes on to say that even in chemistry, although we know the properties of molecules are in principle reducible, we are simply unable to reduce them with the tools available to us at this time (computer power, etc.).

Another good quote from this paper is, “no biologist today would be content with an axiom about biological behavior that could not be imagined to have an explanation at a more fundamental level. That more fundamental level would have to be the level of physics and chemistry, and the contingency that the Earth is billions of years old. In this sense, we are all reductionists today.”

So, if we take the view that particle physics lies at the core of all sciences and is therefore the most fundamental of all of the sciences, is new research in that area worth the billions of dollars spent? Even though the general public will never really feel the impact of any of its discoveries (unlike other scientific fields, such as synthetic chemistry aimed at drug development), IF particle physics really is the core of all other science, then I would have to say yes, it is worth the billions. BUT, what if this is not the case? As hinted at in a few of my earlier posts, there is an alternative to the philosophical and practical notion of reduction in the sciences – EMERGENCE. Stay tuned, as I will discuss a few of the current thoughts on the ill-defined concept of emergence soon!

Comments, questions, or suggestions are always welcome!

Great new SciAm blog post on Stanley Miller and the origin of life

http://blogs.scientificamerican.com/cross-check/2012/07/29/stanley-miller-and-the-quest-to-understand-lifes-beginning/

Great SciAm post today which is directly relevant to two of my previous posts: “Urey-Miller Experiment – A Dead End?” and “Historical vs. Operational Science”.  Related to the latter, the following statements from Horgan’s SciAm article, from an interview with Stanley Miller, are especially relevant:

Miller acknowledged that scientists may never know precisely where and when life emerged. “We’re trying to discuss an historical event, which is very different from the usual kind of science, and so criteria and methods are very different,” he remarked. But when I suggested that Miller sounded pessimistic about the prospects for discovering life’s secret, he looked appalled. Pessimistic? Certainly not! He was optimistic!

The great Stanley Miller puts this perfectly.  Yes, historical science is different from operational science, which is a challenge for origin of life scientists, but should we give up on striving to understand the origin of life?  Of course not!

The First Information Age: The Origin of DNA – Part 2

In my previous post, The First Information Age Part I, I showed that chance alone is not enough to produce a complex, functional biomolecule such as DNA.  In this post, I will explore the other option for its production: necessity.

Let’s begin with our good friend Richard Dawkins since his example, producing “METHINKS IT IS LIKE A WEASEL” from a combination of letters and spaces, was so compelling and easy to understand.  He showed in his book “The Blind Watchmaker” that using “single-step selection of random variation”, or chance, producing the target phrase was highly unlikely.  But, he then goes on to show that if you use what he calls “cumulative selection”, then the production of the exact phrase “METHINKS IT IS LIKE A WEASEL” is not only possible, but highly probable on a very short time scale.  This is how it works:  First, as in the previous experiment, a single random combination of letters and spaces 28 characters in length is produced.  Then, this single phrase “breeds”: it copies itself with a certain probability of random error (or mutation) in each copied generation.  After a certain number of progeny have been produced, the computer chooses the progeny closest to the target sequence, “METHINKS IT IS LIKE A WEASEL”, and repeats the procedure with that progeny as the new starting sequence (allowing it to “reproduce” with a chance of mutation).  After only 41 “generations” the target sequence is produced!  Only 41 generations!  So, it seems like the problem is solved!  So long as the environment facilitates the enhanced retention and reproduction of the “good” sequences, the production of the specified, complex product is not only possible, but highly probable – and on a very short timescale!
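Dawkins did not publish the code behind his simulation, but the procedure described above is simple enough to sketch in Python.  To be clear, this is my own reconstruction for illustration: the mutation rate, the brood size, and the choice to keep the parent in the selection pool are assumptions, not Dawkins’s exact setup.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space

def random_phrase(length):
    """A phrase drawn uniformly at random, as in single-step selection."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def mutate(phrase, rate):
    """Copy a phrase, replacing each character with a random one with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def score(phrase):
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def cumulative_selection(brood_size=100, rate=0.05):
    """Breed a brood each generation, keep the progeny closest to the target, repeat."""
    phrase = random_phrase(len(TARGET))
    generation = 0
    while phrase != TARGET:
        generation += 1
        # parent included so the best phrase found so far is never lost
        brood = [phrase] + [mutate(phrase, rate) for _ in range(brood_size)]
        phrase = max(brood, key=score)
    return generation

random.seed(1)
print(cumulative_selection())
```

The exact generation count depends on the assumed mutation rate and brood size – Dawkins reports runs of his own version reaching the target in 41 to 64 generations – but the qualitative point survives any reasonable choice of parameters: cumulative selection finds in a modest number of generations what single-step selection could not find in the lifetime of the universe.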

Let’s take a step back, however, and think about what Dawkins has done.  Although his simulation shows the power of mutation on cherry-picked sequences, it really doesn’t say much about the undirected evolution of functional biomolecules.  He acknowledges this at the end of the section, noting that real evolution does not work with a goal in mind – rather, it is driven by short-term environmental factors (i.e. necessity).  The problem is that the simulation is now famous, and many forget that it says nothing about actual routes toward the emergence of function among a random pool of non-functional molecules on early Earth.  So far, we are no closer to finding a plausible route toward producing a functional DNA molecule through strictly abiotic processes…

I learned how much of a problem this truly is for origin of life scientists when I attended a conference this past spring where the sole focus of the symposia was the origin of life on Earth.  I realized that there are two “camps” for the scientific research being performed.  Either you belong to the camp composed nearly entirely of biologists, who assume the first functioning RNA molecule (the likely predecessor of DNA, as mentioned in the previous post) was already in existence and try to determine how you get from one RNA to a functioning “protocell”, or you are doing research on systems and reactions which would have been important well before even the first oligomers were formed on early Earth – determining how even the first molecules were formed.  There is little research being done in the region between the two – and this is where the origin of RNA (and subsequently DNA) would fit in.  It turns out that the leading researchers either ignore the problem, or just admit that it is a hard one and continue their own research.

There are a few, however, who do worry about these things – one such person, whom I heard speak at this conference and who has recently published a paper on the topic, is Dr. Irene Chen.  In her research, she runs computer simulations (complemented by a few experiments) in which she tries to find plausible prebiotic scenarios for the emergence of a functional RNA molecule.  In her recent paper (http://nar.oxfordjournals.org/content/40/10/4711.full-text-lowres.pdf), her group takes short random sequences of nucleic acids and shows that through a process called “template-directed ligation”, longer and more compositionally diverse oligomers are formed.  The idea is that the longer and more compositionally diverse the oligomers formed, the greater the chance that one of them will be functional.

Here’s basically how it works: they take a pool of short nucleic acid sequences of varying length and then add a catalyst (cyanogen bromide).  If an oligomer is six monomers long or longer, it can act as a template.  Then, two other oligomers (acting as substrates) with three bases complementary to either end of the template oligomer can attach to the template, allowing for the catalyzed reaction between the two substrate oligomers (“ligation”).  This template-directed ligation process allows for the production of longer oligomers through the connection of smaller ones – quite an advantage over simply building an oligomer monomer by monomer (a process which, by the way, is unfavorable in the bulk ocean for numerous reasons – although not the topic of this post).  At the end of their simulations, they did indeed find that through this mechanism, the resultant oligomers were longer and more compositionally diverse, resulting in their proposal of the following general scheme (image from the above linked paper, DOI: 10.1093/nar/gks065):

[Figure: general scheme for template-directed ligation, from the linked paper]

Again, as seen in Dawkins’s work, the complementary random process (ligation in the absence of any template direction) actually results in a decrease in compositional diversity (BAD when you need to increase diversity for a better chance at producing a functional molecule).  Therefore, template-directed ligation could be a plausible route toward the production of a functional, information-bearing molecule.
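To make the mechanism a bit more concrete, here is a toy Python sketch of template-directed ligation.  This is my own illustration, not the simulation from Chen’s paper: the pool size, sequence length range, three-base pairing rule, and number of reaction rounds are all invented for demonstration, and real hybridization chemistry (strand orientation, melting, fidelity) is ignored entirely.

```python
import random

BASES = "AUGC"
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(seq):
    """Watson-Crick complement of a sequence (orientation ignored in this toy)."""
    return "".join(COMPLEMENT[b] for b in seq)

def random_pool(n, min_len=3, max_len=8):
    """A starting pool of short oligomers of varying length."""
    return ["".join(random.choice(BASES) for _ in range(random.randint(min_len, max_len)))
            for _ in range(n)]

def try_ligation(pool):
    """One reaction round: pick a template (>= 6 bases) and look for two
    substrates whose three-base ends are complementary to either end of the
    template; if found, join the substrates into one longer oligomer."""
    templates = [s for s in pool if len(s) >= 6]
    if not templates:
        return None
    template = random.choice(templates)
    left_site = complement(template[:3])    # what the left substrate must end with
    right_site = complement(template[-3:])  # what the right substrate must start with
    lefts = [s for s in pool if s.endswith(left_site)]
    rights = [s for s in pool if s.startswith(right_site) and s not in lefts]
    if lefts and rights:
        return random.choice(lefts) + random.choice(rights)  # ligated product
    return None

random.seed(0)
pool = random_pool(500)
for _ in range(1000):
    product = try_ligation(pool)
    if product:
        pool.append(product)  # products re-enter the pool for further rounds

print(max(len(s) for s in pool))  # longest oligomer, now beyond the starting maximum
```

Even this crude version shows the qualitative point: ligation products re-enter the pool and can themselves serve as templates or substrates, so the maximum oligomer length quickly grows well beyond anything present in the starting pool.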

But, as with Dawkins’s work, we must take a step back and analyze what has actually been done here.  First, it is important to note that no functional biomolecules were formed in the work done by Chen’s group.  Although their mechanism is the best and most prebiotically relevant that I have come across, they still do not actually show a continuous mechanism from a pool of non-functioning oligomers to an information-bearing molecule.  Second, they assume the presence of a diverse pool of oligomers to begin with.  Although a 6-mer is not difficult to imagine, it actually is not as easy as it sounds to produce one of even this small length in the absence of an enzyme and in the bulk ocean environment provided by the early Earth.  In fact, it is still an active field of research.

So what do we take from all of this – should we throw in the towel like many of the Intelligent Design proponents have done and say that science will NEVER be able to explain the emergence of function?  It is a difficult problem to be sure, but what great fundamental problems in science aren’t?  It is intriguing that there has been such difficulty with finding a solution to these very fundamental origin of life questions.  We have made giant leaps in technological advancements (the so-called current Information Age) in recent years, but have still failed to explain the emergence of the First Information Age.  Is this beyond the realm of science?  I think not.  It may, however, be necessary to alter the way we think about life and its origins in general – the current reductionistic paradigm of science may be unable to explain life’s origins.  Instead, we may need a new paradigm – emergence.  Stay tuned, as this will be the topic of my next post!

As always, comments and questions are welcome!

The Higgs Boson – The End of Reductionism?

http://blogs.scientificamerican.com/the-curious-wavefunction/2012/07/23/the-higgs-boson-and-the-future-of-science/

The above article is another fantastic one by Ashutosh Jogalekar on his SciAm blog “The Curious Wavefunction”.  In this post, he discusses the limits of the current paradigm under which science operates: reductionism.  Further, he proposes the vast evidence for the role of emergence in science, especially when it comes to origins (a point which I, of course, look upon with great interest).

I wrote a response to this blog post, and have copied it below:

“As an origin of life scientist, I completely agree that one of the areas where reductionism fails to provide a complete picture is when trying to describe origins, but this is not something that is widely accepted amongst scientists.  Reductionism, as you have described here, is the tried and true paradigm under which science has successfully operated for many years now.  Thus, any new paradigm is difficult to introduce without causing a little dissension in the ranks.

It doesn’t help that emergence once had strong ties to vitalism, the once popular (but now mostly dormant) theory that there was a vital force which separates life from non-life – essentially proposing that living things weren’t even composed of the same “stuff” as non-living entities.  British Emergentism (as described by Brian McLaughlin) unfortunately resembled vitalism in that it proposed the existence of configurational forces, which were an attempt at quantifying emergent properties, but required new laws of physics (a new fundamental force for aggregates). 

The emergence you describe here is not the same emergence as what was proposed originally by British Emergentists – and yet the bias still remains in some circles.  Emergence is as yet poorly defined in terms of practical applications, and thus to the common scientist it is more or less useless.  So, the question I pose to you (and which I will also post to my own blog) is: how is emergence useful to the everyday, practicing scientist?  We all understand how to operate under the reductionist paradigm – we constantly strive to break down every phenomenon into its most fundamental parts – but how would this change if we all acknowledged the existence of emergence in science?

Please do not misunderstand me – I fully believe that emergence is essential to a full understanding of scientific phenomena – most especially when we are talking about origins.  And yet, something that has bothered me is whether or not thinking of things such as emergence is merely a task for the more philosophically minded people, or whether there is some application for the everyday scientist…”

So, what do you think?  Is emergence useful for the everyday practicing scientist??

I will write a more extended post on this topic in the coming weeks (first, I must finish my series on the origin of DNA…but it is on the list) – comment with any ideas you may have!

The First Information Age: The Origin of DNA – Part I


If you are reading this right now you are riding on the train of what some call “The Information Age”.  The internet, cell phones, computers, etc. all arrived in the burst of technological advancements of the very recent past – but information itself is much older.  In fact, information is as old as life itself.  Many argue that the biomolecule most essential to life is DNA.  DNA encodes information through its chemical sequence, and can then transmit that information.  Hence, the arrival of DNA on early Earth constitutes the first “Information Age” – the first time when a system could not only carry information, but use that information to perform a function.  How did such a molecule, with its intricate structure and the specific sequences necessary to store information, arise on early Earth through undirected natural processes?  This is the topic of this series of posts.

Function is a word we are all familiar with.  The car that you drive is said to “function” when you go out to your garage, put the key in the ignition, and the car starts.  Our bodies are well-oiled machines that are likewise described as functioning, with each component pulling its weight by performing its own specific function and contributing to the machine as a whole.  In the origin of life, one question of paramount importance is how the complex, very specific biomolecules that compose life as we know it were selected from a bath of molecules of varying complexity, some of which may resemble the biomolecules necessary for life (DNA, RNA, or proteins).  Function is one of the necessary components in separating “life” from “non-life”.  It is also essential in order for natural selection to act – you must have a certain degree of functional diversity, i.e. enough different things which exhibit functions, some of which may be advantageous in the environment provided.  Both evolutionists and their critics question whether chance encounters alone are enough to explain the origin of function, which will be the primary topic of Part I of this series of posts.  DNA (or what is thought to be its predecessor – RNA, differing from DNA in its sugar and in one of its four bases) is commonly used as the key example, since it is essential to life through its information-bearing properties.

There are essentially two routes to the production of certain specific molecules on early Earth: chance or necessity (an idea first applied to molecules in the origin of life by Aleksandr Oparin, but essentially just an extension of Darwin’s original ideas).  The first, chance, is essentially what it sounds like: complex molecules arose literally through random, chance interactions, with no external driving forces.  Many have taken this mechanism and calculated the probability of a complex biomolecule having arisen on early Earth.  One such calculation is presented in Stephen C. Meyer’s 2009 book “Signature in the Cell”.  It is important to note that Meyer is an intelligent design proponent, but, as I will present later, calculations such as these are performed by evolutionists and skeptics alike.  And yes, as a little disclaimer, I do read both intelligent design literature as well as evolutionary literature – I am a firm believer in being fully educated from the primary sources on all sides of a debate.  Anyways, back to the issue at hand: in his book Meyer presents a few different calculations (using varying assumptions) of the odds of producing any functioning 150-amino-acid-long protein sequence from chance alone.  All of these calculations result in a final number of 1 in 10^164 – a number which, to most people, is unfathomable.  To put this number in perspective, Meyer compares it to the chance of finding a marked proton in the universe (1 in 10^80) or to the number of events since the beginning of the universe (10^140)…so in conclusion, it is literally impossible (according to Meyer and his numbers at least) to form even one functional 150-amino-acid-long protein by chance alone.

As I asserted earlier, the improbability of chance alone is also acknowledged by evolutionists.  Take one of the most celebrated figures in popular science circles concerning evolution: Richard Dawkins.  In his book “The Blind Watchmaker”, he also shows that chance alone is unlikely to account for the complexity of life seen today.  He uses the now celebrated example of assembling the phrase “METHINKS IT IS LIKE A WEASEL” from Hamlet using a random combination of letters and spaces.  Using simple statistics, the probability of getting the first letter in the sequence, “M”, is 1 in 27.  The entire phrase is 28 characters long; therefore, the probability of randomly producing the entire sequence is (1/27)^28 (1/27 multiplied by itself 28 times), which comes out to “about 1 in 10,000 million million million million million million” (Dawkins, 1996).  In other words, getting the exact phrase through what Dawkins calls “single-step selection of random variation” (i.e. chance) is highly unlikely.  So, Dawkins comes to the same conclusion as Meyer – producing even a simple phrase from Hamlet, much less a complex, information-bearing biomolecule such as DNA, is very improbable by chance alone.  Therefore, we must move on to Oparin’s other option: necessity.
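Dawkins’s arithmetic is easy to reproduce.  A quick sketch in Python, using his 27-character alphabet (26 letters plus the space):

```python
from math import log10

alphabet_size = 27   # 26 letters plus the space
phrase_length = 28   # "METHINKS IT IS LIKE A WEASEL"

# Probability of typing the exact phrase in a single random attempt
p = (1 / alphabet_size) ** phrase_length

print(f"about 1 in 10^{-log10(p):.0f}")  # → about 1 in 10^40
```

That 10^40 is exactly Dawkins’s “10,000 million million million million million million” (10^4 × 10^36), and it makes concrete why single-step selection is a dead end even for a 28-character phrase, let alone a biopolymer.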

As you can imagine, most scientists have given up on chance alone being enough for the origin of life on Earth – which is not to say they have turned to supernatural sources.  Rather, they now search for environments in which the chemical reactions necessary to form these molecules are more favorable.  Although chance will always comprise a portion of the physical and chemical processes leading up to the production of a biomolecule such as DNA in the origin of life, these processes can be driven by environmental factors as well – resulting in the influence of necessity.  If the environment favors one reaction over another, then that reaction will be enhanced, resulting in the production of certain products over the distribution of products which would result from chance alone.  Hence, the environment skews the odds for a particular reaction.  This sounds great in theory, but is there any evidence that this could be the case?  Is there scientific research being done in this area, or is it merely stated to overcome the challenges posited by the existence of such a complex, essential molecule as DNA?  This will be explored further in my next post – in fact, this is currently an area of intense interest to origin of life scientists, and there are those attempting to tackle it (including Dawkins himself – could you imagine him leaving the issue as stated above??).