Genetics Suggest Population Expansion in Africa Began in Stone Age

July 29, 2009
By Andrea Anderson

NEW YORK (GenomeWeb News) – Modern human populations started expanding some 40,000 years ago, according to a paper appearing online today in PLoS ONE.

Researchers from the University of Arizona and the University of California at San Francisco used multi-locus sequence analysis to assess genetic signatures found in nearly 200 individuals from seven populations around the world. Their results suggest human population expansions in Africa started about 40,000 years ago during the Stone Age — a more recent expansion time than that predicted from previous studies.

“[B]oth hunter-gathers (San and Biaka) and food-producers (Mandenka and Yorubans) best fit models with population growth beginning in the Late Pleistocene,” senior author Michael Hammer, a genetics researcher at the University of Arizona, and his co-authors wrote. “These dates are concurrent with the appearance of the Late Stone Age in Africa, supporting the hypothesis that population growth played a significant role in the evolution of Late Pleistocene human cultures.”

Previous studies based on mitochondrial DNA, Y-chromosome data, or autosomal microsatellites provided a broad range of estimates for when modern human population expansion began, reaching as far back as a few hundred thousand years ago. But such estimates often conflict with one another and are based on one or a few sequences that may be under selective pressure, the researchers explained.

In an effort to generate more reliable data for teasing apart human population history, Hammer and his team used Sanger sequencing to re-sequence roughly 6,000 bases of nuclear DNA from each of about 20 autosomal non-coding regions for 184 individuals.

These regions were selected because they are sites with frequent crossing-over events but are also far from protein-coding genes and unlikely to be under selection. By looking at all of the regions together, Hammer told GenomeWeb Daily News, it’s possible to overcome the noise detected at any single region.
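The noise-averaging idea behind the multi-locus approach can be sketched numerically. In this minimal example, the per-locus diversity values are hypothetical, illustrative numbers, not the study’s data:

```python
import statistics

# Hypothetical per-locus estimates of nucleotide diversity from 20
# independent non-coding regions (illustrative numbers only). Any
# single locus is noisy, but the standard error of the combined
# estimate shrinks as 1/sqrt(number of loci).
per_locus_theta = [0.0011, 0.0009, 0.0014, 0.0008, 0.0012, 0.0010,
                   0.0013, 0.0007, 0.0011, 0.0012, 0.0009, 0.0010,
                   0.0015, 0.0008, 0.0011, 0.0013, 0.0009, 0.0010,
                   0.0012, 0.0011]

mean_theta = statistics.mean(per_locus_theta)
sem = statistics.stdev(per_locus_theta) / len(per_locus_theta) ** 0.5
print(f"multi-locus estimate: {mean_theta:.5f} +/- {sem:.5f}")
```

The combined estimate is far more stable than any single locus, which is why many independent regions support demographic inferences that one region cannot.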

The individuals tested belonged to seven different populations: San, Biaka, Mandenka, Yoruban, French Basque, Han Chinese, and Melanesian.

When the team analyzed their data using multi-locus analysis, they found evidence suggesting that both hunter-gatherer populations (such as the San from Namibia and the Biaka from the Central African Republic) and food-producer populations (such as the Mandenka from Senegal and Yorubans from Nigeria) began expanding roughly 40,000 years ago during the Late Pleistocene period.

That predates the advent of farming in Africa, Hammer noted, and is consistent with archeological evidence suggesting there was a burst of populations interacting and sharing tools and cultural innovations at that time.

Overall, the team concluded that human populations in Africa began a ten-fold expansion some 36,000 years ago. Their data hint that expansion may have been a tad earlier and faster in the hunter-gatherer population — about a 13-fold expansion starting about 41,000 years ago — than in the food-producing populations, which expanded approximately seven-fold starting some 31,000 years ago.

In the future, the team plans to do additional studies looking at more populations from different parts of the world. And, Hammer said, they also hope to employ next-generation sequencing technology to look at even more regions in the genome.

Genome of a Chagas Disease Vector Sequenced

By Renata Moehlecke

The genome of the triatomine Rhodnius prolixus, one of the main vectors of Chagas disease, has just been sequenced, biologist Pedro Lagerblad of the Universidade Federal do Rio de Janeiro reported in a talk at the International Symposium on the Centennial of the Discovery of Chagas Disease.

Lagerblad: partitioning the sequence will make it easier to search for individual genes and will let any researcher find what they are looking for (Photo: Gutemberg Brito)

About 7 million reads of the genome were generated for the sequencing. “The 21 chromosomes of R. prolixus were broken into 14,000 pieces, whereas it is common to end up dividing them into 70,000,” the researcher explains. “This partitioning will make it easier to search for individual genes and will let any researcher find everything they are looking for.”

Assembly and annotation of the genome will enable a wide range of studies, including work on the vector’s physiology, biological structure, systematics, and evolution. “The goal of this talk is not to present study results, but to say that the genome’s box is now open for those studies to be carried out,” Lagerblad notes. “The fun starts now.”

The researcher also points out that the same sequencing scheme can now be applied to other vectors of Chagas disease, which is highly prevalent in Latin America. “Unlike diseases such as dengue and malaria, which have only one type of vector, Chagas disease can be transmitted by 137 species across 17 genera of insects,” Lagerblad comments. “Much more can and should be done.”

GWAS and Differences in DNA Between Tissues

Posted by Bob Grant
[Entry posted at 20th July 2009 04:52 PM GMT]
http://www.the-scientist.com/blog
Recent findings may spell trouble for genome-wide association studies based on DNA obtained through blood samples: Genetic material may vary between blood cells and other tissues in a single individual, a study in the July issue of Human Mutation reports.

Image: Wikimedia

The study “raises a very interesting question,” Howard Edenberg, director of the Indiana University School of Medicine’s center for medical genomics, told The Scientist. Many genome-wide association studies — especially studies on systemic diseases such as diabetes and atherosclerosis — depend solely upon DNA harvested from blood samples to identify genes associated with medical conditions. But this study “suggests that looking only at blood, you may miss some things.”

Searching for the genes behind a fatal condition called abdominal aortic aneurysm (AAA), researchers from McGill University in Montreal found that complementary DNA from diseased abdominal aortic tissue did not match genomic DNA from leukocytes in blood from the same patient. “We did not expect to find a difference in the tissue [genes] compared to the leukocyte [genes],” said endocrinologist Morris Schweitzer, who led the study.

Schweitzer and his team uncovered three single nucleotide polymorphisms (SNPs) in samples of diseased tissue from 31 AAA patients that were not present in matching blood samples. They also tested five aortic and blood samples from normal individuals and found the same discrepancy. Schweitzer said that the apparent genetic difference between different cells in the body may cast some doubt on genome-wide association studies that only use DNA from blood samples to infer disease states. “I think they may not be accurate because they might not reflect what’s in the tissue,” he said, adding that researchers should look upon such genetic results “very carefully and very trepidatiously.”

Edenberg, who was not involved with the study but who conducts genome-wide association studies to explore the genetic roots of alcoholism and bipolar disorder, said that while the findings are interesting, they are very preliminary. “If they’re correct about this, and there are these genomic differences between tissues and blood at certain alleles, then we’re missing some things,” he said. Edenberg explained that experimenters generally take into account that such studies are somewhat “underpowered” in terms of their ability to catch every genetic indicator of disease. Schweitzer’s results, he noted, may add another layer to this consideration, but do not suggest that genome-wide association studies would turn up false positives, or blood-based genes mistakenly attributed to a particular disease.

Sudha Seshadri, a Boston University neurologist who was not involved in the study, told The Scientist that though the McGill group’s results are important, they do not negate genome-wide association data that scientists have already gathered. “I don’t think [the study] says much about the usefulness or validity of genome-wide association studies as they are being done in cohorts around the world.” Genome-wide studies on diabetes, for example, have identified about 16 genes that are related (in varying degrees) to the disease, said Seshadri, who collaborates on the Framingham Heart Study, a six-decade longitudinal study on more than 5,000 people that has more recently included genomic data.

“I think I would have suggested a few more experiments, personally,” Edenberg added. In particular, he pointed to the fact that the McGill researchers were comparing complementary DNA from aortic tissue to genomic DNA from blood. “At the moment,” he said, the discrepancy “seems relatively compatible with RNA editing [rather] than with a genomic issue.” The study should have compared genomic DNA from the aortic tissues with the genomic blood DNA, and cDNA from both cell types, Edenberg said.

Schweitzer said his group is currently working on this experiment and “should have results probably in a couple of weeks.” He noted that differences between tissue and blood DNA may account for the relatively low levels of association turned up by most genome-wide association studies. Of all the genome-wide association studies that have been conducted, he said, “No one has really found that one miracle gene that really points to something.”

Seshadri, however, said it’s hasty to dismiss the value of such studies. “I think [the authors] make some provocative statements that express a viewpoint, but not a widely-accepted viewpoint,” she said. “It’s far too early in the process of genome-wide association studies to conclude that they have not been fruitful.”

Canadian Initiative Developing Platform to Map Human Interactome, Eyes International Consortium

This story originally ran on July 1 and has been updated to include additional comments.

By Tony Fong

A multi-million dollar effort to create a technology platform to map the human interactome is underway in Canada with an eye to making it international.

Last month the Canada Foundation for Innovation awarded C$9.16 million ($7.89 million) to a national initiative to create a technology platform, bringing the total funding for the project to C$22.9 million ($19.7 million).

A total of 12 universities throughout Canada will be working on the interactome project.

Once the national technology platform becomes operational, the plan is to bring in institutions and partners from around the globe in an international push to create a complete set of cellular interaction networks.

In an interview with ProteoMonitor this week, Benoit Coulombe, who is heading the Canadian work and is a professor and director of the Proteomics Discovery Platform at the Institut de Recherches Cliniques de Montreal, said that the national technology platform comprises the 12 universities along with their instruments, methods, workflows, and expertise in elucidating the human interactome.

Much of the funding will be directed at purchasing new equipment and renovating facilities. The C$9.16 million in funding from CFI, an independent corporation created by the Canadian government, is for infrastructure. The remaining C$13.74 million, which comes from other partners such as the province of Quebec and companies such as Thermo Fisher Scientific, also will be used for infrastructure costs, not operational expenses, Coulombe said.

Among the new equipment that will be purchased are: Thermo Fisher’s Orbitrap mass spectrometers; Illumina’s Genome Analyzer and Applied Biosystems’ SOLiD second-generation DNA sequencing platforms; robotic liquid handlers; confocal microscopes; and other instruments.

While the 12 universities are already mapping the human interactome, the national initiative brings them together in a collaborative mode that can lead to greater efficiency, more reliable results, and generally better science, Coulombe said.

“The idea of this technology platform is that we put together 12 universities across Canada … that already have activities in protein-protein interaction or interactome studies,” he said. In a virtual manner, “these 12 institutions [will now] sit around the same table and plan their activities relating to protein-protein, protein-RNA interaction studies, et cetera. … Now we have a coordinated platform and now we can plan the equipment [and] the technology pipeline that we want to run.”

New methods development, especially in computational approaches, will also be part of the initiative.

The schools involved in the effort are IRCM, which is affiliated with the University of Montreal; Centre for Cellular and Biomolecular Research at the University of Toronto; Samuel Lunenfeld Research Institute at the Mt. Sinai Hospital; the Ottawa Institute of Systems Biology at the University of Ottawa; the Université de Sherbrooke; Dalhousie University; the University of Victoria; the University of British Columbia; the University of Manitoba; the Institut de Recherche en Immunologie et en Cancérologie at the University of Montreal; McGill University; and the Université Laval.

Because each participating institution has its own area of expertise, the initiative will allow researchers to tap into information that they otherwise might not have access to, Coulombe said. In addition, the organizational structure will facilitate interlaboratory work among the participants, which could improve reproducibility, he added.

When different schools perform a similar experiment, it will be important that common standard operational procedures are in place and followed “so that the data that comes out of the many sites…are comparable,” Coulombe said.

“The only way to achieve this is through communication between the sites. So if some of the sites combine their efforts in [a] project, we have to be able to tell the funders that when we do the same type of experiments in different locations, we’re doing it in a way that the data can be compared, is reproducible, [and] is complementary but can be put together,” he said. “So this is one of the important virtues of this type of platform.”

The initiative is currently performing a multi-site pilot project comparing affinity purification techniques. Each site, using similar equipment and analytical methods for the same proteins, is generating data, which will then be analyzed to determine what steps need to be taken to resolve differences between different labs.

In addition, they are investigating methods beyond mass spec-based technologies for monitoring protein-protein interactions, such as yeast two-hybrid screens and luminescence-based mammalian interactome mapping, or LUMIER, Coulombe said.

Within six months, most of the new equipment should be installed and the national platform should be “90 percent operational.” In a year, “we plan to have operational funding for at least one big interactome project,” he said.

If that happens, it would be one of the few examples of such a project. While there have been calls in the past for a large-scale human interactome mapping effort, such proposals have failed to take flight and most of the current work has been confined to individual labs. According to Tony Pawson of the Samuel Lunenfeld Research Institute and a participant in the Canadian effort, only about 5 percent of the human interactome has been mapped to date.

The most prominent proponent of a coordinated interlaboratory approach to describing the human interactome has been Marc Vidal, an associate professor of genetics at the Harvard Medical School, who in 2006 published an article in The Scientist advocating for a $100 million investment into a large-scale human interactome mapping effort. While the funding agencies never took him up on his advice, a number of smaller individual efforts have been started since then, he told ProteoMonitor.

The Center for Cancer Systems Biology at the Dana Farber Cancer Institute, of which Vidal is director, has also adopted the Human Interactome Mapping Project as its flagship project.

“We’re not quite there yet … if you were to compare us to the genome sequencing project at its peak, but it’s definitely starting to crystallize a bit,” he said. “People are getting together, people are publishing four, five, six groups together. … I also think that the field as a whole is already past the single lab, single R01 [stage].”

In January, he and a cadre of other collaborators published a series of articles in Nature Methods describing research into the interactomes of various organisms.

The Systems Biology Center New York has also been exploring the idea of a Quantitative Human Interactome Project to “experimentally obtain kinetic constants for cellular interactions between all of the proteins encoded by the human genome and construct a database of these parameters,” according to a report it released in March 2008.

Coulombe said that the Canadian initiative is the only one he knows of that pulls together the resources of so many institutions and directs it at the human interactome.

But at a time when other similar projects, such as mapping the human proteome, have failed to gain traction, and protein-protein interactions in humans remain poorly understood, are Coulombe and his peers getting ahead of themselves with their ambition to map the human interactome, which encompasses not only protein-protein interactions but also protein-DNA and protein-RNA interactions?

They don’t see it that way. Pawson said that the technology has reached the stage where “it’s really feasible to think about doing these things on a large scale, and also very importantly, people who use different approaches … are starting to talk to each other much more extensively.”

Indeed, while the funding announced last week focuses on building the national technology platform, Coulombe and others in the initiative are already looking ahead to a large-scale effort that would involve researchers from across the globe to map the human interactome. That effort is called the International Interactome Initiative, or I3.

“This is one of the projects that we hope will be supported by the platform,” Coulombe said. “The national platform is the technology platform in Canada that will serve in the international interactome initiative.”

The Canadian initiative and the proposed I3 plan grew out of a project called the Human Proteotheque Initiative that Coulombe has been working on for several years to chart protein interactions that regulate cell growth, differentiation, and disease progression [see PM 08/02/07].

“What you see now [with I3] is the evolution of this initiative,” Coulombe said. “We’re building our way to the interactome.”

He and others involved in trying to get I3 off the ground have created a steering committee “that includes key players in the interactome field from the US, from Europe and from Canada,” that is exploring funding opportunities for the project and setting scientific objectives, Coulombe said, adding that he hopes to have funding for I3 secured next year so that research can begin in early 2011.

“With this international consortium, we feel that if we have appropriate funding, by joining efforts and technologies such as affinity purifications, mass spectrometry, yeast 2-hybrids, protein complementation assays, LUMIER … in five years we [could] have a draft map of the interactome with pretty much full coverage,” Coulombe said.

What can DNA tell us? Place your bets now

* 08 July 2009 by Lewis Wolpert and Rupert Sheldrake
* Magazine issue 2716

From Newton to Hawking, scientists love wagers. Now Lewis Wolpert has bet Rupert Sheldrake a case of fine port that: “By 1 May 2029, given the genome of a fertilised egg of an animal or plant, we will be able to predict in at least one case all the details of the organism that develops from it, including any abnormalities.” If the outcome isn’t obvious, then the Royal Society will be asked to adjudicate. Watch this space…

Lewis Wolpert

I HAVE entered into this wager with Rupert Sheldrake because of my interest in the details of how embryos develop, and how our understanding of this process will progress. In my latest book, How We Live and Why We Die, I suggest that it will one day be possible to predict from an embryo’s genome how it will develop, and I believe it is possible for this to happen in the next 20 years.

I am, in fact, being a little over-keen because 40 years is a more likely time frame for such a breakthrough. Cells and embryos are extremely complicated: for their size, embryonic cells are the most complex structures in the universe.

Animals develop from a single cell, a fertilised egg, which divides to produce cells that will form the embryo. How that egg develops into an embryo and newborn animal is controlled by genes in the chromosomes. These genes are passive: they do nothing, just provide the code for proteins. It is proteins that determine how cells behave. While the DNA in every cell contains the code for all the proteins in all the cells, it is the particular proteins produced in particular cells that determine how those cells behave.

Every cell of the embryo contains many copies of several thousand different proteins. These proteins have a plethora of functions: acting as enzymes to break down and build other molecules, providing structures for the cell, interacting with each other, and many more. The complexity of the interactions between millions of molecules is amazing.

As the proteins determine how the cells behave, it is their activity that causes the embryo to develop. Underlying this process, though, are the genes, as they control which proteins are made – including some proteins that activate specific genes. It is essential that there is this control over which cells continue to divide, and of mechanisms to pattern the embryo so that different cells develop into different structures, such as the brain or limbs.

There is a huge incentive to understand these processes and so be able to work out the development of an embryo given only its genome. This ability could pave the way for regenerative medicine by allowing scientists to program stem cells to become structures that could replace damaged parts of the body.

To win the bet, we will have to be able to predict the behaviour of almost all the cells in the embryo. In a small worm, say the nematode Caenorhabditis elegans, there are 959 cells, making it the ideal model to solve this problem. It is a major challenge, but advances in cell biology, systems biology and computing will take us there.

Rupert Sheldrake

LEWIS WOLPERT’s faith in the predictive power of the genome is misplaced. Genes enable organisms to make proteins, but do not contain programs or blueprints, or explain the development of embryos.

The problems begin with proteins. Genes code for the linear sequences of amino acids in proteins, which then fold up into complex three-dimensional forms. Wolpert’s wager presupposes that the folding of proteins can be computed from first principles, given the sequence of amino acids specified by the genes. So far, this has proved impossible. As in all bottom-up calculations, there is a combinatorial explosion. For example, by random folding, the amino-acid chain of the enzyme ribonuclease, a small protein, could adopt more than 10^40 different shapes, which would take billions of years to explore. In fact, it folds into its habitual form in 2 minutes.
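The “billions of years” figure is a Levinthal-style back-of-the-envelope calculation. A sketch follows; the sampling rate of 10^13 conformations per second is an assumed, illustrative figure, not a number from the essay:

```python
# Back-of-envelope Levinthal-style estimate. The shape count comes from
# the essay; the sampling rate is an assumed, generous figure for how
# fast a chain could try new conformations.
n_shapes = 1e40                 # candidate folds for a small protein
rate_per_second = 1e13          # assumed conformations sampled per second
seconds_per_year = 3.15e7

years = n_shapes / rate_per_second / seconds_per_year
print(f"exhaustive search would take about {years:.1e} years")
```

Even with that generous rate, the search vastly exceeds the age of the universe, which is the essay’s point: real proteins cannot be folding by random search.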

Even if we could solve protein-folding, the next stage would be to predict the structure of cells on the basis of the interactions of millions of proteins and other molecules. This would unleash a far worse combinatorial explosion, with more possible arrangements than all the atoms in the universe.

Random molecular permutations simply cannot explain how organisms work. Instead, cells, tissues and organs develop in a modular manner, shaped by morphogenetic fields, first recognised by developmental biologists in the 1920s. Wolpert himself acknowledges the importance of such fields. Among biologists, he is best known for “positional information”, by which cells “know” where they are within the field of a developing organ, such as a limb. But he believes morphogenetic fields can be reduced to standard chemistry and physics. I disagree. I believe these fields have organising abilities, or systems properties, that involve new scientific principles.


MEDomics Announces MitoDx(TM), the First NextGen Mitochondrial Genome Diagnostic Test

LOS ANGELES, June 9 /PRNewswire/ — MEDomics, LLC (www.medomics.com) announces an innovative test for early diagnosis of mitochondrial diseases, a group of disorders that can result in neurological dysfunction, muscle weakness, gastrointestinal symptoms, migraine headaches, blindness, deafness, and diabetes. The MEDomics mitochondrial genome test, MitoDx(TM), uses the revolutionary NextGen sequencing technology to detect all mutations in any of the 37 mitochondrial DNA genes. The MEDomics team of experts provides interpretation of the functional significance of detected mutations. This comprehensive test offers exceptionally high diagnostic utility for suspected mitochondrial disease, enabling potentially lifesaving therapy and accurate risk counseling.

Disease from mutations in mitochondrial DNA is now thought to be common in both adults and children. In childhood, mitochondrial disease is more common than muscular dystrophy or cancer. Most mitochondrial disease may go undiagnosed because a primary care physician does not suspect the disease or because the causative mutation is missed by current methods.

“To my knowledge, MEDomics is the first laboratory to offer a whole genome clinical diagnostic test utilizing the powerful NextGen sequencing technique,” says Steve S Sommer, MD, PhD, Founder and President of MEDomics.

Mitochondria are the “power plants” of the cell, providing energy for cellular processes, including growth and metabolism. Mutations in mitochondrial genes may decrease energy production and affect multiple organs. Since cells contain hundreds of mitochondrial DNA molecules, any particular tissue may contain mitochondrial DNA molecules that are all identical, or there may be a fraction that differs. When both normal and mutant molecules exist, the mitochondria are said to be “heteroplasmic.” The heteroplasmic fraction of mutations can differ substantially among tissues.

It is critical to detect heteroplasmy sensitively, since even low levels in blood, which is routinely tested, may reveal disease affecting other organs. Such low levels of heteroplasmy in blood are generally not detected by standard methods, but are detected by the MEDomics test utilizing NextGen sequencing technology. The error rate determines how small a mutant fraction is reliably detected. MEDomics uses the Applied Biosystems SOLiD(TM) 3 NextGen sequencing platform, which has an exceptionally low error rate, allowing detection of heteroplasmy down to about 1%.
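Why the error rate sets the detection floor can be illustrated with a simple binomial sketch. The 0.1% per-base error rate and 10,000x read depth below are assumed, illustrative numbers, not MEDomics or instrument specifications:

```python
import math

# Illustrative power check: can a 1% heteroplasmic variant be separated
# from the sequencing-error background? Error rate and depth are assumed
# numbers for the sketch, not vendor specifications.
error_rate = 0.001       # assumed per-base error rate (0.1%)
depth = 10_000           # assumed read depth at the position
het_fraction = 0.01      # 1% heteroplasmy, the stated detection target

expected_errors = error_rate * depth                       # ~10 error reads
error_sd = math.sqrt(depth * error_rate * (1 - error_rate))
expected_variant = het_fraction * depth                    # ~100 variant reads

# Standard deviations separating a true 1% variant from the error noise.
z = (expected_variant - expected_errors) / error_sd
print(f"separation: {z:.1f} standard deviations")
```

With a low error rate and deep coverage, a 1% variant sits many standard deviations above the noise; with a higher error rate, the same variant would be indistinguishable from background, which is the point of the paragraph above.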
The MEDomics NextGen mitochondrial genome test can help diagnose mitochondrial disease, enabling life-saving therapy decisions and allowing for accurate family risk counseling.

About MEDomics

MEDomics is a molecular diagnostic laboratory founded in 2008 by Steve S. Sommer, MD, PhD, with the mission of providing Mutation Expert-based Diagnosis (“MED”) to support the physician in delivering personalized medicine based on analysis of the patient’s genome (“omics”). The mutation experts at MEDomics provide unparalleled quality interpretation to aid the practicing physician.

Dr. Sommer is a Founding Fellow of the American College of Medical Genetics with 25 years’ experience in clinical molecular diagnosis and over 300 scientific publications and patents. During the past few years, his personalized cancer genetics research and clinical team, including Kelly Gonzalez, MS, and Bill Scaringe, MS, discovered mutation showers, which may occasionally cause cancer in an instant. His neuropsychiatric genetics team, including Carolyn Buzin, PhD, also helped to define the first genes for which mutations strongly predispose to schizophrenia or autism. Carolyn Buzin, Kelly Gonzalez, and Bill Scaringe are currently Senior Scientist, Director of Genetic Counseling & Education, and Director of Bioinformatics at MEDomics, respectively.

Richard Boles, MD, Director of the Mitochondrial and Metabolic Disorders Clinic at Childrens Hospital, Los Angeles, is the distinguished clinical consultant for MEDomics in mitochondrial diseases.

SOURCE MEDomics, LLC

Limitations and Possibilities of small RNA Digital Gene Expression Profiling

474 | VOL.6 NO.7 | JULY 2009 | nature methods

To the Editor: High-throughput sequencing (HTS) has proven to be an invaluable tool for the discovery of thousands of microRNA genes across multiple species [1,2]. At present, the throughput of HTS platforms is sufficient to combine discovery with quantitative expression analysis, allowing for digital gene expression (DGE) profiling [3]. We observed that methods for small RNA DGE profiling are strongly biased toward certain small RNAs, preventing the accurate determination of absolute numbers of small RNAs. The observed bias is largely independent of the sequencing platform but strongly determined by the method used for small RNA library preparation. However, as the biases are systematic and highly reproducible, DGE profiling is suited for determining relative expression differences between samples.

We generated duplicate small RNA libraries using three library-preparation methods (poly(A) tailing [4], modban adaptor (IDT) ligation [5] and the Small RNA Expression Kit (SREK; Ambion)) from a single sample (rat brain) and sequenced these on Roche 454, AB SOLiD and traditional capillary dideoxy sequencing platforms (Supplementary Fig. 1, Supplementary Note and Supplementary Methods). To assess the impact of the library-preparation method and sequencing platform, we focused on the distribution of known rat 5′ and 3′ microRNA sequences (miRBase v11.0; ref. 6).
Authors: Sam E V Linsen, Elzo de Wit, Georges Janssens, Sheila Heater, Laura Chapman, Rachael K Parkin, Brian Fritz, Stacia K Wyman, Ewart de Bruijn, Emile E Voest, Scott Kuersten, Muneesh Tewari & Edwin Cuppen

Citation: Linsen et al., “Limitations and possibilities of small RNA digital gene expression profiling,” Nature Methods 6, 474–476 (2009). doi:10.1038/nmeth0709-474

Link: http://www.nature.com/nmeth/journal/v6/n7/full/nmeth0709-474.html

This correspondence describes a comparison of three library-preparation methods for small RNA DGE (digital gene expression) profiling: modban adaptor ligation (IDT), the SOLiD™ Small RNA Expression Kit (SREK), and a poly(A) tailing method.
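The letter’s central point, that a systematic and reproducible bias skews absolute counts but cancels when comparing samples, can be illustrated with a toy calculation. All counts, bias factors, and the miRNA names below are hypothetical:

```python
# Toy model: the same library-preparation bias is applied to both
# samples, so absolute counts are wrong but between-sample ratios
# (relative expression) are preserved. All numbers are hypothetical.
true_counts_a = {"miR-1": 1000, "miR-2": 200}
true_counts_b = {"miR-1": 500, "miR-2": 400}
bias = {"miR-1": 0.1, "miR-2": 2.5}   # same capture bias in both libraries

observed_a = {m: c * bias[m] for m, c in true_counts_a.items()}
observed_b = {m: c * bias[m] for m, c in true_counts_b.items()}

for m in true_counts_a:
    true_ratio = true_counts_a[m] / true_counts_b[m]
    observed_ratio = observed_a[m] / observed_b[m]
    assert abs(true_ratio - observed_ratio) < 1e-9  # bias cancels in the ratio

print("relative fold changes survive a biased library preparation")
```

This is why the authors conclude DGE profiling is suited to relative comparisons between samples even though absolute small RNA abundances cannot be trusted.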

Craig Venter: On The Verge of Creating Synthetic Life

“Can we create new life out of our digital universe?” asks Craig Venter.

And his answer is, yes, and pretty soon. He walks the TED2008 audience through his latest research into “fourth-generation fuels” — biologically created fuels with CO2 as their feedstock. His talk covers the details of creating brand-new chromosomes using digital technology, the reasons why we would want to do this, and the bioethics of synthetic life.

http://www.ted.com


DNA Package

“I would like to give this movie to all scientists around the world. I am Dr Cong, a geneticist, working at the Molecular Biology Laboratory in Hanoi, Vietnam.”
My address is:
Mr Nguyen Thanh Cong
Molecular Biology Laboratory
Agricultural Genetics Institute
Vien Di Truyen Nong Nghiep
Tuliem, Hanoi
VIETNAM

Molecular Biology at Life Technologies

Peter Dansky talks about Life Technologies’ Molecular Biology division.
