12 Matching Results

Deciphering the genetic regulatory code using an inverse error control coding framework.

Description: We have found that developing a computational framework for reconstructing error control codes for engineered data, and ultimately for deciphering genetic regulatory coding sequences, is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be sufficient as a proof of concept for the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high-risk, high-payoff realm into the highly probable, high-payoff domain. Additionally, this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Toward this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use these methods to determine error control (EC) code parameters for gene regulatory sequences. (2) Develop an evolutionary computing framework for near-optimal solutions to the algebraic code reconstruction problem. The method will be tested on engineered and biological sequences.
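To make the (n, k) block-code parameters mentioned in goal (1) concrete, here is a minimal sketch of encoding with the classic (7,4) Hamming code, where n = 7 codeword bits carry k = 4 message bits. The generator matrix and function names are illustrative and are not taken from the project itself:

```python
# Generator matrix for the (7,4) Hamming code: n = 7 codeword bits,
# k = 4 message bits, code rate k/n = 4/7.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message):
    """Encode a k-bit message into an n-bit codeword over GF(2)."""
    return [sum(m * g for m, g in zip(message, col)) % 2
            for col in zip(*G)]

codeword = encode([1, 0, 1, 1])  # 4 data bits followed by 3 parity bits
```

Reconstructing a code from observed sequences means recovering parameters like n and k (and, for convolutional codes, the memory m) without knowing G in advance, which is what the project's parameter estimation methods target.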
Date: March 1, 2005
Creator: Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie & Watson, Jean-Paul
Partner: UNT Libraries Government Documents Department

MPSA 2004 Final Report

Description: This proposal was to support a portion of the costs of Methods in Protein Structure Analysis 2004 (MPSA2004). MPSA2004 was the 15th in the series of MPSA international conferences on protein structure analysis that began in 1974. MPSA2004 was held on the campus of the University of Washington, Seattle, WA, August 29 through September 2, 2004. Twenty-four internationally recognized speakers gave invited presentations; additional participants were chosen to present short talks on the 10 topics that were addressed. The aim of MPSA conferences is to communicate the latest, cutting-edge techniques in protein structure analysis and proteomics through science success stories as told by the scientific leaders who developed the technologies. Biotechnology vendors are present to explain currently available commercial technology through workshops and demonstration booths. The overall aim is to provide a forum for exchanging the latest methods and ideas in protein structure analysis and proteomics with current and future practitioners. The conference supported the missions of the DOE Office of Science and the Office of Biological and Environmental Research (BER) in at least two ways: by enabling the above forum, and by encouraging young researchers who might otherwise not attend to meet leading researchers and to learn about the most current ideas, technologies, products, and services relevant to protein structure analysis. Topics covered in MPSA2004 were highly relevant to the BER Genomics: GTL initiative, and several National Laboratory scientists were among the invited speakers.
Date: February 18, 2006
Creator: Anderson, Carl W.
Partner: UNT Libraries Government Documents Department

Final technical report: analysis of molecular data using statistical and evolutionary approaches

Description: This document describes the research and training accomplishments of Dr. Kevin Atteson during the DOE fellowship period of September 1997 to September 1999. Dr. Atteson received training in molecular evolution during this period and made progress on seven research topics including: computation of DNA pattern probability, asymptotic redundancy of Bayes rules, performance of neighbor-joining evolutionary tree estimation, convex evolutionary tree estimation, identifiability of trees under mixed rates, gene expression analysis, and population genetics of unequal crossover.
Date: February 15, 2000
Creator: Atteson, K. & Kim, Junhyong
Partner: UNT Libraries Government Documents Department

Enumerating molecules.

Description: This report is a comprehensive review of the field of molecular enumeration, from early isomer counting theories to evolutionary algorithms that design molecules in silico. The core of the review is a detailed account of how molecules are counted, enumerated, and sampled. The practical applications of molecular enumeration are also reviewed for chemical information, structure elucidation, molecular design, and combinatorial library design purposes. This review is to appear as a chapter in Reviews in Computational Chemistry, volume 21, edited by Kenny B. Lipkowitz.
Date: April 1, 2004
Creator: Visco, Donald Patrick, Jr. (Tennessee Technological University, Cookeville, TN); Faulon, Jean-Loup Michel & Roe, Diana C.
Partner: UNT Libraries Government Documents Department

Catalyzing Inquiry at the Interface of Computing and Biology

Description: This study is the first comprehensive NRC study that suggests a high-level intellectual structure for Federal agencies for supporting work at the biology/computing interface. The report seeks to establish the intellectual legitimacy of a fundamentally cross-disciplinary collaboration between biologists and computer scientists. While some universities are increasingly favorable to research at the intersection, life science researchers at other universities are strongly impeded in their efforts to collaborate. This report addresses these impediments and describes proven strategies for overcoming them. An important feature of the report is the use of well-documented examples that describe clearly, for individuals not trained in computer science, the value and usage of computing across the biological sciences, from genes and proteins to networks and pathways, from organelles to cells, and from individual organisms to populations and ecosystems. It is hoped that these examples will be useful to students in the life sciences to motivate (continued) study in computer science that will enable them to be more facile users of computing in their future biological studies.
Date: October 30, 2005
Creator: Wooley, John & Lin, Herbert S.
Partner: UNT Libraries Government Documents Department

UC Merced Center for Computational Biology Final Report

Description: Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of undergraduate and graduate program in the biological sciences, one that emphasized biological concepts and considered biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, as the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate, and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences.
This report to DOE describes the research and academic programs made possible ...
Date: November 30, 2010
Creator: Colvin, Michael & Watanabe, Masakatsu
Partner: UNT Libraries Government Documents Department

Algorithm Optimizations in Genomic Analysis Using Entropic Dissection

Description: In recent years, the collection of genomic data has skyrocketed, and databases of genomic data are growing at a faster rate than ever before. Although many computational methods have been developed to interpret these data, they tend to struggle to process the ever-increasing file sizes being produced and fail to take advantage of the advances in multi-core processors by using parallel processing. In some instances, loss of accuracy has been a necessary trade-off to allow faster computation of the data. This thesis discusses one such algorithm and how changes were made to allow larger input file sizes and reduce the time required to achieve a result without sacrificing accuracy. An information entropy based algorithm was used as a basis to demonstrate these techniques. The algorithm dissects the distinctive patterns underlying genomic data efficiently, requiring no a priori knowledge, and thus is applicable in a variety of biological research applications. This research describes how parallel processing and object-oriented programming techniques were used to process larger files in less time and achieve a more accurate result from the algorithm. Through object-oriented techniques, the maximum allowable input file size was significantly increased, from 200 MB to 2,000 MB. Parallel processing techniques allowed the program to finish processing data in less than half the time of the sequential version. The accuracy of the algorithm was improved by reducing data loss throughout the algorithm. Finally, adding user-friendly options enabled the program to handle requests more effectively and further customize the logic used within the algorithm.
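As a minimal sketch of the two techniques the abstract names, information entropy and parallel chunked processing, the fragment below computes the Shannon entropy of a nucleotide sequence from symbol counts gathered concurrently over fixed-size chunks. All function names are illustrative and are not taken from the thesis:

```python
import math
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def shannon_entropy(counts):
    """Shannon entropy (bits per symbol) from a table of symbol counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

def parallel_counts(seq, n_workers=4):
    """Count symbols in fixed-size chunks concurrently, then merge.
    Merging partial counts reproduces the sequential result exactly,
    so the parallel version loses no accuracy."""
    size = max(1, len(seq) // n_workers)
    chunks = [seq[i:i + size] for i in range(0, len(seq), size)]
    merged = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(Counter, chunks):
            merged.update(partial)
    return merged

entropy = shannon_entropy(parallel_counts("ACGTACGTAAAA"))
```

Because symbol counting is embarrassingly parallel, splitting the input into chunks and merging partial counts is one common way to cut wall-clock time on large files without changing the final entropy value.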
Date: August 2015
Creator: Danks, Jacob R.
Partner: UNT Libraries

DOE EPSCoR Initiative in Structural and Computational Biology/Bioinformatics

Description: The overall goal of the DOE EPSCoR Initiative in Structural and Computational Biology was to enhance the competitiveness of Vermont research in these scientific areas. To develop self-sustaining infrastructure, we increased the critical mass of faculty, developed shared resources that made junior researchers more competitive for federal research grants, implemented programs to train graduate and undergraduate students who participated in these research areas, and provided seed money for research projects. During the time period funded by this DOE initiative: (1) four new faculty were recruited to the University of Vermont using DOE resources, three in Computational Biology and one in Structural Biology; (2) technical support was provided for the Computational and Structural Biology facilities; (3) twenty-two graduate students were directly funded by fellowships; (4) fifteen undergraduate students were supported during the summer; and (5) twenty-eight pilot projects were supported. Taken together, these dollars resulted in a plethora of published papers, many in high-profile journals in the fields, and directly impacted competitive extramural funding based on structural or computational biology, resulting in 49 million dollars awarded in grants (Appendix I), a 600% return on investment by DOE, the State, and the University.
Date: February 21, 2008
Creator: Wallace, Susan S.
Partner: UNT Libraries Government Documents Department

Development of simulation tools for virus shell assembly. Final report

Description: Prof. Berger's major areas of research have been in applying computational and mathematical techniques to problems in biology, and more specifically to problems in protein folding and genomics. Significant progress has been made in the following areas relating to virus shell assembly: development has been progressing on a second-generation self-assembly simulator, which provides a more versatile and physically realistic model of assembly; simulations are being developed and applied to a variety of problems in virus assembly; and collaborative efforts have continued with experimental biologists to verify and inspire the local rules theory and the simulator. The group has also worked on applying the techniques developed here to other self-assembling structures in the material and biological sciences. Some of this work was conducted in conjunction with Dr. Sorin Istrail when he was at Sandia National Labs.
Date: January 5, 2001
Creator: Berger, Bonnie
Partner: UNT Libraries Government Documents Department

Pacific Symposium on Biocomputing 2002/2003/2004

Description: The Pacific Symposium on Biocomputing (PSB) is an international, multidisciplinary conference covering current research in the theory and application of computational methods in problems of biological significance. Researchers from the United States, the Asian Pacific nations, and around the world gather each year at PSB to exchange research results and discuss open issues in all aspects of computational biology. PSB provides a forum for work on databases, algorithms, interfaces, visualization, modeling, and other computational methods as applied to biological problems. The data-rich areas of molecular biology are emphasized. PSB is the only meeting in the bioinformatics field with sessions defined dynamically each year in response to specific proposals from the participants. Sessions are organized by leaders in emerging areas to provide forums for publication and discussion of research in biocomputing "hot topics". PSB therefore enables discussion of emerging methods and approaches in this rapidly changing field. PSB has been designated as one of the major meetings in this field by the recently established International Society for Computational Biology (see www.iscb.org). Papers and presentations are peer reviewed, typically with three reviews per paper plus editorial oversight from the conference organizers. The accepted papers are published in an archival proceedings volume, which is indexed by PubMed, and electronically (see http://psb.stanford.edu/). Finally, given the tight schedule from submission of papers to their publication, typically 5 to 5 1/2 months, the PSB proceedings each year represent one of the most up-to-date surveys of current trends in bioinformatics.
Date: October 26, 2004
Creator: Dunker, A.Keith
Partner: UNT Libraries Government Documents Department

"After the Genome 5" Conference, to be held October 6-10, 1999, in Jackson Hole, Wyoming

Description: The postgenomic era is arriving faster than anyone had imagined: sometime during 2000 we'll have a large fraction of the human genome sequence. Heretofore, our understanding of function has come from non-industrial experiments whose conclusions were largely framed in human language. The advent of large amounts of sequence data, and of "functional genomic" data types such as mRNA expression data, has changed this picture. These data share the feature that individual observations and measurements typically add relatively little value on their own. Such data are now being generated so rapidly that the amount of information contained in them will surpass the amount of biological information collected by traditional means. It is tantalizing to envision using genomic information to create a quantitative biology with a very strong data component. Unfortunately, we are very early in our understanding of how to "compute on" genomic information so as to extract biological knowledge from it. In fact, some current efforts to come to grips with genomic information often resemble a computer-savvy library science, where the most important issues concern categories, classification schemes, and information retrieval. When exploring new libraries, a measure of cataloging and inventory is surely inevitable. However, at some point we will need to move from library science to scholarship. We would like to achieve a quantitative and predictive understanding of biological function. We realize that making the bridge from knowledge of systems to the sets of abstractions that constitute computable entities is not easy. The After the Genome meetings were started in 1995 to help the biological community think about and prepare for the changes in biological research in the face of the oncoming flow of genomic information. The term "After the Genome" refers to a future in which complete inventories of the gene products of entire organisms become available. Since then, ...
Date: October 6, 1999
Creator: Brent, Roger
Partner: UNT Libraries Government Documents Department