2016 SysMod annual meeting @ ISMB
Where systems biology meets bioinformatics
July 9, 2016 | Orlando, FL

Overview

Advances in genomics are creating new opportunities to integrate systems modeling and bioinformatics. The goal of the first SysMod meeting is to create a forum for systems modelers and bioinformaticians to discuss common research questions and methods. The meeting will cover qualitative and quantitative approaches, as well as dynamical and steady-state modeling, and will take place on July 9, 2016 at the ISMB conference in Orlando, Florida.

Topics

Methods
Dynamical modeling
Flux balance analysis
Logical modeling
Network modeling
Stochastic simulation

Systems
Animals
Bacteria
Humans
Plants
Yeast

Applications
Bioengineering
Cancer
Development
Immunology
Precision medicine

Keynote speakers

Vassily Hatzimanikatis
École polytechnique fédérale de Lausanne, CH

Nathan Price
Institute for Systems Biology and University of Washington, US

Ioannis Xenarios
Swiss Institute of Bioinformatics, CH

Schedule

08:45-09:00 Welcome
09:00-12:50 Session 1: Modeling metabolism: From kinetics to whole genome
Over the past 20 years, flux balance analysis together with whole-genome sequencing has enabled researchers to construct detailed genome-scale models of metabolism. More recently, researchers have integrated transcriptomics and proteomics data into flux balance analysis models to generate more accurate predictions. This session features two keynote talks on the state of the art of genome-scale modeling, how it can be enhanced through combination with genomic data, and its prospects for precision medicine and microbial engineering.
09:00-09:50 Metabolic analyses in microbes and humans
Nathan Price
Institute for Systems Biology, US
09:50-10:15 First-principles modeling of metabolism using statistical thermodynamics and maximum entropy
Garrett Goh, Jeremy Zucker, Douglas Baxter & William Cannon
Pacific Northwest National Laboratory
Historically, modeling metabolism has fallen under two major approaches. Ideally, a well-parameterized kinetic model would provide detailed insight into metabolic processes; however, owing to the challenge of obtaining large-scale kinetic measurements, this approach is not tractable for larger systems. In contrast, constraint-based methods such as flux balance analysis (FBA) use stoichiometric mass-balance constraints to predict possible solutions for the metabolic fluxes. However, the underdetermined nature of FBA requires the use of empirically determined objective functions to “select” an appropriate prediction. Here, we present a theoretical framework for modeling metabolism using statistical thermodynamics. This approach combines the simplicity of constraint-based methods, where no kinetic data are required, with the ability to model metabolism dynamically, as in traditional kinetic models. Metabolism is modeled as a series of states, and using only the standard free energies of reactions as input parameters, a stochastic simulation is propagated according to the principles of thermodynamics and maximum entropy. Therefore, no empirically determined objective function is needed to select the optimal solution. Metabolic pathways such as glycolysis were simulated, and the predicted metabolite concentrations agree with experimental measurements to within 0.5 log concentration units, with a correlation coefficient over 0.9. Future directions for scaling up to central metabolism and genome-scale models are also discussed.
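The maximum-entropy idea can be illustrated in miniature: given only standard free energies, the least-biased (maximum-entropy) distribution over states is the Boltzmann distribution. A minimal sketch with invented free-energy values, not the authors' data:

```python
import math

# Hypothetical standard free energies (kJ/mol) for three metabolite states;
# the values are illustrative only.
RT = 2.479  # kJ/mol at 298 K
dG = {"G6P": -2.9, "F6P": -1.2, "FBP": -4.5}

# Maximum-entropy (Boltzmann) distribution over states given only free energies:
weights = {s: math.exp(-g / RT) for s, g in dG.items()}
Z = sum(weights.values())
probs = {s: w / Z for s, w in weights.items()}

# Lower free energy -> higher probability; no objective function is needed.
print({s: round(p, 3) for s, p in probs.items()})
```

No empirical objective selects a solution here: the distribution follows entirely from the free energies.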
10:15-10:45 Coffee break
10:45-11:10 An ensemble approach to unravel the design principles of cancer metabolic rewiring
Chiara Damiani1,2, Marzia Di Filippo1,3, Riccardo Colombo1,2, Dario Pescini1,4, Daniela Gaglio1,5, Marco Vanoni1,3, Lilia Alberghina1,3 & Giancarlo Mauri1,2
1SYSBIO Centre of Systems Biology, Piazza della Scienza 2, 20126 Milano, Italy
2Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
3Dipartimento di Biotecnologie e Bioscienze, Università degli Studi di Milano-Bicocca, Piazza della Scienza 2, 20126 Milano, Italy
4Dipartimento di Statistica e Metodi Quantitativi, Università degli Studi di Milano-Bicocca, Via Bicocca degli Arcimboldi 8, 20126 Milano, Italy
5Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Via F.lli Cervi 93, 20090 Segrate (MI), Italy
Generic genome-wide reconstructions, encompassing all the biochemical reactions that may occur in human metabolism according to its genome, are available today. These networks can be effectively exploited as a scaffold for the integration of large and heterogeneous omics data coming from cell lines or even patients. Constraint-based modeling and flux balance analysis (FBA) can be efficiently applied to simulate the phenotype associated with the so-obtained specific networks, and have proven particularly successful in simulating the effect of gene deletions and in identifying reporter metabolites. However, when it comes to unraveling the design principles of cancer metabolic reprogramming (or rewiring), some major drawbacks are encountered. On the one hand, standard FBA is limited to the identification of a single flux pattern that maximizes a certain objective, thus shying away from diversity. On the other hand, when exploring the entire space of possible flux patterns of these comprehensive and highly redundant networks, variability blows up. Moreover, in these data-driven models the metabolic phenotype is imposed on the network, whereas to investigate the rationale behind the rewiring performed by cancer cells, we should observe the properties emerging from the network. To better investigate the design principles of cancer metabolism, and as a complement to FBA of tissue-specific models, we propose to:

  1. reduce network diversity to its essentials, by focusing on what is important and dominant, and by limiting the substrates available to the network;
  2. tackle the diversity of the emergent behaviors of the reduced generic network in its entirety.

To prove the viability of the first goal, we extracted from the Human Metabolic Atlas core models that focus on the central carbon metabolism of different kinds of tumors: breast, liver and lung [1]. We showed that the reduced models exhibit some common and relevant metabolic traits of cancer cells: 1) down-regulation of the respiratory chain; 2) enhanced glycolytic flux and lactate production; 3) stimulated utilization of glutamine via reductive carboxylation. Remarkably, the core models are still able to account for some heterogeneity between the tumors: deregulation of the respiratory chain occurs to a lesser extent in the lung cancer model, in line with experimental findings.

To prove the viability of the second goal, we conceived an FBA-based approach that relaxes the assumption of optimal growth while exploring ensembles of different metabolic phenotypes [2]. As a proof of principle, we applied the method to a generic model of yeast metabolism and were able to identify two ensembles of phenotypes: one consistent with Crabtree-positive yeast strains, the other with Crabtree-negative ones.

The approach is promising and will be applied to a generic core model of human metabolism to investigate the differences between the ensemble of flux distributions that support sustained (but not necessarily optimal) growth rates under given environmental constraints and the flux distributions carrying negligible growth rates.

References

  1. Di Filippo M, Colombo R, Damiani C, Pescini D, Gaglio D, Vanoni M, Alberghina L & Mauri G. Zooming-in on cancer metabolic rewiring with tissue specific constraint-based models. Comput Biol Chem 62:60-69 (2016). DOI: 10.1016/j.compbiolchem.2016.03.002  
  2. Damiani C, Pescini D, Colombo R, Molinari S, Alberghina L, Vanoni M & Mauri G. An ensemble evolutionary constraint-based approach to understand the emergence of metabolic phenotypes. Nat Comput 13, 3:321-331 (2014). DOI: 10.1007/s11047-014-9439-4 
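The drawback described above (standard FBA returns a single optimal flux pattern, while the feasible space holds many alternative phenotypes) can be sketched on a toy branched network with invented reactions and bounds, solved here by construction rather than with an LP solver:

```python
import random

random.seed(0)

# Toy network (illustrative): uptake flux v1 splits into two branches, v2 and v3.
# Steady state requires v1 = v2 + v3, with 0 <= v1 <= UB.
UB = 10.0

# Standard FBA with objective "maximize v2" picks a single flux pattern:
fba_optimum = (UB, UB, 0.0)   # (v1, v2, v3) satisfies v1 = v2 + v3

# Ensemble view: sample many feasible (not necessarily optimal) flux patterns.
ensemble = []
for _ in range(1000):
    v1 = random.uniform(0, UB)
    v2 = random.uniform(0, v1)
    v3 = v1 - v2              # mass balance closes each sampled pattern
    ensemble.append((v1, v2, v3))

# Every sampled pattern satisfies the steady-state constraint.
assert all(abs(v1 - v2 - v3) < 1e-9 for v1, v2, v3 in ensemble)
```

The single FBA optimum is one point; the ensemble exposes the diversity of feasible phenotypes that an optimality assumption would hide.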
11:10-11:35 Mighty morphing metabolic models: leveraging manual curations for automatic metabolic reconstruction of clades
Evangelos Simeonidis, Matthew A. Richards, Brendan King & Nathan D. Price
Institute for Systems Biology, Seattle, WA, USA
Genome-scale metabolic models have been employed with great success for phenotypic studies of organisms over the last two decades. The most difficult step in the reconstruction of these metabolic models is the manual curation: a time-intensive and laborious process of literature review that is nevertheless essential for a high-quality network. Although various automated reconstruction methods have been developed to accelerate the reconstruction process, much effort must still be expended to supplement an automatically generated draft model with manually curated information from the published literature to achieve the quality needed for successful simulation of the metabolic processes of the organism. Here, we utilize a tool for likelihood-based gene annotation previously developed in our lab [1] to create a method that “morphs” a manually curated metabolic model into a draft model of a closely related organism. Our method combines genes from the original, manually curated model with genes from an annotation database to create a final structure that contains gene-associated reactions from both sources. The benefits of such an approach are twofold: on the one hand, the effort and accumulated knowledge that went into the construction of the original model are leveraged to create a metabolic model for a closely related organism. On the other hand, starting from an already completed and functioning model allows the user to run simulations at every step as necessary, offering the ability to predict how modifications will affect the performance of the model. Using our manually curated model of Methanococcus maripaludis, iMR540 [2], as the starting point, we employ our method to create morphed models of three related methanogenic archaea. We demonstrate that our method successfully pulls literature information from the iMR540 model to create drafts for the other organisms, thereby decreasing the time needed to collect this information.
We also show that gene annotations from iMR540 exhibit very low intersection with those from the annotation database, demonstrating the volume of information added by leveraging the manual curation in the original model. Our morphing method offers a viable alternative to other automated reconstruction methods, particularly for organisms that are evolutionarily dissimilar to those that form the foundation of annotation databases. Moreover, it provides a way to quickly reconstruct a clade of metabolic models for related organisms from one manually curated representative.

References

  1. Benedict MN, Mundy MB, Henry CS, Chia N & Price ND. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models. PLoS Comput Biol 10, 10:e1003882 (2014). DOI: 10.1371/journal.pcbi.1003882
  2. Richards MA et al. (In preparation).
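The morphing step can be caricatured as set operations over gene-associated reactions: keep curated reactions whose genes have homologs in the target organism, then merge in database-supported reactions. All gene and reaction names below are invented placeholders, not content of iMR540:

```python
# Hypothetical reaction -> gene-set maps for a curated source model and an
# annotation database; every identifier here is an invented placeholder.
curated = {"rxn_hdr": {"gene_A"}, "rxn_mtr": {"gene_B"}, "rxn_ack": {"gene_C"}}
annotated = {"rxn_mtr": {"gene_B2"}, "rxn_pta": {"gene_D"}}

# Pretend gene_A and gene_B have homologs in the target organism.
homologs = {"gene_A", "gene_B"}

# Keep curated reactions supported by homologous genes...
draft = {r: set(g) for r, g in curated.items() if g & homologs}

# ...then add gene-associated reactions from the annotation database.
for r, g in annotated.items():
    draft.setdefault(r, set()).update(g)

print(sorted(draft))
```

The resulting draft carries curated knowledge (rxn_hdr, rxn_mtr) alongside database evidence (rxn_pta), while dropping curated reactions without target-organism support (rxn_ack).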
11:35-12:00 Recent software and services to support the SBML community
Michael Hucka1, Frank T. Bergmann1,2, Andreas Dräger3,4, Harold F. Gómez5, Sarah M. Keating1,6, Nicolas Rodriguez7 & Lucian P. Smith8
1Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
2BioQUANT/COS, University of Heidelberg, Heidelberg, Germany
3Center for Bioinformatics Tuebingen, University of Tuebingen, Tübingen, Germany
4Systems Biology Research Group (SBRG), University of California, San Diego, La Jolla, CA, USA
5Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
6European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, Cambridge, UK
7The Babraham Institute, Babraham Campus, Cambridge, UK
8Department of Genome Sciences, University of Washington, Seattle, WA, USA
Biologists today have at their disposal a wide range of software tools for computational modeling. The wealth of resources presents great opportunities for better research, but also brings interoperability problems: different tools are implemented in different programming languages, express models using different mathematical frameworks, provide different analysis methods, and support different data formats. SBML (the Systems Biology Markup Language) is an open format for representing models; it supports the ability to exchange and publish models in a tool-neutral way. The use of SBML together with other emerging community standards enhances interoperability and reproducibility in computational systems biology. Over the past sixteen years, SBML has empowered model exchange in over 260 software systems, thousands of models, and countless research efforts.

Several communities have also used SBML as a starting point for coalescing efforts to improve interoperability in their respective areas. For example, the CoLoMoTo community was formed to promote standards and interoperability in qualitative modeling and created the Qualitative Models extension for SBML. Similarly, the constraint-based modeling community collectively defined a Flux Balance Constraints extension to redress prominent failures of interoperability. Such efforts are possible thanks to SBML Level 3's modular architecture and open community process.

In this presentation, we describe three recent software systems developed by the SBML Team to support the SBML-using community.
These are (1) an updated Online SBML Validator that validates SBML files containing extensions (known as SBML packages) such as Qualitative Models and Flux Balance Constraints; (2) Deviser, the Design Explorer and Viewer for Iterative SBML Enhancement of Representations, a system that can generate software API library code given a description of an SBML package; and (3) MOCCASIN, the Model ODE Converter for Creating Automated SBML INteroperability, a system that can take certain basic forms of ODE simulation models written in MATLAB and translate them into SBML format.
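As a rough illustration of what an SBML document contains, the sketch below assembles a schematic SBML Level 3 skeleton (simplified, not schema-complete; real files must follow the specification) using Python's standard library:

```python
import xml.etree.ElementTree as ET

# Schematic SBML Level 3 skeleton: a model with one species and one reaction.
# Required attributes are simplified for illustration.
SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"
sbml = ET.Element("sbml", {"xmlns": SBML_NS, "level": "3", "version": "1"})
model = ET.SubElement(sbml, "model", {"id": "toy_model"})

species_list = ET.SubElement(model, "listOfSpecies")
ET.SubElement(species_list, "species", {"id": "S1", "compartment": "c"})

reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", {"id": "R1", "reversible": "false"})
ET.SubElement(ET.SubElement(rxn, "listOfReactants"), "speciesReference",
              {"species": "S1", "stoichiometry": "1"})

doc = ET.tostring(sbml, encoding="unicode")
```

Extensions such as Qualitative Models or Flux Balance Constraints add further namespaced elements to this same tree structure, which is what the updated Online SBML Validator checks.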
12:00-12:50 Towards kinetic modeling of genome-scale metabolic networks without sacrificing stoichiometric, thermodynamic and physiological constraints
Vassily Hatzimanikatis
École polytechnique fédérale de Lausanne, Switzerland
12:50-13:40 Lunch and poster session
New Standard Resources for Systems Biology: BiGG Models Database and Visual Pathway Editing with Escher
Andreas Dräger1,2, Zachary A. King2, Justin S. Lu2, Ali Ebrahim2, Nikolaus Sonnenschein3, Philip C. Miller2, Joshua A. Lerman4, Bernhard O. Palsson5,6,7 & Nathan E. Lewis7
1Center for Bioinformatics Tuebingen (ZBIT), University of Tuebingen, Tübingen, Germany
2Bioengineering, University of California, San Diego, La Jolla, CA, USA
3Technical University of Denmark, Novo Nordisk Foundation Center for Biosustainability, Hørsholm, Denmark
4Total New Energies USA, Inc., Amyris, Inc., Emeryville, CA, USA,
5Department of Bioengineering, University of California, San Diego, La Jolla, CA, USA
6Novo Nordisk Foundation Center for Biosustainability, Technical University of Denmark, Lyngby, Denmark
7Department of Pediatrics, University of California, San Diego, La Jolla, CA, USA
Background. Genome-scale metabolic network reconstructions enable the simulation and analysis of complex biological networks, thus providing insights into how thousands of genes together influence cell phenotypes. Accuracy in systems biology research requires standards in model construction, a variety of specific software tools, and access to high-quality metabolic networks.

Results. To meet these needs, we present the BiGG Models database and a collection of software solutions for model building, curation, visualization, and simulation. BiGG Models currently contains more than 75 high-quality, manually curated genome-scale metabolic network reconstructions, which can be easily searched and browsed and include interactive pathway map visualizations. These visualizations were generated with the web-based Escher pathway builder. Escher allows users to draw pathways in a semi-automated way and can visualize data related to genes or proteins that are associated with pathways. An export function facilitates storing Escher maps in the community formats SBML and SBGN-ML. These features make Escher an ideal interactive model development tool. In order to make all models in BiGG MIRIAM-compliant, BiGG Models itself has become part of the MIRIAM registry and provides links to a plethora of external databases for each model component. This rich annotation enables rapid comparison across models. New Systems Biology Ontology terms have been defined to better highlight the roles of model components. A comprehensive web API for programmatically accessing the database content enables interfacing with diverse modeling and analysis tools.

Conclusions. With these features and tools, BiGG Models provides a valuable database, structured for easy access and to help improve the quality, standardization, and accessibility of all genome-scale models.
The development of this resource has also boosted community standards for constraint-based modeling.

Availability: http://bigg.ucsd.edu, https://escher.github.io
Automated assembly of rule-based models from natural language, literature and databases
Benjamin M. Gyori, John A. Bachman, Kartik Subramanian, Jeremy Muhlich & Peter K. Sorger
Harvard Medical School
We present INDRA, a novel framework for building mathematical models of biochemical mechanisms. INDRA is integrated with natural language parsers (DRUM [1] and REACH [2]) and with pathway databases (Pathway Commons [3] and NDEx [4]). INDRA allows the user to define models using natural language descriptions of molecular mechanisms (e.g., “GRB2 binds EGFR that is phosphorylated on a tyrosine residue.”). It can also extract mechanisms from the literature and databases. INDRA aggregates mechanistic information in an intermediate knowledge representation called INDRA statements. It then automatically assembles a rule-based dynamical model from these statements. Automated model assembly involves synthesizing a set of molecular agents and their interaction rules from the collected mechanisms using assembly policies and biochemical rule templates.

INDRA produces models in the PySB programmatic rule-based modeling language [5], from which both BioNetGen and Kappa models can be obtained. This workflow supports rapid and extensible model building in which the user can focus on defining the content of the model rather than its implementation. Grounding (i.e., database IDs for proteins) and provenance (the source text or database entry from which a mechanism was extracted) are also maintained by INDRA and propagated into the final model as annotations. We demonstrate the capabilities of INDRA with a model automatically assembled from a natural language description of growth factor and MAP kinase signaling. The INDRA-assembled model is able to explain, through simulation experiments, early resistance mechanisms to the cancer drug vemurafenib in cancers driven by the BRAF-V600E mutation.

Availability: https://github.com/sorgerlab/indra

References

  1. Allen J, de Beaumont W, Galescu L & Teng CM. Complex Event Extraction using DRUM. Proc 53rd Annu Meet Assoc Comput Linguist 7th Int Jt Conf Nat Lang Process 1:1–11 (2015). URL: http://aclweb.org/anthology/W/W15/W15-3801.pdf
  2. Valenzuela-Escarcega MA, Hahn-Powell G, Hicks T & Surdeanu M. A domain-independent rule-based framework for event extraction. Proc 53rd Annu Meet Assoc Comput Linguist 7th Int Jt Conf Nat Lang Process 1:127–132 (2015). URL: http://surdeanu.info/mihai/papers/acl2015.pdf
  3. Cerami EG, Gross BE, Demir E, Rodchenkov I, Babur Ö, Anwar N, Schultz N, Bader GD & Sander C. Pathway Commons, a web resource for biological pathway data. Nucleic Acids Res 39, Database issue:D685-D690 (2011). DOI: 10.1093/nar/gkq1039 
  4. Pratt D, Chen J, Welker D, Rivas R, Pillich R, Rynkov V, Ono K, Miello C, Hicks L, Szalma S, Stojmirovic A, Dobrin R, Braxenthaler M, Kuentzer J, Demchak B & Ideker T. NDEx, the Network Data Exchange. Cell Syst 1, 4:302-305 (2015). DOI: 10.1016/j.cels.2015.10.001 
  5. Lopez CF, Muhlich JL, Bachman JA & Sorger PK. Programming biological models in Python using PySB. Mol Syst Biol 9:646 (2013). DOI: 10.1038/msb.2013.1
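The template-based assembly idea can be sketched as follows; the statement format and rule templates below are invented stand-ins for illustration, not INDRA's actual API:

```python
# Toy illustration of template-based rule assembly: an intermediate
# "statement" is expanded into a BioNetGen-style rule string.
# Statement kinds, site names, and the second statement are hypothetical.

statements = [
    ("Complex", "GRB2", "EGFR"),         # GRB2 binds EGFR
    ("Phosphorylation", "EGFR", "GRB2"), # hypothetical modification statement
]

def assemble(stmt):
    """Expand one statement into a rule via a biochemical rule template."""
    kind, a, b = stmt
    if kind == "Complex":
        # Binding template: two free sites associate into a bond (!1).
        return f"{a}(b) + {b}(b) <-> {a}(b!1).{b}(b!1)"
    if kind == "Phosphorylation":
        # Modification template: substrate site flips from unmodified to modified.
        return f"{a}() + {b}(p~u) -> {a}() + {b}(p~p)"
    raise ValueError(f"no template for {kind}")

rules = [assemble(s) for s in statements]
for r in rules:
    print(r)
```

One statement kind maps to one rule template, so a single rule can stand in for the many concrete reactions it generates during simulation.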
Investigating the role of the Hippo pathway, mediated by Taz/Yap, in the TGFβ/Smad signalling pathway through mathematical modelling
Batool Labibi1, Liliana Attisano1, Masahiro Narimatsu2 & Jeff Wrana2
1Donnelly Centre and Dept. of Biochemistry, University of Toronto, Toronto, ON, Canada
2Lunenfeld-Tanenbaum Research Institute, Mt. Sinai Hospital, Toronto, ON, Canada
Transforming Growth Factor-beta (TGFβ) superfamily members control a myriad of cellular activities and are noted for their function as morphogens, in which gradients of ligand control the magnitude and timing of target gene activation that ultimately establishes cell fate. TGFβ pathway activation induces the nuclear accumulation of Smad proteins, which associate with DNA-binding transcription factors (TFs) to regulate gene expression. Proper development requires that cells also integrate cues from other pathways, such as Hippo, which signals through two related proteins, Taz and Yap, to regulate tissue growth and organ size. Hippo and TGFβ pathways are intimately interconnected. Taz/Yap interact with Smads, and when Hippo is active, cytoplasmically localized Taz/Yap bind Smads and inhibit their nuclear accumulation. Our recent genome-wide analysis of gene expression by RNAseq in EPH4 epithelial cells showed that Hippo pathway perturbation yielded differential effects, from the expected decrease, to no effect or, remarkably, to an increased TGFβ gene response. Follow-up by qPCR confirmed the fidelity of the RNAseq results. The results provide strong evidence that Hippo/TGFβ pathway crosstalk could be a mechanism to shape gene expression patterns. The ability of DNA-binding TFs to bind and recruit Smads determines transcriptional outcome, and as Taz/Yap form a complex with Smads on promoters, the results also suggest the intriguing possibility that Taz/Yap might alter the affinity of Smads for their transcription partners and thereby modulate the magnitude and/or timing of gene expression patterns. Our RNAseq data indicate that TGFβ target genes fall into two classes: those whose transcriptional activity depends on Taz/Yap and those whose activity does not. However, Yap/Taz not only function within the DNA-bound transcriptional complex, but also promote Smad nuclear accumulation.
Thus, it is experimentally impossible to determine whether changes in gene expression upon Hippo pathway perturbation arise from functions at the gene regulatory elements or from effects on Smad nuclear accumulation. To overcome this problem, we employed a mathematical model that considers whether Taz/Yap alter the affinity of the Smad complex for its DNA-binding transcription factor (TF) partner. Experiments to identify model parameters are currently underway. Specifically, we are using high-throughput automated image-based analysis to monitor endogenous Smad and Taz/Yap nuclear accumulation as a function of dose and time under conditions of Hippo pathway perturbation. In parallel, dose- and time-dependent TGFβ-regulated gene expression patterns are being assessed by qPCR. We expect that this work will establish a novel approach to modeling biological systems and will uncover new insights into how TGFβ/Hippo crosstalk enables key gene expression patterns in animal development.
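One way to sketch the affinity hypothesis is a simple equilibrium-binding caricature (not the authors' model; all concentrations and dissociation constants below are invented): if Taz/Yap changes the effective Kd of the Smad complex for its TF partner, equilibrium promoter occupancy shifts accordingly.

```python
# Simple-binding caricature of the hypothesis that Taz/Yap modulates the
# affinity (Kd) of the Smad complex for its DNA-binding TF partner.

def bound_fraction(smad, kd):
    """Equilibrium fraction of TF sites occupied by Smad (single-site binding)."""
    return smad / (kd + smad)

smad = 50.0             # nM, hypothetical nuclear Smad concentration
kd_with_tazyap = 20.0   # nM, tighter effective binding with Taz/Yap present
kd_without = 200.0      # nM, weaker binding after Hippo pathway perturbation

f_with = bound_fraction(smad, kd_with_tazyap)
f_without = bound_fraction(smad, kd_without)
print(round(f_with, 3), round(f_without, 3))
```

Fitting such affinity parameters against dose- and time-resolved imaging and qPCR data is exactly the kind of parameter-identification task the abstract describes.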
High-performance computing in Systems Biology: accelerating the simulation and analysis of large and complex biological systems
Giancarlo Mauri1,2, Marco S. Nobile2,3, Paolo Cazzaniga1,2 & Daniela Besozzi1,2
1Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
2SYSBIO.IT Centre of Systems Biology, Milan, Italy
3Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
Mathematical modeling and simulation algorithms nowadays provide solid grounds for quantitative investigations of biological systems, in synergy with traditional experimental research. Despite the possibility offered by various computational methods to achieve an in-depth understanding of cell functioning, typical tasks in model definition, calibration, and analysis (e.g., reverse engineering, parameter estimation, sensitivity analysis) remain computationally challenging. Indeed, these problems require the execution of a large number of simulations of the same model, each one corresponding to a different physiological or perturbed system condition. In addition, in the case of large-scale systems, characterized by hundreds or thousands of species and reactions, even a single simulation can be unfeasible if executed on conventional computing architectures such as central processing units (CPUs).

To overcome these drawbacks, parallel infrastructures can be used to strongly reduce the prohibitive running times of computational methods in systems biology, by distributing the workload over multiple independent computing units. In particular, general-purpose graphics processing units (GPUs) are gaining increasing attention from the scientific community, since they are pervasive, cheap, and extremely efficient parallel multi-core co-processors, giving access to low-cost, energy-efficient means of achieving tera-scale performance on common workstations.

In this talk, we present the GPU-powered simulators that we designed and implemented to accelerate the simulation and analysis of reaction-based models of biological systems (cuTauLeaping [1], cupSODA [2], LASSIE [3]), which rely either on numerical integration methods or on stochastic simulation algorithms.
In particular, we present both coarse-grained and fine-grained methods, which respectively execute a large number of parallel simulations of the same model and parallelize all calculations required by a single simulation run. These methods were developed to carry out either stochastic or deterministic simulations of both small and large-scale models, providing a relevant reduction in running time, up to two orders of magnitude with respect to a classic CPU-bound execution. Three models of increasing complexity are used as test cases to show the speed-up obtained by GPU-based tools: the Michaelis-Menten (MM) enzymatic kinetics, a model of gene expression in prokaryotic organisms (PGN), and the Ras/cAMP/PKA signaling pathway in the yeast S. cerevisiae.

References

  1. Nobile MS, Cazzaniga P, Besozzi D, Pescini D & Mauri G. cuTauLeaping: a GPU-powered tau-leaping stochastic simulator for massive parallel analyses of biological systems. PLoS One 9, 3:e91963 (2014). DOI: 10.1371/journal.pone.0091963 
  2. Nobile MS, Cazzaniga P, Besozzi D & Mauri G. GPU-accelerated simulations of mass-action kinetics models with cupSODA. J Supercomput 69, 1:17-24 (2014). DOI: 10.1007/s11227-014-1208-8 
  3. Tangherloni A, Nobile MS, Cazzaniga P, Besozzi D & Mauri G. LASSIE: A GPU-based deterministic simulator for large-scale models. (In preparation).
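The coarse-grained strategy can be shown in miniature with independent Gillespie SSA replicates of a Michaelis-Menten system (invented rate constants and copy numbers); on a GPU each replicate would occupy its own core, whereas this sketch runs them sequentially:

```python
import random

def ssa_mm(seed, t_end=5.0):
    """One Gillespie SSA run of E + S <-> ES -> E + P; returns product count."""
    rng = random.Random(seed)
    E, S, ES, P = 10, 100, 0, 0
    c1, c2, c3 = 0.01, 0.1, 0.1   # illustrative stochastic rate constants
    t = 0.0
    while t < t_end:
        a = [c1 * E * S, c2 * ES, c3 * ES]   # reaction propensities
        a0 = sum(a)
        if a0 == 0:
            break                            # no reaction can fire
        t += rng.expovariate(a0)             # time to next reaction
        r = rng.random() * a0                # choose which reaction fires
        if r < a[0]:
            E, S, ES = E - 1, S - 1, ES + 1  # binding
        elif r < a[0] + a[1]:
            E, S, ES = E + 1, S + 1, ES - 1  # unbinding
        else:
            E, ES, P = E + 1, ES - 1, P + 1  # catalysis
    return P

# Coarse-grained parallelism in miniature: independent replicates, one per seed.
products = [ssa_mm(seed) for seed in range(20)]
```

Each replicate is fully independent, which is why this workload maps so naturally onto thousands of GPU threads; the fine-grained methods instead parallelize the per-step arithmetic inside one large simulation.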
Information evaluation of chemical structures by modeling arithmetic graphs
Armen Vasil Petrosyan
European Regional Academy in Armenia
Scientific and technological progress is greatly conditioned by joint investigations applying information technologies, system analysis, and automated control systems, which are considered priority trends in the development of science and engineering. The sets N = {n1, n2, ..., nS} and M = {m1, m2, ..., mK} are given. G(N, M) is called an arithmetic graph if N is the set of graph nodes and (ni, nj) is an edge if ni + nj ∈ M, where M is the deriving set and ni + nj = m is the edge weight. A polyhedron whose structure is represented as an arithmetic graph is called an arithmetic polyhedron.

Results. The article investigates the correlative properties of different discrete objects, including arithmetic graphs, polyhedrons, chemical graphs, information amounts, and the properties of other structures. The main difficulty of this problem is the efficient separation of the elements of a discrete structure into equivalence classes. One such classification method is to encode the discrete object through the properties of an arithmetic graph; the theory of arithmetic graphs is taken as the basis for the encoding. Encoded polyhedral arithmetic graphs with their equivalence classes are introduced in the given examples, for which an information amount is calculated by Shannon's classical formula. The results of the research reveal the correlation between matter, energy, and information. Initially, the proposed method was applied to polyhedral arithmetic graphs; for those discrete objects, the correlation coefficient was calculated between the side surfaces, the total length of the edges, and the information amounts. The method has also been used in designing processes for the mathematical modeling of circuits. After obtaining positive results, the method was applied in the field of chemistry.
Chemical substances and their structures were represented as polyhedral arithmetic graphs; incidentally, those structures are encoded by nature itself. The nodes of the chemical graph are the atoms, while the interatomic covalent bonds are the edges. The method has allowed us to calculate information amounts, which in turn has made it possible to calculate correlation coefficients against the results of known chemical experiments. Some examples are as follows:

  1. chemical structures vs. the solubility of hydrocarbon alcohols in water;
  2. chemical structures vs. the melting and boiling temperatures of gases.
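The construction described above can be sketched directly: build the arithmetic graph from N and a deriving set M, group edges into equivalence classes by weight, and apply Shannon's formula. The example sets below are chosen for illustration only:

```python
import math
from itertools import combinations

# Arithmetic graph G(N, M): nodes N, and an edge (ni, nj) whenever
# ni + nj is in the deriving set M, weighted by that sum.
N = [1, 2, 3, 4, 5]
M = {5, 7}

edges = [(i, j, i + j) for i, j in combinations(N, 2) if i + j in M]

# Equivalence classes of edges by weight, then Shannon information
# H = -sum(p * log2(p)) over the class frequencies.
counts = {}
for _, _, w in edges:
    counts[w] = counts.get(w, 0) + 1
total = sum(counts.values())
H = -sum((c / total) * math.log2(c / total) for c in counts.values())
```

For these sets the edges are (1,4) and (2,3) with weight 5, and (2,5) and (3,4) with weight 7; two equally sized classes give H = 1 bit.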
Mighty morphing metabolic models: leveraging manual curations for automatic metabolic reconstruction of clades
Evangelos Simeonidis Matthew A. Richards , Brendan King & Nathan D. Price
Institute for Systems Biology, Seattle, WA, USA
Genome-scale metabolic models have been employed with great success for phenotypic studies of organisms over the last two decades. The most difficult step in the reconstruction of these metabolic models is the manual curation; a time-intensive and laborious process of literature review that is nevertheless essential for a high-quality network. Although various automated reconstruction methods have been developed to accelerate the reconstruction process, much effort must still be expended to supplement an automatically-generated draft model with manually curated information from the published literature to achieve the quality needed for successful simulation of the metabolic processes of the organism. Here, we are utilizing a tool for likelihood-based gene annotation previously developed in our lab1 to create a method that “morphs” a manually curated metabolic model to a draft model of a closely related organism. Our method combines genes from the original, manually curated model with genes from an annotation database to create a final structure that contains gene-associated reactions from both sources. The benefits of such an approach are twofold: on one hand, the effort and accumulated knowledge that has gone into the construction of the original model is leveraged to create a metabolic model for a closely related organism. On the other hand, starting from an already completed and functioning model allows the user to run simulations at every step as necessary, offering the ability to predict how modifications will affect the performance of the model. Using our manually curated model of Methanococcus maripaludis, iMR5402, as the starting point, we employ our method to create morphed models of three related methanogenic archaea. We demonstrate that our method successfully pulls literature information from the iMR540 model to create drafts for the other organisms, thereby decreasing the time needed to collect this information. 
We also show that gene annotations from iMR540 overlap very little with those from the annotation database, demonstrating the volume of information added by leveraging the manual curation in the original model. Our morphing method offers a viable alternative to other automated reconstruction methods, particularly for organisms that are evolutionarily dissimilar to those that form the foundation of annotation databases. Moreover, it provides a way to quickly reconstruct a clade of metabolic models for related organisms from one manually curated representative.

References

  1. Benedict MN, Mundy MB, Henry CS, Chia N & Price ND. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models. PLoS Comput Biol 10, 10:e1003882 (2014). DOI: 10.1371/journal.pcbi.1003882 
  2. Richards MA et al. (In preparation).
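To make the core morphing step concrete, here is a minimal Python sketch of the idea of merging gene-associated reactions from a curated model with those from an annotation database. The data structures, reaction and gene identifiers, and the ortholog mapping are invented for illustration; this is not the authors' implementation.

```python
def morph_model(curated, annotation_db, orthologs):
    """curated / annotation_db: dicts mapping reaction id -> set of gene ids.
    orthologs: dict mapping a curated-model gene to its ortholog in the target organism."""
    draft = {}
    # Carry over curated reactions whose genes have orthologs in the target organism.
    for rxn, genes in curated.items():
        mapped = {orthologs[g] for g in genes if g in orthologs}
        if mapped:
            draft[rxn] = mapped
    # Supplement with reactions proposed by the annotation database.
    for rxn, genes in annotation_db.items():
        draft.setdefault(rxn, set()).update(genes)
    return draft

# Toy data: all identifiers below are invented for illustration.
curated = {"R_ack": {"MMP0253"}, "R_pta": {"MMP1123"}}
annotation_db = {"R_pta": {"T_77"}, "R_hdr": {"T_12"}}
orthologs = {"MMP0253": "T_41"}

draft = morph_model(curated, annotation_db, orthologs)
# R_ack enters the draft through manual curation plus orthology, R_hdr through
# the database alone, and R_pta through the database (MMP1123 has no ortholog).
```

The draft thus inherits literature-backed reactions from the curated model wherever orthology supports them, while the database fills the remaining gaps.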
Fitting rule-based models to immunoprecipitation data
Anatoly Sorokin1, J Douglas Armstrong2 & Oksana Sorokina2
1Institute of Cell Biophysics RAS
2University of Edinburgh
Immunoprecipitation (IP) methods are very common in biology and provide roughly half of the existing proteomic data. The qualitative and quantitative composition of the resulting complexes depends on the structure and properties of the respective baits as well as their direct and indirect interactors (preys). Protein concentrations, affinities and domain-domain interactions are likely the key parameters that affect complex composition, stability and abundance. Thus, IP data provide ideal constraints for a rule-based model. A rule-based model specifies domain-domain interactions via rules, so one rule can describe a variety of reactions in the system. To build such a model, one needs to take into account the protein domain structure and the specificity of the domain-domain binding properties, including binding affinities and posttranslational modifications. The model should also consider the relative concentrations of the proteins in the reaction mixture. Stochastic simulation of the model predicts the distribution of possible protein complexes of different sizes and compositions. We have developed a framework for fitting protein binding affinities based on IP data. AMPA-type glutamate receptors (AMPARs) are responsible for a variety of processes in the mammalian brain, including fast excitatory neurotransmission, postsynaptic plasticity and synapse development. Recently, it was demonstrated that native AMPARs are macromolecular complexes with a large molecular diversity, which results from assembly of the known AMPAR subunits with various types of auxiliary proteins1. The AMPAR dataset is relatively concise and well suited for the development of a rule-based model of large-complex assembly based on the known domain structures of the proteins involved.

References

  1. Schwenk J, Harmel N, Brechet A, Zolles G, Berkefeld H, Müller CS, Bildl W, Baehrens D, Hüber B, Kulik A, Klöcker N, Schulte U & Fakler B. High-resolution proteomics unravel architecture and molecular diversity of native AMPA receptor complexes. Neuron 74, 4:621-633 (2012). DOI: 10.1016/j.neuron.2012.03.034 
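The compactness of the rule-based formalism described above can be illustrated with a minimal Python sketch: one domain-level binding rule covers every protein pair carrying the matching domains. The protein and domain names below are invented stand-ins, not the authors' actual model of AMPAR assembly.

```python
import itertools

# Toy illustration (not the authors' framework): proteins are sets of domains,
# and each rule names a pair of domain types that can bind.
proteins = {
    "GluA1": {"TMD", "CTD"},
    "GluA2": {"TMD", "CTD"},
    "TARP":  {"TMD_binder"},      # stand-in for an auxiliary subunit
    "PDZ":   {"CTD_binder"},      # stand-in for a scaffold protein
}

# Two rules: a TMD binds a TMD_binder, a CTD binds a CTD_binder.
rules = [("TMD", "TMD_binder"), ("CTD", "CTD_binder")]

def possible_bonds(proteins, rules):
    """Enumerate every protein pair that some rule allows to bind."""
    bonds = set()
    for a, b in itertools.combinations(proteins, 2):
        for d1, d2 in rules:
            if (d1 in proteins[a] and d2 in proteins[b]) or \
               (d1 in proteins[b] and d2 in proteins[a]):
                bonds.add(frozenset((a, b)))
    return bonds

bonds = possible_bonds(proteins, rules)
# Two rules already generate four distinct binding reactions; with realistic
# numbers of subunits and auxiliary proteins the reaction count grows
# combinatorially while the rule count stays small.
```

This is exactly the property the abstract exploits: parameters are fitted per rule, not per individual reaction or complex.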
A genome-wide clustering method based on dynamic patterns of transcript levels and transcription rates
Wonsuk Yoo1 , W. Art Chaovalitwongse2, Jose E. Perez-Ortin3 & Sungchul Ji4
1Augusta University
2University of Washington
3University of Valencia
4Rutgers University
Complementary DNA (cDNA) microarray technology has rapidly emerged as a method to analyze the co-expression of thousands of genes. Current microarray analyses, however, have focused on steady-state mRNA levels, whereas transcript levels (TL) depend on both transcription rates (TR) and transcript degradation rates (TD). Here, we demonstrate that an analysis based on both TR and TL is more informative than one based on mRNA levels alone. We therefore examined all budding yeast genes through dynamic sequences of joint increases and decreases of TL and TR. We show that the dynamic sequence patterns of known budding yeast genes following a glucose-galactose shift can be used to infer the biological processes of unknown genes, and our approach can be applied to gene expression in a variety of organisms, cell lines and animal tissues. We developed a mechanism-based clustering method that uses the dynamic patterns of both TR and TD for genome-wide RNA levels to identify the trajectory of each gene and to categorize genes with similar trajectories into the same group. Eight modules were constructed based on changes in mRNA level, considering the rates of TR and TD (defined similarly to Ji, 2004) and the angles of the dynamic direction between time intervals; this indicated that almost 50% of the genes might be correctly explained by current cDNA microarray analysis methods. We constructed a genome-wide clustering analysis using the dynamic trajectory patterns of genes whose biological processes are known; finally, the dynamic sequences of unknown genes are examined to predict the biological process, or series of biological processes, of each unknown gene.
Under the assumption that the dynamic sequences represent biological processes, we may be able to predict the biological processes of unknown genes holding a specific sequence using information from known genes holding the same sequence. Given transcript levels and transcription rates at the genome-wide level, we can express the dynamic sequence of a gene's movements over five intervals as a five-digit number; the method therefore groups genes under the assumption that identical dynamic sequences arise from the same biological process. The proposed clustering method is thus used to predict the biological processes of genes by investigating the dynamic changes of TL and TR over time at the genome level. The sequence 6-6-2-2-4 is the most frequent, suggesting that genes sharing it carry out similar biological processes. In conclusion, this approach can be a useful method for finding unknown genes with the same function by categorizing dynamic sequences that represent the dynamic behavior of mRNA levels. In this study, we formulated and examined all budding yeast genes at the genome level through dynamic sequences representing the joint increases and decreases of TL and TR, and showed how these patterns can be used to infer the biological processes of unknown genes from the characteristics of known budding yeast genes. We applied this analysis to budding yeast data following a glucose-galactose shift.
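The sequence-encoding idea can be sketched in a few lines of Python. The digit encoding below is a deliberately simplified stand-in for the eight Ji-style modules (it uses only the signs of the TL and TR changes), and the gene measurements are invented for illustration.

```python
from collections import defaultdict

def digit(d_tl, d_tr):
    # 1: TL and TR up, 2: TL up / TR down, 3: TL down / TR up, 4: both down
    key = (1 if d_tl >= 0 else -1, 1 if d_tr >= 0 else -1)
    return {(1, 1): "1", (1, -1): "2", (-1, 1): "3", (-1, -1): "4"}[key]

def sequence(tl, tr):
    """tl, tr: six measurements each -> five-interval digit string."""
    return "".join(digit(tl[i + 1] - tl[i], tr[i + 1] - tr[i]) for i in range(5))

def cluster(genes):
    """Group genes sharing the same dynamic sequence."""
    groups = defaultdict(list)
    for name, (tl, tr) in genes.items():
        groups[sequence(tl, tr)].append(name)
    return groups

# Invented toy data: (transcript levels, transcription rates) at six time points.
genes = {
    "GAL1":  ([1, 3, 5, 6, 4, 2], [1, 2, 3, 4, 2, 1]),
    "GAL10": ([2, 4, 6, 7, 5, 3], [1, 3, 4, 5, 3, 2]),
    "ACT1":  ([5, 5, 4, 4, 4, 4], [2, 2, 1, 1, 1, 1]),
}
groups = cluster(genes)
# GAL1 and GAL10 share the sequence "11144"; an uncharacterized gene with the
# same sequence would be predicted to act in the same biological process.
```

With a richer eight-module alphabet the same grouping logic applies unchanged; only the `digit` function would differ.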
Evaluation of genetic risk score method for dichotomous outcomes
Wonsuk Yoo1 , Michael Cho2, Steven S. Coughlin3 & Selina A. Smith1
1Augusta University
2Brigham & Women’s Hospital
3Emory University
Substantial uncertainty exists as to whether combining multiple disease-associated single nucleotide polymorphisms (SNPs) into a genetic risk score (GRS) can improve the ability to predict the risk of disease in a clinically relevant way. We calculated the ability of a simple count GRS to predict the risk of a dichotomous outcome under both multiplicative and additive models of combined effects. We then compared the results of these simulations with the observed results of published GRSs measured within multiple epidemiologic cohorts. If the combined effect of each disease-associated SNP included in a GRS is multiplicative on the risk scale, then a count GRS should be useful for risk prediction with as few as 10-20 SNPs, and adding SNPs to the GRS under this model dramatically improves risk prediction. By contrast, if the combined effect of each SNP included in a GRS is linearly additive on the risk scale, a simple count GRS is unlikely to provide clinically useful risk prediction, and adding SNPs does not improve it. The combined effect of the SNPs included in several published GRSs measured in several well-phenotyped epidemiologic cohort studies appears to be more consistent with a linearly additive effect. A simple count GRS is therefore unlikely to be clinically useful for predicting the risk of a dichotomous outcome. Alternative methods for constructing GRSs that attempt to identify and include SNPs with multiplicative gene-gene or gene-environment interaction effects are needed.
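The multiplicative-versus-additive contrast can be reproduced with a rough simulation. The parameter values below (allele frequency, relative risk, baseline risk, cohort size) are illustrative choices, not those of the study; the code simply shows how the case-control separation of a count GRS differs between the two risk models.

```python
import random

random.seed(0)

def simulate(n_snps=20, n_people=20000, maf=0.3, rr=1.15, base=0.01,
             multiplicative=True):
    """Return count-GRS values for simulated cases and controls."""
    cases, controls = [], []
    for _ in range(n_people):
        # Count of risk alleles across n biallelic SNPs (0-2 each).
        count = sum(sum(random.random() < maf for _ in range(2))
                    for _ in range(n_snps))
        if multiplicative:
            risk = base * rr ** count        # per-allele relative risk multiplies
        else:
            risk = base * (1 + (rr - 1) * count)  # per-allele excess risk adds
        (cases if random.random() < min(risk, 1.0) else controls).append(count)
    return cases, controls

def mean(xs):
    return sum(xs) / len(xs)

m_cases, m_ctrl = simulate(multiplicative=True)
a_cases, a_ctrl = simulate(multiplicative=False)
mult_shift = mean(m_cases) - mean(m_ctrl)
add_shift = mean(a_cases) - mean(a_ctrl)
# Under the multiplicative model the case GRS distribution is shifted much
# further from the controls than under the additive model, so the same count
# score discriminates far better.
```

Comparing `mult_shift` with `add_shift` gives an immediate feel for why clinically useful discrimination is plausible under one model and not the other.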
13:40-17:30 Session 2: Pushing the boundaries of modeling
“Classical” dynamical modeling relies on the enumeration of biochemical components and processes, as well as large numbers of quantitative parameters. However, this information is often difficult to obtain. In addition, covalent modifications and assemblies of macromolecules cause a combinatorial explosion of states which is intractable to traditional methods. Several approaches have been designed to overcome these shortcomings and take advantage of the information that is available. This session featured a keynote talk from a researcher who has been instrumental in developing these methods and using them to analyze genome-scale data.
13:40-14:05 High-performance computing in Systems Biology: accelerating the simulation and analysis of large and complex biological systems
Giancarlo Mauri1,2 , Marco S. Nobile2,3, Paolo Cazzaniga1,2 & Daniela Besozzi1,2
1Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
2SYSBIO.IT Centre of Systems Biology, Milan, Italy
3Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
Mathematical modeling and simulation algorithms nowadays provide solid grounds for quantitative investigations of biological systems, in synergy with traditional experimental research. Despite the possibility offered by various computational methods to achieve an in-depth understanding of cell functioning, typical tasks for model definition, calibration and analysis (e.g., reverse engineering, parameter estimation, sensitivity analysis) remain computationally challenging. Indeed, these problems require the execution of a large number of simulations of the same model, each corresponding to a different physiological or perturbed system condition. In addition, in the case of large-scale systems, characterized by hundreds or thousands of species and reactions, even a single simulation can be unfeasible if executed on conventional computing architectures such as Central Processing Units (CPUs). To overcome these drawbacks, parallel infrastructures can be used to strongly reduce the prohibitive running times of computational methods in systems biology by distributing the workload over multiple independent computing units. In particular, General-Purpose Graphics Processing Units (GPUs) are gaining increasing attention in the scientific community, since they are pervasive, cheap and extremely efficient parallel multi-core co-processors, which provide low-cost, energy-efficient means to achieve tera-scale performance on common workstations. In this talk, we present the GPU-powered simulators that we designed and implemented to accelerate the simulation and analysis of reaction-based models of biological systems (cuTauLeaping1, cupSODA2, LASSIE3), which rely either on numerical integration methods or on stochastic simulation algorithms.
In particular, we present both coarse-grain and fine-grain methods, which allow, respectively, the execution of a large number of parallel simulations of the same model and the parallelization of all calculations required by a single simulation run. These methods were developed to carry out either stochastic or deterministic simulations of both small and large-scale models, providing a relevant reduction of the running time, up to two orders of magnitude with respect to a classic CPU-bound execution. Three models of biological systems of increasing complexity are used as test cases to show the speed-up obtained by the GPU-based tools: the Michaelis-Menten (MM) enzymatic kinetics, a model of gene expression in prokaryotic organisms (PGN), and the Ras/cAMP/PKA signaling pathway in the yeast S. cerevisiae.

References

  1. Nobile MS, Cazzaniga P, Besozzi D, Pescini D & Mauri G. cuTauLeaping: a GPU-powered tau-leaping stochastic simulator for massive parallel analyses of biological systems. PLoS One 9, 3:e91963 (2014). DOI: 10.1371/journal.pone.0091963 
  2. Nobile MS, Cazzaniga P, Besozzi D & Mauri G. GPU–accelerated simulations of mass–action kinetics models with cupSODA. J Supercomput 69, 1:17–24 (2014). DOI: 10.1007/s11227-014-1208-8 
  3. Tangherloni A, Nobile MS, Cazzaniga P, Besozzi D & Mauri G. LASSIE: A GPU-based deterministic simulator for large-scale models. (In preparation).
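The coarse-grain idea of running one simulation per parameter set can be sketched with a bare-bones tau-leaping loop over the Michaelis-Menten test case mentioned above. This is a NumPy sketch, not cuTauLeaping itself: on a GPU each batch element would run on its own thread, whereas here the batch dimension is simply vectorised. All rate values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stoichiometry of E + S <-> ES -> E + P; columns are E, S, ES, P.
V = np.array([[-1, -1,  1, 0],   # binding
              [ 1,  1, -1, 0],   # unbinding
              [ 1,  0, -1, 1]])  # catalysis

def tau_leap(x, k, tau=0.01, steps=1000):
    """x: (batch, 4) molecule counts; k: (batch, 3) rate constants."""
    x = x.astype(float).copy()
    for _ in range(steps):
        a = np.stack([k[:, 0] * x[:, 0] * x[:, 1],   # propensities
                      k[:, 1] * x[:, 2],
                      k[:, 2] * x[:, 2]], axis=1)
        x += rng.poisson(a * tau) @ V                # fire Poisson reaction counts
        np.clip(x, 0, None, out=x)                   # forbid negative counts
    return x

batch = 64
x0 = np.tile([100, 1000, 0, 0], (batch, 1))          # E, S, ES, P
k = np.column_stack([np.full(batch, 0.001),          # binding
                     np.full(batch, 0.1),            # unbinding
                     np.linspace(0.1, 1.0, batch)])  # sweep the catalytic rate
final = tau_leap(x0, k)
# Batch elements with faster catalysis end with more product P.
```

The entire parameter sweep costs roughly one simulation's wall time here, which is the essence of the coarse-grain speed-up the talk describes.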
14:05-14:40 The ZBIT systems biology software and web service collection
Andreas Dräger 
Center for Bioinformatics Tuebingen (ZBIT), University of Tuebingen, Tübingen, Germany
Systems Biology Research Group, University of California, San Diego, La Jolla CA, USA
Background. Networks and models in systems biology today have been scaled up to the size of full genomes. They comprise thousands of reactions, metabolites, regulatory events, and many further biochemical components. In order to build, analyze, and explore networks and models on this scale, highly specific software solutions are required.
Results. In this talk, we introduce the systems biology software collection that has been developed and continuously maintained at the Center for Bioinformatics Tuebingen (ZBIT) for more than a decade. All tools were created to solve research questions in ongoing large-scale systems biology projects with numerous national and international collaborators, and together they cover all aspects of the model life cycle. KEGGtranslator and BioPAX2SBML can be used to gather data for building draft networks based on the KEGG PATHWAY database and files in BioPAX format. SBMLsqueezer generates kinetic equations and derives the units of the parameters therein. SBMLsimulator unifies the Simulation Core Library and the heuristic optimization toolbox EvA2 for model simulation and calibration. ModelPolisher accesses the BiGG Models knowledge base for model annotation. SBML2LaTeX documents models by creating human-readable reports.
Conclusion. All tools presented are freely available and can be used either as online programs at http://webservices.cs.uni-tuebingen.de in any common web browser, or as desktop programs downloaded from http://www.cogsys.cs.uni-tuebingen.de/software. These tools have been used in several research projects, including community efforts such as the path2models project, and their development has often been a driving force for further community efforts such as the JSBML project.
14:40-15:05 Systematic multi-scale modeling and analysis for gene regulation
Daifeng Wang  & Mark Gerstein
Computational Biology and Bioinformatics, Yale University
The rapidly increasing quantity of biological data offers novel and diverse resources for studying biological functions at the system level. Integrating and mining these various large-scale datasets is both a central priority and a great challenge for systems biology, and necessitates the development of specialized computational approaches. In this presentation, I will introduce how a multi-scale modeling framework can be used to systematically study gene expression and regulation, including several novel computational systems approaches with applications to cancer and developmental biology: 1) an algorithm to simultaneously cluster multi-layer networks, such as gene co-expression networks across multiple species, which discovered novel human developmental genomic functions and behaviors1,2; 2) a logic-circuit-based method to identify, for the first time, genome-wide cooperative logics among gene regulatory factors and pathways in cancers such as acute myeloid leukemia, providing unprecedented insight into gene regulatory logic in complex biological systems3; and 3) time permitting, an integrated method using a state-space model and dimensionality reduction to identify principal temporal expression patterns driven by internal and external gene regulatory networks, establishing a new analytical platform for identifying systematic and robust dynamic patterns in high-dimensional, complex and noisy biomedical data4.

References

  1. Gerstein MB, Rozowsky J, Yan KK, Wang D, Cheng C, Brown JB, Davis CA, Hillier L, Sisu C, Li JJ, Pei B, Harmanci AO, Duff MO, Djebali S, Alexander RP, Alver BH, Auerbach R, Bell K, Bickel PJ, Boeck ME, Boley NP, Booth BW, Cherbas L, Cherbas P, Di C, Dobin A, Drenkow J, Ewing B, Fang G, Fastuca M, Feingold EA, Frankish A, Gao G, Good PJ, Guigó R, Hammonds A, Harrow J, Hoskins RA, Howald C, Hu L, Huang H, Hubbard TJ, Huynh C, Jha S, Kasper D, Kato M, Kaufman TC, Kitchen RR, Ladewig E, Lagarde J, Lai E, Leng J, Lu Z, MacCoss M, May G, McWhirter R, Merrihew G, Miller DM, Mortazavi A, Murad R, Oliver B, Olson S, Park PJ, Pazin MJ, Perrimon N, Pervouchine D, Reinke V, Reymond A, Robinson G, Samsonova A, Saunders GI, Schlesinger F, Sethi A, Slack FJ, Spencer WC, Stoiber MH, Strasbourger P, Tanzer A, Thompson OA, Wan KH, Wang G, Wang H, Watkins KL, Wen J, Wen K, Xue C, Yang L, Yip K, Zaleski C, Zhang Y, Zheng H, Brenner SE, Graveley BR, Celniker SE, Gingeras TR & Waterston R. Comparative analysis of the transcriptome across distant species. Nature 512, 7515:445-448 (2014). DOI: 10.1038/nature13424 
  2. Yan KK, Wang D, Rozowsky J, Zheng H, Cheng C & Gerstein M. OrthoClust: an orthology-based network framework for clustering data across multiple species. Genome Biol 15, 8:R100 (2014). DOI: 10.1186/gb-2014-15-8-r100 
  3. Wang D, Yan KK, Sisu C, Cheng C, Rozowsky J, Meyerson W & Gerstein MB. Loregic: a method to characterize the cooperative logic of regulatory factors. PLoS Comput Biol 11, 4:e1004132 (2015). DOI: 10.1371/journal.pcbi.1004132 
  4. Wang D et al. (In revision).
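The logic-circuit idea in point 2 can be illustrated with a toy sketch: binarised activities of two regulatory factors (RFs) and a target gene across samples are matched against candidate two-input gates, keeping the most consistent gate. This is not the Loregic implementation; the gate set and data below are invented.

```python
GATES = {
    "AND":      lambda a, b: a & b,
    "OR":       lambda a, b: a | b,
    "XOR":      lambda a, b: a ^ b,
    "RF1-only": lambda a, b: a,      # target tracks factor 1 alone
}

def best_gate(rf1, rf2, target):
    """Return the gate whose truth table matches the most samples."""
    scores = {name: sum(f(a, b) == t for a, b, t in zip(rf1, rf2, target))
              for name, f in GATES.items()}
    return max(scores, key=scores.get)

# Binarised activities across eight samples (invented).
rf1    = [0, 0, 1, 1, 0, 1, 1, 0]
rf2    = [0, 1, 0, 1, 1, 1, 0, 0]
target = [0, 0, 0, 1, 0, 1, 0, 0]   # active only when both factors are active
gate = best_gate(rf1, rf2, target)
# The target is best explained by an AND gate over the two factors.
```

Scaling this matching over all regulator-regulator-target triplets, and over all 16 two-input gates, yields a genome-wide map of cooperative regulatory logic.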
15:05-15:30 Rule-based modelling of clathrin polymerisation
Anatoly Sorokin1 , Katharina Heil2, J Douglas Armstrong2 & Oksana Sorokina2
1Institute of Cell Biophysics
2University of Edinburgh
Clathrin is the major coat component in clathrin-mediated endocytosis (CME). Owing to its particular shape and its capacity for (auto-)polymerisation, activated clathrin forces the cell membrane to adopt a vesicular shape. The process is believed to be initiated by a range of different triggers, which recruit further molecules to assemble on the extra- and intracellular sides of the membrane, such that ~30 proteins directly participate in the regulation of the different steps of endocytosis1. Within the cell, clathrin molecules tend to form trimers (triskelia) consisting of three clathrin molecules; one clathrin molecule is often referred to as one leg of the triskelion. Due to this trimeric structure, every clathrin molecule in a triskelion can bind a clathrin molecule belonging to another triskelion; hence, every triskelion can interact with up to three further triskelia. This leads to the formation of dimers and trimers, up to large polymers. In a biological context, however, loops of hexagonal and pentagonal shape are the most commonly observed, and certain combinations of these loops induce the formation of closed cage structures. Three closed structures are well known: the mini-coat, the hexagonal barrel and the soccer ball. Understanding CME is not possible without a proper understanding of its key process, cage formation. A number of models have been developed to describe the formation of clathrin cages2 or of pits and vesicles3. Most theoretical work studies the thermodynamic equilibrium of the cage, or the process of cage formation, with clathrin taken in isolation. It was recently shown that cage formation may start from the build-up of a flat raft, which is later transformed into a curved structure4.
Creation of the curved surface requires rearrangement of the internal structure of the already formed pit and is likely controlled by additional molecules. We have created a rule-based model describing the polymerisation of clathrin and various scenarios of cage formation, which is able to reproduce budding of the cage from the flat raft. The key value of our model is that it can serve as an assembly point for a large, biologically meaningful mechanistic model.

References

  1. McMahon HT & Boucrot E. Molecular mechanism and physiological functions of clathrin-mediated endocytosis. Nat Rev Mol Cell Biol 12, 8:517-533 (2011). DOI: 10.1038/nrm3151 
  2. den Otter WK, Renes MR & Briels WJ. Asymmetry as the key to clathrin cage assembly. Biophys J 99, 4:1231-1238 (2010). DOI: 10.1016/j.bpj.2010.06.011 
  3. Banerjee A, Berezhkovskii A & Nossal R. Stochastic model of clathrin-coated pit assembly. Biophys J 102, 12:2725-2730 (2012). DOI: 10.1016/j.bpj.2012.05.010 
  4. Avinoam O, Schorb M, Beese CJ, Briggs JA & Kaksonen M. Endocytic sites mature by continuous bending and remodeling of the clathrin coat. Science 348, 6241:1369-1372 (2015). DOI: 10.1126/science.aaa9555 
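The core polymerisation rule can be sketched in a few lines of Python: every triskelion has three legs, and a single binding rule joins a free leg of one triskelion to a free leg of another. This is a bare-bones illustration of the rule-based idea, not the authors' model; the counts and random seed are arbitrary.

```python
import random

random.seed(4)

def polymerise(n_triskelia=60, n_events=80):
    free = {i: 3 for i in range(n_triskelia)}   # free legs per triskelion
    parent = list(range(n_triskelia))           # union-find over complexes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    for _ in range(n_events):
        candidates = [i for i, legs in free.items() if legs > 0]
        a, b = random.sample(candidates, 2)     # apply the binding rule once
        free[a] -= 1
        free[b] -= 1
        parent[find(a)] = find(b)               # same complex -> closes a loop

    # Complex sizes, largest first.
    sizes = {}
    for i in range(n_triskelia):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

sizes = polymerise()
# Large polymers coexist with small oligomers and monomers; a full model
# would add geometry to distinguish flat rafts from closed cages.
```

A single rule thus generates the whole spectrum of assemblies, which is what makes the rule-based formalism a practical assembly point for larger mechanistic models.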
15:30-16:00 Coffee break
16:00-16:50 From biocuration to model predictions and back
Ioannis Xenarios
Swiss Institute of Bioinformatics, Switzerland
16:50-17:15 Discussion and Q&A “Bioinformatics and modeling, bridging the divide”
17:15-17:30 Poster prize: Benjamin Gyori, Harvard Medical School

Key dates

March 28, 2016: ISMB registration opens
April 2, 2016: SysMod abstract submission deadline
April 17, 2016: Extended SysMod abstract submission deadline
April 29, 2016: SysMod oral and poster acceptance notification
May 10, 2016: SysMod program available online
June 2, 2016: ISMB early registration deadline
June 24, 2016: ISMB online registration deadline
July 7, 2016: ISMB on site registration opens
July 9, 2016: SysMod meeting
July 10-12, 2016: ISMB conference
July 10, 2016: Joint NetBio/SysMod COSI session

Registration

Registration is available through the ISMB conference.

Accommodations

Accommodations are available at several hotels through ISMB.

More information

For more information, please contact the SysMod coordinators.

Sponsors

BioPathways ISCB COSI
Nature Partner Journals Systems Biology and Applications
Merrimack Pharmaceuticals