- Published: Monday, 02 July 2018 14:24
She Zhang to give Special Session talk at ISMB 2018
She Zhang, a student in Dr. Ivet Bahar's lab, will be presenting a Special Session talk at this year's ISMB conference on Saturday, July 7 at 5:45 pm.
Chromosomal dynamics predicted by an elastic network model explains genome-wide accessibility and long-range couplings
Abstract:
Understanding the three-dimensional (3D) architecture of chromatin and its relation to gene expression and regulation is fundamental to understanding how the genome functions. Advances in Hi-C technology now permit us to study 3D genome organization, but we still lack an understanding of the structural dynamics of chromosomes. The dynamic couplings between regions separated by large genomic distances (>50 Mb) have yet to be characterized. We adapted a well-established protein-modeling framework, the Gaussian Network Model (GNM), to model chromatin dynamics using Hi-C data. We show that the GNM can identify spatial couplings at multiple scales: it can quantify the correlated fluctuations in the positions of gene loci, find large genomic compartments and smaller topologically-associating domains (TADs) that undergo en bloc movements, and identify dynamically coupled distal regions along the chromosomes. We show that the predictions of the GNM correlate well with genome-wide experimental measurements. We use the GNM to identify novel cross-correlated distal domains (CCDDs) representing pairs of regions distinguished by their long-range dynamic coupling and show that CCDDs are associated with increased gene co-expression. Together, these results show that GNM provides a mathematically well-founded unified framework for modeling chromatin dynamics and assessing the structural basis of genome-wide observations.
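The core computation described in the abstract can be sketched in a few lines: in a GNM, the Hi-C contact matrix plays the role of the Kirchhoff (connectivity) matrix, and correlated fluctuations between loci are read off its pseudoinverse. The snippet below is a minimal illustration of that idea, not the authors' actual pipeline; the function name and the toy 4-locus contact map are invented for the example.

```python
import numpy as np

def gnm_cross_correlations(contacts):
    """Normalized locus-locus cross-correlations from a GNM.

    contacts: symmetric (n_loci x n_loci) contact matrix, e.g. from Hi-C.
    (Illustrative sketch only, not the published MMBioS implementation.)
    """
    # Kirchhoff (graph Laplacian) matrix: off-diagonals are -contacts,
    # diagonal makes each row sum to zero.
    kirchhoff = -np.asarray(contacts, dtype=float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))

    # Covariance of locus fluctuations is proportional to the pseudoinverse.
    cov = np.linalg.pinv(kirchhoff)

    # Normalize to cross-correlations in [-1, 1].
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# Toy 4-locus contact map (hypothetical counts).
contacts = np.array([[0, 5, 1, 0],
                     [5, 0, 4, 1],
                     [1, 4, 0, 6],
                     [0, 1, 6, 0]], dtype=float)
corr = gnm_cross_correlations(contacts)
```

Strongly positive off-diagonal entries of `corr` between distant loci are, in the paper's terms, candidates for dynamically coupled distal regions.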
- Published: Wednesday, 31 January 2018 15:04
Behind the Boom in Machine Learning
Terry Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies, president of the Neural Information Processing Systems (NIPS) Foundation, and project co-leader for the MMBioS TR&D2 project, was interviewed at the NIPS Conference in December about the growth of machine learning.
Machine learning is a core technology driving advances in artificial intelligence. This week, some of its earliest practitioners and many of the world's top AI researchers are in Long Beach, CA, for the field's big annual gathering, the Neural Information Processing Systems (NIPS) conference. In all, some 7,700 people are expected to attend what has become AI's version of high tech's glitzy South by Southwest conference and the electronic device industry's even bigger annual CES conference.
This is the 31st year of NIPS, a conference that originally drew just a few hundred participants: computer scientists, physicists, mathematicians and neuroscientists, all interested in AI. Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies and president of the NIPS Foundation, spoke with Axios about growth in the field and what's next.
How machine learning has grown since NIPS' start in the '80s: "Over that period what happened was a convergence of a number of different factors, one of them being the fact that computers got a million times faster. Back then we could only study little toy networks with a few hundred units. But now we can study networks with millions of units. The other thing was the training sets — you need to have examples of what it is you're trying to learn. The internet made it possible for us to get millions of training examples relatively easily, because there's so many images, abundant speech examples, and so forth, that you can download from the internet. Finally, there were breakthroughs along the way in the algorithms that we used to make them more efficient. We understood them a lot better in terms of something called regularization, which is how to keep the network from memorizing — you want it to generalize."
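The regularization Sejnowski mentions, keeping a network from memorizing so that it generalizes, can be illustrated with a deliberately overparameterized toy problem. The sketch below (my own example, not from the interview; the data and the `fit` helper are invented) adds an L2 penalty to plain gradient descent, which shrinks the weights instead of letting them fit every training point exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny linear regression with far more parameters (50) than examples (10):
# an unregularized fit can memorize the training set.
X = rng.normal(size=(10, 50))
y = rng.normal(size=10)

def fit(lam, steps=2000, lr=0.01):
    """Gradient descent on squared error plus an L2 (weight-decay) penalty."""
    w = np.zeros(50)
    for _ in range(steps):
        # lam * w is the regularization term: it pulls weights toward zero.
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)  # unregularized: free to memorize the 10 points
w_reg = fit(lam=1.0)    # regularized: smaller weights, smoother solution
```

The regularized solution has a much smaller weight norm; it trades a little training accuracy for a model that is less tied to the particular examples it saw.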