Computations in Science Seminars

Previous Talks: 1999

March 31, 1999
Shankar Venkataramani, University of Chicago

What can we learn from a crumpled sheet?
Everyday experience shows that a sheet of paper will crumple when it is crushed. Crumpling is a ubiquitous phenomenon that occurs in a variety of systems, ranging from the membranes of vesicles in living cells to the shells used in engineering and packaging. A basic puzzle is the following: Why does a sheet of paper crumple when it is crushed, that is, why does the geometry of the sheet become rough on the scales of the applied stress? I will talk about an analysis of this question and its generalization, wherein one considers the confinement of an m-dimensional sheet in a d-dimensional sphere, for general m and d. This leads to some interesting problems in partial differential equations and differential geometry. I will discuss some of these issues and talk about how the ideas that come out of the analysis of a crumpled sheet may have applications in a variety of interesting problems, including string theory and the development of small scales in fluid flows.
April 14, 1999
Thomas Rosenbaum, University of Chicago

Quantum Annealing
Traditional simulated annealing utilizes thermal fluctuations for convergence in optimization problems. Quantum tunneling provides a different mechanism for moving between states, with the potential for reduced time scales. We compare thermal and quantum annealing in a model Ising magnet, LiHo_xY_(1-x)F_4, where the effects of quantum mechanics can be tuned in the laboratory by varying a magnetic field applied transverse to the Ising axis. Our results indicate that quantum annealing indeed hastens convergence to the optimum state.
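A minimal sketch of the classical (thermal) side of this comparison, for orientation only: simulated annealing of a random-bond Ising chain with a slowly lowered temperature. The chain, couplings and cooling schedule are illustrative assumptions, not the LiHo_xY_(1-x)F_4 system; the quantum-annealing analogue would ramp down a transverse field Gamma(t) (e.g. via path-integral Monte Carlo) instead of the temperature.

    # Illustrative: thermal (simulated) annealing of a random-bond Ising chain.
    # Quantum annealing would instead ramp down a transverse field Gamma(t).
    import random, math

    N = 64
    J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]   # random couplings (illustrative)
    s = [random.choice([-1, 1]) for _ in range(N)]           # initial spin configuration

    def energy(spins):
        return -sum(J[i] * spins[i] * spins[i + 1] for i in range(N - 1))

    T = 3.0
    while T > 0.01:
        for _ in range(N):
            i = random.randrange(N)
            # energy change if spin i is flipped
            dE = (2 * s[i] * J[i - 1] * s[i - 1] if i > 0 else 0.0) \
               + (2 * s[i] * J[i] * s[i + 1] if i < N - 1 else 0.0)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i] = -s[i]                                 # Metropolis acceptance
        T *= 0.99                                            # slow exponential cooling
    print("final energy:", energy(s))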
April 28, 1999
John Schiffer, Argonne

Order, Temperature and Stability in a Dynamically Driven System
Experiments and simulations have been carried out for a system of particles interacting by Coulomb forces, and contained in a sinusoidally varying quadrupole field, corresponding to a radiofrequency ion trap. As in the experiment, the simulated system settles into an ordered configuration. The kinetic energy of the motion imposed by the external containing field is up to 6 orders of magnitude greater than the thermal energies relevant to order (defined as displacements that are not periodic in the sinusoidally varying field). The coupling between the imposed motion and the 'temperature' is found to be remarkably small, though increasing with temperature.
May 5, 1999
Leon Glass, McGill University

Dynamics of Cardiac Arrhythmias
Cardiac arrhythmias are disturbances of the heartbeat in which there may be abnormal initiation of the heartbeat, abnormal conduction of the heartbeat or some combination of both. This talk explores how simple conceptual, computational, and experimental models are being used to help understand cardiac arrhythmias in people. A conceptual model for rapid heartbeats often employed by cardiologists assumes excitation travelling in a one dimensional ring. This model has surprisingly rich properties with regard to: instabilities of conduction, the effects of single and multiple stimuli, and the control of instabilities. I also discuss current attempts to develop practical applications of theoretical analysis to cardiology. This talk is directed to a general scientific audience and should be intelligible to cardiologists as well as physicists, mathematicians, and computer scientists.
May 12, 1999
Wendy Zhang, Harvard University

Similarity solutions for capillary pinch-off in fluids of differing viscosity
Self-similar profiles associated with capillary instability of a fluid thread in a viscous surrounding fluid are obtained for values of the viscosity ratio (thread viscosity / surrounding fluid viscosity) from 1/16 to 16 via a simplified numerical scheme. Universal similarity scaling is preserved despite an asymptotically large velocity in the pinching neck driven by nonlocal dynamics. The numerical results agree well with experimental measurements by Cohen, Brenner, Eggers & Nagel. For all viscosity ratios, the self-similar profile is asymmetric and conical far from the minimum. The steep cone slope increases monotonically with viscosity ratio. The shallow cone slope is maximised at a viscosity ratio of about 1/4.
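For orientation, the generic form of such a similarity solution (notation assumed here, not taken from the talk) is

    h(z,t) = h_{\min}(t)\,\phi_\lambda\!\left(\frac{z - z_0}{h_{\min}(t)}\right),
    \qquad h_{\min}(t) \propto \frac{\gamma}{\eta}\,(t_0 - t),

where gamma is the surface tension, eta a viscosity scale, lambda the viscosity ratio, and the far-field slopes of phi_lambda give the two cone angles described above.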
May 26, 1999
Gustavo Martinez Mekler, Centro de Ciencias Fisicas, Mexico

Scaling Crossover in Paleolake Sediments: From turbulence to fossils
Nature is full of examples of phenomena that show extensive scale-invariant behavior as a function of time (e.g. 1/f noise) or of distance (e.g. fractal geometries). Here we address the question of the effect that external disturbances have on a system, embedded in a random environment, subject to a self-organizing dynamics which we assume generates a scale-invariant behavior. In particular, we study the stratification of sedimentary deposition of a Pleistocene paleolake in the State of Tlaxcala, Mexico, where diatom fossils alternate with material associated with volcanic activity. In this case the lake's internal (autochthonous) evolutionary processes were disturbed by the external (allochthonous) volcanic activity. Using a formalism developed for the study of intermittency in fluids, we give an interpretation for the scaling crossover observed in the power spectra of the sediment density variations. A Markov chain model is also presented for the underlying dynamics. The model provides some clues for a magnitude-frequency relation for volcanic events.
June 2, 1999
Guy Dimonte, Livermore

Nonlinear Rayleigh-Taylor and Richtmyer-Meshkov instabilities
The Rayleigh-Taylor instability (RTI) and its shock-driven analog, the Richtmyer-Meshkov instability (RMI), affect a wide variety of important phenomena from subterranean to astrophysical environments. The "fluids" are equally varied, from plasmas and magnetic fields to elastic-plastic solids. In most applications, the instabilities occur with a complex acceleration history and evolve to a highly nonlinear state, making the theoretical description formidable. We will link the fluid and plasma regimes while describing the theoretical issues and basic experiments in different venues to isolate key physics issues. RMI experiments on the Nova laser investigate the effects of compressibility with strong radiatively driven shocks (Mach > 10) in near-solid-density plasmas of sub-millimeter scale. The growth of single sinusoidal and random 3-D perturbations is measured using backlit radiography. RTI experiments with the Linear Electric Motor (LEM) are conducted with a variety of acceleration histories (< 10^4 m/s^2) and fluids of 10 cm scale. Turbulent RTI experiments with high-Reynolds-number liquids show self-similar growth which is characterized with laser-induced fluorescence. LEM experiments with an elastic-plastic material (yogurt) exhibit a critical wavelength and amplitude for instability. The experimental results will be compared with nonlinear theories and hydrodynamic simulations.
June 9, 1999
Jared Bronski, UIUC

Semiclassical limit of the focusing nonlinear Schrödinger equation: Instability, ill-posedness and symmetry breaking
We present some recent work on the semiclassical (geometric optics) limit of the focusing nonlinear Schrödinger (NLS) equation, which arises as a model of many strongly nonlinear, strongly dispersive wave phenomena, including plasmas, gravity waves and optical pulse propagation. The semiclassical limits of many nonlinear wave equations, such as the Korteweg-de Vries equation, are well understood, whereas the semiclassical limit of the focusing NLS is still poorly understood. This is because formal asymptotic calculations yield an elliptic equation, where the wave speeds are complex valued, leading to an ill-posed problem. In this talk we present some recent numerical and theoretical work, as well as some recent experiments with pulses in optical fibers which confirm the theoretical predictions.
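One standard way to see this ill-posedness (a textbook-style sketch, not specific to the speaker's results) is the Madelung reduction of the focusing NLS,

    i\varepsilon\,\psi_t + \tfrac{1}{2}\varepsilon^2\,\psi_{xx} + |\psi|^2\psi = 0,
    \qquad \psi = \sqrt{\rho}\,e^{iS/\varepsilon}, \quad u = S_x .

Dropping the O(epsilon^2) dispersive correction gives, to leading order,

    \rho_t + (\rho u)_x = 0, \qquad u_t + u\,u_x - \rho_x = 0,

whose characteristic speeds u ± i sqrt(rho) are complex, so the formal limit system is elliptic as an initial-value problem; with the defocusing sign the last term becomes +\rho_x and one obtains the hyperbolic shallow-water equations instead.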
June 16, 1999
Rich McLaughlin, University of North Carolina

Some Particle Studies of Turbulent Diffusion
Many questions regarding the mixing of a passive scalar in the presence of a complicated fluid flow can be phrased in terms of the behavior of systems of particles. In this lecture, we focus upon two cases for which such a description has been successful. Firstly, we consider the case of a single particle to study transport by a class of periodic flow fields. The large scale, long time effective transport coefficients are predicted by a homogenization theory in which the coefficients are tabulated through solutions of an auxiliary "cell" problem. We address how a drifting particle undergoing Brownian motion experiences the effective transport through careful Monte-Carlo simulation and examine how effects of time variation in the flow field may help to control the complicated Peclet scaling existing in the effective diffusion coefficients for steady flows. Secondly, we consider the case involving many interacting particles which arises naturally in models for scalar intermittency. We review the results for the rapidly fluctuating linear shear layer model of Majda, and results on the periodic shear flow models due to Bronski and McLaughlin. In joint work with Jared Bronski, we conclude with a discussion regarding the tails of the pdf in the Majda model.
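A minimal Monte Carlo sketch of the single-particle setting: Brownian tracers advected by a steady cellular flow, with an effective diffusivity read off from the growth of the mean-square displacement. The stream function and parameters below are generic illustrative choices, not the specific flow class or Peclet regime studied in the talk.

    # Illustrative: effective diffusivity of Brownian tracers in a cellular flow
    # with stream function psi = sin(x) sin(y); parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    kappa, dt, nsteps, npart = 0.05, 1e-2, 50_000, 1000   # molecular diffusivity, step, counts

    x = rng.uniform(0, 2 * np.pi, npart)
    y = rng.uniform(0, 2 * np.pi, npart)
    x0, y0 = x.copy(), y.copy()

    for _ in range(nsteps):
        u = np.sin(x) * np.cos(y)                         # u =  d(psi)/dy
        v = -np.cos(x) * np.sin(y)                        # v = -d(psi)/dx
        x += u * dt + np.sqrt(2 * kappa * dt) * rng.standard_normal(npart)
        y += v * dt + np.sqrt(2 * kappa * dt) * rng.standard_normal(npart)

    t = nsteps * dt
    msd = np.mean((x - x0) ** 2 + (y - y0) ** 2)
    print("effective diffusivity ~", msd / (4 * t))       # from <r^2> = 4 D_eff t in 2-D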
June 23, 1999
Mark Sussman, UC Davis

A three dimensional adaptive, coupled level set method for computing solutions to incompressible two-phase flow in general geometries
We present an effective numerical method for solving complex multi-phase flow problems in science and industry. Example applications include the model problem of a drop hanging from a faucet, ink jet printers, ship waves and oil spreading under ice in water. These problems are characterized by complex topological changes in the free surface (e.g. the merge and/or break-up of the interface), large density and viscosity jumps (e.g. air/water) and stiff, singular source terms due to the surface tension force. We use a coupled level set and volume of fluid approach for representing the free surface between the gas and liquid. This enables us to accurately compute surface tension driven flows and conserve mass to within a fraction of a percent. We represent the geometry surrounding (or embedded within) the fluids by way of a level set representation. Our computational results are compared to experiments for the model problem of a drop hanging from a faucet. We apply our algorithm to micro-scale jetting applications where it is important not only to model the motion of the free-surface, but also to accurately take into account the geometry of the jetting device.
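A minimal sketch of the level set idea used to represent interfaces: the free surface is the zero contour of a function phi that is simply advected with the flow. The prescribed rigid-rotation velocity and first-order upwind scheme below are illustrative simplifications, not the coupled level-set/volume-of-fluid algorithm of the talk.

    # Illustrative: a circle, represented as the zero level set of phi, advected
    # by a prescribed rigid rotation with first-order upwind differencing.
    import numpy as np

    n, L = 128, 1.0
    dx = L / n
    x = np.linspace(0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.75) ** 2) - 0.15   # signed distance to a circle
    u = -(Y - 0.5)                                           # rigid rotation about the center
    v = X - 0.5

    dt = 0.5 * dx / np.max(np.abs([u, v]))                   # CFL-limited time step
    for _ in range(200):
        dphix = np.where(u > 0, phi - np.roll(phi, 1, axis=0),
                         np.roll(phi, -1, axis=0) - phi) / dx   # upwind x-derivative
        dphiy = np.where(v > 0, phi - np.roll(phi, 1, axis=1),
                         np.roll(phi, -1, axis=1) - phi) / dx   # upwind y-derivative
        phi = phi - dt * (u * dphix + v * dphiy)

    print("enclosed area fraction:", np.mean(phi < 0))       # roughly conserved here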
July 7, 1999
Rony Granek, Weizmann Institute, Israel

Anomalous Diffusion in Polymers, Membranes, and Active Biomembranes.
Anomalous diffusion has been the subject of various theoretical models, e.g., in the context of random walks on fractals. Here I shall discuss the dynamics of membrane bilayers and semi-flexible linear polymers in solution. The effect of thermal undulations on both the transverse and longitudinal stochastic motion of a tagged "monomer" will be discussed. It will be shown that the motion is SUBDIFFUSIVE. A similar behavior is found for polymeric sol-gel clusters. I will demonstrate how the subdiffusion leads to a stretched exponential decay of the dynamic structure factor, which has been observed in experiment. Biomembranes, however, also contain, among other constituents, carrier proteins that act as active transport sites, for example the ATPase controlling the sodium-potassium pump. The action of these active ion pumps induces a force noise, in addition to the thermal noise that results from collisions of the solvent molecules with the membrane. I will show how this active noise leads to an ENHANCED diffusion of a tagged membrane "monomer". I will also briefly discuss the effect of the cytoskeleton in plasma membranes on this motion.
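For orientation, the standard exponents in this setting (quoted as background, in generic notation) are

    \langle [h(t) - h(0)]^2 \rangle \sim t^{2/3} \ \text{(membrane undulations)},
    \qquad \langle \Delta r_\perp^2(t) \rangle \sim t^{3/4} \ \text{(semiflexible chain)},

    S(q,t) \sim \exp\!\left[-(\Gamma_q t)^{\beta}\right], \qquad \beta = 2/3 \ \text{(membranes)}, \ 3/4 \ \text{(semiflexible chains)},

i.e. subdiffusive tagged-monomer motion and the stretched-exponential decay of the dynamic structure factor referred to above.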
July 14, 1999
Basile Audoly, Ecole Normale Superieure, France

The elasticity of thin elastic bodies: from plates to shells
The famous solution to the Elastica problem (the bending of a vertical bar loaded on top) goes back to Euler, and the mechanics of elastic bodies is still a vivid domain of research. Recent theoretical work has concentrated on elastic plates. Because a plate has a small thickness, it is much easier to bend than to stretch; this simple remark underlies many mechanical properties of plates, such as the ready appearance of singularities (as in a crumpled sheet of paper). In this talk, we discuss the extension of the mechanics of plates to that of shells, i.e. to curved, thin elastic bodies. In particular, we address the compression of a spherical elastic body (a ping-pong ball) by a plane. For large enough compressions, a circular ridge is formed; its scaling properties differ from those of the ridge on a plate. We also consider a mathematical problem, the existence of infinitesimal bendings of a given surface. When such deformations exist, the shell is mechanically weak. We point out "rigidifying curves", which, when included in the mean surface of the shell, make it rigid.
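The remark that a thin body is easier to bend than to stretch can be made quantitative with the standard plate energy scalings (a textbook estimate, not specific to this talk): per unit area,

    U_{\mathrm{bend}} \sim E\,h^{3}\kappa^{2}, \qquad U_{\mathrm{stretch}} \sim E\,h\,\gamma^{2},

for thickness h, Young modulus E, curvature kappa and in-plane strain gamma; the extra factor of h^2 in the bending stiffness is why a thin sheet accommodates confinement by bending wherever it can, concentrating the stretching into the ridges and point singularities mentioned above.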
July 21, 1999
Jens Eggers, University of Essen, Germany

Sand as Maxwell's demon
This talk addresses a simple experiment: A gas of small plastic particles inside a box is kept in a stationary state by shaking. A wall separates the box into two identical compartments, save for a small hole at some finite height h. As the amplitude of the shaking is reduced, a second order phase transition occurs, in which the particles preferentially occupy one side of the box. We develop a quantitative theory of this clustering phenomenon and find good agreement with numerical simulations.
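A toy version of the flux-model picture behind this clustering: each compartment loses particles at a rate F(n) that is non-monotonic in its filling fraction n, because denser compartments are colder. The specific form F(n) = A n^2 exp(-B n^2) and the parameter values are assumptions for this sketch; B grows as the shaking amplitude is reduced, and the symmetric state loses stability beyond a critical B.

    # Illustrative two-compartment flux model of the clustering transition.
    # F(n) = A * n**2 * exp(-B * n**2) is an assumed qualitative flux form;
    # larger B corresponds to weaker shaking.
    import math

    def steady_fraction(B, A=1.0, steps=200_000, dt=0.01):
        n1 = 0.51                                      # slightly asymmetric start, n1 + n2 = 1
        for _ in range(steps):
            n2 = 1.0 - n1
            F1 = A * n1 ** 2 * math.exp(-B * n1 ** 2)  # flux out of compartment 1
            F2 = A * n2 ** 2 * math.exp(-B * n2 ** 2)  # flux out of compartment 2
            n1 += dt * (F2 - F1)
        return n1

    for B in (1.0, 3.0, 5.0, 8.0):
        print(B, round(steady_fraction(B), 3))         # fraction ending up in compartment 1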
July 28, 1999
Jean Carlson, UC Santa Barbara

Highly Optimized Tolerance: Robustness and Power Laws in Complex Systems
Highly Optimized Tolerance (HOT) is a new mechanism for generating power law distributions, motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. Possible domain applications (e.g. ecosystems and the internet) will be discussed. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasional large events. We investigate the mechanism in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, optimization introduces new sensitivities not present in critical systems; compared to SOC, HOT states exist at densities higher than the critical density, and the power laws are not restricted to special values of the density.
August 4, 1999
Robin Ball, University of Warwick, UK

Optimisation under Uncertainty
In many problems one has to optimise a 'cost function' which, for each trial set of parameters, can only be estimated by statistical sampling over some distribution of external events or other unknowns. Examples include designing protein sequences for fast folding (where we don't know the detailed trajectory the molecule will follow), probabilistic versions of the travelling salesman problem (where the precise cities to be visited are not known in advance), and the exploitation of oil reservoirs (in the face of geological uncertainty). As one begins an optimisation, it is obviously not efficient to seek accurate evaluation of the cost function. I will discuss how, by analogy with simulated annealing, this approximation can be made a virtue.
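An illustrative sketch of the idea (the quadratic toy cost, noise level and schedule are all assumptions): anneal on a noisy estimate of the cost, spending only a few samples per evaluation at high temperature and buying accuracy only as the temperature falls.

    # Illustrative: simulated annealing on a cost known only through noisy samples,
    # with the sampling effort per evaluation increasing as the temperature drops.
    import random, math

    def noisy_cost(x, nsamples):
        # true cost is (x - 2)**2, observed only through noisy samples
        return sum((x - 2.0) ** 2 + random.gauss(0.0, 1.0) for _ in range(nsamples)) / nsamples

    x, T = 10.0, 5.0
    while T > 0.01:
        nsamples = max(1, int(2.0 / T))        # crude evaluations early, accurate ones late
        x_new = x + random.gauss(0.0, 0.5)
        dC = noisy_cost(x_new, nsamples) - noisy_cost(x, nsamples)
        if dC <= 0 or random.random() < math.exp(-dC / T):
            x = x_new
        T *= 0.99
    print("estimated optimum:", round(x, 2))   # true optimum is x = 2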
August 11, 1999
Detlef Lohse, University of Twente, Netherlands

Scaling in thermal convection: A unifying theory
A theory for the scaling of the Nusselt number Nu and of the Reynolds number Re in strong Rayleigh-Bénard convection is suggested and shown to be compatible with recent experiments. It assumes a coherent large scale convection roll (wind of turbulence) and is based on the dynamical equations both in the bulk and in the boundary layers. Several regimes are identified in the Rayleigh number Ra - Prandtl number Pr phase space, defined by whether the boundary layer or the bulk dominates the global kinetic and thermal dissipation, respectively, and whether the thermal or the kinetic boundary layer is thicker. The crossover between the regimes is calculated. In the regime which has most frequently been studied in experiment (Ra < 10^11) the leading terms are Nu ~ Ra^{1/4} Pr^{1/8}, Re ~ Ra^{1/2} Pr^{-3/4} for Pr < 1 and Nu ~ Ra^{1/4} Pr^{-1/12}, Re ~ Ra^{1/2} Pr^{-5/6} for Pr > 1. In most measurements these laws are modified by additive corrections from the neighboring regimes, so that the impression of a slightly larger (effective) Nu vs. Ra scaling exponent can arise.
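For context, scaling theories of this type start from the exact global balances of Boussinesq Rayleigh-Bénard convection (standard notation, quoted here as background):

    \epsilon_u = \frac{\nu^3}{L^4}\,(Nu - 1)\,Ra\,Pr^{-2},
    \qquad
    \epsilon_\theta = \kappa\,\frac{\Delta^2}{L^2}\,Nu,

with nu the kinematic viscosity, kappa the thermal diffusivity, Delta the imposed temperature difference and L the cell height; splitting each dissipation into boundary-layer and bulk contributions and estimating them with the large-scale wind yields the regimes and exponents quoted above.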
August 25, 1999
Robin Ball, University of Warwick, UK

Scaling and Crossovers in Diffusion Limited Aggregation
We discuss the scaling of characteristic lengths in diffusion limited aggregation clusters in light of recent developments using conformal maps. We are led to conjecture that the apparently anomalous scaling of these lengths is due to one slow correction to scaling. This is supported by an analytical argument for the scaling of the penetration depth of newly arrived random walkers, and by numerical evidence on the Laurent coefficients which uniquely determine each cluster. It gives a strong hint as to the correct Renormalisation Group for Diffusion Limited Aggregation.
September 1, 1999
Peter Thomas, University of Chicago

Pattern Formation in Visual Cortex
Stimulus response properties of cells in visual cortex exhibit intriguing spatial organization. Patterns of cell preferences for oriented stimuli, ocular dominance of cell responses, and the establishment of a topographic retinotopic map may be accounted for through the Turing pattern formation mechanism. Starting from a biologically realistic model system, I find reduced variables describing the pattern of synaptic weights in the visual system in which the Turing mechanism is particularly transparent. Simple Hamiltonians written in these reduced variables may readily be simulated with Monte Carlo techniques.
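A generic Metropolis sketch of the last point, simulating a simple lattice Hamiltonian of orientation preferences theta in [0, pi); the Hamiltonian, coupling and parameters are placeholders, not the speaker's reduced model.

    # Illustrative Metropolis Monte Carlo for a lattice of orientation preferences
    # with a period-pi nearest-neighbour coupling (a placeholder Hamiltonian).
    import numpy as np

    rng = np.random.default_rng(1)
    n, beta, sweeps = 32, 2.0, 100
    theta = rng.uniform(0, np.pi, (n, n))

    def local_energy(th, i, j):
        # coupling of site (i, j) to its four nearest neighbours, periodic boundaries
        return -sum(np.cos(2 * (th[i, j] - th[(i + di) % n, (j + dj) % n]))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = rng.integers(n), rng.integers(n)
            old, e_old = theta[i, j], local_energy(theta, i, j)
            theta[i, j] = (old + rng.normal(0, 0.5)) % np.pi
            dE = local_energy(theta, i, j) - e_old
            if dE > 0 and rng.random() > np.exp(-beta * dE):
                theta[i, j] = old                     # reject the move

    bond = -np.mean([local_energy(theta, i, j) for i in range(n) for j in range(n)]) / 4
    print("mean nearest-neighbour alignment:", round(float(bond), 3))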
September 8, 1999
Elaine Oran, Naval Research Laboratory

Numerical Simulations of Detonations: Fundamentals and Applications
Detonations are the fastest, most intense form of energy release in an energetic material. In the simplest theories, a propagating detonation front can be considered as a discontinuity moving through the material at a speed characteristic of the energetic and background materials. Closer examination reveals the importance of much more complex and dynamic structure. This presentation describes the methodology and applications of multidimensional, time-dependent numerical simulations of detonations. Because such simulations are extremely computer intensive, they require the highest performance computing tools available and every effort must be made to produce scalable code that runs on a variety of computers. Applications discussed include the fundamental structure of gas-phase chemical detonations, nuclear detonations in Type Ia supernovae, and design of a detonation incinerator for disposing of explosives, munitions, and chemical and biological agents.
September 15, 1999
Lance Becker and Terry Vanden Hoek, University of Chicago

When Cells Die
Saving the lives of patients may depend on preventing cell death in critical tissues like the heart and brain. Yet the specific factors responsible for cell death remain a mystery, and understanding these factors is likely to lead to major medical breakthroughs. Most cell death (following a wide range of diseases including stroke, myocardial infarction, cardiac arrest, drowning, and trauma) is primarily due to the effects of ischemia, or the lack of blood flow to the cells. During ischemia, reduced oxygen causes a fall in cellular ATP; classic physiology suggested that cells die because of this critical ATP deficit. However, a number of recent observations suggest that this simple "lack of ATP causes cell death" concept is not true. An important observation in the 1980s was that, while tissues seemed to be injured during ischemia, markers of irreversible cell death did not appear until reperfusion, with the restoration of oxygen and nutrients. This paradoxical finding, that cells deteriorate as they are being restored to normal, has been termed "reperfusion injury". Reperfusion injury would suggest that lethal events for many cells occur not during ischemia, but rather during reperfusion. Reperfusion injury remains one of the most controversial issues in medical science, with strongly divided advocates both for and against. Studies at the University of Chicago during the last five years lend strong support to the concept of reperfusion injury. Careful studies of cells during ischemia and reperfusion confirm that the majority of cell death occurs during the reperfusion phase. Further studies support an important role for free radicals during reperfusion injury. During ischemia, free radicals are produced from the mitochondria, and a large (very likely lethal) burst of free radicals can be detected within the first minutes of reperfusion following ischemia, a possible explanation for reperfusion injury. Additional mechanisms suggest ways to treat reperfusion injury. For example, adaptive responses to ischemia (termed "preconditioning") reveal a novel pathway to cellular protection. Hypothermia may be another viable treatment option. Finally, a better understanding of non-linear dynamics ("chaos") may allow for enhanced resuscitation efforts and will be discussed.
September 22, 1999
Heinrich Jaeger, University of Chicago

Vortex Flow in Mesoscopic Channels
In type-II superconductors, magnetic fields are not shielded from the interior of the material but can enter in the form of quantized flux bundles (vortices). On macroscopic scales, large ensembles of vortices respond to temperature or external forcing very much like ordinary states of matter such as liquids or solids. But how does the transition between liquid and solid take place when the flow is confined to mesoscopic channels, only a few vortices wide? What are the dynamic characteristics of such ultrathin vortex liquid layers? We have addressed these questions in experiments on model devices containing nanometer-scale, weak-pinning channels embedded in a strong-pinning host. In the solid phase, flux lines are pinned at defects along the channel walls. Caging and frustration of vortex arrangements inside the channels give rise to field-history memory. We find that the driven flux line configuration is only marginally stable, exhibiting novel commensurability effects and threshold dynamics.
September 29, 1999
Michael Brenner, MIT

What is the Diffusivity of a Sediment?
A sediment consists of a viscous fluid with a concentration of solid particles which are denser than the fluid, and therefore fall. This talk addresses the question of what the effective diffusivity of the sediment is. While all reasonable theoretical estimates have predicted that the diffusivity diverges with the system size (even in the limit of vanishing particle concentration), experiments have tended to claim that the diffusivity is independent of the system size, and determined by an as-yet-unknown dynamical process. I will describe our current research aimed at understanding this question, by developing a simple physical picture for what sets the diffusivity. Numerical simulations will be presented to illustrate the spatial and temporal evolution of the fluctuations in the sediment, and some simple scaling arguments will be given to explain them.
October 20, 1999
Julia Parrish, University of Washington

Patterns in Nature: The Epiphenomenology of Aggregation
One of the most striking patterns in biology is the formation of animal aggregations. Fish schools, insect swarms, ungulate herds, and bird flocks are all classic examples. Consisting of individual members acting selfishly, aggregations nevertheless function as an integrated system, displaying a complex set of behaviors not possible at the level of the organism. Because pattern emerges at increasingly larger scales of space and time, individual members operate without knowledge of the whole. Indeed, there are no leaders. And yet, many aggregations are architecturally arranged, with resultant properties including polarity, repeated units, uniform density, distinct edges, complex shape, emergent functions and a behavioral repertoire. Complexity theory indicates that large populations of units can self-organize into aggregations that generate pattern, store information, and engage in collective decision-making. This talk will explore the patterns displayed by animal aggregates in the context of evolutionary theory dictating selfish individuality and address the question: Is the whole more than the sum of its parts?
October 27, 1999
Brad Werner, UC San Diego

Hierarchical Modeling in Geomorphology
Efforts to model processes and features of Earth's landscape are hampered by its nonlinear, dissipative and open nature. A hierarchical modeling methodology for geomorphic systems is proposed as an alternative to the commonly applied approaches of Reductionism and Universality. Variables and processes characterizing a system are arranged temporally. Abstractions of processes at faster scales (simplifications of these processes that survive over longer time periods) determine the dynamics at any one temporal level, and processes at slower scales provide a slowly varying context. Boundaries of the system are chosen to minimize coupling to the external environment. The theoretical consistency of the hierarchy is tested by comparing the predictions of models at two different temporal scales for the same phenomenon. This methodology is illustrated with a hierarchy of models for bedforms resulting from sediment transport, such as ripples or dunes.
November 3, 1999
Art Winfree, University of Arizona, Tucson

Unsolved Problems of the Heart
The electrical behavior of heart muscle is normally periodic, but can go turbulent. This change of dynamical mode does not require a change of parameters: it is an alternative basin of attraction. The difference stems from there being two kinds of "action potential": the linearly propagating pulse of the textbooks, and alternative vortex-like solutions. The latter, called "rotors", have many curious properties, especially in 3 dimensions. They are better understood by computation than by analysis or by in vivo experiments, at the moment. But experiments do confirm their existence and crucial role in starting "sudden cardiac death."
November 10, 1999
Matthew Hastings, Princeton

Pole Dynamics in the Dielectric Breakdown Model: From KPZ to DLA
The dielectric breakdown model (DBM) is a model of Laplacian growth in two dimensions that gives rise to fractal clusters. In this model, a cluster grows in the presence of an electrostatic potential obeying Laplace's equation, with the interface growing fastest where the electric field is highest; a parameter eta is introduced so that the interface speed is proportional to the eta-th power of the normal component of the electric field. At eta=1, this model is equivalent to the diffusion-limited aggregation model (DLA). At other values of eta, the DBM can describe the patterns of breakdown in dielectrics.
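In equations (restating the model just described, in generic notation):

    \nabla^2\phi = 0 \ \text{outside the cluster}, \qquad \phi = 0 \ \text{on the cluster},
    \qquad v_n \propto \left|\partial\phi/\partial n\right|^{\eta},

where v_n is the normal growth velocity of the interface; eta = 1 recovers DLA, and eta -> 0 is the limit considered below.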
In this talk I will consider the DBM in the limit of small eta. In this limit, the fractal clusters of the DBM become compact, and we are left with a surface growth model that still includes many of the features of DLA: growth of fingers, competition between fingers, existence of a linearly stable fingering solution, and non-linear instability of that solution to an exponentially small amount of noise.
The dynamics is shown to have an interesting representation in terms of poles of an analytic function, which I use to derive a large family of solutions to the equation of motion in the absence of noise. Some results will be presented on the statistical properties of the surface in the presence of noise. This approach may be useful in understanding the DBM at finite eta in terms of perturbations to the small eta model.
November 17, 1999
Michael Stern, National Institute on Aging

The Virtues of Peace and Quiet: Noise-Exclusion and Noise-Imprinting in the Darwinian Evolution of Digital Organisms
Homeostasis, the creation of a stabilized internal milieu, is a ubiquitous phenomenon in biological evolution, despite the entropic cost of excluding noise information from a region. The advantages of stability seem self-evident, but the alternatives are not as clear. This issue was studied by means of numerical experiments on a simple evolution model: a population of boolean network organisms selected for performance of a curve-fitting task while subjected to noise inputs. During evolution, noise-sensitivity increased with fitness. Noise-exclusion evolved spontaneously, but only if the noise was sufficiently unpredictable. Noise that was limited to one or a few stereotyped patterns caused a symmetry-breaking that prevented noise-exclusion. Instead, the organisms incorporated the noise into their function, at little cost in ultimate fitness, and became totally noise-dependent. This noise-imprinting suggests caution when interpreting apparent adaptations seen in nature. If the noise was totally random from generation to generation, noise-exclusion evolved reliably and irreversibly, but if the noise was correlated over several generations, maladaptive selection of noise-dependent traits could reverse noise-exclusion, with catastrophic effect on population fitness. Noise entering the selection process rather than the organism had a different effect: adaptive evolution was totally abolished above a critical noise amplitude, in a manner resembling a thermodynamic phase transition. This effect may be explained qualitatively by a simple analytical model. Evolutionary adaptation to noise involves the creation of a sub-system screened from noise information, but increasingly vulnerable to its effects. Similar considerations may apply to information channeling in human cultural evolution.
December 8, 1999
Henrik Nordborg, University of Chicago

Commensurability Effects for Vortices in Superconductors: An Application of the Time-Dependent Ginzburg-Landau Equations
It has recently become possible to manufacture superconducting films with artificial features such as channels and periodic pinning arrays. These samples are ideal for studying commensurability effects in the vortex lattice, where the physics is determined by the competition between length scales. Theoretically, the systems can be well described by the time-dependent Ginzburg-Landau equations in two dimensions, which can be simulated efficiently on parallel computers.
The talk will give an introduction to the time-dependent Ginzburg-Landau (TDGL) equations and explain why they are useful for this particular problem. We will then give a short overview of the numerical methods involved and their implementation. In particular, we are interested in the validity of a widely used simplification, the so-called frozen-field approximation. Finally, we will present the results from simulations of a number of different problems.
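For reference, one common dimensionless form of the TDGL equations (normalizations vary between papers, so this is representative rather than the exact form used here) is

    \partial_t\psi + i\kappa\phi\,\psi = -\left(\tfrac{i}{\kappa}\nabla + \mathbf{A}\right)^{2}\psi + \psi - |\psi|^2\psi,
    \qquad
    \sigma\left(\partial_t\mathbf{A} + \nabla\phi\right) = \mathbf{J}_s - \nabla\times\nabla\times\mathbf{A},

    \mathbf{J}_s = \frac{1}{2i\kappa}\left(\psi^{*}\nabla\psi - \psi\nabla\psi^{*}\right) - |\psi|^2\mathbf{A},

with kappa the Ginzburg-Landau parameter, sigma the normal-state conductivity and phi the electric potential. In the frozen-field approximation the magnetic field (and hence the vector potential A) is held fixed at its applied value rather than being evolved self-consistently through the second equation.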
December 15, 1999
John Marko, University of Illinois at Chicago

Stretching Genes: Elasticity of DNA and Chromosomes
I will review what we know about the microscopic elastic response of double-helix DNA, much of which has been the result of rapid developments in both experiment and theory for single-molecule micromanipulation. Much of what we understand can be summed up in terms of an effecive DNA Young modulus of about 300 MPa. This number implies that DNA should undergo thermal structural fluctuations of amplitude much larger than the accuracy to which DNA structural properties are usually quoted. Finally I will discuss how what we know about DNA and protein elastic responses is helping us understand microelasticity experiments on whole chromosomes.