Computations in Science Seminars

Previous Talks: 2003


December 10, 2003
Steve Kron, University of Chicago

Cellular computations and chemical modifications of proteins
We are all aware of the potential for biological complexity implicit in the independent control of individual genes. Independently modulating the cellular concentration of each protein offers a nearly infinite number of steady states. In turn, a change in the DNA sequence of even a single gene will affect the behavior of a cell over its remaining lifetime and will be transmitted to its progeny. However, much like people, cells are defined by their environment and history as much as by their chromosomes. Cells are complex, unstable systems that continuously respond to their surroundings. Importantly, the time scales of many cell responses are far faster (seconds) and/or slower (years) than the characteristic times of gene-expression changes or mutations (minutes to days). The resolution of this paradox is that many transient and persistent cell responses are mediated by covalent chemical modifications of proteins already present in the cell. Most changes in gene expression are simply downstream effects.

Using a few examples under study in the Kron lab, we will examine protein modifications that underlie the "cognitive", "emotional" and "memory" states of individual cells. Even transient deregulation of protein modifications can lead to cell confusion and human disease. Not surprisingly, the mutations that underlie the malignancy of cancer cells often affect the proteins that modify other proteins. We will touch on recent developments in targeting such chemical modifications as treatments for cancer and other diseases.



December 3, 2003
Grigory Barenblatt, University of California, Berkeley
Host: Peter Constantin (and Leo Kadanoff)
Turbulence at very high Reynolds numbers: hypotheses and facts
Turbulence is the state of vortex fluid motion in which the properties of the flow field (velocity, pressure, etc.) vary randomly in time and space. First recognized and even baptized by Leonardo da Vinci, turbulence has been studied for more than a century by scientists and engineers, including such giants as Kolmogorov, Heisenberg, Taylor, Prandtl and von Kármán.

Turbulence at very high Reynolds numbers (often called developed turbulence) was long considered a happy province of the turbulence realm: two of its basic results were widely thought to be well established and expected to enter, basically untouched, into a future pure self-contained theory of turbulence. These results are the von Kármán--Prandtl universal logarithmic law for wall-bounded turbulent shear flows, and the Kolmogorov--Obukhov scaling laws for the local structure of developed turbulent flows.

In this lecture I will present and discuss mainly the results obtained by A. J. Chorin, V. M. Prostokishin and myself during the decade 1991--2002, concerning steady wall-bounded turbulent shear flows in which the average velocity varies only in the direction perpendicular to the wall. These flows are of fundamental and practical importance: flow in pipes is a common, familiar and useful example.
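For reference, the two classical results at issue, together with the Reynolds-number-dependent scaling law advanced by Barenblatt, Chorin and Prostokishin as an alternative to the universal log law, can be written in their standard forms (constants such as κ, B and C are themselves part of the debate):

```latex
\[
  \frac{u}{u_*}=\frac{1}{\kappa}\ln\frac{y\,u_*}{\nu}+B
  \qquad \text{(von K\'arm\'an--Prandtl universal logarithmic law)},
\]
\[
  E(k)=C\,\varepsilon^{2/3}k^{-5/3}
  \qquad \text{(Kolmogorov--Obukhov inertial-range spectrum)},
\]
\[
  \frac{u}{u_*}=C(\mathrm{Re})\left(\frac{y\,u_*}{\nu}\right)^{\alpha},
  \qquad \alpha=\frac{3}{2\ln\mathrm{Re}}
  \qquad \text{(Reynolds-number-dependent power law)}.
\]
```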



Tuesday, November 18, 2003, same time, KPTC 206
Alfonso Ganan Calvo, Escuela Superior de Ingenieros, Universidad de Sevilla
Close to the limits of liquid atomisation: combining capillary flow focusing and electrospray.
To finely disperse a liquid into a gas (i.e. to atomise it) is one of the most ubiquitous needs in human activities involving chemical/biochemical processes and energy conversion: ground transportation alone requires the atomisation of an estimated global flow rate of about 100 to 300 m3/s. In liquid atomisation, a continuous supply of energy is "directed" to disrupt the bulk liquid and create small droplets of controllable size.

I will highlight two current techniques for ultra-fine liquid atomisation, electrospraying and flow focusing, and briefly discuss how close these methods can take us to the physical "limits" of liquid atomisation at the micro- and nanoscale. More importantly, I will show that when properly combined, the two techniques can impart a larger momentum to the liquid, resulting in smaller jet and droplet diameters. In addition, the gas stream of flow focusing exerts an important stabilizing effect on the cone-like meniscus of electrospraying and "flushes" the spray away from the liquid cone. This allows an enormous increase over the maximum liquid flow rate possible for a stable cone-jet with electrospray alone. Finally, the combination can be made so simple that it can be scaled up for real-world applications. Such a combined atomisation device is not only possible but also brings along additional, unexpected, and rather extraordinary gifts. I will present some of those we have discovered so far, but my belief is that there is an open, rich and deep new valley* for exploration.

*for parametric optimisation.


November 12, 2003
Marc Feldman, Stanford University
Models for the evolution of interactions between genes
Various lines of evolutionary genetic theory suggest that the action of genes should evolve to become modular. In classical terms, this would amount to a tendency for gene action to become more additive across genes. The talk will suggest a mathematical framework for posing the question of whether genes evolve to act more independently or whether tighter interactions should form. The analysis will bear similarities to earlier general theorems on the evolution of modifiers of gene action.
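One minimal way to make "additive across genes" concrete (an illustrative textbook-style parameterization, not necessarily the talk's framework) is to write the joint contribution of two loci with allelic states x_1 and x_2 to a phenotype or fitness as

```latex
\[
  w(x_1,x_2)\;=\;\mu + a_1 x_1 + a_2 x_2 + \epsilon\,x_1 x_2 ,
\]
```

where ε quantifies the interaction (epistasis). Purely additive gene action corresponds to ε = 0, and the evolutionary question is whether selection on modifier loci drives ε toward zero (modularity) or away from it.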


November 5, 2003
Nigel Goldenfeld, University of Illinois at Urbana-Champaign
Biocomplexity in Action: Pattern Formation and Microbial Ecology at Yellowstone's Hot Springs
Biocomplexity is the term coming into use to describe efforts to understand strongly interacting dynamical systems with a biological, ecological or even social component. I provide a brief overview of why this field is not only interesting for physicists, but can benefit substantially from their participation. As a case study, I present my own work on geobiological pattern formation.

There is increasing evidence that geological features can arise as bacteria interact with purely physical and chemical processes. I describe our on-going attempts to determine the origin of apparently scale-invariant terrace patterns that generically accompany travertine formation at carbonate hot springs throughout the world. Do these striking patterns arise because of the activity of the microbe population that is present in the spring water? The ability to distinguish both ancient and modern geological features that are biologically influenced from those that are purely abiotic in origin can potentially advance our understanding of the timing and pattern of evolution, and may even provide a tool with which to identify evidence for life on other planets.

Work performed in collaboration with: G. Bonheyo, J. Frias-Lopez, H. Garcia Martin, J. Veysey, B. Fouke. Work supported by the US National Science Foundation.


October 27-29, 2003
NOTICE: Due to this special series of talks, there will be no seminar in KPTC 213 on October 29.
Leigh Tesfatsion, Dept. of Economics, Iowa State University

Monday, Oct. 27, University of Chicago, Ryerson 251, Reception 4:10-4:30, Talk 4:30-5:30.
Agent-Based Computational Economics: A Constructive Approach to Economic Theory
Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents with learning capabilities. This presentation will discuss the complexity of decentralized market economies, and the potential usefulness of ACE for the constructive study of decentralized market processes. As an illustrative application, attention will be focused on labor institutions in relation to market performance: specifically, on unemployment benefit programs. An ACE labor market will be presented, consisting of strategically interacting workers and employers who evolve their work-site behaviors over time. Experimental findings will be given regarding market performance in response to successive increases in the level of unemployment benefits. These findings will be compared with findings from a parallel labor market experiment conducted with human subjects. Extensive ACE research and teaching resources can be accessed on-line at http://www.econ.iastate.edu/tesfatsi/ace.htm
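To make the flavor of ACE concrete, here is a minimal agent-based sketch (a toy illustration of the model class, not Tesfatsion's actual labor-market model; payoffs, parameters and the Roth-Erev-style learning rule are all illustrative): worker and employer agents are randomly matched each period, play a work-site game, and adapt their action propensities by reinforcement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Work-site game payoffs: (worker payoff, employer payoff).
PAYOFF = {("work", "pay"): (2, 2), ("work", "stiff"): (-1, 3),
          ("shirk", "pay"): (3, -1), ("shirk", "stiff"): (0, 0)}

class Agent:
    def __init__(self, actions):
        self.actions = actions
        self.q = np.ones(len(actions))            # action propensities
    def choose(self):
        p = self.q / self.q.sum()                 # choice probabilities
        self.last = rng.choice(len(self.actions), p=p)
        return self.actions[self.last]
    def learn(self, reward, recency=0.05):
        self.q *= (1 - recency)                   # slowly forget the past
        self.q[self.last] += max(reward, 0.0)     # reinforce what paid off

workers   = [Agent(("work", "shirk")) for _ in range(50)]
employers = [Agent(("pay", "stiff")) for _ in range(50)]

for t in range(2000):
    rng.shuffle(employers)                        # random matching each period
    for w, e in zip(workers, employers):
        aw, ae = w.choose(), e.choose()
        rw, re = PAYOFF[(aw, ae)]
        w.learn(rw); e.learn(re)

work_rate = np.mean([w.q[0] / w.q.sum() for w in workers])
print(f"average propensity to work after learning: {work_rate:.2f}")
```

Varying the payoff table here plays the role that varying unemployment-benefit levels plays in the talk: one reruns the simulation and observes how evolved behaviors, and hence market performance, respond.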

Tuesday, Oct. 28, University of Chicago, Ryerson 251, Reception 4:10-4:30, Talk 4:30-5:30.
Agent-Based Computational Economics: Virtual Economic Reality
Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents with learning capabilities. This presentation will focus on the development and use of computational laboratories for ACE research. The Trade Network Game (TNG) Lab will be used for concrete illustration. The TNG Lab is designed for the study of trade network formation among buyers, sellers, and dealers who repeatedly engage in risky trades and who evolve their trading strategies over time. The TNG Lab provides run-time visualization of network formation as well as run-time displays of profit outcomes for individual traders. Research papers, manuals, C++ source code, and an automatic installation program for the TNG Lab can be accessed on-line at http://www.econ.iastate.edu/tnghome.htm

Wednesday, Oct. 29, Argonne National Laboratory, Building 900 Room J01, Talk 1:30-2:30.
Electricity Market Design: An Agent-Based Computational Approach
Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents with learning capabilities. This presentation will discuss the potential usefulness of ACE for electricity market design. Two applications will be discussed. The first application focuses on a short-run wholesale electricity market modeled as a double auction. The key issue addressed is the sensitivity of market performance to changes in market structure when wholesale traders evolve their bid/ask pricing strategies over time. The second application (in progress) focuses on the Wholesale Power Market Platform proposed by the Federal Energy Regulatory Commission in April 2003 for common adoption by U.S. wholesale electricity markets. The key issue addressed is the ability of this market design to sustain fair, efficient, and orderly market outcomes when profit-seeking market participants are free to evolve their pricing strategies over time. Resources related to ACE electricity research (readings, software, and pointers to individuals, groups, and websites) can be accessed on-line at http://www.econ.iastate.edu/tesfatsi/aelect.htm


October 22, 2003
Adrienne Fairhall, Princeton University
Neural computation, adaptation and information processing.
The fact that almost all neurons adapt implies that adaptation must be useful to the system in some way. Since the first observations of spiking neurons in the 1920s, physiologists have speculated about the role of adaptation in neural information processing. Recent experiments formulate the issue more precisely: natural stimuli are drawn from a distribution that defines their context. Can we see evidence of adaptation to the stimulus context? In the fly visual system, we show that the motion-sensitive neuron H1 uses an adaptive code that allows it to optimize its responses for maximal information transmission under conditions where the context of the stimulus changes constantly. The downside of such an adaptive code is the problem of ambiguity: in order to interpret the output appropriately, the system must also have information about the context. We show how this problem is resolved for H1 via a novel decoding strategy. Typical natural stimuli are characterized by long-tailed spatial and temporal distributions. We discuss potential mechanisms which may underlie adaptation on many timescales.
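A toy illustration of contextual adaptation (a sketch of the general idea, not the H1 analysis; the filter time constant and nonlinearity are invented): a model neuron divides its input by a running estimate of the stimulus scale, so after adaptation its response distribution is nearly independent of the context.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(sigma, tau=200.0, T=20000):
    """Toy adaptive neuron: the gain tracks the recent stimulus scale."""
    s = rng.normal(0.0, sigma, T)
    est = 1.0                                  # running estimate of stimulus scale
    r = np.empty(T)
    for t in range(T):
        est += (abs(s[t]) - est) / tau         # exponentially filtered scale
        r[t] = np.tanh(s[t] / (2.0 * est))     # saturating response, rescaled input
    return r[5000:]                            # discard the adaptation transient

# After adaptation, the response statistics barely depend on the context
# sigma: the input-output relation has been rescaled to the stimulus ensemble.
for sigma in (0.5, 2.0, 8.0):
    print(f"sigma = {sigma:4.1f}   response std = {run(sigma).std():.3f}")
```

The ambiguity problem described above is visible here too: because the responses are rescaled, the context sigma must be recovered from some other statistic of the spike train.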


October 15, 2003
Bud Homsy, University of California, Santa Barbara

Novel Marangoni Flows
In this talk I will describe three recent studies of novel Marangoni flows, i.e. flows driven by tangential stresses produced by temperature, compositional, or electrical fields. The first two are flows driven or modified by the non-uniform in-situ production of surfactants by chemical reactions. Such surfactant gradients give rise to surface tension gradients which drive bulk flows. We study experimentally the effect of such reactions on viscous fingering in the tip-splitting regime, finding that Marangoni stresses result in wider fingers and a suppression of the tip-splitting instability. We then describe an amazing phenomenon of spontaneous, self-sustained, chemically driven oscillations at the tip of a drop suspended from a needle, and connect this phenomenon to the well-known tip-streaming in extensional flow near drops. Finally, we describe theory and experiment on the manipulation of tangential electrical stresses to drive chaotic advection in translating drops of dielectric liquids.


October 8, 2003
Igor Mezic, University of California, Santa Barbara

Control of mixing and application in microfluidic devices
The theory of mixing is based on concepts from dynamical systems theory, as established in the 1980s. In this talk I will present an extension of this theory to accommodate applications in the control of mixing. First, I will present a prototypical control-of-mixing scenario in which two maps on a torus are applied in periodic protocols; the problem is to determine the protocol with maximal Kolmogorov-Sinai entropy. Then I will discuss a micromixing set-up, the Shear Superposition Micromixer, designed on the basis of the above theory of shear superposition for mixing, and describe the additional theory that needs to be developed to meet its mixing requirements. One step towards such a theory of control of mixing is a "cost function" that allows one to compare different mixed states of concentrations evolving under nonlinear dynamics. We developed such a cost function, called the mix-norm. This norm is based on weak convergence and has nice properties with respect to the classical notion of mixing in ergodic theory. I will conclude with an application of the mix-norm to optimization of mixing in a micromixing device.
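A sketch of the mix-norm idea, assuming the negative-Sobolev (H^{-1/2}) form introduced by Mathew, Mezic and Petzold (the grid, fields and parameters below are purely illustrative): low wavenumbers are weighted more heavily than high ones, so the norm decays as a scalar is stirred into finer filaments, even though its L2 norm is unchanged.

```python
import numpy as np

def mix_norm(c, L=2*np.pi):
    """H^{-1/2} Sobolev norm of a mean-zero scalar on a periodic square box."""
    c = c - c.mean()
    n = c.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)          # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    chat = np.fft.fft2(c) / c.size                # normalized Fourier amplitudes
    return np.sqrt(np.sum(np.abs(chat)**2 / np.sqrt(1.0 + k2)))

# Example: the same amount of scalar, poorly vs. finely mixed.
n = 128
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
coarse = np.sin(X)          # one large-scale blob
fine   = np.sin(16*X)       # thin filaments, same L2 norm
print("coarse field mix-norm:", mix_norm(coarse))   # larger -> less mixed
print("fine field mix-norm:  ", mix_norm(fine))     # smaller -> better mixed
```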


October 1, 2003
Alan Calder, University of Chicago

Validating an astrophysical simulation code
Verification and validation (V & V) tests of numerical methods and models are essential ingredients for establishing credibility in any numerical modeling effort. The strong connection between the ASCI/Alliances Flash Center and the DOE Laboratories enables close collaboration between theorists and experimentalists probing the basic physics of astrophysical events, providing a unique opportunity for validation. The Flash Center has established an ongoing, formal V & V effort for FLASH, a parallel, adaptive-mesh simulation code for the compressible, reactive flows found in many astrophysical settings. In this talk, I will present results of V & V tests of FLASH. The verification tests are designed to test and quantify the accuracy of the code. The two validation tests are meant to ensure that the simulations meaningfully describe the real world by carefully comparing the results of simulations and astrophysically relevant laboratory experiments. The first experiment consists of a laser-driven shock propagating through a multi-layer target, a configuration similar to the shock propagating outward through a massive star in a core collapse supernova. The second experiment is a "classic" Rayleigh-Taylor fluid instability, where a heavy fluid is accelerated by a light fluid. Our simulations of the multi-layer targets showed good agreement with the experimental results, but our simulations of the Rayleigh-Taylor instability did not. I will discuss our findings and possible explanations for the disagreement.


September 24, 2003
Jane Wang, Cornell University
Falling Paper, Flapping Flight, and Making a Virtual Insect
A piece of paper or a leaf flutters and tumbles down in a seemingly unpredictable manner. A casual observer might notice that while falling downward on average, A piece of paper or a leaf can rise momentarily as if picked up by a wind. To investigate how it elevates, we quantify the fluid force by solving the Navier-Stokes equations governing the flow around a falling rigid plate. By comparing the computed forces and torque against the predition of classical theory, we identify a lift mechanism for the center of mass elevation. The comparison further suggests an ODE model of a falling plate, which is somewhat different from those used in the literature. To check our numerical results, we compare them with experiments of falling aluminum strips in water with matching parameters. If time permits, I will discuss some of our recent progress in designing a three dimensional flexible wing driven by muscles on computer.
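A toy quasi-steady sketch of the kind of ODE model mentioned above (not the speaker's actual model: the lift, drag and torque closures and all coefficients below are invented for illustration): angle-of-attack-dependent aerodynamic forces are fed into Newton's equations for a thin plate.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, m = 9.81, 1.0                              # gravity, plate mass (arbitrary units)
CL0, CD0, CD1, CR = 1.2, 0.1, 1.4, 0.5        # illustrative aerodynamic coefficients

def rhs(t, y):
    x, z, th, vx, vz, om = y                  # position, angle, velocities, spin
    v = np.hypot(vx, vz) + 1e-12
    alpha = th - np.arctan2(vz, vx)           # angle of attack
    CL = CL0*np.sin(2*alpha) + CR*om/v        # translational + rotational lift
    CD = CD0 + CD1*np.sin(alpha)**2
    drag = -CD * v * np.array([vx, vz])       # opposes the velocity
    lift =  CL * v * np.array([-vz, vx])      # perpendicular to the velocity
    ax, az = (drag + lift)/m + np.array([0.0, -g])
    dom = -0.8*om + 0.3*np.sin(2*alpha)*v**2  # toy torque: damping + forcing
    return [vx, vz, om, ax, az, dom]

sol = solve_ivp(rhs, (0, 10), [0, 0, 0.3, 0.1, 0, 0], max_step=0.01)
print("net horizontal drift:", sol.y[0][-1], "  fall depth:", sol.y[1][-1])
# Depending on the coefficients, such models flutter (side-to-side) or
# tumble (end-over-end with steady drift), as real falling paper does.
```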


September 17, 2003
NOTICE: Seminar given in Room RI-480, same time
Leo Kadanoff, University of Chicago

Loewner Evolution Maps and Shapes in two Dimensions
This talk describes an exciting new method for approaching two-dimensional problems with interesting geometry. The method is based upon old work by C. Loewner, who described how an ordinary differential equation can generate a continually lengthening curve in the complex plane. His evolution equation contains a real function of time, the "forcing", which determines the two-dimensional shape. Smooth forcings generate non-self-intersecting curves. Rough forcings generate shapes with singularities. If the forcing is the stochastic process of Brownian motion, a parameter κ defines the strength of the forcing, and the ensemble of generated shapes depends upon κ. For different values of κ, the resulting ensembles are believed to be identical to the ensembles of random walks, self-avoiding walks, percolation, and the various shapes of critical clusters in critical phenomena.
This talk outlines some of our knowledge in this area, and describes a few exact solutions of the Loewner differential equation.
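For reference, the chordal form of Loewner's equation, standard in the literature (not specific to the talk's examples):

```latex
\[
  \frac{\partial g_t(z)}{\partial t} \;=\; \frac{2}{g_t(z)-\xi(t)},
  \qquad g_0(z)=z,
\]
```

where ξ(t) is the real forcing. Choosing a Brownian forcing, ξ(t) = √κ B_t, gives the stochastic (Schramm-)Loewner evolution SLE_κ; for example, κ = 2 corresponds to loop-erased random walks, κ = 8/3 to self-avoiding walks, and κ = 6 to percolation interfaces.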


September 10, 2003
Amitava Bhattacharjee, University of New Hampshire

Vortex and Current Singularities: Drivers of Impulsive Reconnection
Vortex and current singularities in fluids and plasmas often grow from smooth initial conditions, and play a crucial role in dynamical processes involving vortex and magnetic reconnection. Reconnection of vorticity and magnetic field lines occurs when topological invariants are broken due to the presence of small but finite dissipation. Although vorticity lines in fluids and magnetic field lines in plasmas have very different dynamics, vortex and magnetic reconnection phenomena have similar geometrical underpinnings. The geometrical sites where vortex and current singularities tend to appear are often similar; these are the sites where fast reconnection tends to occur. Although classical analytical models have tended to focus on steady reconnection, reconnection in nature is rarely steady. It is often impulsive or bursty, characterized not only by a fast growth rate but also by a rapid change in the time-derivative of the growth rate. Recent computational developments (involving adaptive mesh refinement techniques) have enabled us to investigate vortex and current singularities and their effect on reconnection at high levels of resolution. We will report on recent analytical and computational results involving a variety of fluid and plasma configurations, and their implications for laboratory and astrophysical observations.


September 3, 2003
Misha Chertkov, Los Alamos National Laboratory

Phenomenology of Rayleigh-Taylor Turbulence
We analyze the advanced mixing regime of Rayleigh-Taylor (RT) incompressible turbulence in the small-Atwood-number Boussinesq approximation. The prime focus of our phenomenological approach is to resolve the temporal behavior and the small-scale spatial correlations of the velocity and temperature fields inside the mixing zone, which grows as t^2. We show that the 5/3-Kolmogorov scenario for the velocity and temperature spectra is realized in three spatial dimensions, with the viscous and dissipative scales decreasing in time as t^(-1/4). The Bolgiano-Obukhov scenario is shown to be valid in two dimensions, with the viscous and dissipative scales growing as t^(1/8).
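In formulas (standard phenomenology consistent with the abstract; prefactors are omitted, and the Bolgiano-Obukhov spectral exponent -11/5 is the textbook value, added here for orientation):

```latex
\[
  L(t)\;\sim\; A\,g\,t^{2}
  \qquad \text{(mixing-zone width; } A \text{ the Atwood number)},
\]
\[
  \text{3D: } E(k)\sim \varepsilon^{2/3}k^{-5/3},\;\; \eta(t)\propto t^{-1/4};
  \qquad
  \text{2D (Bolgiano--Obukhov): } E(k)\sim k^{-11/5},\;\; \eta(t)\propto t^{1/8}.
\]
```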


August 27, 2003
Ariel Fernandez

Structural signal of molecular disease
Biology's matrix is water. Water is nurturing but, being a formidable hydrogen bond maker, it is also an unforgiving solvent. For intramolecular hydrogen bonds to be the primary determinants of structure, as Pauling, Watson and Crick noticed and emphasized in their molecular constructions, they must be very well protected from water attack, and this imposes a very strong constraint on what kinds of structures are biologically relevant. DNA has a geometry that inherently leads to protection of its hydrogen-bonded base pairing. Proteins are in this sense different: their geometric constraints allow for exposed or shielded amide-carbonyl hydrogen bonds.
In view of these facts, I decided to introduce a category, the wrapping, which is built upon structure but differs from it. It is also different from packing. The wrapping assesses the extent of intramolecular desolvation of backbone hydrogen bonds in protein structure and, based on statistical regularities, identifies under-wrapped hydrogen bonds, now termed dehydrons. Dehydrons are inherently adhesive, as exogenous water removal from their surroundings is energetically and thermodynamically favored.
When the wrapping of structure is examined along folding pathways and in protein complexation, one realizes that life, examined at the nanoscale, reveals a struggle for the survival of hydrogen bonds. Inherent structural disorder in a monomeric structure indicates an inability to fulfill intramolecularly minimal wrapping constraints that have been well established.
If under-wrapped regions are adhesive, they are necessarily interactive (provided they find a geometric match), and thus relevant to biological function. A preliminary proteomic proof of their functional role is given by the fact that amino acid variability at a particular position in the sequence decreases as the level of wrapping of the hydrogen bonds engaging the residue at that position decreases.
This being said, and to deal with the molecular basis of disease, I propose to investigate the "derivative" of wrapping with respect to mutation. That is, I will try to address the question: Where is the interactivity of protein structure most affected by a point mutation in the sequence? To this end, I will introduce a sort of "wrapping susceptibility". An extreme susceptibility of wrapping to genetic accident may be a signature for cancer, as preliminary evidence suggests.
But how could cancer arise evolutionarily? We learned that the folds for protein domains are conserved across species, itself a remarkable fact. On the other hand, the number of dehydrons for a conserved fold across progressively diverging species is not conserved: it increases monotonically as new species diverge. That means that proteins become more interactive, benefiting more and more from water removal at their surface as higher organisms keep diverging in phyla. The sensitivity of wrapping to genetic accident must then also increase, since there is a higher probability that several dehydrons are wrapped by a single conserved residue (simply because there are more dehydrons!). It is possible that susceptibility hot spots are actually sites for oncogenic mutations, as my preliminary work reveals.
The emerging picture is that cancer is possibly a prize higher organisms must pay for their complexity.

August 13, 2003
Yuan-Nan Young, Stanford University

A hybrid level set method and its application to fluid mixing problems
The level set method has been widely applied to diverse fields, from image processing of medical MRI scans and morphing of consecutive images in target tracking to multi-phase flow problems in fluid dynamics. In this talk I will first give a broad introduction by presenting some of my recent research using level set methods in image processing and fluid mechanics. I will then focus on a hybrid particle level set method (Enright et al.), in which particles are added to help track the interface more accurately. I will also show how such a hybrid method can be improved. Finally, results of applying the particle level set method to mixing of two-dimensional multi-phase fluids will be presented.

July 23, 2003
Susan N. Coppersmith, University of Wisconsin

Comparing classical and quantum complex systems
It is well established that nonequilibrium classical systems with many degrees of freedom can exhibit behavior that differs qualitatively from that of equilibrium systems. It is not known whether large nonequilibrium quantum systems differ qualitatively from classical ones, though results from the field of quantum computing indicate that this might be the case. Investigating this question is hard for the same reason that qualitatively new phenomena could emerge -- specifying a system of N degrees of freedom classically can be done using a number of variables linear in N, while quantum mechanically the number of variables grows exponentially with N. We address the question of the nature of large quantum systems by examining a two-dimensional quantum spin glass in the limit of strong interactions, computing numerically exact results for system sizes enormously larger than previously accessible. The ground state of this system is a complex superposition of a substantial fraction of all the classical ground states, and yet the dynamical susceptibility exhibits sharp resonances reminiscent of the behavior of single spins. These results show that strongly interacting quantum systems can self-organize to generate coherent excitations, and they shed light on recent experiments demonstrating that coherent excitations are present in a disordered spin liquid (Ghosh et al., Science 296, 2195 (2002)).

July 16, 2003
Itai Cohen, Harvard University

The Shear Excitement of Confined Colloidal Particles
We have constructed a shear cell which can be loaded onto a confocal microscope, allowing us to observe the 3-D micro-structure of a sheared colloidal suspension. Under slow shear, Brownian diffusion plays a significant role in the relaxation processes which take place over the timescale of an oscillation. In this regime, the suspension can display liquid, crystalline, and glass-like thermodynamic phases. However, under fast shear, the system is driven out of equilibrium and forced to adopt new micro-structures. Therefore, the physics of the shearing experiment changes with shear rate. Confinement can also play a crucial role in the suspension rheology. For example, we find that under high shear, when the gap between the shearing plates is less than 10 particle diameters, the suspension forms a beautiful buckled pattern which is not observed in bulk. In this talk, we will take a little tour of the shear rate vs. confinement parameter space. I will describe the observed patterns and discuss how to use the lessons learned from such observations more generally in studying colloidal suspensions under shear.

July 9, 2003
Scott Feller, Wabash College
Molecular dynamics simulations of lipid bilayer membranes
We have carried out all-atom molecular dynamics computer simulations of lipid bilayer membranes to study the structure and dynamics of this important class of biomolecules. This talk will explore the effect of fatty acid composition on membrane properties and lipid-protein interactions. A focus will be the unique properties of highly polyunsaturated fatty acids.

July 2, 2003
Jacqueline Ashmore, Harvard University

Thin-film free-surface flows
This talk will focus on two problems which fall into the class of thin-film free-surface flows. The first problem concerns the interface shape of the liquid coating the inside of a horizontal cylinder that rotates about its axis, with a small fraction of its volume filled with viscous Newtonian liquid. By accounting for surface tension effects, which have generally been neglected in previous analytical studies, we find a new axially uniform steady solution valid at slow rotation rates. In such coating-flow problems the free-surface shape is described by a nonlinear third-order differential equation, which can be analyzed using the method of matched asymptotics (Landau and Levich, 1942; Derjaguin, 1943). Analytical predictions of the thin-film thickness based on the method used by Landau, Levich and Derjaguin are confirmed numerically.
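For orientation, the classical Landau-Levich-Derjaguin problem of a plate withdrawn from a bath exhibits the same third-order structure (standard scaled form; the cylinder-coating equation studied in the talk is a relative of it). The lubrication balance and the matched-asymptotics result for the deposited film thickness read:

```latex
\[
  \sigma\,h^{3}\,h''' \;=\; 3\mu U\,(h - h_\infty),
  \qquad
  h_\infty \;\simeq\; 0.946\,\ell_c\,\mathrm{Ca}^{2/3},
  \qquad
  \mathrm{Ca}=\frac{\mu U}{\sigma},\quad \ell_c=\sqrt{\frac{\sigma}{\rho g}}\,.
\]
```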

The second problem concerns the motion of a sphere, under the action of gravity alone, down an inclined plane which is coated with a thin film of viscous liquid. The steady translation speed and rotation rate of the sphere are determined by the force balance tangential to the plane and the torque balance on the sphere. We consider the dominant effects in the fluid flow in the meniscus and in the narrow gap between the sphere and the plane, and characterize the scaling of the forces and torques that the flow exerts on the sphere. This leads to a theoretical result for the scaling of the translational speed of the sphere, which is compared with the experimental measurements of J. Bico (MIT).

June 25, 2003
Koji Ohkitani, Research Institute for Mathematical Sciences, Kyoto University

Linear strain flows with and without boundaries -- the regularizing effect of the pressure term
Whether or not the Euler equations for incompressible flow admit solutions with finite-time singularities, it is clear that the nonlocal action of pressure (non-isotropic Hessian terms) plays a critical role. To address this question we contrast the boundary-free, linear strain flow u = -(y+z, z+x, x+y), which has nonunique solutions including some that blow up in finite time, with bounded flows having similar behavior near the origin, e.g., u = -(sin y + sin z, sin z + sin x, sin x + sin y). Using both pseudospectral methods and power series in time, we find no evidence for blowup of the bounded flows. The nonuniqueness in the boundary-free flow is interpreted as an arbitrariness of the homogeneous solution of the pressure Poisson equation; the (1-t)^-1 blowup follows from the inclusion of the particular solution only. In expanding about the origin, it is found that only the first spherical harmonic contributes to the non-isotropic Hessian. Strong growth in this mode, which is required for desingularization, is exhibited in the solution of the bounded flows. [This is joint work with the late Richard B. Pelz (Rutgers).]
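A sketch of where the (1-t)^-1 blowup comes from, assuming (as the abstract states) that only the particular, isotropic solution of the pressure Poisson equation is kept. Seek u = b(t)(I - J)x, where J is the 3x3 all-ones matrix, so b(0) = 1 reproduces u = -(y+z, z+x, x+y):

```latex
\[
  \partial_t u + (u\cdot\nabla)u = -\nabla p
  \;\Longrightarrow\;
  b'(I-J) + b^{2}(I-J)^{2} = -P,
  \qquad (I-J)^{2} = I+J,
\]
\[
  P \;=\; \tfrac{1}{3}(\operatorname{tr}P)\,I \;=\; -2b^{2}I
  \;\Longrightarrow\;
  b' = b^{2},\quad b(0)=1,\quad b(t)=\frac{1}{1-t}\,.
\]
```

Adding a harmonic (trace-free) quadratic part to the pressure changes the ODE for b, which is precisely the nonuniqueness described above.
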
May 28, 2003
Robert Calderbank, AT&T Labs Research

Computational Science at AT&T
What makes AT&T unusual is the challenge of operations at extraordinary scale. This talk will describe how access to operations is creating a new research frontier in speech, networking, information mining and software.
May 14, 2003
Michael J. North, Argonne National Laboratory

Can Complexity Be Captured with Agent-Based Modeling and Simulation?
Complex adaptive systems (CAS) are structures composed of many components that interact and reproduce while adapting to a changing environment. These systems often have numerous nested levels of interaction that span many scales of measurement. A few examples of CAS include bacterial populations with chemical, cellular, microscopic, and macroscopic ranges of interaction; economic markets with individual, local, regional, national, and global scales of transactions; and ecosystems with individual, group, and species scales of dependencies. In many cases, traditional analytical, statistical, optimization, and simulation modeling techniques may no longer be adequate to support further research into these systems. Agent-based modeling and simulation (ABMS) offers a solution. ABMS captures the behavior of CAS using sets of agents and frameworks for simulating the agents' decisions and interactions. ABMS can show how a given CAS evolves through time from a multi-level or multi-scale perspective. The foundations and future of ABMS will be discussed in relation to the question: Can complexity be captured with agent-based modeling and simulation?

Michael J. North is the Deputy Director of the Center for Complex Adaptive Systems Simulation at Argonne National Laboratory. Mr. North has over twelve years of experience developing advanced modeling and simulation applications for various branches of the federal government and several international agencies. Mr. North has authored several refereed journal articles and many published conference papers on ABMS. Mr. North is active in teaching ABMS in a variety of contexts including the Santa Fe Institute and the University of Chicago. More information on Mr. North's work can be found at http://www.cas.anl.gov/.
April 30, 2003
Eran Sharon, University of Texas

Geometrically Driven Buckling Cascade Observed in Free Sheets and Leaves
We present an experimental study of the buckling cascades that are formed along the edge of a torn plastic sheet. The edge is composed of an organized cascade with up to six generations of waves. The waves are similar in shape but differ greatly in scale, leading to the formation of a fractal edge as an equilibrium configuration. We show that the tearing process prescribes a highly curved hyperbolic metric near the edge of the sheet. This metric should be satisfied in order to reduce the stretching energy. However, we show that isometric embeddings of such metrics cannot be generated by buckling in a single wave along the edge. More waves are necessary in order to generate the prescribed geodesic curvature, which increases towards the edge. The formation of a cascade of waves is thus geometrically inevitable. However, our data show that the precise scaling of the cascades is not given by geometry alone. It depends on the sheet thickness as well, indicating the relevance of bending-stretching competition at all scales. This might be an indication of the absence of a smooth embedding of the generated metrics in Euclidean space. Similar geometrical features (similar metrics) could result from very simple growth mechanisms. We thus suggest that some of the complex shapes of leaves and flowers might result from this buckling instability. The complexity, in this case, results from elasticity and not from complex growth processes, as is commonly assumed. Finally, I will present preliminary results from experiments on plants and environmentally responsive gels.
April 16, 2003
Maximino Aldana, University of Chicago

The role of the scale-free topology in Boolean network models of genetic networks.
I implement the scale-free topology in the Boolean network model proposed by Stuart Kauffman in 1969 to describe generically the dynamics involved in the processes of gene regulation and cell differentiation. In the original Kauffman model, the network topology is homogeneously random and the parameters of the model have to be fine-tuned in order to achieve the dynamical stability required by living organisms to perform with reliability. Such fine-tuning is contrary to experimental observations. However, when the scale-free topology is implemented in the Kauffman model, stable dynamics are obtained without fine-tuning the parameters of the model. Additionally, by analyzing how perturbations propagate through the network, one can conclude that the scale-free topology provides the network with both the dynamical stability and the evolvability essential for living organisms to perform with reliability and at the same time to adapt and evolve. It seems that the scale-free topology favors the evolution and adaptation of network function.
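A minimal sketch of this kind of damage-spreading experiment (an illustration of the model class; the network size, in-degree distribution cutoff and update rule details are arbitrary): build a random Boolean network with power-law in-degrees and measure whether a one-bit perturbation dies out or spreads.

```python
import numpy as np

rng = np.random.default_rng(2)

N, gamma, kmax = 500, 2.5, 12
ks = np.arange(1, kmax + 1)
pk = ks**(-float(gamma))                      # P(k) ~ k^-gamma, truncated at kmax
pk /= pk.sum()

indeg = rng.choice(ks, size=N, p=pk)          # scale-free-like in-degrees
inputs = [rng.choice(N, size=int(k), replace=False) for k in indeg]
tables = [rng.integers(0, 2, size=2**int(k)).astype(np.uint8) for k in indeg]

def step(state):
    new = np.empty_like(state)
    for i in range(N):
        idx = 0
        for b in state[inputs[i]]:            # pack input bits into a table index
            idx = (idx << 1) | int(b)
        new[i] = tables[i][idx]
    return new

s = rng.integers(0, 2, size=N).astype(np.uint8)
t = s.copy()
t[0] ^= 1                                     # perturb a single gene

for _ in range(50):
    s, t = step(s), step(t)

print("Hamming distance after 50 steps:", int(np.sum(s != t)))
# Stable (ordered) networks keep this distance small; chaotic ones let the
# perturbation spread through a finite fraction of the genes.
```
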
April 9, 2003
Rafael Barrio, Physics Institute, UNAM.

Symmetric Pattern Formation in Finite Domains
The study of pattern formation in complex systems has proved extremely useful to deal with the problem of morphogenesis in living organisms. In this talk I shall examine a general model to describe the spatio-temporal dynamics of two morphogens. The diffusive part of the model incorporates the dynamics, growth and curvature of one and two dimensional domains.
Numerical calculations are performed using a third-order activator-inhibitor mechanism for the kinetic part in two-dimensional growing domains having different geometries. The simulations show the crucial role of both growth and curvature in pattern selection. Centrosymmetric patterns are obtained for small domains. It is shown that both effects might be biologically relevant in explaining the selection of some observed patterns.
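A runnable miniature of this class of models: the classic Gray-Scott system, whose u*v^2 term is a cubic (third-order) interaction. This is not Barrio et al.'s exact system, and the parameters are merely illustrative of the spot/stripe-forming regime.

```python
import numpy as np

n, Du, Dv, F, k, dt = 128, 0.16, 0.08, 0.060, 0.062, 1.0
u = np.ones((n, n)); v = np.zeros((n, n))
c = slice(n//2 - 5, n//2 + 5)
u[c, c], v[c, c] = 0.50, 0.25                # seed a small perturbed patch
u += 0.02*np.random.default_rng(4).standard_normal((n, n))

def lap(f):                                   # 5-point Laplacian, periodic box
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0*f)

for _ in range(10000):
    uvv = u * v * v                           # the cubic interaction term
    u += dt * (Du*lap(u) - uvv + F*(1.0 - u))
    v += dt * (Dv*lap(v) + uvv - (F + k)*v)

print("pattern amplitude (std of v):", v.std())   # nonzero -> spots/stripes
```

Growth and curvature, central to the talk, would enter such a simulation through a time-dependent domain and a curved-surface Laplacian in place of the flat periodic one used here.
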
April 3, 2003, Ryerson 255

Please note special date (Thursday) and place!

Doyne Farmer, Santa Fe Institute.

Explaining the statistical properties of markets via random low-intelligence agents
We develop a microscopic statistical model for the continuous double auction under the assumption of random order flow, and test this model on data from the London Stock Exchange. We investigate the model using methods from statistical mechanics. While the predictions of the model are not perfect, they are extremely good in many respects, e.g., they explain about 70% of the variance in the daily bid-ask spread. We show that in non-dimensional coordinates the short term price impact of trading, which is closely related to supply and demand functions, approximates a universal function. New York Stock Exchange data shows similar behavior.
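A cartoon of the zero-intelligence model class (an illustration with invented parameters, not the paper's calibrated model): random limit and market orders build an order book, and the bid-ask spread emerges as a statistical property of the order flow alone.

```python
import numpy as np

rng = np.random.default_rng(5)

bids, asks, spreads = [], [], []

for t in range(20000):
    event = rng.integers(4)                  # four event types, equally likely
    price = rng.normal(100.0, 5.0)           # random limit-order price
    if event == 0:
        bids.append(price)                   # limit buy order
    elif event == 1:
        asks.append(price)                   # limit sell order
    elif event == 2 and asks:
        asks.remove(min(asks))               # market buy lifts the best ask
    elif event == 3 and bids:
        bids.remove(max(bids))               # market sell hits the best bid
    while bids and asks and max(bids) >= min(asks):
        bids.remove(max(bids))               # crossing orders transact
        asks.remove(min(asks))
    if bids and asks:
        spreads.append(min(asks) - max(bids))

print("mean bid-ask spread:", np.mean(spreads))
```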

On a broader level, this work demonstrates that stochastic models based on zero-intelligence agents are useful for probing the effect of market institutions. Like perfect rationality, a stochastic zero-intelligence model can be used to make strong predictions based on parsimonious assumptions, even if these assumptions are highly oversimplified. The standard research program in contemporary economics is to perturb equilibria based on perfect rationality, adding imperfections such as asymmetric information or bounded rationality. We propose inverting this approach, perturbing zero-intelligence models by adding a little intelligence.
March 19, 2003
Jens Eggers, Universität GH Essen


Moving contact lines
When a tape plunges into or is pulled from a pool of liquid, the contact line of solid/liquid/gas coexistence is moving relative to the substrate. Huh and Scriven discovered 30 years ago that the fluid motion near the contact line entails a singularity of the energy dissipation. Thus ordinary hydrodynamics needs to be augmented to include some of the micro-scale corrections usually unobservable on large scales. This invasion of chemistry into the seemingly simple problem of predicting the shape of the fluid interface has initiated long-standing and heated debates. In this talk, I will give a brief overview and then focus on two subjects:
(a) What experimental observations exist that allow one to distinguish between different microscopic mechanisms near the contact line?
(b) What type of instabilities limit the speed at which the tape can be pulled?
March 12, 2003
Harold A. Scheraga, Cornell University

(Computational Institute Distinguished Lecturer)


Ab Initio Calculation of Protein Structure by Global Optimization of Potential Energy
The thermodynamic hypothesis, enunciated by C.B. Anfinsen, proposes that the amino acid sequence of a protein contains all the necessary information to determine its three-dimensional structure as the thermodynamically most stable one. We have developed empirical potential functions and global optimization algorithms to compute the native structures of polypeptides and proteins. The evolution of this methodology, leading to our current procedures to compute the three-dimensional structures of globular proteins, will be described.
March 5, 2003
Hugh Gusterson, Anthropology Department, MIT.

The Virtual Nuclear Weapons Laboratory: An Anthropologist Explores the National Ignition Facility.
If it is ever completed, the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory will be the largest and most powerful laser in the world. The facility has been beset by controversy from the beginning. Some physicists have insisted that it will not achieve its advertised goal of "ignition", and there are questions about whether the lenses can withstand the power of the beams. NIF was originally slated to open in 2003 at a cost of $1.2 billion. Laboratory managers now say it will open in 2008, and its costs are variously estimated at $3.5 to $5 billion. NIF's cost overruns have already brought down one Director of Lasers at the Livermore Laboratory as well as the Laboratory director himself, and have provoked a Congressional investigation. But what is the laser for? To different constituencies it is, variously, our best hope for a clean energy future, a useful tool for applied astrophysics, an essential technology for maintaining the nuclear stockpile, a threat to world peace, or a colossal waste of money. Hugh Gusterson is an anthropologist who has been studying the organizational culture of the Livermore Laboratory since 1987. His talk seeks to make sense of the contending views of NIF and the light it may throw on the enterprise of "Big Science" in the post-cold-war era.
February 26, 2003
Rustem F. Ismagilov, University of Chicago

Nonlinear phenomena in microfluidic channels -- experimental results looking for computation
This talk will describe experiments and simple analytical theory for the flow of immiscible fluids in microfluidic channels. We have observed an instability in such flows that leads to the formation of droplets on the pL scale at low values of the Capillary number and the Reynolds number. I will explain -- from the experimental point of view -- why this system works while many other systems don't. I will show how this instability can be used to make droplets made up of several solutions, which gives us the ability to understand mixing of these solutions. Mixing can be accomplished by recirculating flows inside flowing drops, and I will show examples where this works well and where it does not. I will discuss our simple approach to mixing inside droplets using the principle of chaotic advection, which is significantly more robust than mixing by steady recirculation. I will then present our theoretical thoughts on the scaling of mixing. I will also talk about several other issues, such as merging and splitting of droplets. In conclusion, I will show how this system can be used to measure chemical reaction kinetics better, faster and with smaller sample volumes than is currently possible. I will also raise questions that are important for the field and may be addressed computationally or theoretically. Time permitting, I will give an overview of several other projects we are pursuing where interaction between computation and experiment may provide exciting opportunities.
February 19, 2003
Evelyn Fox Keller, Program in Science, Technology and Society, MIT

The Cultural Divide between Mathematics and Biology, and What it will take to heal it.
The role of mathematics in Developmental Biology has a long and vexed history in the U.S., and it raises critical questions about differences in the meanings of 'theory' and 'explanation' assumed by workers in the Mathematical and Biological Sciences. Indeed, I argue for a difference in "epistemological culture." There is evidence, however, of a convergence now taking place between these two cultures, and I will examine some of the conditions currently forcing the changes (in both cultures) that facilitate convergence.
February 12, 2003
Robert W. Batterman, Ohio State University

Asymptotics: Explanation and Reduction
This paper discusses a type of reasoning that I call "asymptotic reasoning". Such reasoning plays an essential role in a wide range of problems and investigations in physics and applied mathematics. Philosophers of science who are interested in such methodological issues as the nature of scientific explanation and the various reductive relations between theories can learn much from the study of this type of reasoning. I examine various issues about explanation, understanding, and reduction in the context of a particular illustrative example involving the wave and ray theories of light.
February 5, 2003
Chao Tang, NEC Research Institute

Designability of protein structures
Nature uses a very small number (~1000) of folds (chain geometries) to make proteins. Is this an arbitrary outcome of evolution, or is there a selection principle behind it? Has nature exhausted all the possibilities? Can we discover protein folds not found by nature? We address these questions with approaches ranging from simple models to more complex and realistic models that require extensive computation, and on to experimentation. The talk will be at a very pedagogical level; no prior knowledge of protein structures is required.
January 29, 2003
Gregory Ryskin, Northwestern University.

The origin of the Earth's magnetic field - a new hypothesis
According to the conventional model, the geomagnetic field is generated by hydromagnetic dynamo action in the Earth's outer core, which consists mainly of liquid iron. There are, however, a number of problems with this model. For example, there is no evidence of hydrodynamic motion in the outer core independent of the belief that this motion is the raison d'etre of the geomagnetic field (and therefore must exist). Also, it is not clear what could drive the motion. Natural convection is a viable mechanism, but thermal buoyancy is insufficient, and may even have the wrong sign. Compositional buoyancy is thought to be the answer; it arises because lighter components dissolved in the liquid iron are rejected at the inner-core boundary where the liquid solidifies to form the inner core. This mechanism, however, could not operate before the inner core appeared, ~ 2 billion years ago, and reached some reasonable size, whereas the paleomagnetic evidence indicates that the field has existed, at about the same strength as today, since much earlier times. Other serious problems exist as well. In this talk, I will briefly summarize the puzzles and paradoxes of the conventional model, and propose a new hypothesis concerning the origin of the Earth's magnetic field. This will be the first public presentation of the hypothesis; I expect a lively debate.
January 22, 2003
David Nelson, Department of Physics, Harvard University.

Viruses, Vesicles and Colloidosomes: The Thomson Problem Revisited
The problem of determining the ground state of particles packed on spherical shells was first posed for physicists by J. J. Thomson in 1904 as a model for the periodic table. Icosadeltahedral packings, similar to fullerene molecules or the panels of a soccer ball, describe how proteins are arranged in the shells of spherical viruses. We argue that these regular packings must become unstable to faceting and to a proliferation of grain boundaries for sufficiently large R/a, where R is the sphere radius and a is the particle spacing. The theory is relevant to the shapes of large viruses, crystallization of lipid molecules in spherical vesicles, and "colloidosomes", where the particle packings can be imaged directly with confocal microscopy.
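A minimal numerical sketch of the Thomson problem itself (an illustration; N, the step size and the iteration count are arbitrary): minimize the Coulomb energy of N unit charges constrained to the sphere by projected gradient descent.

```python
import numpy as np

rng = np.random.default_rng(6)

N = 72
x = rng.standard_normal((N, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)     # random start on the sphere

def energy_grad(x):
    d = x[:, None, :] - x[None, :, :]             # pairwise displacement vectors
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                   # exclude self-interaction
    E = np.sum(1.0 / r) / 2.0                     # Coulomb energy, each pair once
    g = -np.sum(d / r[..., None]**3, axis=1)      # dE/dx_i
    return E, g

for it in range(4000):
    E, g = energy_grad(x)
    g -= np.sum(g * x, axis=1, keepdims=True) * x # project onto tangent plane
    x -= 0.002 * g                                # gradient-descent step
    x /= np.linalg.norm(x, axis=1, keepdims=True) # retract back to the sphere

print("final Coulomb energy:", E)
```

Counting each particle's nearest neighbors in the final configuration exposes the twelve five-fold disclinations that Euler's theorem requires on a sphere; at larger N, the theory described in the talk predicts additional grain-boundary "scars" around them.
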
January 15, 2003
Saul Teukolsky, Astronomy Department, Cornell University.

Numerical Simulations of Black Holes
Einstein's equations of general relativity are prime candidates for numerical solution on supercomputers. There is some urgency in being able to carry out such simulations: Large-scale gravitational wave detectors are now coming on line, and the most important expected signals cannot be predicted except numerically.

Problems involving black holes are perhaps the most interesting, yet also particularly challenging computationally. One difficulty is that inside a black hole there is a physical singularity that cannot be part of the computational domain. A second difficulty is the disparity in length scales between the size of the black hole and the wavelength of the gravitational radiation emitted. A third difficulty is that all existing methods of evolving black holes in three spatial dimensions are plagued by instabilities that prohibit long-term evolution.

I will describe how two ideas that have been successful in other areas of computational physics are being introduced in numerical relativity to deal with these problems. The first technique, multidomain spectral methods, can deal with the multiple length scales. The second idea is to seek new formulations of Einstein's equations that are manifestly hyperbolic to control the instabilities. And it turns out that these two techniques together can deal with the black hole singularities. Needless to say, no knowledge of general relativity will be assumed for the talk.
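To illustrate why spectral methods suit problems with widely separated length scales, here is a standard Chebyshev collocation demo (Trefethen's differentiation-matrix construction, unrelated to any specific relativity code): derivative errors fall off exponentially with the number of grid points, rather than algebraically as for finite differences.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)       # Chebyshev extreme points
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                    # rows sum to zero
    return D, x

for N in (8, 16, 32):
    D, x = cheb(N)
    f = np.exp(x) * np.sin(5 * x)
    fprime = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
    err = np.max(np.abs(D @ f - fprime))
    print(f"N = {N:3d}   max derivative error = {err:.2e}")
```

In a multidomain spectral method, several such grids are patched together, which is what lets the black-hole region and the distant wave zone be resolved simultaneously.
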
January 8, 2003
Note: Special time, 12:30 p.m.
Sidney Nagel, University of Chicago

Jamming in the Cold: How things get stuck at T=0
Jamming occurs in a wide variety of situations. Normally one thinks of traffic jams on a highway or the jamming that occurs when solid particles become impacted on leaving an orifice. I will argue that the transition from a flowing to a jammed state may be similar in many respects to other situations as well. The case I have in mind is the glass transition where a liquid becomes progressively sluggish as the temperature is lowered until it eventually becomes a glass where it stops moving entirely. In the present talk, I will emphasize the jamming transition at zero temperature near close packing. Although in many ways it resembles a critical point, this transition also has unique properties that distinguish it from ordinary critical behavior.