# Introduction to Biological Systems and Soft Condensed Matter

Lecture notes for the course given by Prof. David Andelman, Tel-Aviv University 2009

Original author: Guy Cohen, Tel-Aviv University

## Introduction and Preliminaries

We will make several assumptions throughout the course:

- The physics in question is generally classical: quantum effects are negligible ($\hbar\omega \ll k_BT$).
- Materials are "soft": quantitatively, this implies that all relevant energy scales are of the order of the thermal energy, $k_BT$.
- Condensed matter physics deals with systems composed of a huge number of particles, and statistical mechanics applies. We are always interested in a reduced description, in terms of continuum mechanics and elasticity, hydrodynamics, macroscopic electrodynamics and so on.

We begin with an example from Chaikin & Lubensky: the story of an H$_2$O molecule. This molecule is held together by chemical bonds far stronger than the thermal energy at room temperature, and is not easily broken under normal circumstances. What happens when we put a macroscopic number of water molecules in a container? First of all, with such large numbers we can safely discuss phases of matter:

- **Gas** is typical of low density, high temperature and low pressure. It changes shape and volume readily, and is homogeneous, isotropic, weakly interacting and insulating. This is the least ordered form of matter relevant to our scenario, and relatively easy to treat since order parameters are small.
- **Liquid** is typical of intermediate temperatures. It flows but is not very compressible. It is homogeneous, isotropic, dense and strongly interacting. Its response to external forces depends on the rate of its deformation. Liquids are hard to treat theoretically, as their intermediate properties make simple approximations less effective.
- **Solid** is a dense, ordered phase with low entropy and strong interactions. It is anisotropic and does not flow; it strongly resists compression, and its response to forces depends on the amount of deformation they cause (elastic response).

Transitions between these phases occur at specific values of the thermodynamic parameters (see diagram (1)). First-order transitions (where the volume/density "jumps" at the transition, with no jump in pressure/temperature) occur on the coexistence lines; at the critical liquid/gas point, the transition is second order; at the triple point, all three phases (solid/liquid/gas) coexist.

The systems we are interested in are characterized by several kinds of interactions between their constituent molecules: for example, Coulomb interactions of the form $V(r)\sim 1/r$ when charged particles are present, fixed dipole interactions falling off as $1/r^3$ when permanent dipoles exist, and almost always induced dipole/van der Waals interactions of the form $V(r)\sim -1/r^6$. At close range we also have the "hard core" or steric repulsion, sometimes modeled by a $1/r^{12}$ potential. Simulations often use the so-called Lennard-Jones potential (as pictured in (2)), which with appropriate parameters correctly describes both condensation and crystallization in some cases.

**Sidenote**

When only the repulsive potential exists (for instance, for billiard balls), crystallization still takes place but no condensation/evaporation phase transition between the liquid and gas phases exists.
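These interaction forms can be made concrete with a small numerical sketch of the Lennard-Jones potential (the $\epsilon$ and $\sigma$ values below are illustrative placeholders, not parameters from the course):

```python
def lennard_jones(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 6-12 pair potential: steep r^-12 repulsion plus r^-6 attraction."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The potential crosses zero at r = sigma and reaches its minimum
# of depth -eps at r = 2^(1/6) * sigma.
r_min = 2.0 ** (1.0 / 6.0)
print(round(lennard_jones(1.0), 12))    # 0.0
print(round(lennard_jones(r_min), 12))  # -1.0
```

The minimum at $r = 2^{1/6}\sigma$ separates the repulsive core from the attractive tail, which is what allows this single potential to capture both condensation and crystallization.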

Starting from a classical two-body Hamiltonian such as $\mathcal{H} = \sum_i \frac{p_i^2}{2m} + \sum_{i<j} U(r_{ij})$, we can predict all three phases of matter and the transitions between them. In biological systems, this simple picture does not suffice: the basic reason is that important effects occur at many different scales, from the nanometric scale through the mesoscopic and up to the macroscopic scale. Biological systems are mesoscopic in nature, and their properties cannot be described correctly when coarse-graining is performed without accurately accounting for mesoscopic properties. A few examples follow:

#### Liquid crystals

The most basic assumption we need in order to model liquid crystals is that isotropy at the molecular level is broken: molecules are represented by rods rather than spheres. Such a description was suggested by Onsager and others, and leads to three phases as shown in (3).

#### Polymers

When molecules are interconnected at mesoscopic ranges, new phases and properties are encountered.

#### Soap/beer foam

This kind of substance is mostly gas, with the remainder water and a small amount of surface-active agent - yet it behaves like a weak solid as long as its deformations are small. This is because a tight formation of ordered cells separated by thin liquid films is formed, and in order for the material to change shape the cells must be rearranged. This need for restructuring is the cause of such systems' solid-like resistance to change.

#### Structured fluids

Polymers or macromolecules in liquid state, liquid crystals, emulsions and colloidal solutions and gels display complex visco-elastic behavior as a result of mesoscopic super-structures within them.

#### Soft 2D membranes

Interfaces between fluids have interesting properties: they act as a 2D liquid within the interface, yet respond elastically to any bending of the surface. Surfactant molecules will spontaneously form membranes within the same fluid, which also have these properties at appropriate temperatures. Surfactants in solution also form lamellar structures - multilayered structures in which the basic units are the membranes rather than single molecules.

03/19/2009

## Polymers

Books: Doi, de Gennes, Rubinstein, Doi & Edwards.

### Introduction

#### Brief history

Natural polymers like rubber have been known since the dawn of history, but not understood. Artificial polymers only appeared much later. Staudinger was the first to understand that polymers are formed by molecular chains, and is considered the father of synthetic polymers. Most polymers are made by the petrochemical industry. Nylon was born in 1940. Various uses and unique properties (light, strong, thermally insulating; available in many different forms from strings and sheets to bulk; cheap, easy to process, shape and mass-produce...) have made them very attractive commercially. Later on, some leading scientists were Kuhn and Flory in chemistry (30's to 70's) and Stockmayer in physical chemistry (50's and 60's). The famous modern theory of polymers was first formulated by P.G. de Gennes and Sam Edwards.

#### What is a polymer?

A polymer is a material composed of chains having a repeating basic unit (monomer). Connections between monomers are made by chemical (covalent) bonds, and are strong at room temperature. The number of repeating units, $N$, is called the polymerization index.

**Sidenote**

More generally, this kind of structure is called a *homopolymer*. *Heteropolymers* - which have several repeating constituent units - also exist. These can have a random structure (e.g. ABBABAAB...) or a block structure (e.g. AAAA...BBBB), in which case they are called *block copolymers*. These can self-assemble into complex ordered structures and are often very useful.

**Sidenote**

For an example, look up ester monomers and polyester, or polyethylene.

Polymerization is also the name of the process by which polymers are synthesized, which involves a chain reaction where a reactive site exists at the end of the chain. Some chemical reactions increase the chain length by one unit, while simultaneously moving the reactive site to the new end:

$A_N^* + A_1 \rightarrow A_{N+1}^*.$

There also exist condensation processes, by which chains unite:

$A_N + A_M \rightarrow A_{N+M}.$

A briefer notation, dropping the name of the monomer, is $N + M \rightarrow N + M$.

Consider the example of hydrocarbon polymers, where the monomer is a $-\mathrm{CH_2}-$ unit. As a larger number $N$ of such units is joined together to become polyethylene molecules, the material composed of these molecules changes drastically in nature:

$N$ | phase | type of material |
---|---|---|
1-4 | gas | flammable gas |
5-15 | thin liquid | liquid fuel/organic solvents |
16-25 | thick liquid | motor oil |
20-50 | soft solid | wax, paraffin |
~1000 | hard solid | plastic |

#### Types of polymer structures

Polymers can exist in different topologies, which affect the macroscopic properties of the material they form (see (4)):

- Linear chains (this is the simplest case, which we will be discussing).
- Rings (chains connected at the ends).
- Stars (several chain arms connected at a central point).
- Tree (connected stars).
- Comb (one main chain with side chains branching out).
- Dendrimer (ordered branching structure).

#### Polymer phases of matter

Depending on the environment and larger-scale structure, polymers can exist in many states:

- Gas of isolated chains (not very relevant).
- In solution (water or organic solvents). In dilute solutions, polymer chains float freely like gas molecules, but their length alters their behavior.
- In a liquid state of chains (called a melt).
- In solid state (plastic) - crystals, poly-crystals, amorphous/glassy materials.
- Liquid crystals formed by polymer chains (polymeric liquid crystal, or PLC).
- Gels and rubber: networks of chains tied together.

### Ideal Polymer Chains in Solution

#### Some basic models of polymer chains

The simplest model of an ideal polymer chain is the *freely jointed chain* (FJC), where each monomer performs a completely independent random rotation. Here, at equilibrium, the end-to-end length of the chain is $R_0 = \sqrt{N}\,b = \sqrt{bL}$, where $L = Nb$ is the contour length.

A slightly more realistic model is the *freely rotating chain* (FRC), where monomers are locked at some chemically meaningful *bond angle* $\theta$ and rotate freely around it via the *torsional angle* $\varphi$. Here,

$\langle R^2 \rangle = Nb^2\,\frac{1+\cos\theta}{1-\cos\theta}.$

Note that for $\theta = 90^\circ$ we find $\cos\theta = 0$, so $\langle R^2\rangle = Nb^2$, and this is identical to the FJC. For very small $\theta$, we can expand the cosine and obtain

$\langle R^2\rangle \approx \frac{4Nb^2}{\theta^2}:$

correlations then persist over more and more monomers, and the chain approaches the rigid rod limit (to be discussed later in detail).

A second possible improvement is the *hindered rotation* (HR) model. Here the torsional angles have a minimum-energy value, and are taken from an uncorrelated Boltzmann distribution with some potential $E(\varphi)$. This gives

$\langle R^2\rangle = Nb^2\,\frac{1+\cos\theta}{1-\cos\theta}\cdot\frac{1+\langle\cos\varphi\rangle}{1-\langle\cos\varphi\rangle}.$
**Sidenote**

See Flory's book for details.

Another option is called the *rotational isomeric state* model. Here, a finite number of angles is possible for each monomer junction, and the state of the full chain is given in terms of these. Correlations are also taken into account and the solution is numerical, but aside from a complicated prefactor this is still an ideal chain, with $\langle R^2\rangle \propto N$.

#### Calculating the end-to-end radius

For the polymer chain of (5), obviously we will always have $\langle\vec R\rangle = 0$ by symmetry. The variance, however, is generally not zero: using $\vec R = \sum_{i=1}^N \vec r_i$,

$\langle R^2\rangle = \sum_{i,j}\left\langle\vec r_i\cdot\vec r_j\right\rangle.$

FJC

In the freely jointed chain (FJC) model, there are neither correlations between different sites nor restrictions on the rotational angles. We therefore have $\langle\vec r_i\cdot\vec r_j\rangle = b^2\,\delta_{ij}$, and

$\langle R^2\rangle = \sum_{i,j}\langle\vec r_i\cdot\vec r_j\rangle = Nb^2.$

**Sidenote**

The mathematics is similar to that of a random walk or diffusion process, where in 1D $\langle x^2\rangle = 2Dt$.

Therefore, $R_0 = \sqrt{\langle R^2\rangle} = \sqrt{N}\,b$.
FRC

In the freely rotating chain model, the bond angles are held constant at $\theta_i = \theta$ while the torsion angles are taken from a uniform distribution between $0$ and $2\pi$. This introduces some correlation between the bond vectors: since (for one definition of the angles) $\langle\vec r_{i+1}\cdot\vec r_i\rangle = b^2\cos\theta$, and since the torsional angles are independent and any averaging over a sine or cosine of one or more of them will result in zero, only the independent terms survive, and by recursion this correlation has the simple form

$\langle\vec r_i\cdot\vec r_j\rangle = b^2\left(\cos\theta\right)^{|i-j|}.$

The end-to-end radius is

$\langle R^2\rangle = \sum_{i,j}\langle\vec r_i\cdot\vec r_j\rangle = Nb^2 + 2b^2\sum_{i<j}(\cos\theta)^{j-i}.$

At large $N$ we can approximate the two sums by the geometric series $\sum_{k=1}^{\infty}(\cos\theta)^k = \frac{\cos\theta}{1-\cos\theta}$, giving

$\langle R^2\rangle \approx Nb^2\,\frac{1+\cos\theta}{1-\cos\theta}.$

To extract the Kuhn length from this expression, we rewrite it in the following way:

$\langle R^2\rangle = Nb\,b_K = L\,b_K,\qquad b_K = b\,\frac{1+\cos\theta}{1-\cos\theta}.$

To go back from this to the FJC limit, we would consider a chain with a random distribution of bond angles such that $\langle\cos\theta\rangle = 0$.

#### Gyration radius

Consider once again the polymer chain of (5). Define:

$\vec R_{cm} = \frac{1}{N}\sum_{i=1}^N\vec R_i,\qquad R_g^2 = \frac{1}{N}\sum_{i=1}^N\left\langle\left(\vec R_i - \vec R_{cm}\right)^2\right\rangle.$

The unprimed coordinate system is refocused on the center of mass, such that $\vec R_{cm} = 0$. Now, it is easier to work with the following expression:

$R_g^2 = \frac{1}{2N^2}\sum_{i,j}\left\langle\left(\vec R_i - \vec R_j\right)^2\right\rangle.$

We will calculate $R_g$ for a long FJC. Using $\langle(\vec R_i - \vec R_j)^2\rangle = |i-j|\,b^2$, for $N \gg 1$ we can replace the sums with integrals, obtaining

$R_g^2 = \frac{b^2}{2N^2}\int_0^N\!\mathrm{d}i\int_0^N\!\mathrm{d}j\;|i-j| = \frac{Nb^2}{6}.$

This gives the gyration radius for an FJC:

$R_g = \sqrt{\frac{N}{6}}\,b = \frac{R_0}{\sqrt 6}.$
#### Polymers and Gaussian distributions

An ideal chain is a Gaussian chain, in the sense that the end-to-end radius is taken from a Gaussian distribution. We will see two proofs of this.

Random walk proof

One way to show this (see Rubinstein, de Gennes) is to begin with a random walk. For one dimension, if we begin at $x = 0$ and at each time step move left or right with equal probability, then after $N$ moves the final displacement is $x = \sum_{i=1}^N s_i$ with $s_i = \pm b$, and

$\langle x\rangle = 0,\qquad \langle x^2\rangle = Nb^2.$

We define $W(N,x)$ as the number of configurations of $N$ steps with a final displacement of $x$; $P(N,x) \propto W(N,x)$ is the associated normalized probability.

In fact, for $N \gg 1$ the central limit theorem tells us that $x$ will have a Gaussian distribution for any distribution of the individual steps $s_i$. This can be extended to $d$ dimensions with a displacement $\vec R$:

$P(N,\vec R) = \left(\frac{d}{2\pi Nb^2}\right)^{d/2} e^{-\frac{dR^2}{2Nb^2}}.$

The normalization constant is fixed by integrating over all $d$ dimensions.

Some notes:

- An ideal chain can now be redefined as one for which $P(N,\vec R)$ is Gaussian in any dimension $d$.
- This is also true for a long chain with local interactions only, such that $\langle R^2\rangle \propto N$.
- The probability of being in a spherical shell of radius $R$ is $P(N,R)\,4\pi R^2\,\mathrm{d}R$.
- The chance of returning to the origin is $P(N,0) \propto N^{-d/2}$; this scaling is typical of an ideal chain.
- For any dimension $d$, $\langle R^2\rangle = Nb^2$.
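The central-limit behavior of the 1D walk can be checked directly; this sketch (with arbitrary step number and sample count) verifies $\langle x^2\rangle = Nb^2$ and the one-sigma mass of the limiting Gaussian:

```python
import math
import random

rng = random.Random(2)
N, samples = 400, 3000
# final displacements of 1D random walks of N steps with b = 1
finals = [sum(rng.choice((-1, 1)) for _ in range(N)) for _ in range(samples)]

# <x> = 0 and <x^2> = N b^2
mean_x2 = sum(x * x for x in finals) / samples
print(mean_x2 / N)   # ~ 1

# CLT: x / sqrt(N) is approximately standard normal, so about 68.3%
# of the walks should end within one standard deviation sqrt(N)
frac = sum(abs(x) <= math.sqrt(N) for x in finals) / samples
print(frac)   # ~ 0.68
```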

Formal proof

Another way to show this follows, which is also extensible to other distributions of the $\vec r_i$.

**Sidenote**

This proof can be found in Doi and Edwards.

In general, we can write

$P(N,\vec R) = \int\prod_{i=1}^N\mathrm{d}^3r_i\;\Psi(\{\vec r_i\})\;\delta\!\left(\vec R - \sum_{i=1}^N\vec r_i\right).$

In the absence of correlations, we can factorize $\Psi$:

$\Psi(\{\vec r_i\}) = \prod_{i=1}^N\psi(\vec r_i).$

For example, for a freely jointed chain $\psi(\vec r) = \frac{1}{4\pi b^2}\,\delta(|\vec r| - b)$. The normalization constant is found from $\int\psi(\vec r)\,\mathrm{d}^3r = 1$, giving the prefactor above. We can replace the delta function with its Fourier representation,

$\delta(\vec x) = \frac{1}{(2\pi)^3}\int\mathrm{d}^3k\;e^{i\vec k\cdot\vec x},$

leaving us with

$P(N,\vec R) = \frac{1}{(2\pi)^3}\int\mathrm{d}^3k\;e^{i\vec k\cdot\vec R}\left[\int\mathrm{d}^3r\;\psi(\vec r)\,e^{-i\vec k\cdot\vec r}\right]^N.$

In spherical coordinates,

$\int\mathrm{d}^3r\;\psi(\vec r)\,e^{-i\vec k\cdot\vec r} = \frac{\sin kb}{kb},$

which gives

$P(N,\vec R) = \frac{1}{(2\pi)^3}\int\mathrm{d}^3k\;e^{i\vec k\cdot\vec R}\left(\frac{\sin kb}{kb}\right)^N.$

We are left with the task of evaluating the integral. This can be done analytically with the Laplace method for large $N$, since the largest contribution is around $k = 0$: we can approximate $\left(\frac{\sin kb}{kb}\right)^N = e^{N\ln\frac{\sin kb}{kb}}$ by $e^{-Nk^2b^2/6}$.

The resulting Gaussian integral gives

$P(N,\vec R) = \left(\frac{3}{2\pi Nb^2}\right)^{3/2}e^{-\frac{3R^2}{2Nb^2}}.$

This is, of course, the same Gaussian form we have obtained from the random walk (we have done the special case of $d = 3$, but once again this process can be repeated for a general dimension $d$).

03/26/2009

### Rigid and Semi-Rigid Polymer Chains in Solution

#### Worm-like chain

In considering the $\theta \to 0$ limit of the freely rotating chain, we have seen that the Kuhn length diverges. This is of course unphysical, and this limit is actually important for many interesting cases of stiff chains (for instance, DNA). If we take the limit $\theta \to 0$ along with $b \to 0$ and start over, we can make the following change of variables:

$\left(\cos\theta\right)^{|i-j|} = e^{|i-j|\ln\cos\theta} \equiv e^{-|s_i - s_j|/\ell_p},$

which defines the *persistence length* $\ell_p$. For the FRC model,

$\ell_p = -\frac{b}{\ln\cos\theta}.$

This is a useful concept in general, however: it defines the typical length scale over which correlations between chain angles die out, and is therefore an expression of the chain's rigidity.

At small $\theta$ we can expand the logarithm to get

$\ell_p \approx \frac{2b}{\theta^2}.$

Taking the continuum limit carefully then requires us to consider $b \to 0$ and $N \to \infty$ such that the contour length $L = Nb$ is constant. Now, we can calculate the end-to-end length at the continuum limit using the new form for the correlations:

$\langle R^2\rangle = \int_0^L\!\!\int_0^L\left\langle\vec t(s)\cdot\vec t(s')\right\rangle\mathrm{d}s\,\mathrm{d}s' = \int_0^L\!\!\int_0^L e^{-|s-s'|/\ell_p}\,\mathrm{d}s\,\mathrm{d}s'.$

To simplify the calculation, we can define the dimensionless variables $u = s/\ell_p$ and $u' = s'/\ell_p$.

With these replacements,

the final result (known as the Kratky-Porod worm-like chain, or WLC) is

$\langle R^2\rangle = 2\ell_pL - 2\ell_p^2\left(1 - e^{-L/\ell_p}\right).$

Importantly, it does not depend on $N$ or $b$ but only on the physically transparent persistence length and contour length.

We will consider the two limits in which one parameter is much larger than the other. First, for $L \ll \ell_p$ we encounter the *rigid rod* limit: we can expand the previous expression into

$\langle R^2\rangle \approx L^2\left(1 - \frac{L}{3\ell_p} + \ldots\right).$

The fact that $R \propto L \propto N$ rather than $\sqrt N$ is a result of the long-range correlations we have introduced, and is an indication that in this regime the material is in an essentially different phase. Somewhere between the ideal chain and the rigid rod, a crossover regime must exist.

**Sidenote**

While an ideal chain has $R \sim N^{1/2}$ and a rigid rod has $R \sim N$, in general polymer chains can have a scaling law $R \sim N^{\nu}$. The power $\nu$ need not be an integer or a simple fraction.

For $L \gg \ell_p$ we can neglect the exponent, obtaining

$\langle R^2\rangle \approx 2\ell_pL.$

This therefore returns us to the ideal chain limit, with a Kuhn length $b_K = 2\ell_p$. The crossover phenomenon we discussed occurs on the chain itself here, as we observe correlations between its pieces at differing length scales: at small scales ($s \ll \ell_p$) it behaves like a rigid rod, while at long scales we have an uncorrelated random walk. An interesting example is a DNA chain, which can be described by a worm-like chain with a persistence length of about $50\,\mathrm{nm}$ and a contour length that can reach many microns; such a chain typically covers a radius $\sqrt{2\ell_pL}$ of about a micron.

### Free Energy of the Ideal Chain and Entropic Springs

We have calculated distributions of $\vec R$ for Gaussian chains with $N$ components. Let's consider the entropy of such chains:

$S(N,\vec R) = k_B\ln W(N,\vec R).$

The logarithm of $W$ is the same as that of $P$, aside from a factor which does not depend on $\vec R$. Therefore,

$S(N,\vec R) = -\frac{d\,k_BR^2}{2Nb^2} + \mathrm{const}.$

The free energy is

$F = U - TS = \frac{d\,k_BT\,R^2}{2Nb^2} + \mathrm{const},$

since $U = 0$ for an ideal chain.

What does this free energy mean? It represents the energy needed to stretch the polymer, and this energy is like that of a harmonic spring ($F = \frac{1}{2}kR^2$) with $k = \frac{d\,k_BT}{Nb^2}$. Note that the polymer becomes *less* elastic (more rigid) as the temperature increases, unlike most solids. This is a physical result and can be verified experimentally: for instance, the spring constant of rubber (which is made of networks of polymer chains) increases linearly with temperature. Consider an experiment where, instead of holding the chain at constant length, we apply a perturbatively weak force to its ends and measure its average length. We can perform a Legendre transform between distance and force: from equality of forces along the direction in which they are applied,

$f = \frac{\partial F}{\partial R} = \frac{d\,k_BT}{Nb^2}\,R\qquad\Longrightarrow\qquad\langle R\rangle = \frac{Nb^2}{d\,k_BT}\,f.$

To be in this linear response ($\langle R\rangle \propto f$) region, we must demand that $\langle R\rangle \ll L = Nb$, and to stress this we can write

$\frac{\langle R\rangle}{L} = \frac{bf}{d\,k_BT} \ll 1.$

Numerically, with a nanometric $b$ and at room temperature, the forces should be in the picoNewton range to meet this requirement. A more rigorous treatment, which works at arbitrary forces, can be carried out by considering an FJC with oppositely charged ($\pm q$) ends in an electric field $\vec E$. The chain's sites are at $\vec R_0,\ldots,\vec R_N$ with $\vec r_i = \vec R_i - \vec R_{i-1}$, $|\vec r_i| = b$.

The potential is

$U = -q\vec E\cdot\left(\vec R_N - \vec R_0\right).$

Since $\vec R_N - \vec R_0 = \sum_{i=1}^N\vec r_i$, we can write the potential as

$U = -\sum_{i=1}^N\vec f\cdot\vec r_i,$

with $\vec f = q\vec E$. The partition function is

$Z = \prod_{i=1}^N\int\mathrm{d}^3r_i\;\psi(\vec r_i)\,e^{\beta\vec f\cdot\vec r_i}.$

The function is separable into a product of $N$ identical single-bond factors, $Z = z^N$. Now, in spherical coordinates with the polar axis along $\vec f$, we can solve the integral:

$z = \frac{1}{2}\int_0^{\pi}\sin\theta\,\mathrm{d}\theta\;e^{\beta fb\cos\theta} = \frac{\sinh(\beta fb)}{\beta fb}.$

The Gibbs free energy (Gibbs because the external force is fixed) is then

$G(f) = -k_BT\ln Z = -Nk_BT\left[\ln\sinh(\beta fb) - \ln(\beta fb)\right],$

and the average extension is

$\langle R\rangle = -\frac{\partial G}{\partial f} = Nb\left[\coth(\beta fb) - \frac{1}{\beta fb}\right] = Nb\,\mathcal{L}(\beta fb).$

The *Langevin function* $\mathcal{L}(x) = \coth x - \frac{1}{x}$ is also typical of spin magnetization in external magnetic fields and of dipoles in electric fields at finite temperatures.

04/02/2009
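The Langevin function and its linear-response limit $\mathcal{L}(x) \approx x/3$ can be tabulated in a few lines (a minimal sketch; the sampled force values are arbitrary):

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x; the FJC extension is <R>/L = L(f*b/kT)."""
    if abs(x) < 1e-4:
        return x / 3.0 - x ** 3 / 45.0   # series expansion avoids the 0/0 at x = 0
    return 1.0 / math.tanh(x) - 1.0 / x

print(langevin(0.01))   # ~ 0.0033, the linear-response regime with slope 1/3
print(langevin(10.0))   # ~ 0.9, approaching full extension <R> -> Nb
```

At weak forces this reproduces the entropic-spring law $\langle R\rangle = Nb^2f/3k_BT$ derived above, while at strong forces the extension saturates at the contour length.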

### Polymers and Fractal Curves

#### Introduction to fractals

Book: B. Mandelbrot.

A fractal is an object with *fractal dimensionality*, also called the *Hausdorff dimension*. This implies a new definition of dimensionality, which we will discuss. Consider a sphere of radius $R$: it is considered three-dimensional because its mass scales as $M \propto R^3$ for $R$ much larger than the microscopic scale. A plane has $M \propto R^2$ by the same reasoning, and is therefore a 2D object. Fractals are mathematical objects such that by the same sort of calculation they have $M \propto R^D$, for a dimension $D$ which is not necessarily an integer (this definition is due to Hausdorff). One example is the Koch curve (see (7)): in each of its iterations, we decrease the length of a segment by a factor of 3 and decrease its mass by a factor of 4. We will therefore have

$D = \frac{\ln 4}{\ln 3} \approx 1.26.$

Note that a fractal's "real" length is infinite, and its approximations will depend on the resolution. The structure exhibits *self-similarity*: namely, on different length scales it looks the same. This can be seen in the Koch snowflake: at any magnification, a part of the curve looks similar to the whole curve. (There is a very nice animation of this on Wikipedia.) The total length of the curve depends on the ruler used to measure it: the actual length at iteration $n$ is $(4/3)^n$ times the length of the original segment.
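The scaling argument above can be stated in two lines of code (a trivial but useful check; the iteration depths are arbitrary):

```python
import math

# Koch curve: each iteration cuts the segment length by 3 while the number of
# segments (the "mass") grows by 4, so the Hausdorff dimension is ln 4 / ln 3.
D = math.log(4.0) / math.log(3.0)
print(D)   # ~ 1.2619, strictly between a line (1) and a plane (2)

# the total length at iteration n grows without bound as (4/3)^n
lengths = [(4.0 / 3.0) ** n for n in range(5)]
print(lengths)
```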

Another definition of the fractal dimension is via box counting: if $N(\epsilon)$ boxes of linear size $\epsilon$ are needed to cover the object, then $D = \lim_{\epsilon\to0}\frac{\ln N(\epsilon)}{\ln(1/\epsilon)}$.

#### Linking fractals to polymers

**Sidenote**

The Flory exponent $\nu$ is defined from the scaling law $R \sim N^{\nu}$, such that the fractal dimension is $D = 1/\nu$.

Consider the ideal Gaussian chain again. It has $R \sim N^{1/2}$, i.e. $N \propto R^2$. Since $N$ is proportional to the mass, we have an object with a fractal dimension of 2, no matter what the dimensionality of the actual space is. We can say that a polymer in $d$-space fills only $D$ dimensions of the space it occupies, where $D$ is 2 for an ideal Gaussian polymer and $D = 1/\nu$ in general. Flory showed that in some cases a non-ideal polymer can also have a non-trivial $D$, in particular when a self-avoiding walk (SAW) is accounted for. The SAW, as opposed to the Gaussian walk (GW), is the defining property of a physical rather than ideal polymer, and gives a fractal dimension of $D = 1/\nu \approx 5/3$ in 3D. A collapsed polymer has $D = d$ and fills space completely. Note that two polymers with fractal dimensions $D_1$ and $D_2$ do not "feel" each other statistically if $D_1 + D_2 < d$.

### Polymers, Path Integrals and Green's Functions

Books: Doi & Edwards, F. Wiegel, or Feynman & Hibbs.

#### Local Gaussian chain model and the continuum limit

This model is also known as the LGC (local Gaussian chain). We start from an FJC in 3D with $|\vec r_i| = b$ and $\psi(\vec r) = \frac{1}{4\pi b^2}\delta(|\vec r| - b)$. By the central limit theorem, $\vec R$ will always be taken from a Gaussian distribution when the number of monomers is large (whatever the form of $\psi$, as long as it is symmetric around zero such that $\langle\vec r_i\rangle = 0$):

$P(N,\vec R) = \left(\frac{3}{2\pi Nb^2}\right)^{3/2}e^{-\frac{3R^2}{2Nb^2}}.$

In the LGC approximation we exchange the rigid rods for Gaussian springs with zero natural length and $\langle r_i^2\rangle = b^2$, by setting

$\psi(\vec r_i) = \left(\frac{3}{2\pi b^2}\right)^{3/2}e^{-\frac{3r_i^2}{2b^2}}.$

We can then obtain the full probability distribution

$P(\{\vec R_i\}) = \prod_{i=1}^N\psi(\vec r_i) \propto \exp\left[-\sum_{i=1}^N\frac{3\left(\vec R_i - \vec R_{i-1}\right)^2}{2b^2}\right],$

where $\vec r_i = \vec R_i - \vec R_{i-1}$. This describes harmonic springs with spring constants $3k_BT/b^2$ connected in series.
An exact property of the Gaussian distributions we have been using is that a sub-chain of monomers (such as the sub-chain starting at index $i$ and ending at $j$) will also have a Gaussian distribution of the end-to-end length, with $|i-j|$ replacing $N$.

At the continuum limit, we will get *Wiener distributions*: the correct way to calculate the limit is to take $N \to \infty$ and $b \to 0$ with $L = Nb$ remaining constant. The position along the chain up to site $i$ is then described by $s = ib$, $0 \le s \le L$. At this limit we can also substitute derivatives for the finite differences, $\frac{\vec R_i - \vec R_{i-1}}{b} \to \frac{\partial\vec R}{\partial s}$, such that

$P[\vec R(s)] \propto \exp\left[-\frac{3}{2b}\int_0^L\left(\frac{\partial\vec R}{\partial s}\right)^2\mathrm{d}s\right].$

If we add an external spatial potential $U(\vec R)$ (which is single-body), its contribution to the free energy will amount to a factor of

$\exp\left[-\beta\int_0^L U\big(\vec R(s)\big)\,\frac{\mathrm{d}s}{b}\right]$

in the Boltzmann factor.

04/23/2009

#### Functional path integrals and the continuum distribution function

Books: F. Wiegel, Doi & Edwards.

Consider what happens when we hold the ends of a chain defined by $\{\vec R_i\}_{i=0}^N$ in place, such that $\vec R_0 = \vec R'$ and $\vec R_N = \vec R$. We can calculate the probability of this configuration from the distribution of the previous section.

At the continuum limit, the definition of the chain configuration translates into a function $\vec R(s)$, and the product of integrals over the monomer positions can be taken as a path integral according to $\prod_i\int\mathrm{d}^3R_i \to \int\mathcal{D}[\vec R(s)]$. The probability of each configuration obeying our constraint is a functional of $\vec R(s)$. The partition function is:

$Z(\vec R,\vec R';L) = \int_{\vec R(0)=\vec R'}^{\vec R(L)=\vec R}\mathcal{D}[\vec R(s)]\;\exp\left[-\frac{3}{2b}\int_0^L\left(\frac{\partial\vec R}{\partial s}\right)^2\mathrm{d}s - \beta\int_0^L U\big(\vec R(s)\big)\frac{\mathrm{d}s}{b}\right],$

and we can normalize it to obtain a probability distribution function, given in terms of this path integral.

We now introduce the Green's function $G(\vec R,\vec R';N)$, which as we will soon see describes the evolution from $\vec R'$ to $\vec R$ in $N$ steps. We define it as the constrained path integral above, normalized by the corresponding free path integral. Note that while the numerator is proportional to the probability $P(\vec R,\vec R';N)$, the denominator does *not* include the external potential.

$G$ has several important properties:

- It is equal to the exact probability for Gaussian chains in the absence of external potential.
- If we consider that the chain may be divided into one sub-chain between step $0$ and step $m$, and a second sub-chain from step $m$ to step $N$, then

$G(\vec R,\vec R';N) = \int\mathrm{d}^3R''\;G(\vec R,\vec R'';N-m)\,G(\vec R'',\vec R';m).$
We can use this property to compute expectation values of observables. If we have some function $A(\vec R_m)$ of a specific monomer $m$, for instance:

$\langle A\rangle = \frac{\int\mathrm{d}^3R''\;G(\vec R,\vec R'';N-m)\,A(\vec R'')\,G(\vec R'',\vec R';m)}{G(\vec R,\vec R';N)}.$

- The Green's function is the solution of the differential equation (see proof in Doi & Edwards and in the homework):

$\left[\frac{\partial}{\partial N} - \frac{b^2}{6}\nabla_{\vec R}^2 + \beta U(\vec R)\right]G(\vec R,\vec R';N) = \delta(\vec R - \vec R')\,\delta(N).$

- The Green's function is defined as $0$ for $N < 0$ and is equal to $\delta(\vec R - \vec R')$ when $N = 0$, in order to satisfy the boundary conditions.

#### Relationship to quantum mechanics

This equation for $G$ is very similar in form to the Schrödinger equation. To see this, we can rewrite it (for $N > 0$) as:

$\frac{\partial G}{\partial N} = \left[\frac{b^2}{6}\nabla^2 - \beta U(\vec R)\right]G.$

If we make the replacements $N \to it/\hbar$, $\frac{b^2}{6} \to \frac{\hbar^2}{2m}$ and $\beta U \to U$, this is identical to the Schrödinger equation. Like the quantum Hamiltonian, the Hermitian operator $\hat H = -\frac{b^2}{6}\nabla^2 + \beta U(\vec R)$ has eigenfunctions $\varphi_k$ such that $\hat H\varphi_k = E_k\varphi_k$, which according to Sturm-Liouville theory span the solution space and can be orthonormalized ($\int\varphi_k^*\varphi_l\,\mathrm{d}^3R = \delta_{kl}$).

The solution of the non-homogeneous problem is therefore

$G(\vec R,\vec R';N) = \sum_k\varphi_k(\vec R)\,\varphi_k^*(\vec R')\,e^{-E_kN}\qquad(N > 0),$

where the $\varphi_k$ are solutions of the homogeneous equation $\hat H\varphi_k = E_k\varphi_k$.

**Example**: A polymer chain in a box of dimensions $L_x \times L_y \times L_z$. The potential is $U = 0$ within the box and $U = \infty$ on the edges, so the boundary conditions are $G = 0$ if $\vec R$ or $\vec R'$ are on the boundary. The problem is separable in Cartesian coordinates:

$\varphi(\vec R) = \varphi_x(x)\,\varphi_y(y)\,\varphi_z(z),\qquad E = E_x + E_y + E_z.$

Let's solve for $\varphi_x$ (the other functions are similar). The one-dimensional equation

$-\frac{b^2}{6}\,\frac{\mathrm{d}^2\varphi_x}{\mathrm{d}x^2} = E_x\,\varphi_x,$

with the boundary condition $\varphi_x(0) = \varphi_x(L_x) = 0$, gives the eigenfunctions and energies

$\varphi_{x,n}(x) = \sqrt{\frac{2}{L_x}}\,\sin\!\left(\frac{n\pi x}{L_x}\right),\qquad E_{x,n} = \frac{b^2}{6}\left(\frac{n\pi}{L_x}\right)^2,\qquad n = 1,2,3,\ldots$

The Green's function can finally be written as

$G(\vec R,\vec R';N) = \prod_{\alpha=x,y,z}\;\sum_{n=1}^{\infty}\varphi_{\alpha,n}(\alpha)\,\varphi_{\alpha,n}(\alpha')\,e^{-E_{\alpha,n}N}.$

With the Cartesian symmetry of the box, the partition function is also separable, $Z = Z_xZ_yZ_z$, and using

$\int_0^{L_x}\sin\!\left(\frac{n\pi x}{L_x}\right)\mathrm{d}x = \frac{L_x}{n\pi}\left(1 - \cos n\pi\right)$

(which vanishes for even $n$), we can calculate

$Z_x = \sum_{n\;\mathrm{odd}}\frac{8L_x}{n^2\pi^2}\,e^{-E_{x,n}N}.$

We can now go on to calculate the free energy $F = -k_BT\ln Z$, and we can for instance calculate the pressure on the box edges in the $x$ direction:

$p_x = -\frac{1}{L_yL_z}\,\frac{\partial F}{\partial L_x}.$

Two limiting cases can be done analytically. First, if the box is much larger than the polymer ($L_\alpha \gg R_g$), all the exponents are close to unity, the sums can be evaluated, and

$p \approx \frac{k_BT}{L_xL_yL_z} = \frac{k_BT}{V}.$

This is equivalent to a dilute gas of polymers (done here for a single chain). At the opposite limit, $L_x \ll R_g$, the polymer is "squeezed". The Gaussian approximation will be no good if we squeeze too hard, but at least for some intermediate regime we can neglect all but the first term in the series:

$Z_x \approx \frac{8L_x}{\pi^2}\,e^{-\frac{\pi^2b^2N}{6L_x^2}},\qquad p_x \approx \frac{k_BT}{L_yL_z}\left(\frac{1}{L_x} + \frac{\pi^2b^2N}{3L_x^3}\right).$

The second term is a large extra pressure caused by the "squeezing" of the chain and the corresponding loss of its entropy.

04/30/2009

The same formalism can be used to treat polymers near a wall, or in a well near a wall, for instance (see the homework for details). In the well case, as in the analogous quantum problem, we will have bound states for $T < T_c$, where the critical temperature is defined by a critical value of the well depth relative to $k_BT$ and describes the condition for the potential well to be "deep" enough to contain a bound state.

#### Dominant ground state

Note that since

$G(\vec R,\vec R';N) = \sum_k\varphi_k(\vec R)\,\varphi_k^*(\vec R')\,e^{-E_kN},$

where $N$ is positive and the $E_k$ are real and ordered (assuming no degeneracy, $E_0 < E_1 < E_2 < \ldots$), at large $N$ we can neglect all but the leading terms (smallest energies) and

$G(\vec R,\vec R';N) \approx \varphi_0(\vec R)\,\varphi_0^*(\vec R')\,e^{-E_0N}.$

This is possible because the exponent is decaying rather than oscillating, as it is in the quantum mechanics case. Taking only the first term in this series is called the *dominant ground state approximation*.

### Polymers in Good Solutions and Self-Avoiding Walks

#### Virial expansion

So far, in treating Gaussian chains, we have neglected any long-ranged interactions. However, polymers in solution cannot self-intersect, and this introduces interactions into the picture which are local in real space, but long-ranged in terms of the contour distance - that is, they are not limited to neighboring monomers with $|i-j|\sim 1$. The importance of this effect depends on dimensionality: it is easy to imagine that intersections in 2D are more effective in restricting a polymer's shape than intersections in 3D.

The interaction potential can in general have both attractive and repulsive parts, and depends on the detailed properties of the solvent. If we consider it to be due to a long-ranged attractive van der Waals interaction and a short-ranged repulsive hard-core interaction, it might be modeled by a Lennard-Jones potential. To treat interactions perturbatively within statistical mechanics, we can use a virial expansion (a statistical-mechanical expansion in powers of the density, useful for systematic perturbative corrections to non-interacting calculations when one wants to include many-body interactions). The second virial coefficient is

$B_2(T) = -\frac{1}{2}\int\left(e^{-\beta U(\vec r)} - 1\right)\mathrm{d}^3r.$

To make the calculation easy, consider a potential even simpler than the 6-12 Lennard-Jones: a hard core of diameter $\sigma$, surrounded by an attractive square well of depth $\epsilon$ reaching out to $\lambda\sigma$. This gives

$B_2(T) = \frac{2\pi\sigma^3}{3}\left[1 - \left(\lambda^3 - 1\right)\left(e^{\beta\epsilon} - 1\right)\right].$

This can be positive (signifying net repulsion between the particles) at high temperature, or negative (signifying attraction) at low temperature. While the details of this calculation depend on our choice and parametrization of the potential, in general we will have some special temperature, known as the theta ($\Theta$) temperature (in our case $k_BT_\Theta = \epsilon/\ln\frac{\lambda^3}{\lambda^3-1}$), where $B_2(T_\Theta) = 0$.

This allows us to define a good solvent: such a solvent must have $B_2 > 0$ at our working temperature. This assures us (within the second virial approximation, at least) that the interactions are repulsive and (as can be shown separately) that the chain is *swollen*. A bad solvent, for which $B_2 < 0$, will have attractive interactions, resulting in *collapse*. A solvent for which $B_2 = 0$ is called a $\Theta$ solvent, and returns us to a Gaussian chain unless the next virial coefficient is taken into account.

#### Lattice model

A common numerical treatment for this kind of system is to draw the polymer on a grid and perform Monte Carlo runs, where the steps must be self-avoiding and their probability is taken from a thermal distribution while maintaining detailed balance. This gives $R \sim N^{\nu}$ with $\nu \approx 0.588$ in 3D, where the ideal-chain value would have been $\nu = 1/2$.
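A minimal version of such a lattice computation, by exact enumeration rather than Monte Carlo (feasible only for short chains; the maximum length here is an arbitrary choice), estimates the effective exponent on the 2D square lattice, where the exact value is $\nu = 3/4$:

```python
import math

def saw_mean_r2(n_max):
    """Exactly enumerate all self-avoiding walks on the square lattice up to
    n_max steps, returning the mean squared end-to-end distance at each length."""
    counts = [0] * (n_max + 1)
    sum_r2 = [0] * (n_max + 1)
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def dfs(x, y, visited, n):
        counts[n] += 1
        sum_r2[n] += x * x + y * y
        if n == n_max:
            return
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                dfs(x + dx, y + dy, visited, n + 1)
                visited.remove(nxt)

    dfs(0, 0, {(0, 0)}, 0)
    return [sum_r2[n] / counts[n] for n in range(1, n_max + 1)]

r2 = saw_mean_r2(10)   # r2[n-1] = <R^2> over all n-step SAWs
# effective exponent from <R^2> ~ n^(2 nu), comparing n = 5 and n = 10;
# finite-size corrections shift it somewhat from the exact 2D value 3/4
nu_eff = 0.5 * math.log(r2[9] / r2[4]) / math.log(2.0)
print(nu_eff)   # clearly swollen: well above the ideal-chain value 1/2
```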

#### Renormalization group

A connection between SAWs and critical phenomena was made by de Gennes in the 1970's. Some of the similarities are summarized in the table below. Using renormalization group methods, de Gennes showed, by analogy to a certain spin model (the $n \to 0$ limit of the $O(n)$ model), that the SAW exponent can be computed with the machinery of critical phenomena. This gives in 3D a result very close to the lattice SAW value: $\nu \approx 0.588$.

Polymers | Magnetic Systems |
---|---|
$N \to \infty$; $1/N$ is a small parameter. | $T \to T_c$ (the critical temperature); $T - T_c$ is a small parameter. |
$R \sim N^{\nu}$. | Correlation length $\xi \sim (T-T_c)^{-\nu}$ - critical exponent $\nu$. |
Gaussian chains (non-SAW). | Mean field theory. |
For $d \ge 4$, $\nu = 1/2$. | MFT is accurate for $d \ge d_c$ (Ising model: $d_c = 4$). |

#### Flory model

This is a very crude model which gives surprisingly good results. We write the free energy as a sum of entropic and interaction parts, $F = F_{\mathrm{ent}} + F_{\mathrm{int}}$. For the entropic part we take the expression for an ideal chain in 3D, $F_{\mathrm{ent}} = \frac{3k_BTR^2}{2Nb^2}$. For the interaction, we use the second virial coefficient:

$F_{\mathrm{int}} = k_BT\,B_2\int c^2(\vec r)\,\mathrm{d}^3r.$

Here $c(\vec r)$ is a local monomer density, such that its average value is $\bar c \sim N/R^3$. If we neglect local fluctuations in $c$, then

$F_{\mathrm{int}} \sim k_BT\,B_2\,\frac{N^2}{R^3}.$

The total free energy is then

$F \sim k_BT\left(\frac{3R^2}{2Nb^2} + B_2\,\frac{N^2}{R^3}\right).$

The free parameter here is $R$, but we do not know how it relates to $N$. For constant $B_2 > 0$, the minimum of $F$ is at

$\frac{\partial F}{\partial R} = 0\qquad\Longrightarrow\qquad R \sim b\left(\frac{B_2}{b^3}\right)^{1/5}N^{3/5},$

which gives the Flory exponent $\nu = 3/5$ (more generally, the same argument in $d$ dimensions gives $\nu = \frac{3}{d+2}$).

This exponent is exact for 1, 2 and 4 dimensions, and gives a very good approximation ($\nu = 0.6$, versus the best numerical value $\approx 0.588$) in 3 dimensions, but it misses completely for more than 4 dimensions. For a numerical example, consider, say, a polymer of $N = 10^4$ monomers, each about a nanometer in length. From the expressions above, the swollen radius $N^{3/5}b \approx 250\,\mathrm{nm}$ is considerably larger than the ideal radius $N^{1/2}b = 100\,\mathrm{nm}$. This difference is large enough to be experimentally detectable by the scattering techniques to be explained next.

The reason the Flory method provides such good results turns out to be a matter of lucky cancellation between two mistakes, both of which are wrong by orders of magnitude: the entropy is overestimated and the correlations are underestimated. This is discussed in detail in all the books.
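The Flory minimization itself is easy to reproduce numerically; this sketch (with $b = v = 1$ as illustrative units, and a conventional factor of $\frac{1}{2}$ on the interaction term, which does not affect the exponent) minimizes $F(R)/k_BT = \frac{3R^2}{2Nb^2} + \frac{vN^2}{2R^3}$ and recovers $\nu = 3/5$:

```python
import math

def flory_radius(N, b=1.0, v=1.0):
    """Minimize F(R)/kT = 3 R^2/(2 N b^2) + v N^2/(2 R^3) by geometric
    bisection on the sign of dF/dR (d = 3)."""
    lo, hi = 1e-6, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        dF = 3.0 * mid / (N * b * b) - 1.5 * v * N * N / mid ** 4
        if dF > 0.0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

# the minimum scales as R ~ N^(3/5); extract the exponent from two chain lengths
nu = math.log(flory_radius(10000.0) / flory_radius(100.0)) / math.log(100.0)
print(nu)   # ~ 0.6, the Flory exponent in 3D
```

Setting $\partial F/\partial R = 0$ analytically gives $R = (vb^2/2)^{1/5}N^{3/5}$, which the bisection reproduces to machine precision.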

#### Field Theory of SAW

Books: Doi & Edwards, Wiegel

The seminal article of S.F. Edwards in 1965 was the first application of field-theoretic methods to the physics of polymers. To insert interactions into the Wiener distribution, we take the sum over two-body interactions to the continuum limit, where it becomes a double integral over the chain contour.

This formalism is rather complicated, and not much can be done by hand. One possible simplification is to consider an excluded-volume (or self-exclusion) interaction of Dirac delta function form, which prevents two monomers from occupying the same point in space:

$U\big(\vec R(s) - \vec R(s')\big) = k_BT\,v\,\delta\big(\vec R(s) - \vec R(s')\big).$

The advantage of this is that a simple form is obtained in which only the second virial coefficient $v$ is taken into account. The expression for the distribution is then (in one common convention)

$P[\vec R(s)] \propto \exp\left[-\frac{3}{2b}\int_0^L\left(\frac{\partial\vec R}{\partial s}\right)^2\mathrm{d}s - \frac{v}{2b^2}\int_0^L\!\!\int_0^L\delta\big(\vec R(s) - \vec R(s')\big)\,\mathrm{d}s\,\mathrm{d}s'\right].$
With expressions of this sort, one can apply standard field-theory/many-body methods to evaluate the Green's function and calculate observables. This is more advanced and we will not be going into it.

05/07/2009

### Scattering and Polymer Solutions

#### The form factor

Materials can be probed by scattering experiments, and for dilute polymer solutions this is one way to learn about the polymers within them. Laser scattering requires relatively little equipment and can be done in any lab, while x-ray scattering (SAXS) requires a synchrotron and neutron scattering (SANS) requires a nuclear reactor. We will discuss structural properties on the scale of chains rather than individual monomers, which means relatively small wavenumbers. It will also soon be clear that small angles are of interest.

**Sidenote**

Modeling the monomers as points is reasonable when considering probing on the scale of the complete chain.

If we assume that the individual monomers act as point scatterers (see (8)) and consider a process which scatters the incoming wave from $\vec k_{in}$ to $\vec k_{out}$, we can define a scattering angle $\theta$ and a scattering wave vector $\vec q = \vec k_{out} - \vec k_{in}$ (which becomes smaller in magnitude as the angle becomes smaller). We then measure scattered waves at some outgoing angle for some incoming angle, as illustrated in (9); since in fact many chains are involved in the scattering, we should take an ensemble average over the chain configurations (which should be incoherent, since the chains are far apart compared with the typical coherence length scale). All this is discussed in more detail below.

**Sidenote**

For this kind of experiment to work with lasers or x-rays, there must be a *contrast* : the polymer and solvent must have different indices of refraction. X-Ray experiments rely on different electronic densities. In neutron scattering experiments, contrast is achieved artificially by labeling the polymers or solvent - that is, replacing hydrogen with deuterium.

Within a chain, scattering is mostly coherent, such that the scattered wavefunction is $\psi\propto\sum_{n=1}^{N}a_{n}e^{i\mathbf{q}\cdot\mathbf{R}_{n}}$. The intensity or power is proportional to $\left|\psi\right|^{2}\propto\sum_{n,m}a_{n}a_{m}e^{i\mathbf{q}\cdot\left(\mathbf{R}_{n}-\mathbf{R}_{m}\right)}$.

If we specialize to homogeneous chains, where $a_{n}=a$ for all $n$, then

$\left|\psi\right|^{2}\propto a^{2}\sum_{n,m}e^{i\mathbf{q}\cdot\left(\mathbf{R}_{n}-\mathbf{R}_{m}\right)}$

This expression is suitable for a single static chain in a specific configuration $\left\{ \mathbf{R}_{n}\right\}$. For an ensemble of chains in solution, we average over all chain configurations incoherently, defining the *structure factor*:

$S\left(\mathbf{q}\right)=\frac{1}{N}\sum_{n,m=1}^{N}\left\langle e^{i\mathbf{q}\cdot\left(\mathbf{R}_{n}-\mathbf{R}_{m}\right)}\right\rangle$

The normalization is with respect to the unscattered wave at $\mathbf{q}=0$: $S\left(q=0\right)=N$. Note that in an isotropic system like the system of chain molecules in a solvent, the structure factor must depend only on the magnitude of $\mathbf{q}$.

Inserting the expression for the distribution of $\mathbf{R}_{nm}=\mathbf{R}_{n}-\mathbf{R}_{m}$ into the above equation gives

$S\left(\mathbf{q}\right)=\frac{1}{N}\sum_{n,m}\int\mathrm{d}^{3}R_{nm}\,P\left(\mathbf{R}_{nm}\right)e^{i\mathbf{q}\cdot\mathbf{R}_{nm}}$

We now switch to spherical coordinates with the $z$ axis parallel to $\mathbf{q}$. Since in these coordinates $\mathbf{q}\cdot\mathbf{R}_{nm}=qR_{nm}\cos\theta$,

we can write

$S\left(q\right)=\frac{1}{N}\sum_{n,m}\left\langle \frac{\sin\left(qR_{nm}\right)}{qR_{nm}}\right\rangle$

#### The gyration radius and small angle scattering

For small $q$ (which at least in the elastic case implies small $\theta$), we can expand the above expression for $S\left(q\right)$ in powers of $q$ to obtain

$S\left(q\right)=\frac{1}{N}\sum_{n,m}\left\langle 1-\frac{\left(qR_{nm}\right)^{2}}{6}+\dots\right\rangle =N\left(1-\frac{q^{2}R_{g}^{2}}{3}+\dots\right)$

The last equality is due to the fact that $R_{g}^{2}=\frac{1}{2N^{2}}\sum_{n,m}\left\langle R_{nm}^{2}\right\rangle$. If the scattering is elastic, $\left|\mathbf{k}_{\mathrm{in}}\right|=\left|\mathbf{k}_{\mathrm{out}}\right|=k=2\pi/\lambda$

and

$q=2k\sin\frac{\theta}{2}=\frac{4\pi}{\lambda}\sin\frac{\theta}{2}$

With this expression for $q$ in terms of the angle $\theta$, the structure factor is then

$S\left(\theta\right)=N\left[1-\frac{16\pi^{2}}{3\lambda^{2}}R_{g}^{2}\sin^{2}\frac{\theta}{2}+\dots\right]$
From an experimental point of view, we can plot $S^{-1}$ as a function of $q^{2}$ and determine the polymer's gyration radius from the slope.

The approximation we have made is good when $qR_{g}\ll1$, and this determines the range of angles that should be taken into account: we must have $\sin\left(\theta/2\right)\ll\lambda/\left(4\pi R_{g}\right)$. For laser scattering, $\lambda$ is on the order of hundreds of nanometers (about enough to measure $R_{g}$), while for neutron scattering $\lambda$ is on the order of Angstroms (meaning we must take only very small angles into account to measure $R_{g}$, but also allowing for more detailed information about correlations within the chain to be collected).
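In the Guinier regime this is commonly organized as a Zimm-type plot; a minimal sketch, consistent with the small-$q$ expansion above:

```latex
\frac{1}{S(q)} \;\approx\; \frac{1}{N}\left(1 + \frac{q^{2}R_{g}^{2}}{3} + \dots\right)
```

so a plot of $1/S$ versus $q^{2}$ is linear at small $q$, and the ratio of slope to intercept gives $R_{g}^{2}/3$.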

#### Debye scattering function

Around 1947, Debye gave an exact result (the *Debye function*) for Gaussian chains:

$S\left(q\right)=N\,g_{D}\left(x\right),\qquad g_{D}\left(x\right)=\frac{2}{x^{2}}\left(e^{-x}-1+x\right),\qquad x=q^{2}R_{g}^{2}$

In the limit where $x\ll1$ we can expand around $x=0$, yielding $g_{D}\approx1-x/3$, the limit we have encountered earlier. For $x\gg1$, $g_{D}\approx2/x$.
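A minimal numerical sketch of the Debye function and its two limits (using the notation $x=q^{2}R_{g}^{2}$, as above):

```python
import math

def debye(x):
    """Debye scattering function g_D(x), with x = (q*Rg)**2."""
    if x == 0.0:
        return 1.0
    return 2.0 * (math.exp(-x) - 1.0 + x) / x**2

# Guinier limit: g_D(x) -> 1 - x/3 for x << 1
print(debye(1e-3), 1.0 - 1e-3 / 3.0)
# Large-q tail: g_D(x) -> 2/x for x >> 1
print(debye(1e3), 2.0 / 1e3)
```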

**Sidenote**

Another way to observe GW behavior is to use a $\theta$-solvent.

This also works very well for non-Gaussian chains in non-dilute solutions, where a small percentage of the chains is replaced by isotopic variants. This gives an effectively dilute solution of isotopic chains, which can be distinguished from the rest, and these chains are effectively Gaussian for reasons which we will mention later. An example from Rubinstein is neutron scattering from PMMA as done by R. Kirste et al. (1975), which fits the Debye function very nicely. In general, however, a SAW in a dilute solution modifies the tail of the Debye function, since $S\left(q\right)\sim q^{-1/\nu}$ at large $q$ and $\nu\approx3/5$ for a SAW.

#### The structure factor and monomer correlations

Consider the full distribution function of the distances $\mathbf{R}_{nm}=\mathbf{R}_{n}-\mathbf{R}_{m}$. This is related to the correlation function for monomer $n$:

$g_{n}\left(\mathbf{r}\right)=\sum_{m}\left\langle \delta\left(\mathbf{r}-\mathbf{R}_{nm}\right)\right\rangle$

This function is evaluated by fixing a certain monomer $n$ and counting which other monomers are at a distance $\mathbf{r}$ from it, averaging over all chain configurations. If we now average over all monomers $n$, we obtain

$g\left(\mathbf{r}\right)=\frac{1}{N}\sum_{n}g_{n}\left(\mathbf{r}\right)=\frac{1}{N}\sum_{n,m}\left\langle \delta\left(\mathbf{r}-\mathbf{R}_{nm}\right)\right\rangle$

Fourier transforming it,

$S\left(\mathbf{q}\right)=\int\mathrm{d}^{3}r\,g\left(\mathbf{r}\right)e^{i\mathbf{q}\cdot\mathbf{r}}$

The fact that the structure function is the Fourier transform of the scatterer density correlation function is, of course, not unique to the case of polymers. At large $q$, it can be shown (homework) that if $R\sim N^{\nu}$ then $S\left(q\right)\sim q^{-1/\nu}=q^{-d_{f}}$. We can therefore determine the fractal dimension $d_{f}$ of the chain from the large-$q$ tail of the structure factor (see table).
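This can be checked on the Debye function itself, whose large-$q$ tail should give the ideal-chain exponent $-2$; a small numeric sketch:

```python
import math

def debye(x):  # x = (q*Rg)**2
    return 2.0 * (math.exp(-x) - 1.0 + x) / x**2

# Log-log slope of S(q) at large q (q measured in units of 1/Rg):
q1, q2 = 50.0, 100.0
slope = (math.log(debye(q2**2)) - math.log(debye(q1**2))) \
        / (math.log(q2) - math.log(q1))
print(slope)  # close to -1/nu = -2 for an ideal chain (nu = 1/2)
```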

Model | $\nu$ | $d_{f}=1/\nu$ | Large-$q$ tail of $S\left(q\right)$
---|---|---|---
3D GW | $1/2$ | $2$ | $\sim q^{-2}$
3D SAW | $\approx3/5$ | $\approx5/3$ | $\sim q^{-5/3}$
3D collapsed chain | $1/3$ | $3$ | $\sim q^{-3}$

### Polymer Solutions

#### Dilute and semi-dilute solutions

Up to this point, we have considered only independent chains in dilute solutions. We have also discussed the quality of solvents and the temperature. Now, we consider multiple chains in a good solvent (good because we do not want them in a collapsed state). The concentration of monomers $c$ is defined as the number of monomers (for all chains) per unit volume. A solution is dilute if the typical distance between chains is larger than the typical chain size $R$, and semi-dilute if it is smaller. Between these limits, the concentration passes through a crossover value $c^{*}$ where the inter-chain distance is equal to the typical chain size $R$.

**Sidenote**

A *concentrated* solution is defined by $c\gg c^{*}$. If the solvent is removed completely, one obtains a *melt*, composed of polymer chains in a liquid state (a viscoelastic material). We will not be discussing these cases further - see Rubinstein for details.

We can calculate $c^{*}$ by calculating the concentration of monomers within a single chain and equating it to the average monomer concentration:

$c^{*}\sim\frac{N}{R^{3}}$

For instance, in a 3D SAW $R\sim bN^{\nu}$ with $\nu=3/5$, such that $c^{*}\sim b^{-3}N^{1-3\nu}=b^{-3}N^{-4/5}$. We can also work in terms of the volume fraction $\phi^{*}=c^{*}b^{3}\sim N^{-4/5}$. This turns out to be very small (for $N\sim10^{4}$ it is about 0.001, and for $N\sim10^{3}$ it is about 0.4%). 05/14/2009
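The scaling of the overlap volume fraction can be made concrete; a sketch in which prefactors of order unity are dropped and the $N$ values are illustrative:

```python
# Overlap volume fraction scaling: phi* ~ N**(1 - 3*nu), nu = 3/5 for a 3D SAW,
# i.e. phi* ~ N**(-4/5). Prefactors of order unity are dropped.
nu = 3.0 / 5.0
phi_star = {N: N ** (1 - 3 * nu) for N in (10**3, 10**4, 10**5)}
for N in sorted(phi_star):
    print(N, phi_star[N])
```

Longer chains overlap at ever smaller volume fractions, which is why the semi-dilute regime is so easily reached for long polymers.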

#### Free energy of mixing

If we have a mixture of two components - $N_{A}$ units of $A$ and $N_{B}$ units of $B$ on a lattice model with cell length $a$, such that $N=N_{A}+N_{B}$ is the total number of cells - we can define the relative volume fractions $\phi_{A}=N_{A}/N\equiv\phi$ and $\phi_{B}=N_{B}/N=1-\phi$. The free energy of mixing (in the simple isotropic case) is then

$\Delta F_{\mathrm{mix}}=\Delta U_{\mathrm{mix}}-T\Delta S_{\mathrm{mix}}$

From a combinatorial argument and with the help of the Stirling series,

$\Delta S_{\mathrm{mix}}=k_{B}\ln\frac{N!}{N_{A}!N_{B}!}\approx-Nk_{B}\left[\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)\right]$

The average entropy of mixing per cell is therefore

$\Delta s_{\mathrm{mix}}=\frac{\Delta S_{\mathrm{mix}}}{N}=-k_{B}\left[\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)\right]$
We now consider interactions $\varepsilon_{AA}$, $\varepsilon_{AB}$ and $\varepsilon_{BB}$ between nearest neighbors of the two species. The specific form of the interaction depends on the coordination number $z$, or the number of nearest neighbors per grid point: for instance, on a square 2D grid $z=4$.

The mixing interaction energy can be written as

$\Delta U=N_{AA}\varepsilon_{AA}+N_{BB}\varepsilon_{BB}+N_{AB}\varepsilon_{AB}$

where the $N_{ij}$ count the number of boundaries of the different types within the system. In the *mean field approximation*, we can evaluate them by neglecting local variations in density:

$N_{AA}\approx\frac{zN}{2}\phi^{2},\qquad N_{BB}\approx\frac{zN}{2}\left(1-\phi\right)^{2},\qquad N_{AB}\approx zN\phi\left(1-\phi\right)$

The interaction energy per particle due to mixing is then

$u=\frac{\Delta U}{N}=\frac{z}{2}\left[\varepsilon_{AA}\phi^{2}+\varepsilon_{BB}\left(1-\phi\right)^{2}+2\varepsilon_{AB}\phi\left(1-\phi\right)\right]$

and we will subtract from it the enthalpy of the "pure" system, where the components are unmixed:

$u_{0}=\frac{z}{2}\left[\varepsilon_{AA}\phi+\varepsilon_{BB}\left(1-\phi\right)\right]$

The difference between these two quantities is the change in enthalpy per unit cell due to mixing:

$\Delta u_{\mathrm{mix}}=u-u_{0}=\chi k_{B}T\,\phi\left(1-\phi\right),\qquad\chi=\frac{z}{k_{B}T}\left[\varepsilon_{AB}-\frac{\varepsilon_{AA}+\varepsilon_{BB}}{2}\right]$

The sign of the *Flory parameter* $\chi$ determines whether the minimum of the energy will be at the center or edges of the parabola in $\phi$. Altogether,

$\frac{\Delta F_{\mathrm{mix}}}{Nk_{B}T}=\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)+\chi\phi\left(1-\phi\right)$

This is the MFT approximation for the free energy of mixing.
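A small numerical illustration of the two regimes of this free energy (for the symmetric mixture the critical value is $\chi_{c}=2$; the grid and $\chi$ values are illustrative):

```python
import math

def f_mix(phi, chi):
    """Mean-field free energy of mixing per cell, in units of k_B*T."""
    return phi * math.log(phi) + (1 - phi) * math.log(1 - phi) \
        + chi * phi * (1 - phi)

# Below chi_c = 2 the minimum sits at phi = 1/2 (mixed state);
# above it the profile develops two symmetric minima (demixing).
minima = {}
for chi in (1.0, 3.0):
    grid = [(f_mix(p / 100.0, chi), p / 100.0) for p in range(1, 100)]
    minima[chi] = min(grid)[1]
    print(chi, minima[chi])
```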

**Sidenote**

If the two components have different molecular volumes $v_{A}$ and $v_{B}$, then the entropy of mixing per unit volume becomes

$\Delta s_{\mathrm{mix}}=-k_{B}\left[\frac{\phi}{v_{A}}\ln\phi+\frac{1-\phi}{v_{B}}\ln\left(1-\phi\right)\right]$

where $\phi$ is now the volume fraction of $A$. Otherwise, the treatment is very similar (see homework).

#### The Flory-Huggins model for polymer solutions

This is based on work mostly done by Huggins around 1942. The basic idea is to consider a lattice like the one shown in (11), with chains (each inhabiting several blocks in the example) in a solvent (which can also be a set of chains, but in the example the number of blocks per solvent unit is one). The enthalpy of mixing is approximately independent of the change from the molecule-solvent system to this polymer-solvent system, at least within the MFT approximation. We can therefore let $\phi$ be the volume fraction of monomers ($N$ is the number of monomers per chain, $N_{s}$ the number of solvent units and $n$ the number of chains) and use the previous expressions for $\Delta u_{\mathrm{mix}}$ and $\chi$. The fact that we have chains rather than individual monomers is of crucial importance when we calculate the entropy, though: chains have more constraints and therefore a lower entropy than isolated monomers. We will make an approximation (correct to first order in $1/N$ for long chains) based on the assumption that the chains are solid objects and can only be translated, rather than also rotated and conformed around their center of mass.

**Sidenote**

This is treated in detail in the books by Flory and by Doi & Edwards.

This gives, making the Stirling approximation as before,

$\Delta S_{\mathrm{mix}}\approx-k_{B}\left[n\ln\phi+N_{s}\ln\left(1-\phi\right)\right]+\left(\text{terms linear in }n\right)$

If we neglect the term linear in $n$, which we will later show is of no importance, these two expressions lead to the Flory-Huggins free energy of mixing:

$\frac{\Delta F_{\mathrm{mix}}}{N_{t}k_{B}T}=\frac{\phi}{N}\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)+\chi\phi\left(1-\phi\right)$

where $N_{t}=nN+N_{s}$ is the total number of lattice cells. Compared to our previous expression, we see that the only difference is the division of the $\phi\ln\phi$ term by $N$.

**Sidenote**

This formula is for a polymer in a monomeric solvent, but it is easy to show (homework) that for a *blend* of two polymers with chain lengths $N_{A}$ and $N_{B}$, we will still have

$\frac{\Delta F_{\mathrm{mix}}}{N_{t}k_{B}T}=\frac{\phi}{N_{A}}\ln\phi+\frac{1-\phi}{N_{B}}\ln\left(1-\phi\right)+\chi\phi\left(1-\phi\right)$

Similarly, if the molecular volumes are different we can define appropriate volume fractions $\phi_{A}$ and $\phi_{B}$, then continue to use the same expression.

#### Polymer/solvent phase transitions

A system composed of a polymer immersed in a solvent can be in a uniform phase (corresponding to a good solvent) or separated into two distinct phases (a bad solvent). Qualitatively, this depends on $\chi$: the entropic contribution to the free energy will always prefer mixing, but the preference of the enthalpic contribution depends on the sign of $\chi$. Phase transitions can only possibly exist if $\chi>0$, because otherwise the total change in energy due to mixing is always negative. When discussing the Helmholtz free energy, $\phi$ is the degree of freedom - however, in the physical case of interest the average composition $\bar{\phi}$ is constant, and we must perform a Legendre transformation, or in other words introduce a Lagrange multiplier $\mu$ to impose the constraint that $\left\langle \phi\right\rangle =\bar{\phi}$. We therefore define

$G\left(\phi\right)=F\left(\phi\right)-\mu\phi$

and after $G$ is minimized, $\mu$ will be determined so as to maintain our constraint (it turns out that $\mu$ is the difference between the chemical potentials of the polymer and solvent). When $G$ has multiple minima ($\partial G/\partial\phi=0$ for more than one $\phi$), a phase transition can exist.

If $F\left(\phi\right)$ has only one minimum, then we must have a single uniform phase. If $G$ has two minima, a first order phase transition will exist when the free energy at these two minima is the same. This amounts to a *common tangent construction* condition for $F\left(\phi\right)$ (see 12):

$\frac{\partial F}{\partial\phi}\bigg|_{\phi_{1}}=\frac{\partial F}{\partial\phi}\bigg|_{\phi_{2}}=\frac{F\left(\phi_{2}\right)-F\left(\phi_{1}\right)}{\phi_{2}-\phi_{1}}$

This requires $F$ to have a concave region. The two formulations (in terms of $F$ and $G$) are of course identical. The common tangent actually describes the free energy of a mixed-phase system (having a volume $V_{1}$ at concentration $\phi_{1}$ and a volume $V_{2}$ at concentration $\phi_{2}$, such that $V_{1}\phi_{1}+V_{2}\phi_{2}=V\bar{\phi}$). When $\phi_{1}<\bar{\phi}<\phi_{2}$ this line is always lower than the concave profile of the uniform system with concentration $\bar{\phi}$, and therefore the mixed-phase system must be the stable state.

Note that any additional term in $F$ which is linear in $\phi$ will only produce a shift in $\mu$, and not qualitatively change the phase diagram. This is because

$\frac{\partial\left(F+c\phi\right)}{\partial\phi}=\frac{\partial F}{\partial\phi}+c$

so both sides of the common tangent condition shift by the same constant.

Returning to the Flory-Huggins mixing energy, for $\chi>\chi_{c}$ we can see that $G$ has two minima and the system can therefore separate into two phases. For $\chi<\chi_{c}$ only one minimum exists, and therefore only one phase. Generalizing beyond the Flory-Huggins model, at any temperature there exists some $\chi\left(T\right)$, and often a dependence $\chi\propto1/T$ works well experimentally (we have found such a dependence assuming that the interactions $\varepsilon_{ij}$ are independent of temperature). For every $\chi$ or $T$ where two phases exist, we can find $\phi_{1}$ and $\phi_{2}$ from the procedure above. This produces a phase diagram similar to (13), where the curve is known as the *binodal* or *demixing curve*.

The phase diagram (13) includes a few more details: one is the critical point $\left(\phi_{c},\chi_{c}\right)$, beyond which two coexisting phases can no longer exist. Another is the *spinodal curve*, existing within the demixing curve, which marks the transition between metastability and instability (within the spinodal curve, phase separation occurs spontaneously, while between the spinodal and binodal curves it requires some initial nucleation). The spinodal curve is usually quite close to the binodal curve, and since it can be found analytically it provides a useful estimate:

$\frac{\partial^{2}f}{\partial\phi^{2}}=0\quad\Rightarrow\quad\chi_{s}\left(\phi\right)=\frac{1}{2}\left[\frac{1}{N\phi}+\frac{1}{1-\phi}\right]$

The endpoint of the spinodal curve is also the endpoint of the binodal curve; also, this endpoint is the same for the $\chi\left(\phi\right)$ and $T\left(\phi\right)$ curves. We can find it from

$\frac{\partial\chi_{s}}{\partial\phi}=0\quad\Rightarrow\quad\phi_{c}=\frac{1}{1+\sqrt{N}}$

Inserting this into the equation for $\chi_{s}$ gives

$\chi_{c}=\frac{1}{2}\left(1+\frac{1}{\sqrt{N}}\right)^{2}\approx\frac{1}{2}+\frac{1}{\sqrt{N}}$
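A numerical cross-check of the critical point, sketched with the spinodal expression above (the chain length $N=1000$ is an illustrative choice):

```python
import math

def chi_spinodal(phi, N):
    """Spinodal line of the Flory-Huggins free energy (d2f/dphi2 = 0)."""
    return 0.5 * (1.0 / (N * phi) + 1.0 / (1.0 - phi))

N = 1000
# The critical point is the minimum of the spinodal curve:
phis = [i / 10000.0 for i in range(1, 10000)]
phi_c_num = min(phis, key=lambda p: chi_spinodal(p, N))
phi_c = 1.0 / (1.0 + math.sqrt(N))                 # analytic result
chi_c = 0.5 * (1.0 + 1.0 / math.sqrt(N)) ** 2
print(phi_c_num, phi_c, chi_c)
```

Note how strongly asymmetric the critical point is for long chains: $\phi_{c}\approx3\%$ here, while $\chi_{c}$ sits just above $1/2$.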

There is a great deal to expand on here. Chapter 4 in Rubinstein is a good place to start.

## Surfaces, Interfaces and Membranes

### Introduction and Motivation

We will differentiate between several types of surfaces:

- An outer *surface* (or boundary) between a liquid phase and a solid boundary or surface. This surface need not be in thermal equilibrium and exists under external constraints.
- An *interface* between two phases in equilibrium with each other, like the A/B liquid mixture that was studied earlier.
- *Membranes*, which have a molecular thickness and are in equilibrium with the surrounding water.

We will first talk about flat interfaces, and then extend the discussion to curved and fluctuating interfaces.

### Flat Surfaces

#### Ginzburg-Landau formalism

The simplest kind of non-homogeneous system one can imagine may be described by the variation in some order parameter or concentration as a function of a single spatial direction, $z$. For instance, if we have a gas at $z\to-\infty$ and a liquid at $z\to+\infty$, there will be some crossover regime between them. This kind of physics can be treated with a Ginzburg-Landau formalism, which can be derived from the continuum limit of a lattice gas/Ising model.

If every cell (with size $a$) is parametrized by a discrete spin variable $S_{i}$ such that

$S_{i}=\pm1$ ($+1$ for an occupied cell, $-1$ for an empty one),

we may write the Hamiltonian as

$\mathcal{H}=-\sum_{ij}J_{ij}S_{i}S_{j}$

The $J_{ij}$ are the interaction constants between cells. Note that typically $J_{ij}$ couples only nearby cells, e.g. nearest neighbors. The partition function is

$Z=\sum_{\left\{ S_{i}\right\} }e^{-\beta\mathcal{H}}$

with $\beta=1/k_{B}T$. We can now formulate a mean-field theory (by neglecting correlations, e.g. $\left\langle S_{i}S_{j}\right\rangle \approx\left\langle S_{i}\right\rangle \left\langle S_{j}\right\rangle$) for this model in cases of spatial inhomogeneities (presence of walls and interfaces). The full development is left as an exercise: the result assumes a local thermal equilibrium and gives a mean-field free energy $F_{0}$ in terms of the local averages $m_{i}=\left\langle S_{i}\right\rangle$.

Separating this $F_{0}$ into internal energy and entropy,

$F_{0}=-\sum_{ij}J_{ij}m_{i}m_{j}+k_{B}T\sum_{i}\left[\frac{1+m_{i}}{2}\ln\frac{1+m_{i}}{2}+\frac{1-m_{i}}{2}\ln\frac{1-m_{i}}{2}\right]$

In the continuum limit $a\to0$ and $m_{i}\to m\left(\mathbf{r}\right)$, and neglecting long-range interactions, we can perform a Taylor expansion of the interaction term (up to numerical coefficients):

$-\sum_{ij}J_{ij}m_{i}m_{j}\to\frac{1}{a^{3}}\int\mathrm{d}^{3}r\left[-\frac{zJ}{2}m^{2}+\frac{zJa^{2}}{2}\left(\triangledown m\right)^{2}\right]$

$z$ is the coordination number. Adding the continuum limit entropy, one obtains a free energy functional of the Ginzburg-Landau form discussed below.

We can find the profile at equilibrium by minimizing the free energy functional with respect to $m\left(\mathbf{r}\right)$ and taking external constraints into account. Normally, the minimum of $F$ is homogeneous away from surfaces and interfaces. If $T>T_{c}$, the minimal solution is a constant and we will have a single homogeneous phase. On the other hand, if $T<T_{c}$ and we are in the two-phase region, then a 1D profile must exist that solves the Euler-Lagrange equation, and becomes approximately homogeneous far from the center of the interface.

#### 1D profile at an interface

Quite independently of the previous treatment and the microscopic model, the free energy can be written as a functional of an order parameter $\phi$ and its gradients:

$F\left[\phi\right]=\int\mathrm{d}^{3}r\left[f\left(\phi\right)+\frac{\kappa}{2}\left(\triangledown\phi\right)^{2}\right]$

Since $\kappa>0$, the system avoids strong local fluctuations and smooth states have smaller energies. A uniform state is therefore preferred, and if the system is not allowed to become fully uniform then regions of different phases form in equilibrium with each other. This is shown in (16), and can also be described by a tangent construction of the type illustrated in (12). In the two-phase example above, due to the symmetry of $f$ under $\phi\to1-\phi$, the critical point is clearly at $\phi_{c}=1/2$. We will make a Taylor expansion of $f$ around $\phi_{c}$, writing $\psi=\phi-\phi_{c}$. Due to the same symmetry, an expansion of $f$ in $\psi$ should contain only even powers. Performing this expansion gives the result

$f\left(\psi\right)=f\left(\phi_{c}\right)+\frac{a_{2}}{2}\psi^{2}+\frac{a_{4}}{4}\psi^{4}+\dots$

In general the coefficients are numerical factors, with $a_{4}\equiv b>0$. To obtain the correct critical behavior (note that the $\psi=0$ state must become unstable below $T_{c}$) we assume a linear temperature dependence of the form $a_{2}=at$, with the reduced temperature $t=\left(T-T_{c}\right)/T_{c}$ and $a>0$, and minimize

$f\left(\psi\right)=\frac{at}{2}\psi^{2}+\frac{b}{4}\psi^{4}$

The above expansion of the inhomogeneous free energy is called the Ginzburg-Landau (GL) model or expansion. By applying a variational principle to this free energy, we obtain the Euler-Lagrange (EL) equation:

$\frac{\partial f}{\partial\psi}-\triangledown\cdot\frac{\partial f}{\partial\left(\triangledown\psi\right)}=0$

06/09/2009

Here the full integrand is

$\frac{\kappa}{2}\left(\triangledown\psi\right)^{2}+\frac{at}{2}\psi^{2}+\frac{b}{4}\psi^{4}$

In particular, $\frac{\partial f}{\partial\psi}=at\psi+b\psi^{3}$ and $\triangledown\cdot\frac{\partial f}{\partial\left(\triangledown\psi\right)}=\kappa\triangledown^{2}\psi$.

The EL equation is therefore

$\kappa\triangledown^{2}\psi=at\psi+b\psi^{3}$

This is the well-known Ginzburg-Landau (GL) equation. For $t>0$ the only homogeneous (bulk) solution (arrived at by neglecting the Laplacian term) is

$\psi=0$

In the other case, when $t<0$, the system has two homogeneous solutions

$\psi=\pm\psi_{b},\qquad\psi_{b}=\sqrt{\frac{a\left|t\right|}{b}}$

If we do not neglect the derivative but assume a 1D profile with $\psi\left(\pm\infty\right)=\pm\psi_{b}$ and $\psi^{\prime}\left(\pm\infty\right)=0$, we must solve the equation

$\kappa\psi^{\prime\prime}=at\psi+b\psi^{3}$

The exact solution of the GL model is

$\psi\left(z\right)=\psi_{b}\tanh\left(\frac{z}{\sqrt{2}\xi}\right),\qquad\xi=\sqrt{\frac{\kappa}{a\left|t\right|}}$

We have introduced the correlation length $\xi$, which is typical of the width of the meniscus (the layer in which the phases are mixed). As a matter of fact, $\xi$ is also the correlation length by the definition $\left\langle \psi\left(0\right)\psi\left(r\right)\right\rangle \sim e^{-r/\xi}$. The dependence $\xi\sim\left|t\right|^{-1/2}$ is the mean field result, with an exponent $\nu=1/2$. In general, $\xi\sim\left|t\right|^{-\nu}$. We also have for the order parameter the dependence $\psi_{b}\sim\left|t\right|^{\beta}$, where we have obtained in MFT $\beta=1/2$.
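In rescaled units ($\psi_{b}=1$, $\xi=1$ - an illustrative choice), the profile above should satisfy $\psi^{\prime\prime}=-\psi+\psi^{3}$; a finite-difference check:

```python
import math

# Check numerically that psi(z) = tanh(z/sqrt(2)) solves psi'' = -psi + psi**3
# (the GL equation with psi_b = 1, xi = 1, an illustrative choice of units).
h = 1e-3
residuals = []
for z in (-2.0, -0.5, 0.0, 0.7, 3.0):
    psi = math.tanh(z / math.sqrt(2))
    second = (math.tanh((z + h) / math.sqrt(2)) - 2.0 * psi
              + math.tanh((z - h) / math.sqrt(2))) / h**2
    residuals.append(abs(second - (-psi + psi**3)))
print(max(residuals))  # small: the tanh profile solves the equation
```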

#### Surface energy and surface tension

*Surface energy* is the excess energy in the system with respect to the bulk. *Surface tension* is defined as the surface energy per unit area. Therefore, in our case of two phases separated by a meniscus, $\sigma$ can be calculated using

$\sigma A=F-F_{\mathrm{bulk}}$

Here, we have subtracted the bulk energy of the separate phases from the energy of the full system including the interface. Note that in equilibrium, by definition, $\sigma\geq0$. With the 1D dependence we are treating, then,

$\sigma=\int_{-\infty}^{+\infty}\mathrm{d}z\left\{ \frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}+\left[f\left(\psi\right)-f\left(\psi_{b}\right)\right]\right\}$

This is not an extensive quantity like $F$, a single number scaling with the size of the system: it is rather a geometry-independent parameter with units of energy per unit area.

The first term above is reminiscent of kinetic energy and the second of potential energy. An analogy to the classical mechanics of a point particle exists, as detailed in the following table.

Interface profile | Mechanical analog
---|---
$z$ | $t$ (time)
$\psi$ | $x$ (distance)
$\frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}$ | $\frac{m}{2}\dot{x}^{2}$ (kinetic energy)
$-\left[f\left(\psi\right)-f\left(\psi_{b}\right)\right]$ | $V\left(x\right)$ (potential energy)
$\frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}-\left[f\left(\psi\right)-f\left(\psi_{b}\right)\right]$ | $E$ (total energy)

With this analogy in mind, we can derive an expression similar to energy conservation in mechanics. From applying the variational principle to the 1D free energy we obtain

$\kappa\psi^{\prime\prime}=\frac{\partial f}{\partial\psi}$

Multiplying this by $\psi^{\prime}$ gives

$\kappa\psi^{\prime\prime}\psi^{\prime}=\frac{\partial f}{\partial\psi}\psi^{\prime}\quad\Rightarrow\quad\frac{\mathrm{d}}{\mathrm{d}z}\left[\frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}-f\left(\psi\right)\right]=0$

Integrating over $z$,

$\frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}=f\left(\psi\right)-f\left(\psi_{b}\right)$

The integration constant is fixed by the boundary condition at $z\to\pm\infty$, where $\psi\to\pm\psi_{b}$ and therefore $\psi^{\prime}\to0$. The analogy between this equation and the law of conservation of mechanical energy can be stressed by writing it as

$\frac{\kappa}{2}\left(\psi^{\prime}\right)^{2}-\left[f\left(\psi\right)-f\left(\psi_{b}\right)\right]=0=\mathrm{const}$

Returning to the surface tension, we can use this conservation law to rewrite it in the simpler form

$\sigma=\int_{-\infty}^{+\infty}\kappa\left(\psi^{\prime}\right)^{2}\mathrm{d}z$

An estimate may be obtained from

$\psi^{\prime}\sim\frac{\psi_{b}}{\xi}$

or

$\sigma\sim\kappa\frac{\psi_{b}^{2}}{\xi}$

The exact expression for $\sigma$ may be obtained from the exact GL form that we have derived for $\psi\left(z\right)$; carrying out the integral gives $\sigma=\frac{2\sqrt{2}}{3}\frac{\kappa\psi_{b}^{2}}{\xi}$. In any case, the temperature dependence of $\sigma$ is of the form

$\sigma\sim\frac{\kappa\psi_{b}^{2}}{\xi}\sim\left|t\right|^{2\beta+\nu}$

If we insert the general exponent dependencies $\psi_{b}\sim\left|t\right|^{\beta}$ and $\xi\sim\left|t\right|^{-\nu}$ into the equation, we will see that the exponent for surface energy as a function of $t$ is $\mu=2\beta+\nu$ (in MFT, $\mu=3/2$). This discussion can be extended to systems which do not have symmetry between $\psi$ and $-\psi$, such as a liquid/gas system with two densities $n_{\ell}$ and $n_{g}$. Without proof, we will state that

within the GL formalism it can be shown that the same structure emerges for a free energy density $f\left(n\right)$ with two minima at $n_{g}$ and $n_{\ell}$. The surface energy will be

$\sigma=\int_{-\infty}^{+\infty}\mathrm{d}z\left\{ \frac{\kappa}{2}\left(\frac{\mathrm{d}n}{\mathrm{d}z}\right)^{2}+\left[f\left(n\right)-f_{\mathrm{bulk}}\right]\right\}$

For a profile in the $z$ direction, after variation one obtains for the two coexisting phases with $n_{\ell}>n>n_{g}$

$\frac{\kappa}{2}\left(\frac{\mathrm{d}n}{\mathrm{d}z}\right)^{2}=f\left(n\right)-f_{\mathrm{bulk}}$

with $n\left(z\to-\infty\right)=n_{g}$ and $n\left(z\to+\infty\right)=n_{\ell}$. The density profile interpolates smoothly between the two phases:

$n\left(z\right)=\frac{n_{\ell}+n_{g}}{2}+\frac{n_{\ell}-n_{g}}{2}\tanh\left(\frac{z}{\sqrt{2}\xi}\right)$

A few generalizations:

- Surfactants or surface-active materials: this includes soap, detergent, biological membranes composed of biological amphiphiles called phospholipids, and more. What they have in common is that they are formed of molecules with charged or polarized "heads" connected to long hydrocarbon "tails". These molecules are called *amphiphilic*, since the tails are *hydrophobic* and the heads *hydrophilic*. This causes them to accumulate on interfaces between water and air, and to reduce the surface tension (by a factor of $\sim2-3$):

$\sigma\left(\Gamma\right)=\sigma_{0}-\Pi\left(\Gamma\right)$

where $\Gamma$ is the surface concentration of the soap molecules and $\Pi$ is the surface pressure they exert.

- Emulsions: drops of oil in water (or water in oil), stabilized by some sort of emulsifier (which is also a surfactant). Some common examples are milk and mayonnaise.

**Sidenote**

There is a French biochemist by the name of Hervé This who specializes in molecular gastronomy, and who has some very interesting popular lectures which are worth looking up. He authored several books (one is called "Molecular Gastronomy") and appeared on several TV shows. In his presentations he explains how food preparation depends crucially on physico-chemical processes at the molecular level. This includes the preparation of mousse, whipped cream, sauces, thickeners and emulsifiers.

- Detergency of soap: while soap reduces surface tension between oil and water, it does not create a phase where oil and water are mixed on a molecular level. Rather, micrometric oil droplets are formed in the aqueous solution. The process of cleaning is the process where oily dirt is solubilized in the aqueous solution and is washed away from the object we clean.

06/11/2009

### Curved Surfaces

#### Review of differential geometry

**Books:** The book by Safran has a short introduction which will be followed here. The one by Visconti is more thorough and oriented towards other physics problems such as relativity. There also exists a multi-authored book on the subject edited by David Nelson, and a mathematical book on the theory of manifolds by Spivak.

In order to discuss surfaces and curves which exhibit local curvature, we will need to introduce a few mathematical concepts. A brief introduction follows.

**Curves**

A *parametric curve* $\mathbf{r}\left(s\right)$ is a set of vectors along some contour in space, expressed as a function of the parameter $s$, which may vary, for example, from $0$ to the length of the curve. The differential length element along the curve can be expressed by

$\mathrm{d}\ell=\left|\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}\right|\mathrm{d}s$

A tangent vector $\hat{\mathbf{t}}$ can be found from

$\hat{\mathbf{t}}=\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}\ell}$

Note that from the magnitude of this expression, $\hat{\mathbf{t}}$ is always a unit vector. It is tangent to the curve because it is proportional to $\mathrm{d}\mathbf{r}$.

With these definitions, we can define *curvature* as one extra derivative:

$\frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}\ell}=c\,\hat{\mathbf{n}}$

The unit vector $\hat{\mathbf{n}}$ is a unique vector perpendicular to $\hat{\mathbf{t}}$ (this is easy to show by taking $\frac{\mathrm{d}}{\mathrm{d}\ell}\left(\hat{\mathbf{t}}\cdot\hat{\mathbf{t}}\right)=0$), and we can also write

$c=\left|\frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}\ell}\right|=\left|\frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}\ell^{2}}\right|$

It is also useful to define the local *radius of curvature* $R=1/c$. Some intuition can be gained from an analogy with the kinematics of a point particle moving without friction along a curve in space, if $\ell$ is replaced by the time $t$. The tangent and curvature vectors can then be related to the velocity and acceleration, respectively.

**Surfaces**

Similarly to curves, a *parametric surface* $\mathbf{r}=\mathbf{R}\left(u,v\right)$ in space can be defined as a function of two parameters. There are three scalar functions contained in this explicit definition:

$x=R_{x}\left(u,v\right),\qquad y=R_{y}\left(u,v\right),\qquad z=R_{z}\left(u,v\right)$

Note that it is also possible to represent surfaces implicitly as

$F\left(x,y,z\right)=0$

where $F$ other than its zeros is arbitrary.

Any explicit definition requires some particular choice of $u$ and $v$. For instance, one choice (called the Monge representation) is

$u=x,\qquad v=y,\qquad z=h\left(x,y\right)$

In vector notation,

$\mathbf{R}\left(x,y\right)=\left(x,y,h\left(x,y\right)\right)$

This works only if there is a single height value for each choice of $x$ and $y$, and is very convenient for surfaces which are almost flat. Another common choice, useful for nearly spherical surfaces, is the spherical representation, where $u=\theta$ and $v=\varphi$. In spherical coordinates, this is

$\mathbf{R}\left(\theta,\varphi\right)=R\left(\theta,\varphi\right)\hat{\mathbf{r}}$

We can define two tangent vectors $\mathbf{r}_{u}=\partial\mathbf{R}/\partial u$ and $\mathbf{r}_{v}=\partial\mathbf{R}/\partial v$ at every point on the surface, such that $\mathrm{d}\mathbf{R}=\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v$. The unit vector normal to the surface is $\hat{\mathbf{n}}=\frac{\mathbf{r}_{u}\times\mathbf{r}_{v}}{\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|}$. It is easy to find the unit vector from the implicit representation, and one can usually find an implicit representation: for instance, starting from Monge, $F\left(x,y,z\right)=z-h\left(x,y\right)$. On the surface,

$F=\mathrm{const}=0$

implies

$\mathrm{d}F=\triangledown F\cdot\mathrm{d}\mathbf{r}=0$

The vector $\mathrm{d}\mathbf{r}$ can be any vector tangent to the surface, and therefore $\triangledown F$ must be proportional to the normal vector:

$\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}$

**Metric of a curved surface**

A surface has been defined as an ensemble of points embedded in 3-dimensional space. In order to measure length along such a surface, we must integrate a differential length element within it:

$\mathrm{d}s^{2}=\mathrm{d}\mathbf{R}\cdot\mathrm{d}\mathbf{R}=g_{uu}\,\mathrm{d}u^{2}+2g_{uv}\,\mathrm{d}u\,\mathrm{d}v+g_{vv}\,\mathrm{d}v^{2}$

The metric is defined as

$g_{ij}=\mathbf{r}_{i}\cdot\mathbf{r}_{j},\qquad i,j\in\left\{ u,v\right\}$

It is positive definite since $\mathrm{d}s^{2}>0$ for any nonzero displacement. The surface element can be expressed in terms of the metric:

$\mathrm{d}A=\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|\mathrm{d}u\,\mathrm{d}v=\sqrt{\det g}\,\mathrm{d}u\,\mathrm{d}v$

We illustrate this in the Monge representation as an example. Here,

$\mathbf{r}_{x}=\left(1,0,h_{x}\right),\qquad\mathbf{r}_{y}=\left(0,1,h_{y}\right)$

The surface element is

$\mathrm{d}A=\sqrt{1+h_{x}^{2}+h_{y}^{2}}\,\mathrm{d}x\,\mathrm{d}y$

with the metric

$g=\begin{pmatrix}1+h_{x}^{2} & h_{x}h_{y}\\ h_{x}h_{y} & 1+h_{y}^{2}\end{pmatrix}$

The length element is

$\mathrm{d}s^{2}=\left(1+h_{x}^{2}\right)\mathrm{d}x^{2}+2h_{x}h_{y}\,\mathrm{d}x\,\mathrm{d}y+\left(1+h_{y}^{2}\right)\mathrm{d}y^{2}$

and therefore we have in the Monge representation

$\det g=1+h_{x}^{2}+h_{y}^{2}$

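As a sanity check of the Monge-representation area element, one can integrate it over a spherical cap and compare with the known cap area; an illustrative sketch:

```python
import math

# Area of a spherical cap from the Monge representation:
# h(x,y) = sqrt(R**2 - x**2 - y**2), dA = sqrt(1 + hx**2 + hy**2) dx dy.
# In polar coordinates the integrand reduces to R*r/sqrt(R**2 - r**2).
R, r_max, n = 1.0, 0.8, 20000
area = 0.0
for i in range(n):
    r = (i + 0.5) * r_max / n                      # midpoint rule
    area += 2.0 * math.pi * R * r / math.sqrt(R**2 - r**2) * (r_max / n)
exact = 2.0 * math.pi * R * (R - math.sqrt(R**2 - r_max**2))  # cap area
print(area, exact)
```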
In the implicit representation, one can begin the same process by writing the surface element in terms of the volume element:

$\int\mathrm{d}A=\int\mathrm{d}^{3}r\,\delta\left(F\left(\mathbf{r}\right)\right)\left|\triangledown F\right|$

using the Dirac delta function $\delta$. A general property of the Dirac delta is that

$\delta\left(f\left(x\right)\right)=\sum_{i}\frac{\delta\left(x-x_{i}\right)}{\left|f^{\prime}\left(x_{i}\right)\right|}$

where the $x_{i}$ are the zeros of $f$, i.e. $f\left(x_{i}\right)=0$. In terms of the function $F$ such that the surface is defined by $F=0$, we can use this property to write

$\delta\left(F\right)=\frac{\delta\left(z-h\left(x,y\right)\right)}{\left|\partial F/\partial z\right|}$

or

$\int\mathrm{d}A=\int\mathrm{d}^{3}r\,\delta\left(z-h\right)\frac{\left|\triangledown F\right|}{\left|\partial_{z}F\right|}$

Returning to the implicit version of the Monge representation, $F=z-h\left(x,y\right)$, we have $\triangledown F=\left(-h_{x},-h_{y},1\right)$ and recover $\mathrm{d}A=\sqrt{1+h_{x}^{2}+h_{y}^{2}}\,\mathrm{d}x\,\mathrm{d}y$.

**Curvature of surfaces**

So far we have discussed first order differential expressions and the area element. This has to do with properties like the surface energy $\sigma A$. Curvature is a second order property, useful in discussing deformations and fluctuations. Consider a curve $\mathbf{r}\left(s\right)$ on a surface parametrized by $u\left(s\right)$ and $v\left(s\right)$, with $s$ the arc length. On the curve, $\mathrm{d}\mathbf{r}=\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v$. If $\hat{\mathbf{n}}$ is a vector normal to the surface, the local curvature (of the curve) is

$c=\hat{\mathbf{n}}\cdot\frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}s^{2}}$

The first derivative is

$\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}=\mathbf{r}_{u}\frac{\mathrm{d}u}{\mathrm{d}s}+\mathbf{r}_{v}\frac{\mathrm{d}v}{\mathrm{d}s}$

and the second derivative contains terms in $\mathbf{r}_{uu}$, $\mathbf{r}_{uv}$ and $\mathbf{r}_{vv}$, as well as terms proportional to $\mathbf{r}_{u}$ and $\mathbf{r}_{v}$. Since $\hat{\mathbf{n}}$ is perpendicular to $\mathbf{r}_{u}$ and $\mathbf{r}_{v}$, we are left with

$c=\hat{\mathbf{n}}\cdot\left[\mathbf{r}_{uu}\left(\frac{\mathrm{d}u}{\mathrm{d}s}\right)^{2}+2\mathbf{r}_{uv}\frac{\mathrm{d}u}{\mathrm{d}s}\frac{\mathrm{d}v}{\mathrm{d}s}+\mathbf{r}_{vv}\left(\frac{\mathrm{d}v}{\mathrm{d}s}\right)^{2}\right]$

(some missing formulas...)

We finally obtain

$c=-\frac{\mathrm{d}\hat{\mathbf{n}}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}s^{2}}$

with $\mathrm{d}\mathbf{r}=\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v$ and $\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}_{u}\mathrm{d}u+\hat{\mathbf{n}}_{v}\mathrm{d}v$,

(missing diagram...)

**Curvature tensor**

Since $\mathrm{d}\mathbf{r}\cdot\hat{\mathbf{n}}=0$,

$\mathrm{d}\left(\mathrm{d}\mathbf{r}\cdot\hat{\mathbf{n}}\right)=\mathrm{d}^{2}\mathbf{r}\cdot\hat{\mathbf{n}}+\mathrm{d}\mathbf{r}\cdot\mathrm{d}\hat{\mathbf{n}}=0$

or

$\hat{\mathbf{n}}\cdot\mathrm{d}^{2}\mathbf{r}=-\mathrm{d}\mathbf{r}\cdot\mathrm{d}\hat{\mathbf{n}}$

The quantity

$Q_{ij}=-\mathbf{r}_{i}\cdot\hat{\mathbf{n}}_{j},\qquad i,j\in\left\{ u,v\right\}$

is a second rank tensor or dyadic. Now, we can write the curvature of any curve on the surface as

$c=\frac{\sum_{ij}Q_{ij}\,\mathrm{d}x^{i}\,\mathrm{d}x^{j}}{\mathrm{d}s^{2}}$

where $\mathbf{r}^{\prime}\left(s\right)=\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}$ and $\left(x^{1},x^{2}\right)=\left(u,v\right)$.

This can be used for the case of an implicitly defined surface, where $\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}$:

$Q_{ij}=-\mathbf{r}_{i}\cdot\partial_{j}\left(\frac{\triangledown F}{\left|\triangledown F\right|}\right)$

Using $\partial_{i}\left|\triangledown F\right|=\partial_{i}\sqrt{\left(\partial_{x}F\right)^{2}+\left(\partial_{y}F\right)^{2}+\left(\partial_{z}F\right)^{2}}$, the derivatives can be carried out explicitly.

06/16/2009

**The curvature tensor and its invariants**

The dyadic matrix $Q$ has eigenvalues $c_{1},c_{2},c_{3}$, a trace and a determinant which are invariant under similarity transformations $Q\to U^{-1}QU$. The sum of the principal minors is also invariant: to see this, consider the characteristic polynomial

$P\left(\lambda\right)=\mathrm{Det}\left(Q-\lambda I\right)$

Here $I$ is the unit matrix. Expanding in powers of $\lambda$,

$P\left(\lambda\right)=-\lambda^{3}+\lambda^{2}\,\mathrm{Tr}\left(Q\right)-\lambda\,M\left(Q\right)+\mathrm{Det}\left(Q\right)$

where $M\left(Q\right)$ is the sum of the principal minors. We can identify clearly the coefficients of the polynomial as the 3 invariants. One eigenvalue is always equal to zero (as an exercise, show this in the implicit representation). If we choose $c_{3}=0$, we are reduced to two nontrivial invariants: $\mathrm{Tr}\left(Q\right)=c_{1}+c_{2}$ and $M\left(Q\right)=c_{1}c_{2}$ (as $\mathrm{Det}\left(Q\right)=0$). These invariants are called the *mean curvature* $H$ and the *Gaussian curvature* $K$:

$H=\frac{1}{2}\left(c_{1}+c_{2}\right),\qquad K=c_{1}c_{2}$

For example, in the implicit representation we can write

$2H=-\triangledown\cdot\hat{\mathbf{n}}=-\triangledown\cdot\frac{\triangledown F}{\left|\triangledown F\right|}$

where the overall sign is fixed by the choice of normal direction. With a few more steps we can show (another exercise) that $K$ can be written in terms of second derivatives of $F$ as well, summed over cyclic permutations of the axes $x\to y\to z\to x$. In the case of the Monge representation, where $F=z-h\left(x,y\right)$, $H$ and $K$ have a simpler form. One can then show that

$2H=\triangledown\cdot\frac{\triangledown h}{\sqrt{1+\left|\triangledown h\right|^{2}}},\qquad K=\frac{h_{xx}h_{yy}-h_{xy}^{2}}{\left(1+\left|\triangledown h\right|^{2}\right)^{2}}$

(here $\triangledown$ acts in the $xy$ plane, and the sign of $H$ again follows the normal convention).

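The Monge-representation formulas can be checked numerically on a sphere, where both principal curvatures have magnitude $1/R$ (finite-difference derivatives; the overall sign of $H$ depends on the choice of normal):

```python
import math

R = 2.0
def h(x, y):
    """Monge height of the upper hemisphere of a sphere of radius R."""
    return math.sqrt(R * R - x * x - y * y)

# Finite-difference derivatives at a generic point (x0, y0):
x0, y0, d = 0.3, 0.4, 1e-4
hx  = (h(x0 + d, y0) - h(x0 - d, y0)) / (2 * d)
hy  = (h(x0, y0 + d) - h(x0, y0 - d)) / (2 * d)
hxx = (h(x0 + d, y0) - 2 * h(x0, y0) + h(x0 - d, y0)) / d**2
hyy = (h(x0, y0 + d) - 2 * h(x0, y0) + h(x0, y0 - d)) / d**2
hxy = (h(x0 + d, y0 + d) - h(x0 + d, y0 - d)
       - h(x0 - d, y0 + d) + h(x0 - d, y0 - d)) / (4 * d**2)

g = 1 + hx**2 + hy**2
H = ((1 + hy**2) * hxx - 2 * hx * hy * hxy + (1 + hx**2) * hyy) / (2 * g**1.5)
K = (hxx * hyy - hxy**2) / g**2
print(H, -1.0 / R)     # mean curvature; sign set by the choice of normal
print(K, 1.0 / R**2)   # Gaussian curvature
```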
#### Small disturbances of planar surfaces

To treat nearly flat surfaces, one can use the Monge representation and expand in derivatives of $h\left(x,y\right)$ around a completely flat surface:

$2H\approx\triangledown^{2}h=h_{xx}+h_{yy}$

or equivalently $H\approx\frac{1}{2}\left(h_{xx}+h_{yy}\right)$. From similar arguments, one can show that

$K\approx h_{xx}h_{yy}-h_{xy}^{2}$

In the general parametric representation,

$c=\frac{\sum_{ij}Q_{ij}\,\mathrm{d}x^{i}\,\mathrm{d}x^{j}}{\mathrm{d}s^{2}}$

with $\left(x^{1},x^{2}\right)=\left(u,v\right)$. Picking a unit vector $\hat{\mathbf{a}}$ in the tangent plane with components $\left(l,m\right)$, the curvature in the direction of $\hat{\mathbf{a}}$ is given by

$c\left(\hat{\mathbf{a}}\right)=Q_{11}l^{2}+2Q_{12}lm+Q_{22}m^{2}$

The parameters $l$ and $m$ must obey the normalization condition

$a^{2}=\sum_{ij}g_{ij}x^{i}x^{j}=1$

In investigating $c$ as a function of the direction of $\hat{\mathbf{a}}$, we can find its extrema with the constraint $a=1$ by adding a Lagrange multiplier $\lambda$:

$\frac{\partial}{\partial l}\left[c-\lambda\left(a^{2}-1\right)\right]=0,\qquad\frac{\partial}{\partial m}\left[c-\lambda\left(a^{2}-1\right)\right]=0$

The solution takes the form of a quadratic equation which has 2 roots: $c_{1}$ and $c_{2}$. This extremum-finding process defines the *principal directions*, which (we will state without proof) are always perpendicular to each other.

The two invariants are then

$H=\frac{1}{2}\left(c_{1}+c_{2}\right)=\frac{1}{2}\left(\frac{1}{R_{1}}+\frac{1}{R_{2}}\right),\qquad K=c_{1}c_{2}=\frac{1}{R_{1}R_{2}}$
Consider a few cases in terms of the radii of curvature $R_{1}=1/c_{1}$ and $R_{2}=1/c_{2}$:

- If at some point both radii are positive, then $c_{1}$, $c_{2}$, $H$ and $K$ are all positive. The surface is convex around the point, as in (17a).
- If both are negative, then $H<0$ and $K>0$. The surface is concave around the point, as in (17b).
- If the two have opposing signs, $K$ is negative and one is at a saddle point of the surface, as in (17c).
- The special surface having $H=0$ at *any* point is called a *minimal surface* (or *Schwarz surface*, after the 19th century mathematician who studied them in detail). These surfaces have a saddle at every point, as one curvature is always positive and the second negative: $c_{2}=-c_{1}$. Hence, their Gaussian curvature is always negative: $K=-c_{1}^{2}\leq0$.

We will use the principal directions to describe a local paraboloid expansion of a nearly flat surface. In general, with $x$ and $y$ along the principal directions,

$h\left(x,y\right)\approx\frac{1}{2}\left(\frac{x^{2}}{R_{1}}+\frac{y^{2}}{R_{2}}\right)$

In the Monge representation,

$2H=h_{xx}+h_{yy}=\frac{1}{R_{1}}+\frac{1}{R_{2}},\qquad K=h_{xx}h_{yy}-h_{xy}^{2}=\frac{1}{R_{1}R_{2}}$

#### Free energy of soft surfaces

**Book:** Landau & Lifshitz's book on *Elasticity* has a chapter on the elasticity of hard (solid) shells. There is also a book by Boal on the *elasticity and mechanics of fluid membranes*. Safran's book shows how the parameters we describe can be derived from a microscopic model where the lipid (surfactant) molecules are modeled as beads connected by various springs.

Consider a liquid surface or fluid membrane: as such a surface curves, its free energy varies. Phenomenologically,

$F=\int\mathrm{d}A\left[\sigma+\frac{\kappa}{2}\left(2H-c_{0}\right)^{2}+\bar{\kappa}K\right]$

All the integrals are taken over the surface. The fact that the above expression models a fluid membrane is related to the fact that we do not account for any lateral shear forces. Molecules composing the fluid membrane are free to flow inside the membrane, but they resist elastic deformations such as bending. The first term describes the contribution of the surface tension, which is proportional to the total surface area. The geometric values $H$ and $K$ are the mean and Gaussian curvatures we have already encountered. The coefficients $\kappa$ and $\bar{\kappa}$ (with units of energy) depend, like $\sigma$, on the material properties in question. The spontaneous curvature $c_{0}$ is also a material property: it defines a certain preferred angle (perhaps due to the shape of the surfactant molecules), and its sign depends on the preferred direction of curvature. See (18) for an illustration. Unless there is an active process that causes an asymmetry in the lipid composition of the two leaflets, the bilayer will have the same lipid composition on the inside and outside, and therefore has in total $c_{0}=0$. Usually for fluid membranes $\sigma$ is small, and $\kappa$ and $\bar{\kappa}$ range from a few to a few tens of $k_{B}T$.

One example is a sphere of radius $R$, where:

$c_{1}=c_{2}=\frac{1}{R},\qquad H=\frac{1}{R},\qquad K=\frac{1}{R^{2}}$

This gives (taking $c_{0}=0$)

$F=4\pi\sigma R^{2}+8\pi\kappa+4\pi\bar{\kappa}$

The interesting fact that the surface integral over the Gaussian curvature gives a constant value of $4\pi$ (contributing $4\pi\bar{\kappa}$ to the energy) - independent of the radius of the sphere - has a deep meaning. It is related to the famous Gauss-Bonnet theorem, which will be stated here without further details: according to this theorem, the integral over the Gaussian curvature is a topological invariant of the surface, whose value is equal to $4\pi\left(1-g\right)$, where $g$ is the *genus* of the surface. A sphere or any closed object with no "holes" has $g=0$ and an integrated Gaussian curvature of $4\pi$, while a torus (or "donut") with one hole has $g=1$ and hence a zero integrated Gaussian curvature.

**Sidenote**

More information about the Gauss-Bonnet theorem may be found in books on differential geometry
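The Gauss-Bonnet statement can be checked numerically for the two cases mentioned (sphere and torus; the torus is parametrized by its two radii $c>r$, illustrative values below):

```python
import math

# Gauss-Bonnet: the integral of K over a closed surface equals 4*pi*(1 - g).
# Sphere (genus 0): K = 1/R**2 over area 4*pi*R**2:
R = 3.0
sphere_integral = (1.0 / R**2) * 4.0 * math.pi * R**2
print(sphere_integral, 4.0 * math.pi)

# Torus (genus 1), radii c > r: K = cos(t)/(r*(c + r*cos(t))) and
# dA = r*(c + r*cos(t)) dt dp, so K*dA = cos(t) dt dp -> integral 0.
n = 2000
torus_integral = sum(math.cos((i + 0.5) * 2.0 * math.pi / n) for i in range(n))
torus_integral *= (2.0 * math.pi) * (2.0 * math.pi / n)
print(torus_integral)  # -> 0, as expected for genus 1
```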

A second example is an infinite cylinder with radius $R$. Here,

$c_{1}=\frac{1}{R},\qquad c_{2}=0,\qquad H=\frac{1}{2R},\qquad K=0$

The free energy per unit length (again for $c_{0}=0$) is

$\frac{F}{L}=2\pi R\sigma+\frac{\pi\kappa}{R}$

An even simpler example is the infinite plane, where $H=K=0$. This yields

$\frac{F}{A}=\sigma$

06/18/2009

#### Thermal fluctuations of a plane

**Book:** Safran's book.

To second order in derivatives of $h$ in the Monge representation, for $\bar{\kappa}=0$ and $c_{0}=0$,

$F=\int\mathrm{d}x\,\mathrm{d}y\left[\frac{\sigma}{2}\left(\triangledown h\right)^{2}+\frac{\kappa}{2}\left(\triangledown^{2}h\right)^{2}\right]$

The minimum of energy is obtained for a flat surface. Going to a Fourier transformed form, we have

$h\left(\mathbf{r}\right)=\sum_{\mathbf{q}}h_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{r}}$

This gives for the free energy in terms of the normal surface modes $\left\{ h_{\mathbf{q}}\right\}$:

$F=\frac{A}{2}\sum_{\mathbf{q}}\left(\sigma q^{2}+\kappa q^{4}\right)\left|h_{\mathbf{q}}\right|^{2}$

With $h\left(\mathbf{r}\right)$ real, we know that $h_{-\mathbf{q}}=h_{\mathbf{q}}^{*}$, or $\left|h_{-\mathbf{q}}\right|=\left|h_{\mathbf{q}}\right|$. From the classical equipartition theorem we can estimate the equilibrium average of this quantity:

$\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{k_{B}T}{A\left(\sigma q^{2}+\kappa q^{4}\right)}$

It is now useful to define the new length scale $\ell_{c}=\sqrt{\kappa/\sigma}$, and examine the limits of $q\ell_{c}\ll1$ and $q\ell_{c}\gg1$. In the $q\ell_{c}\ll1$ limit, one obtains a surface dominated by surface tension. Consider the real-space thermal correlation function

$\left\langle h^{2}\right\rangle =\sum_{\mathbf{q}}\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{1}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}q\,\frac{k_{B}T}{\sigma q^{2}}$

Since this integral diverges at both large and small $q$, to obtain a physically meaningful result we must introduce cutoffs to the range of $q$: $q_{\mathrm{min}}=2\pi/L$, where $L$ is the linear dimension of the system, and $q_{\mathrm{max}}=2\pi/a$, where $a$ is the typical molecular size. This gives an example of a famous result from the 1930's, known as the Landau-Peierls instability for 2-dimensional systems and the lack of an ordered phase at $T>0$:

$\left\langle h^{2}\right\rangle =\frac{k_{B}T}{2\pi\sigma}\ln\frac{L}{a}$
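Plugging in numbers for a water surface gives the few-Angstrom roughness quoted below (the numerical values - $\sigma\approx0.072\,\mathrm{N/m}$, $L\approx1\,\mathrm{mm}$, $a\approx3\,\mathrm{\AA}$ - are assumed, order-of-magnitude inputs):

```python
import math

# Thermal roughness of a tension-dominated interface (Landau-Peierls):
# <h^2> = (k_B*T / (2*pi*sigma)) * ln(L/a). Assumed, illustrative inputs:
kB, T = 1.38e-23, 300.0          # J/K, K
sigma = 0.072                    # N/m, water/air surface tension
L, a = 1e-3, 3e-10               # system size and molecular cutoff, m
h_rms = math.sqrt(kB * T / (2.0 * math.pi * sigma) * math.log(L / a))
print(h_rms)                     # a few Angstroms
```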
Since the logarithmic divergence is very weak, it turns out that the thermal fluctuations are two or three Angstroms in size for a water surface of macroscopic (a few millimeters or centimeters) dimension. These thermal fluctuations are not easy to measure, because the signal should come only from the water molecules at the water surface. In the 1980s they were measured for the first time for water surfaces at room temperature using a powerful synchrotron X-ray source. The technique employs scattering at very low angles (called grazing incidence) from the water surface and obtains the intensity of the scattered X-rays as a function of $q$. This quantity is proportional to $\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle$. In the opposite limit, where $q\ell_{c}\gg1$, the membrane is dominated by its elastic energy, and $\sigma\ll\kappa q^{2}$ can be neglected:

$\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{k_{B}T}{A\kappa q^{4}}$

The divergence at small is much larger here than in the first

case, and

and

In such membranes, which are dominated by elasticity, the fluctuations increase linearly with membrane size. For a membrane around in length, a typical amplitude is in the range. Another interesting observation is that the fluctuation amplitude scales as $\sqrt{k_{B}T/k}$: for small $k$ (flexible membranes), as well as for higher temperatures, the fluctuations become larger. This is valid as long as the condition of the elastic-dominated case, $\sigma\ll kq^{2}$, remains satisfied. Also, recall that the large membrane fluctuations come from small $q$, i.e. large wavelengths, and not from small wiggles associated with the motion of individual molecules.
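The two regimes can be checked numerically by integrating the mode sum $\left\langle h^{2}\right\rangle =\frac{k_{B}T}{2\pi}\int_{q_{min}}^{q_{max}}\frac{q\,dq}{\sigma q^{2}+kq^{4}}$ directly. This is only a sketch: the parameter values below (the surface tension of water, a lipid-like bending modulus of $20\,k_{B}T$, the system sizes) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

kT = 4.11e-21   # J, thermal energy at room temperature

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def rms_height(sigma, kappa, L, a, npts=4000):
    """sqrt(<h^2>), <h^2> = (kT/2pi) * int_{2pi/L}^{2pi/a} q dq/(sigma q^2 + kappa q^4)."""
    q = np.logspace(np.log10(2*np.pi/L), np.log10(2*np.pi/a), npts)
    return np.sqrt(kT/(2*np.pi) * trapz(q/(sigma*q**2 + kappa*q**4), q))

# tension-dominated: a macroscopic water surface (sigma = 72 mN/m, bending neglected)
h_water = rms_height(sigma=0.072, kappa=0.0, L=1e-3, a=3e-10)

# elasticity-dominated: a tensionless membrane with kappa = 20 kT, size L = 1 micron
h_membrane = rms_height(sigma=0.0, kappa=20*kT, L=1e-6, a=1e-9)

print(h_water, h_membrane)
```

The first number comes out a few Angstroms for a millimeter-sized water surface, consistent with the logarithmic Landau-Peierls estimate above; the second grows linearly with $L$, about one percent of the membrane size for these assumed parameters.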

#### Rayleigh instabilityEdit

Due to surface tension, a cylinder of liquid created in air (or surrounded by another immiscible liquid) is unstable and will break into spherical droplets. Let's consider the following model: a cylinder of length $L$ and much smaller radius $R_{0}$, which contains inside it an *incompressible* liquid with a total volume of $\pi R_{0}^{2}L$. For simplicity, we will consider perturbations which preserve the body-of-revolution symmetry around the main axis and the cylinder length $L$, such that only the local radius along the cylinder's axis may vary. Expanding in normal modes

then gives

The mode amplitudes are .

Note that

with the mean radius $\bar{R}$ depending on the mode amplitudes. This dependence can be found from

the constant-volume constraint

This is exact, but for small perturbations we can expand the root

and obtain

The surface energy of the distorted cylinder will be

(We have used the expression for the surface area of a body of revolution with axial symmetry.) Expanding all quantities up to second order

in the perturbation amplitudes gives

Finally,

The conclusion is that modes having $qR_{0}<1$ will reduce the original cylinder free energy. Hence, this is the onset of an instability, called the Rayleigh instability of a liquid cylinder. A liquid cylinder will spontaneously start to develop undulations of wavelength $\lambda=2\pi/q>2\pi R_{0}$. These undulations will grow and eventually break up the cylinder into spherical droplets of size comparable to $R_{0}$. Note that if we go back to the planar surface by taking the limit $R_{0}\rightarrow\infty$, no such instability will occur, since the planar geometry has the lowest surface area with respect to any other fluctuating surface.
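The criterion can be verified without any expansion by computing the surface area of a volume-preserving perturbation $R(z)=\bar{R}+\varepsilon\cos qz$ numerically. The sketch below uses arbitrary units with $R_{0}=1$ and a small assumed amplitude; it only confirms that the area decreases exactly when $qR_{0}<1$.

```python
import numpy as np

def area_change(qR0, eps=0.01, R0=1.0, npts=20001):
    """Area difference (perturbed - unperturbed) over one wavelength,
    for R(z) = Rbar + eps*cos(qz) at fixed enclosed volume pi*R0^2*lam."""
    q = qR0 / R0
    lam = 2*np.pi / q
    z = np.linspace(0.0, lam, npts)
    dz = z[1] - z[0]
    Rbar = np.sqrt(R0**2 - eps**2/2)      # enforces volume conservation exactly
    R = Rbar + eps*np.cos(q*z)
    dRdz = -eps*q*np.sin(q*z)
    f = 2*np.pi*R*np.sqrt(1 + dRdz**2)    # surface element of a body of revolution
    area = (np.sum(f) - 0.5*(f[0] + f[-1]))*dz   # trapezoidal rule
    return area - 2*np.pi*R0*lam

dA_long = area_change(0.5)    # long wavelength, q*R0 < 1
dA_short = area_change(2.0)   # short wavelength, q*R0 > 1
print(dA_long, dA_short)
```

The long-wavelength mode lowers the surface area (unstable), while the short-wavelength mode raises it, in line with the $qR_{0}<1$ criterion.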

## Student ProjectsEdit

### Polymer DynamicsEdit

Physical Models in Biological Systems and Soft Matter

Final Course Project

A Guided Tour to the Essence of Polymer Dynamics

Submitted by: Shlomi Reuveni

#### What is this document all about?Edit

This paper is submitted as a final project in the course "Physical Models in Biological Systems and Soft Matter". In writing this document I aimed at achieving two goals. The first was getting to know a little better a subject that I found interesting and that was not covered during the course. As an interesting by-product, I have also profoundly improved my knowledge of diffusion, a subject with which I was already superficially acquainted. The second goal was to provide an accessible exposition of the subject of polymer dynamics, aimed mainly at advanced undergraduate students who are curious about the subject and would like an easy start. This is the reason this document is titled "A Guided Tour to the Essence of Polymer Dynamics", and the reason it is written in the form of questions and answers.

The saying goes: "There are two ways by which one can really master a subject: research and teaching". I felt that the effort I have put into making this document readable for advanced undergraduate students taught me more than I would have learned by passive reading. I have tried hard to make this document as self-contained and self-explanatory as possible, and therefore hope that it will be of some help to you, the curious student. So, if you wonder "What do you mean by polymer dynamics?" and "How can this subject be of any interest to me?", please read on.

#### O.K., sum it up in a few lines so I can decide if I want to go on reading!Edit

##### What's a polymer?Edit

A polymer is a large molecule (macro-molecule) composed of repeating structural units (monomers), typically connected by covalent chemical bonds. Due to the extraordinary range of properties accessible in polymeric materials, they have come to play an essential and ubiquitous role in everyday life, from plastics and elastomers on the one hand to natural biopolymers such as DNA and proteins that are essential for life on the other. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Single_Polymer_Chains_AFM} \par\end{centering} \caption{Appearance of real linear polymer chains as recorded using an atomic force microscope on a surface under a liquid medium. The chain contour length for this polymer is ; the thickness is . Taken from: Y. Roiter and S. Minko, AFM Single Molecule Experiments at the Solid-Liquid Interface: In Situ Conformation of Adsorbed Flexible Polyelectrolyte Chains, Journal of the American Chemical Society, vol. 127, iss. 45, pp. 15688-15689 (2005) }

\end{figure}

##### What's polymer dynamics?Edit

Like every other molecule, a polymer is affected by the thermal motion of the surrounding molecules. It is this thermal agitation that causes a flexible polymer to move about in the solution while constantly changing its shape. This motion is referred to as polymer dynamics.

\begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Motion} \par\end{centering}

\caption{Photographs of DNA polymers in aqueous solution taken by fluorescence microscopy. There is a 1 second interval between successive frames. The motion is clearly visible. Taken from: Introduction to Polymer Physics, M. Doi Translated by H. See, Clarendon Press, 30 November 1995.}

\end{figure}

##### What can I find in the rest of this document?Edit

If you have ever wondered how one can understand the motion of a polymer, and what physical properties emanate from the dynamics of these materials, you should read on. We will start with the building blocks: the dynamics of a single particle in solution. We will then gradually build on this, presenting two models for polymer dynamics. Experimental observations will also be discussed as we confront our models with reality.

\newpage{}

#### I knew there must be some preliminaries, can you keep it short and to the point?Edit

##### Why do you bore me with this? Why can't I skip directly to section 4?Edit

If you are familiar with concepts such as diffusion, the Einstein relation and Brownian motion, you will find this section easier to read. If you are also familiar with the mathematics behind these concepts, the Smoluchowski and Langevin equations, you can skip directly to section 4. In order to understand polymer dynamics we have to start from something more basic. A polymer can be thought of as a long chain of particles (the monomers); the particles are connected to one another and hence interact. It would be wise to first try and understand the dynamics of a single particle, and only then take these interactions into account. The dynamics of a single particle lies at the heart of this section.

##### Can't say I know much about any of the stuff you mentioned above, but first things first: what is diffusion?Edit

Molecular diffusion, often called simply diffusion, is a net transport of molecules from a region of higher concentration to one of lower concentration by random molecular motion. The result of diffusion is a gradual mixing of material. In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing or a state of equilibrium. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.55]{Diffusion_(1)}

~

\includegraphics[scale=0.14]{cell_diffusion_ink_India} \par\end{centering}

\centering{}\caption{Top: Schematic representation of mixing of two substances by diffusion. Bottom: Ink diffusing in water.}

\end{figure}

##### How can we mathematically treat diffusion?Edit

As mentioned above, diffusion is basically the movement of molecules from an area of high concentration to an area of lower concentration. For simplicity we will consider one-dimensional diffusion. Let $c(x,t)$ be the concentration at position $x$ and time $t$. A phenomenological description of diffusion is given by Fick's law:

$$j=-D\frac{\partial c(x,t)}{\partial x}$$

In words: if the concentration is not uniform, there will be a flux of matter which is proportional to the gradient of the concentration. The proportionality constant is called the diffusion constant and is denoted by $D$; its units are $\left[\mathrm{length}^{2}/\mathrm{time}\right]$. The minus sign is there to take care of the fact that the flow is from the higher concentration region to the lower concentration region.

###### Where is this flux coming from?Edit

Its microscopic origin is the random thermal motion of the particles. The average velocity of each particle is zero, and there is an equal probability for each particle to have a velocity directed to the right or to the left. However, if the concentration is not uniform, more particles happen to flow from the higher concentration region to the lower concentration region than in the other direction, simply because there are more particles there. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"23-07-2009 22-19-57\string".eps} \par\end{centering} \centering{}\caption{Microscopic explanation for Fick's law. Suppose that the particle concentration is not uniform. If the particles move randomly as shown by the arrows, there is a net flux of particles flowing from the higher concentration region to the lower concentration region. Here the diffusion constant of the particle, which determines the average length of the arrows, is assumed to be constant. }

\end{figure}

###### How do we go on?Edit

We now write an equation for the conservation of matter: the change in the number of particles located in the interval $[x,x+dx]$ between time $t$ and time $t+dt$ is given by the number of particles coming/going through the left boundary minus the number of particles coming/going through the right boundary. Taking the limits $dx\rightarrow0$ and $dt\rightarrow0$, and assuming continuity and differentiability of the concentration and the flux, we obtain the continuity equation:

$$\frac{\partial c(x,t)}{\partial t}=-\frac{\partial j(x,t)}{\partial x}$$

Substituting the expression for the flux gives the well-known diffusion equation:

$$\frac{\partial c(x,t)}{\partial t}=D\frac{\partial^{2}c(x,t)}{\partial x^{2}}$$
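As a quick numerical illustration (not part of the original notes), the diffusion equation can be integrated with an explicit finite-difference scheme; a Gaussian concentration profile should stay Gaussian while its variance grows as $2Dt$. All parameter values below are arbitrary choices.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

D = 1.0
x = np.linspace(-8.0, 8.0, 201)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D              # respects the FTCS stability limit D*dt/dx^2 <= 1/2
steps = int(round(0.5 / dt))
t_final = steps * dt

# initial Gaussian profile with variance s0_sq
s0_sq = 0.05
c = np.exp(-x**2 / (2*s0_sq)) / np.sqrt(2*np.pi*s0_sq)

for _ in range(steps):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2*c[1:-1] + c[:-2]) / dx**2
    c = c + D*dt*lap              # c(t+dt) = c + D*dt * d^2c/dx^2

mass = trapz(c, x)                # total amount of material (should be conserved)
var = trapz(x**2 * c, x) / mass   # variance of the profile
print(mass, var, s0_sq + 2*D*t_final)
```

The measured variance tracks $s_{0}^{2}+2Dt$, which is the spreading law derived from the Langevin picture later in these notes.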

###### What happens if the particles are under the influence of some kind of potential $U(x)$?Edit

If this happens, Fick's law must be modified, since the potential exerts a force

$$F=-\frac{\partial U(x)}{\partial x}$$

on the particle and gives it a non-zero average velocity $v$. If the force is weak, there is a linear relation between force and velocity given by:

$$v=\frac{F}{\zeta}$$

The constant $\zeta$ is called the friction constant, and its inverse $1/\zeta$ is called the mobility.

###### How come the velocity doesn't grow indefinitely? There is a constant force!Edit

Correct, but it is not the only force acting on the particle. There are also friction and random forces exerted by the other particles, and hence, like a feather falling under its own weight, the particle reaches a finite average velocity.

###### O.K, and what do we do now?Edit

We will obtain the Smoluchowski equation, which takes the potential into account, but first we will derive an important relation between the diffusion constant, the temperature and the friction constant. The average velocity of the particle gives an additional flux $cv$, so that the total flux is:

$$j=cv-D\frac{\partial c}{\partial x}=-\frac{c}{\zeta}\frac{\partial U}{\partial x}-D\frac{\partial c}{\partial x}$$

An important relation is obtained from this equation. As you may recall from statistical mechanics, in equilibrium the concentration is given by the Boltzmann distribution:

$$c_{eq}(x)\propto e^{-U(x)/k_{B}T}$$

for which the flux must vanish, and hence:

$$-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}-D\frac{\partial c_{eq}}{\partial x}=0$$

Substituting for $c_{eq}(x)$ we get:

$$\frac{\partial U}{\partial x}\left(\frac{D}{k_{B}T}-\frac{1}{\zeta}\right)c_{eq}(x)=0$$

Since this is true for every $x$, it follows that:

$$D=\frac{k_{B}T}{\zeta}$$

This relation is called the Einstein relation. The Einstein relation states that the diffusion constant, which characterizes the thermal motion, is related to the friction constant, which specifies the response to an external force. The Einstein relation is a special case of a general theorem called the fluctuation-dissipation theorem, which states that spontaneous thermal fluctuations are related to the characteristics of the system's response to an external field.

###### And the Smoluchowski equation is obtained by plugging the "new" flux into the continuity equation, right?Edit

Exactly right! Using the Einstein relation, we rewrite the flux as:

$$j=-\frac{1}{\zeta}\left(c\frac{\partial U}{\partial x}+k_{B}T\frac{\partial c}{\partial x}\right)$$

Substituting this into the continuity equation, we get the Smoluchowski equation:

$$\frac{\partial c}{\partial t}=\frac{\partial}{\partial x}\left[\frac{1}{\zeta}\left(c\frac{\partial U}{\partial x}+k_{B}T\frac{\partial c}{\partial x}\right)\right]$$

which serves as a phenomenological description of diffusion under the influence of an external potential. Although we have derived the above equation for the concentration $c(x,t)$, the same equation also holds for the probability distribution function $P(x,t)$ that a particular particle is found at position $x$ at time $t$. This is true since the distinction between $c$ and $P$ is, for non-interacting particles, only the fact that $P$ is normalized. The evolution equation for the probability is hence written as:

$$\frac{\partial P}{\partial t}=\frac{\partial}{\partial x}\left[\frac{1}{\zeta}\left(P\frac{\partial U}{\partial x}+k_{B}T\frac{\partial P}{\partial x}\right)\right]$$

which will also be termed the Smoluchowski equation.
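To get a feeling for the numbers in the Einstein relation, one can combine it with Stokes' law, $\zeta=6\pi\eta R$, for a colloidal sphere in water. The particle size and density below are illustrative assumptions, not values from the text.

```python
import numpy as np

kT = 4.11e-21      # J, thermal energy at room temperature
eta = 1.0e-3       # Pa*s, viscosity of water
R = 0.5e-6         # m, particle radius (an assumed value)
rho = 1.0e3        # kg/m^3, particle density (assumed, about that of water)

zeta = 6*np.pi*eta*R             # Stokes friction constant
D = kT / zeta                    # Einstein relation D = kT/zeta
m = (4.0/3.0)*np.pi*R**3*rho     # particle mass
tau = m / zeta                   # inertial relaxation time m/zeta (used later)

print(D, R**2/(2*D), tau)
```

For these values $D$ is a few times $10^{-13}\,\mathrm{m^{2}/s}$, so the particle needs a fraction of a second to diffuse a distance of its own radius. The inertial time $m/\zeta$ comes out in the tens of nanoseconds, which is why dropping the inertial term from the Langevin equation, as done below, is such a good approximation.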

### What's Brownian motion?Edit

Brownian motion (named after the Scottish botanist Robert Brown) is the seemingly random movement of particles suspended in a fluid (i.e. a liquid or gas), or the mathematical model used to describe such random movements. Brownian motion is traditionally regarded as discovered by the botanist Robert Brown in 1827. It is believed that Brown was studying pollen particles floating in water under the microscope. He then observed small particles within the vacuoles of the pollen grains executing a jittery motion. By repeating the experiment with particles of dust, he was able to rule out that the motion was due to pollen particles being 'alive', although the origin of the motion was yet to be explained. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{PerrinPlot2} \par\end{centering} \caption{Three tracings of the motion of colloidal particles of radius 0.53\textmu{}m, as seen under the microscope, are displayed. Successive positions every 30 seconds are joined by straight line segments (the mesh size is 3.2\textmu{}m). Reproduced from the book of Jean Baptiste Perrin, Les Atomes, Perrin, 1914, p. 115.} \end{figure}

##### And the mathematical treatment?Edit

###### Before we start, I have to say that it seems awfully similar to diffusion. What's new?Edit

You are right! These are opposite sides of the same coin. However, the approach we take here is microscopic rather than macroscopic. Instead of starting from a macroscopic quantity, the concentration, we will start from the equation of motion of a single particle in solution, Newton's second law:

$$m\frac{d^{2}x}{dt^{2}}=-\zeta\frac{dx}{dt}-\frac{\partial U}{\partial x}+F_{r}(t)$$

Here the first term on the right-hand side is the friction force, which is assumed to take the standard form of being opposite in direction and proportional to the velocity. The second term is the force exerted as a consequence of the external potential, and the third term is a random force that represents the sum of the forces due to collisions with surrounding particles. Let us now rewrite this equation in the form:

$$\frac{m}{\zeta}\frac{d^{2}x}{dt^{2}}=-\frac{dx}{dt}-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)$$

where we have defined $g(t)=F_{r}(t)/\zeta$. Our next step is an approximation: treating very small and lightweight particles, we will drop the inertial term, assuming it is negligible, and obtain:

$$\frac{dx}{dt}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)$$

We will refer to this equation as the Langevin equation. This equation describes the motion of a single Brownian particle; solving it, one can (in principle) obtain a trajectory of such a particle.

###### I don't understand why you throw away the inertial term, please explain!Edit

This can be further explained by the following example. Consider a particle immersed in some solvent, moving under the influence of a constant external force $F$. Let us denote the velocity $v=dx/dt$. The equation of motion for $v$ is given by:

$$m\frac{dv}{dt}=-\zeta v+F+F_{r}(t)$$

For simplicity, let us factor out the random force by taking an ensemble average of both sides of the equation (to avoid the subtleties of taking the time average), obtaining an equation for the average velocity:

$$m\frac{d<v>}{dt}=-\zeta<v>+F$$

Multiplying both sides by $e^{\frac{\zeta}{m}t}$ and integrating from zero to $t$, we are able to solve for $<v>$:

$$<v>=\frac{F}{\zeta}\left(1-e^{-\frac{\zeta}{m}t}\right)$$

Here we have assumed that the particle was at rest at time zero. We see that the velocity approaches the asymptotic value $F/\zeta$ exponentially fast, and that the characteristic relaxation time is $\tau=m/\zeta$. Dropping the inertial term in the first place, we would have simply gotten:

$$v=\frac{F}{\zeta}$$

i.e. an immediate response to the force. It is now clear that if the relaxation time is small, dropping the inertial term is a good approximation! In the case of small particles (atoms, molecules, colloidal particles, etc.) immersed in a liquid, the relaxation time is indeed very small, supporting the validity of our approximation.

###### If these are opposite sides of the same coin, how does the Langevin equation relate to the Smoluchowski equation?Edit

As mentioned earlier, since we don't know the exact time dependence of $g(t)$, we will treat it as a random force. The freedom in choosing the distribution of $g(t)$ is very large; here, however, we will limit ourselves to a model which will be equivalent to the Smoluchowski equation.

###### The Langevin equation gives us trajectories, the Smoluchowski equation gives us a probability distribution for the position. How can they be equivalent?Edit

Excellent question! Examining many trajectories, one can generate the probability distribution for the position. For example, starting the particles from a given origin and following each trajectory up to some time $t$, one can record the position $x(t)$. Repeating the process many, many times will yield many, many different values of $x(t)$. Creating a histogram, one can generate an empirical probability distribution for the position at time $t$. One can show () that if the probability distribution of $g(t)$ is assumed to be Gaussian and is characterized by:

$$<g(t)>=0,\qquad<g(t)g(t')>=2D\delta(t-t')$$

then the distribution of $x(t)$ determined by the Langevin equation satisfies the Smoluchowski equation. In other words, if $g(t)$ is a Gaussian random variable with zero mean and variance $2D\delta(t-t')$, and if $g(t)$ and $g(t')$ are independent for $t\neq t'$, then the above statement holds.

###### I still don't understand, can you demonstrate on a simple special case?Edit

Yes! Consider the Brownian motion of a free particle (no external potential), for which the Langevin equation reads:

$$\frac{dx}{dt}=g(t)$$

If the particle is at $x=0$ at time $0$, its position at time $t$ is given by:

$$x(t)=\overset{t}{\underset{0}{\int}}g(t')dt'$$

From the above we deduce that $x(t)$ is a linear combination of independent Gaussian random variables. We now recall that a sum of independent Gaussian random variables is a Gaussian random variable itself, and hence the probability distribution of $x(t)$ may be written as:

$$P(x,t)=\frac{1}{\sqrt{2\pi B}}e^{-\frac{\left(x-A\right)^{2}}{2B}}$$

where:

$$A=<x(t)>,\qquad B=<\left(x(t)-A\right)^{2}>$$

The mean is calculated from:

$$A=<x(t)>=\overset{t}{\underset{0}{\int}}<g(t')>dt'=0$$

For the variance we have:

$$B=<\left(\overset{t}{\underset{0}{\int}}g(t')dt'\right)\left(\overset{t}{\underset{0}{\int}}g(t'')dt''\right)>=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}<g(t')g(t'')>dt'dt''$$

hence:

$$B=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}\frac{2k_{B}T}{\zeta}\delta(t'-t'')dt'dt''=\frac{2k_{B}T}{\zeta}t=2Dt$$

and thus:

$$P(x,t)=\frac{1}{\sqrt{4\pi Dt}}e^{-\frac{x^{2}}{4Dt}}$$

which is exactly (check by direct differentiation) the solution of the Smoluchowski equation:

$$\frac{\partial P}{\partial t}=D\frac{\partial^{2}P}{\partial x^{2}}$$

In other words, both equations result in the same probability distribution for $x(t)$. An important conclusion is that the mean square displacement of a Brownian particle from the origin is given by $<x^{2}(t)>=2Dt$ and is hence linear in time.
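The equivalence can be checked by brute force: integrate the Langevin equation $dx/dt=g(t)$ with the Euler-Maruyama scheme, in which each time step adds a Gaussian increment of variance $2D\,dt$, and compare the empirical mean square displacement with $2Dt$. The parameter values are arbitrary; this is a sketch, not part of the original notes.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, steps, M = 1.0, 0.01, 100, 50000   # M independent particles

x = np.zeros(M)                           # all particles start at the origin
for _ in range(steps):
    x += np.sqrt(2*D*dt) * rng.standard_normal(M)   # Euler-Maruyama step

t = steps * dt
msd = np.mean(x**2)
print(msd, 2*D*t)   # the two should agree up to sampling noise
```

With 50,000 particles the sampling error on $<x^{2}>$ is below one percent, so the linear-in-time law is clearly visible.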

##### O.K., I think we covered everything! Anything else?Edit

We are almost done but in order to complete our analysis we need to analyze one more problem, the Brownian motion of a harmonic oscillator.

###### Why do we have to do this? How come we always have to talk about the harmonic oscillator?Edit

The harmonic oscillator is a simple system that serves as a prototype for problems we will solve later on. Treating it here will ease things for us later.

###### I understand, please go on.Edit

Consider a Brownian particle moving in the following potential:

$$U(x)=\frac{1}{2}kx^{2}$$

The equation of motion for this particle is given by:

$$\frac{dx}{dt}=-\frac{k}{\zeta}x+g(t)$$

In order to get a formal solution for $x(t)$, we multiply both sides by $e^{\frac{k}{\zeta}t'}$ and do some algebra:

$$\frac{d}{dt'}\left(x(t')e^{\frac{k}{\zeta}t'}\right)=g(t')e^{\frac{k}{\zeta}t'}$$

We now integrate from $-\infty$ to $t$ and get:

$$x(t)e^{\frac{k}{\zeta}t}-\underset{t'\rightarrow-\infty}{\lim}x(t')e^{\frac{k}{\zeta}t'}=\overset{t}{\underset{-\infty}{\int}}g(t')e^{\frac{k}{\zeta}t'}dt'$$

Assuming the following boundary condition:

$$\underset{t'\rightarrow-\infty}{\lim}x(t')e^{\frac{k}{\zeta}t'}=0$$

we conclude that:

$$x(t)=\overset{t}{\underset{-\infty}{\int}}e^{-\frac{k}{\zeta}(t-t')}g(t')dt'$$

It is also possible to solve under the initial condition $x(0)=x_{0}$; in that case:

$$x(t)=x_{0}e^{-\frac{k}{\zeta}t}+\overset{t}{\underset{0}{\int}}e^{-\frac{k}{\zeta}(t-t')}g(t')dt'$$

and we have

###### O.K., but $g(t)$ is a random variable, and hence $x(t)$ is also one; that doesn't tell me much... Can we calculate some moments? Start with the case of the particle that has been with us since $t=-\infty$.Edit

First we note that for the mean position we have:

$$A(t)=<x(t)>=\overset{t}{\underset{-\infty}{\int}}e^{-\frac{k}{\zeta}(t-t')}<g(t')>dt'=0$$

and the mean position is hence zero. We now aim at finding an expression for the mean square displacement from the origin; the variance of $x(t)$ will be calculated as a by-product. We start with the time correlation function of $x(t)$:

$$<x(t_{1})x(t_{2})>=\overset{t_{1}}{\underset{-\infty}{\int}}\overset{t_{2}}{\underset{-\infty}{\int}}e^{-\frac{k}{\zeta}(t_{1}-t')}e^{-\frac{k}{\zeta}(t_{2}-t'')}<g(t')g(t'')>dt'dt''$$

Recalling that:

$$<g(t')g(t'')>=2D\delta(t'-t'')$$

we get, denoting $t=t_{1}-t_{2}$:

$$<x(t_{1})x(t_{2})>=\frac{D\zeta}{k}e^{-\frac{k}{\zeta}t}=\frac{k_{B}T}{k}e^{-\frac{k}{\zeta}t}$$

Here we assumed that $t>0$, and used the fact that the delta function restricts the integration to $t'=t''\leq t_{2}$. Similarly, if $t<0$ we get:

$$<x(t_{1})x(t_{2})>=\frac{k_{B}T}{k}e^{\frac{k}{\zeta}t}$$

We may hence conclude that:

$$<x(t_{1})x(t_{2})>=\frac{k_{B}T}{k}e^{-\frac{k}{\zeta}\left|t\right|}$$

Letting $t=0$ we get

$$<x^{2}>=\frac{k_{B}T}{k}$$

which coincides with the known result obtained from statistical mechanics with the use of the Boltzmann distribution $P_{eq}(x)\propto e^{-kx^{2}/2k_{B}T}$.

We will now show that this is also the variance:

$$B=<(x(t)-A(t))^{2}>=<x(t)x(t)>=\overset{t}{\underset{-\infty}{\int}}\overset{t}{\underset{-\infty}{\int}}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''$$

and hence:

$$B=2D\overset{t}{\underset{-\infty}{\int}}e^{\frac{2k}{\zeta}(t'-t)}dt'=\frac{D\zeta}{k}=\frac{k_{B}T}{k}$$

The mean square displacement $<\left(x(t)-x(0)\right)^{2}>$ can now be easily calculated:

$$<\left(x(t)-x(0)\right)^{2}>=<x^{2}(t)>+<x^{2}(0)>-2<x(t)x(0)>$$

and hence:

$$<\left(x(t)-x(0)\right)^{2}>=\frac{2k_{B}T}{k}\left(1-e^{-\frac{k}{\zeta}\left|t\right|}\right)$$

Here, unlike the case of free diffusion, for long times the mean square displacement is bounded by $2k_{B}T/k$. The bound is approached exponentially fast, with a characteristic relaxation time $\tau=\zeta/k$. Considering the opposite limit (very short times), we have (to first order):

$$<\left(x(t)-x(0)\right)^{2}>\approx\frac{2k_{B}T}{\zeta}\left|t\right|=2D\left|t\right|$$

Indeed, in this limit the particle has yet to "feel" the harmonic potential and we expect regular diffusion.
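These two limits, free diffusion at short times and saturation at $k_{B}T/k$ at long times, are easy to reproduce by simulating the Langevin equation $dx/dt=-(k/\zeta)x+g(t)$ for many particles. This is an illustrative sketch in units chosen so that $k_{B}T=\zeta=k=1$ (hence $D=1$ and $\tau=\zeta/k=1$).

```python
import numpy as np

rng = np.random.default_rng(1)
D, k_over_zeta, dt, M = 1.0, 1.0, 0.005, 20000

x = np.zeros(M)                 # all particles start at x0 = 0
var_short = None
for step in range(1, 1001):     # integrate to t = 5 = 5*tau
    x += -k_over_zeta*x*dt + np.sqrt(2*D*dt)*rng.standard_normal(M)
    if step == 10:              # t = 0.05 << tau: free-diffusion regime
        var_short = np.var(x)

var_long = np.var(x)            # t = 5*tau: essentially equilibrated
print(var_short, var_long)      # expect roughly 2*D*0.05 and kT/k = 1
```

At $t=0.05\tau$ the variance is close to $2Dt$, while at $t=5\tau$ it has saturated at the equilibrium value $k_{B}T/k$, in agreement with the formulas above.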

###### I think that since $x(t)$ is a linear sum of Gaussian random variables, and hence Gaussian itself, we can also write an expression for its probability distribution. Am I right?Edit

Yes you are! We already found the mean and variance, and hence the probability distribution for $x(t)$ is:

$$P(x)=\sqrt{\frac{k}{2\pi k_{B}T}}e^{-\frac{kx^{2}}{2k_{B}T}}$$

which is exactly the Boltzmann distribution. We could have guessed that this would be so, since we have given the particle an infinite amount of time to equilibrate in the potential well.

###### Let's proceed to the case of the particle that started at $x_{0}$!Edit

First we note that for the mean position we have:

$$A(t)=<x(t)>=x_{0}e^{-\frac{k}{\zeta}t}$$

The mean position depends on time and decays exponentially towards zero. For the variance we have:

$$B=<(x(t)-A(t))^{2}>=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''$$

and hence:

$$B=\frac{k_{B}T}{k}\left(1-e^{-\frac{2k}{\zeta}t}\right)$$

Here again, the variance decays exponentially towards the equilibrium variance. The probability distribution is Gaussian again, and we have:

$$P(x,t)=\frac{1}{\sqrt{2\pi B(t)}}\exp\left[-\frac{\left(x-x_{0}e^{-\frac{k}{\zeta}t}\right)^{2}}{2B(t)}\right]$$

which for short times is the same as free diffusion:

$$P(x,t)\approx\frac{1}{\sqrt{4\pi Dt}}e^{-\frac{\left(x-x_{0}\right)^{2}}{4Dt}}$$

and for long times gives the Boltzmann distribution:

$$P(x)\propto e^{-\frac{kx^{2}}{2k_{B}T}}$$

\newpage{}

#### The Bead-Spring (Rouse) Model for Polymer DynamicsEdit

##### Give me the simplest model for polymer dynamics!Edit

A polymer is a chain of monomers linked to one another by covalent bonds. It is natural to represent a polymer by a set of beads connected to one another by springs. The dynamics of the polymer is modeled by the Brownian motion of these beads. Such a model was first proposed by Rouse in the 1950s and has been the basis for describing the dynamics of polymers in dilute solutions. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"26-07-2009 16-17-49\string".eps} \par\end{centering} \caption{A pictorial description of the Rouse model.} \end{figure}

###### But now the beads are connected! How do we take that into account?Edit

Let $\vec{R}_{1},\ldots,\vec{R}_{N}$ be the positions of the beads. If we assume the beads experience a drag force proportional to their velocity as they move through the solvent, then for each bead we can write the following Langevin equation:

$$\frac{d\vec{R}_{n}}{dt}=-\frac{1}{\zeta_{n}}\frac{\partial U}{\partial\vec{R}_{n}}+\vec{g}_{n}(t)$$

Here $\zeta_{n}$ is the friction coefficient of the $n$-th bead, and from now on we will assume that the beads are all alike, hence $\zeta_{n}=\zeta$ for every $n$. The random force $\vec{g}_{n}(t)$ is Gaussian with the following characteristics:

$$<g_{n\alpha}(t)>=0,\qquad<g_{n\alpha}(t)g_{m\beta}(t')>=2D\delta_{nm}\delta_{\alpha\beta}\delta(t-t')$$

i.e. the random forces acting on different beads, and/or in perpendicular directions, and/or at different times are independent.

###### And the potential ? Harmonic as always?Edit

Indeed, having harmonic springs connecting the beads, we will take it as:

$$U=\frac{k}{2}\overset{N-1}{\underset{n=1}{\sum}}\left(\vec{R}_{n+1}-\vec{R}_{n}\right)^{2}$$

In this model the Langevin equation becomes a linear equation for $\vec{R}_{n}(t)$; for the internal beads we have:

$$\frac{d\vec{R}_{n}}{dt}=\frac{k}{\zeta}\left(\vec{R}_{n+1}+\vec{R}_{n-1}-2\vec{R}_{n}\right)+\vec{g}_{n}(t)$$

and for the beads at each end we have:

$$\frac{d\vec{R}_{1}}{dt}=\frac{k}{\zeta}\left(\vec{R}_{2}-\vec{R}_{1}\right)+\vec{g}_{1}(t),\qquad\frac{d\vec{R}_{N}}{dt}=\frac{k}{\zeta}\left(\vec{R}_{N-1}-\vec{R}_{N}\right)+\vec{g}_{N}(t)$$

In order to unify the treatment we define two additional hypothetical beads $\vec{R}_{0}$ and $\vec{R}_{N+1}$ as:

$$\vec{R}_{0}\equiv\vec{R}_{1},\qquad\vec{R}_{N+1}\equiv\vec{R}_{N}$$

Under this definition, the Langevin equation for beads $n=1,\ldots,N$ is given by:

$$\frac{d\vec{R}_{n}}{dt}=\frac{k}{\zeta}\left(\vec{R}_{n+1}+\vec{R}_{n-1}-2\vec{R}_{n}\right)+\vec{g}_{n}(t)$$

###### How do we proceed?Edit

In order to proceed, it is convenient to assume that the beads are continuously distributed along the polymer chain. We first recall that in the continuum limit:

$$\vec{R}_{n+1}+\vec{R}_{n-1}-2\vec{R}_{n}\rightarrow\frac{\partial^{2}\vec{R}}{\partial n^{2}}$$

Letting $n$ be a continuous variable, and writing $\vec{R}_{n}(t)$ as $\vec{R}(n,t)$, the Langevin equation takes the form:

$$\frac{\partial\vec{R}(n,t)}{\partial t}=\frac{k}{\zeta}\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}+\vec{g}(n,t)$$

The definitions we made regarding the additional hypothetical beads $\vec{R}_{0}$ and $\vec{R}_{N+1}$ now turn into the following boundary conditions:

$$\left.\frac{\partial\vec{R}}{\partial n}\right|_{n=0}=\left.\frac{\partial\vec{R}}{\partial n}\right|_{n=N}=0$$

###### I don't know how to solve this one, can we bring it to a form of something we have solved before?Edit

Yes we can. As a first step we define normal coordinates by the following transformation:

$$\vec{X}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}\cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn,\qquad p=0,1,2,\ldots$$

whose inverse is given by:

$$\vec{R}(n,t)=\vec{X}_{0}(t)+2\overset{\infty}{\underset{p=1}{\sum}}\vec{X}_{p}(t)\cos\left(\frac{p\pi n}{N}\right)$$

###### Defining new coordinates (call them as you will) is one thing, but the inverse must be defined such that it takes you back to the original coordinates! Is this truly the correct inverse?Edit

We verify this by direct substitution:

The first term gives:

Using the trigonometric identity:

the second term is written as:

which gives:

We conclude that:

which proves that the inverse transformation is defined correctly.
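The transform pair can also be checked numerically: take a test configuration built from a few cosine modes (an arbitrary choice that satisfies the zero-slope boundary conditions), compute $X_{p}=\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)R(n)dn$ by quadrature, and resum $X_{0}+2\sum_{p\geq1}X_{p}\cos\left(\frac{p\pi n}{N}\right)$. The sketch below uses a one-dimensional $R(n)$ for simplicity.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

N = 100.0
n = np.linspace(0.0, N, 2001)

# test configuration: a few allowed modes (satisfies dR/dn = 0 at n = 0 and n = N)
R = 1.0 + 3.0*np.cos(np.pi*n/N) - 2.0*np.cos(4*np.pi*n/N)

# forward transform X_p = (1/N) int_0^N cos(p*pi*n/N) R(n) dn
X = np.array([trapz(np.cos(p*np.pi*n/N)*R, n)/N for p in range(11)])

# inverse transform R(n) = X_0 + 2 sum_p X_p cos(p*pi*n/N)
R_back = X[0] + 2*sum(X[p]*np.cos(p*np.pi*n/N) for p in range(1, 11))

err = float(np.max(np.abs(R_back - R)))
print(X[:5], err)
```

The transform recovers exactly the mode amplitudes put in ($X_{0}=1$, $X_{1}=3/2$, $X_{4}=-1$, all others zero), and the resummed chain matches the original configuration to quadrature accuracy.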

###### How does this new set of coordinates help us?Edit

We will now show that the equations of motion for the normal coordinates are those of an infinite set of uncoupled Brownian harmonic oscillators. Since we have already treated the problem of a Brownian harmonic oscillator, this will ease our lives considerably. We start by applying the operator $\frac{1}{N}\overset{N}{\underset{0}{\int}}dn\,\cos\left(\frac{p\pi n}{N}\right)$ to both sides of the Langevin equation for $\vec{R}(n,t)$. The left-hand side term is identified as $\frac{d\vec{X}_{p}(t)}{dt}$.

The first term on the right hand side gives:

by integration by parts. Invoking the boundary condition, the first term drops; another round of integration by parts gives:

Here the sine kills the first term and the second term is identified

as:

where we have defined:

We are left with the second term on the right-hand side of the original equation, which we deal with by defining the random forces:

$$\vec{g}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}\cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn$$

which are characterized by zero mean:

$$<\vec{g}_{p}(t)>=0$$

and by:

since:

$$<g_{p\alpha}(t)g_{q\beta}(t')>=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi m}{N}\right)<g_{\alpha}(n,t)g_{\beta}(m,t')>dn\,dm$$

and use of the trigonometric identity:

$$\cos\theta_{1}\cos\theta_{2}=\frac{1}{2}\left[\cos\left(\theta_{1}-\theta_{2}\right)+\cos\left(\theta_{1}+\theta_{2}\right)\right]$$

yields the result after performing the integration. This means that the random forces with different values of $p$, and/or acting in perpendicular directions, and/or acting at different times are independent. The equations of motion for the normal coordinates are given by:

and since the random forces are independent of each other, the motions of the $\vec{X}_{p}$'s are also independent of each other. These are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant and friction constant of its own. We have gone from one partial differential equation (which we don't know how to solve directly) for $\vec{R}(n,t)$ to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates $\vec{X}_{p}(t)$.

##### Great, we can now do some analysis!Edit

###### What can we say about the motion of the center of mass?Edit

Using the results of section 3, we will now calculate two time correlation functions that will help us in the near future. We first note that since the restoring force vanishes for $p=0$, $\vec{X}_{0}$ is actually performing free diffusion. On the other hand, the time correlation function for $\vec{X}_{p}$

($p>0$) is the one for a Brownian harmonic oscillator, and hence:

where the relaxation time $\tau_{p}$ is given by:

A conclusion from the previous result is that:

We are now ready to calculate some real features of the Brownian motion of a polymer. We start with the motion of the center of mass; the position of the center of mass:

$$\vec{R}_{G}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}\vec{R}(n,t)dn$$

is the same as the normal coordinate $\vec{X}_{0}$. The mean square displacement of the center of mass is hence given by:

$$<\left(\vec{R}_{G}(t)-\vec{R}_{G}(0)\right)^{2}>=6D_{G}t$$

where the diffusion constant $D_{G}$ is given by:

$$D_{G}=\frac{k_{B}T}{N\zeta}$$

and we note that it is inversely proportional to the number of monomers.
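The $1/N$ scaling of $D_{G}$ can be seen in a direct simulation of the discrete bead-spring equations, here in one dimension with the free-end conditions $\vec{R}_{0}=\vec{R}_{1}$, $\vec{R}_{N+1}=\vec{R}_{N}$. All parameter values ($D=k_{B}T/\zeta=1$, $k/\zeta=1$, chain length, time step) are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, D, k_over_zeta, dt, steps = 10, 4000, 1.0, 1.0, 0.01, 200

R = np.zeros((M, N))              # M independent chains of N beads (1D)
Rg0 = R.mean(axis=1)              # initial center-of-mass positions

for _ in range(steps):
    # spring force with hypothetical end beads R_0 = R_1 and R_{N+1} = R_N
    Rpad = np.pad(R, ((0, 0), (1, 1)), mode='edge')
    F = k_over_zeta*(Rpad[:, 2:] + Rpad[:, :-2] - 2*R)
    R = R + F*dt + np.sqrt(2*D*dt)*rng.standard_normal((M, N))

t = steps*dt
msd_com = np.mean((R.mean(axis=1) - Rg0)**2)   # 1D center-of-mass MSD
D_G = msd_com / (2*t)
print(D_G, D/N)                   # D_G should come out close to D/N
```

The spring forces cancel exactly in the center-of-mass coordinate (the sum over beads telescopes to zero), so only the averaged noise drives $\vec{R}_{G}$, which is precisely why $D_{G}=D/N$ regardless of the spring constant.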

###### What can we say about rotational motion?Edit

To characterize the rotational motion of the polymer molecule as a whole, let us consider the time correlation function of the end-to-end vector $\vec{P}(t)=\vec{R}(N,t)-\vec{R}(0,t)$. Using normal coordinates, $\vec{P}(t)$ can be written as:

$$\vec{P}(t)=2\overset{\infty}{\underset{p=1}{\sum}}\left[\cos\left(p\pi\right)-1\right]\vec{X}_{p}(t)=-4\underset{p\,\mathrm{odd}}{\sum}\vec{X}_{p}(t)$$

which results in:

$$<\vec{P}(t)\cdot\vec{P}(0)>=16\underset{p\,\mathrm{odd}}{\sum}<\vec{X}_{p}(t)\cdot\vec{X}_{p}(0)>$$

We therefore conclude that:

This time correlation function is a summation over many terms with different relaxation times. We will now see that for large enough times this infinite sum is well approximated by the first term. We

rewrite the correlation function as:

but since:

we have:

We also know that:

and hence the second term in the parentheses is bounded by an exponentially

decaying function and moreover it is never larger than $1/4$:

We conclude that the second term may be neglected for large times

and the correlation function is approximated to be:

which decays exponentially with a single relaxation time $\tau_{1}$. This relaxation time is called the rotational relaxation time; it is also denoted $\tau_{r}$ and is given by:
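Since $\tau_{p}=\tau_{1}/p^{2}$ and only odd $p$ contribute, the single-exponential approximation can be checked by summing the series $\sum_{p\,\mathrm{odd}}p^{-2}e^{-p^{2}t/\tau_{1}}$ numerically and asking what fraction of it the $p=1$ term carries. This is a small numerical aside, not part of the original notes.

```python
import numpy as np

p = np.arange(1, 10000, 2, dtype=float)    # odd modes only

def p1_fraction(t_over_tau1):
    """Weight of the p = 1 term in sum over odd p of exp(-p^2 t/tau1)/p^2."""
    terms = np.exp(-p**2 * t_over_tau1) / p**2
    return terms[0] / terms.sum()

fracs = {t: p1_fraction(t) for t in (0.0, 0.25, 1.0)}
print(fracs)
```

Even at $t=0$ the first mode carries $8/\pi^{2}\approx81\%$ of the sum, and already at $t=\tau_{1}/4$ it carries more than $98\%$, which is why keeping only the $p=1$ term is an excellent approximation at large times.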

###### What can we say about the motion of one specific bead?Edit

We now turn to study the internal motion of a polymer chain, focusing on the mean square displacement of the $n$-th monomer:

$$\phi(n,t)=<\left(\vec{R}(n,t)-\vec{R}(n,0)\right)^{2}>$$

Direct substitution for $\vec{R}(n,t)$ and $\vec{R}(n,0)$ in terms of the normal coordinates gives:

utilizing the correlation functions we have obtained above all the

cross terms vanish and we are left with:

Let us examine this expression in two limits. For $t\gg\tau_{1}$:

The second term is a constant that doesn't depend on time (it is easily seen that the infinite sum converges), and hence $\phi(n,t)$ is linear in $t$ in this limit. For large enough times the displacement of the monomer is determined by the diffusion constant of the center of mass, as the monomer drifts away with the polymer as a whole. On the other hand, for $t\ll\tau_{1}$, the motion of the segments reflects the internal motion due to the many modes of vibration. In this limit we may approximate by replacing summation with integration and $\cos^{2}\left(\frac{p\pi n}{N}\right)$ by its average value $1/2$:

Doing the integral by parts, we get an expression whose first term vanishes (basic calculus) and whose second term is transformed into a Gaussian integral, which gives:

We can now write $\phi(n,t)$ as:

and observe that in this limit the mean square displacement of the monomer increases like $t^{1/2}$, i.e. in a sub-diffusive manner.
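The $t^{1/2}$ law can be confirmed by evaluating the internal mode sum $\sum_{p\geq1}p^{-2}\left(1-e^{-p^{2}t/\tau_{1}}\right)$ numerically in the window $\tau_{N}\ll t\ll\tau_{1}$ and fitting the log-log slope. Prefactors are dropped, so this sketch only probes the exponent, not the amplitude.

```python
import numpy as np

p = np.arange(1, 100001, dtype=float)

def mode_sum(t_over_tau1):
    """Internal part of phi(n,t), up to constant prefactors."""
    return np.sum((1.0 - np.exp(-p**2 * t_over_tau1)) / p**2)

ts = np.logspace(-4, -2, 25)             # t/tau1 well inside tau_N << t << tau_1
S = np.array([mode_sum(t) for t in ts])
slope = np.polyfit(np.log(ts), np.log(S), 1)[0]
print(slope)                             # expect a value close to 1/2
```

The fitted exponent comes out very close to $1/2$, confirming the sub-diffusive regime; at times beyond $\tau_{1}$ the sum saturates and the center-of-mass term takes over, restoring ordinary diffusion.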

##### How does the Rouse model stand in comparison to experimental results?Edit

Unfortunately, not as well as one might have hoped. The Rouse model may seem to be a very natural way to describe the Brownian motion of a polymer chain, but the conclusions drawn from it do not agree with the experimental results. As we saw above, for the Rouse model:

$$D_{G}\propto M^{-1},\qquad\tau_{r}\propto M^{2}$$

where $M$ is the molecular weight of the polymer. Experimentally, however, the following dependencies were measured:

$$D_{G}\propto M^{-\nu},\qquad\tau_{r}\propto M^{3\nu}$$

Here, the exponent $\nu$ is the one used to express the dependence of the radius of gyration on molecular weight ($R_{g}\propto M^{\nu}$):

The value of $\nu$ is determined by the nature of the interaction between the solvent and the polymer: $\nu\approx3/5$ in a good solvent and $\nu=1/2$ in the $\Theta$ state ().

The reason for the discrepancy between experiments and the Rouse model is that in the latter we have assumed that the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. In reality, however, the motion of one bead is influenced by the motion of the surrounding beads through the medium of the solvent. For example, if one bead moves, the solvent surrounding it will also move, and as a result other beads will be dragged along. This type of interaction, transmitted by the motion of the solvent, is called the hydrodynamic interaction. We will discuss a model taking this interaction into account in the next section.

Figure: The hydrodynamic interaction. If a bead moves under the action of a force, a flow is created in the surrounding fluid, which causes the other beads to move.

#### The Zimm Model for Polymer Dynamics

##### So we need a model that will take into account hydrodynamic interactions, but how do we do that?

In the Rouse model we have assumed that the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. This assumption led

to the following Langevin equation:

In order to take hydrodynamic interaction into account, we can generalize this assumption. Denoting the forces acting on the beads by $\vec{F}_{m}$, we assume that there is a linear relationship between these forces and the average velocities, so that the following holds:

$\langle\vec{V}_{n}\rangle=\underset{m}{\sum}H_{nm}\cdot\vec{F}_{m}$

Here $H_{nm}$ is a $3\times3$ mobility matrix relating the force acting on bead $m$ to the velocity of bead $n$. It is now our task to calculate $H_{nm}$ and write the appropriate Langevin equation. This can be done using low-Reynolds-number hydrodynamics and some approximations; the result of the calculation gives:
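The expression itself is missing from the transcription; in the standard treatment, the off-diagonal mobility is the Oseen tensor, while the diagonal part is the single-bead friction:

```latex
H_{nm} = \frac{1}{8\pi\eta\, r_{nm}}\left(I+\hat{r}_{nm}\hat{r}_{nm}\right)
\quad (n\neq m), \qquad H_{nn}=\frac{I}{\zeta}
```

with $\vec{r}_{nm}=\vec{R}_{n}-\vec{R}_{m}$.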

where $\eta$ is the viscosity of the liquid, $I$ is the identity matrix, and $\hat{r}_{nm}$ is a unit vector in the direction of $\vec{r}_{nm}=\vec{R}_{n}-\vec{R}_{m}$. The appropriate Langevin equation is given by (taking the same potential $U$ as in the Rouse model):

and the random force is Gaussian with the following

characteristics:

###### The Langevin equation we got seems complicated, it is not even linear in $\vec{R}_{n}$! I guess there is an approximation coming my way, am I right?

Since $H_{nm}$ depends on $\vec{r}_{nm}=\vec{R}_{n}-\vec{R}_{m}$, the Langevin equation we got is not linear in $\vec{R}_{n}$ and hence tremendously hard to solve. Zimm's idea was to replace $H_{nm}$ (the factor that is causing the non-linearity) by its equilibrium average value $\langle H_{nm}\rangle_{eq}$; this is called the preaveraging approximation. In general, the equilibrium value of $\langle H_{nm}\rangle_{eq}$ depends on the interactions between the solvent and the polymer, and hence will have a different value in a good/intermediate/bad solvent. Here we will concentrate on a special state of a polymer in solution, mentioned earlier and called the $\Theta$ state ($T=\Theta$). For a polymer in $\Theta$ conditions, the vector $\vec{r}_{nm}$ is characterized by a Gaussian distribution with zero mean and a variance of $\langle r_{nm}^{2}\rangle=|n-m|b^{2}$. Here $b$ is the distance between two adjacent monomers, and it follows that the probability density function for $\vec{r}_{nm}$ is:
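The distribution itself is missing here; for a Gaussian vector with zero mean and variance $|n-m|b^{2}$ it is:

```latex
P(\vec{r}_{nm}) = \left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}
\exp\!\left(-\frac{3r_{nm}^{2}}{2|n-m|b^{2}}\right)
```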

Since $H_{nm}$ is a function only of $\vec{r}_{nm}$, we can calculate $<H_{nm}>_{eq}$ (for $n\neq m$) as follows:

Noting that in spherical coordinates:

$$\hat{r}\hat{r}=\left[\begin{array}{ccc} \sin^{2}\theta\cos^{2}\phi & \sin^{2}\theta\cos\phi\sin\phi & \cos\theta\sin\theta\cos\phi\\ \sin^{2}\theta\cos\phi\sin\phi & \sin^{2}\theta\sin^{2}\phi & \cos\theta\sin\theta\sin\phi\\ \cos\theta\sin\theta\cos\phi & \cos\theta\sin\theta\sin\phi & \cos^{2}\theta\end{array}\right]$$

We have:

$$\overset{\pi}{\underset{0}{\int}}\sin\theta\, d\theta\overset{2\pi}{\underset{0}{\int}}d\phi\,\hat{r}\hat{r}=\left[\begin{array}{ccc} \frac{4\pi}{3} & 0 & 0\\ 0 & \frac{4\pi}{3} & 0\\ 0 & 0 & \frac{4\pi}{3}\end{array}\right]=\frac{4\pi}{3}I$$

and hence:

The integral is calculated in a straightforward way; defining a suitable change of variable, we have:

and hence:

where we have defined:
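The definition referred to did not survive the transcription; carrying out the angular average of the Oseen tensor (which replaces $\hat{r}\hat{r}$ by $I/3$) together with the Gaussian average $\langle1/r_{nm}\rangle=(6/\pi)^{1/2}/(b|n-m|^{1/2})$ gives the standard preaveraged form:

```latex
\langle H_{nm}\rangle_{eq} = h(n-m)\, I, \qquad
h(n-m) = \frac{1}{\eta b\sqrt{6\pi^{3}|n-m|}}
```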

Substituting this result into our Langevin equation and re-writing

it in the continuum limit we get:

where the random force is Gaussian with the following

characteristics:

Note that the averaged mobilities depend only on $n-m$, and we have indeed linearized our equation as promised.

###### Seems familiar, shall we try normal coordinates again?

Yes, we will once again use the normal coordinates defined for the Rouse model. We start by applying the transformation $\frac{1}{N}\overset{N}{\underset{0}{\int}}dn\,\cos\left(\frac{p\pi n}{N}\right)$ to both sides of the Langevin equation for $\vec{R}(n,t)$:

The left hand side term is identified as:

The first term on the right hand side gives:

which yields:

and with some additional algebra we get:

Defining:

this term can be written as:

###### But this doesn't decouple the equations! Another approximation?

Indeed, we will approximate $h_{pq}$ by neglecting all the off-diagonal terms. The reasoning goes as follows: noting that $h(n-m)=h(m-n)$, we can write $h_{pq}$ as:

we now use a trigonometric identity:

to get:

For large $N$, the two inner integrals rapidly approach the following integrals:

With this substitution $h_{pq}$ becomes:

and after using the trigonometric identity:

If $p$ or $q$ is small our approximation is still fair, but for the case $q=0$ it is invalid, and this case deserves special attention. The careful reader may have noticed that the sum:

starts from $p=1$, and it may seem that a discussion regarding $q=0$ is pointless. We will nevertheless require this case ($h_{p0}$) later on, and so we calculate it directly:

The inner integral gives:

which results in:

Substituting this into the expression for $h_{p0}$ gives:

where we have changed variables. It is now easy to see that for odd $p$: $h_{p0}=0$, while for even $p$ we get:

For $p=0$ this gives:

while for even $p>0$, the integral may be re-expressed in terms of the Fresnel integral to give:

and we see that:

concluding that for $p>0$:

We see that for large $N$, $h_{p0}$ is small and also decays with $p$. We will hence neglect $h_{pq}$ for $p\neq q$ and keep only the diagonal term $h_{pp}$.

###### O.K., what about the random forces?

We are left with the second term on the right hand side of the original

equation which we deal with by defining the random forces:

which are characterized by zero mean:

and by:

since:

gives:

which yields the result by the definition of $h_{pp}$. This means that the random forces with different values of $p$ and/or acting in perpendicular directions and/or acting at different times are independent.

###### That was a bit long, could you please sum up the main result?

The main result is that we have found the equations of motion for the normal coordinates and that they are given by:

with

and since the random forces are independent of each other, the motions of the $\vec{X}_{p}$'s are also independent of each other. These are the equations of motion of an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant and friction coefficient of its own. We have once again gone from one partial differential equation (which we don't know how to solve directly) for $\vec{R}(n,t)$ to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates $\vec{X}_{p}(t)$.
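The equations of motion themselves are missing from the transcription; schematically, hedging on the exact numerical prefactors (which depend on convention), they take the form:

```latex
\zeta_{p}\frac{\partial\vec{X}_{p}}{\partial t} = -k_{p}\vec{X}_{p}+\vec{f}_{p}(t),
\qquad k_{p}\propto\frac{k_{B}T}{Nb^{2}}\,p^{2},
\qquad \zeta_{p}\propto\eta\left(\frac{Nb^{2}}{p}\right)^{1/2}
```

so that the relaxation times scale as $\tau_{p}=\zeta_{p}/k_{p}\propto p^{-3/2}$, in contrast to the Rouse scaling $\tau_{p}\propto p^{-2}$.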

###### Great! This is very similar to what we got for the Rouse model, are we going to repeat the same type of analysis?

Since the equation for the normal modes is the same as that for the Rouse model, we can immediately write the expressions for the diffusion constant of the center of mass and the rotational relaxation time

using the results of the previous section:
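The explicit expressions are missing here; up to numerical prefactors, the Zimm results in the $\Theta$ state are:

```latex
D_{G}\sim\frac{k_{B}T}{\eta\, b N^{1/2}}, \qquad
\tau_{r}\sim\frac{\eta\, b^{3}N^{3/2}}{k_{B}T}
```

Note that $D_{G}\sim k_{B}T/(\eta R)$ with $R\sim bN^{1/2}$ is essentially the Stokes-Einstein diffusion constant of a sphere of the polymer's size, which is the physical content of the hydrodynamic coupling.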

##### How does the Zimm model stand in comparison to experimental results?

As can be seen, $D_{G}$ and $\tau_{r}$ depend on the molecular weight $M$ as follows (recall that $M\propto N$):

The dependence of these quantities on the molecular weight agrees with experiments performed on solutions in the $\Theta$ state. Furthermore,

the relaxation times of the normal modes are:

and hence for short times ($t\ll\tau_{r}$) the average mean square displacement of the $n$-th monomer is given by:

integration by parts gives:

The first term drops (elementary calculus), and the second term is treated by a change of variable $x=tp^{3/2}/\tau_{r}$:

where we have identified the gamma function $\Gamma(1/3)$. The relation $\phi(n,t)\propto t^{2/3}$ has been confirmed by analysis of the Brownian motion of DNA molecules.

Figure: The average mean square displacement of the terminal segment of a DNA molecule (solid line), observed by fluorescence microscopy. The dashed line is calculated from the theory of Zimm. The graph is plotted on a log-log scale; on such a plot the slope of a line corresponds to the exponent $\alpha$ in the relation $\phi\propto t^{\alpha}$. The fact that the lines are parallel supports the prediction $\alpha=2/3$. Taken from: J. Polym. Sci., 30, 779, Fig. 5.
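As in the Rouse case, the $t^{2/3}$ law can be checked numerically (this sketch is not part of the original notes; the function name and normalization are illustrative). With Zimm relaxation times $\tau_{p}\propto p^{-3/2}$, the dimensionless internal-mode sum $S(t)=\sum_{p}(1-e^{-p^{3/2}t})/p^{2}$ approaches $\Gamma(1/3)\,t^{2/3}$ for $t\ll1$:

```python
import math

def zimm_mode_sum(t, pmax=100000):
    """Internal-motion part of the monomer MSD with Zimm relaxation times
    tau_p ~ p^{-3/2}: S(t) = sum_p (1 - exp(-p^{3/2} t)) / p^2.
    For t << 1 this approaches Gamma(1/3) * t^{2/3}."""
    return sum((1.0 - math.exp(-p ** 1.5 * t)) / (p * p) for p in range(1, pmax + 1))

t = 1e-5
s1 = zimm_mode_sum(t)
s2 = zimm_mode_sum(8 * t)
print(s2 / s1)                                           # close to 8^{2/3} = 4
print(s1 / (math.gamma(1.0 / 3.0) * t ** (2.0 / 3.0)))   # close to 1
```

Multiplying the time by 8 multiplies the sum by roughly 4, i.e. $S\propto t^{2/3}$, and the prefactor matches the $\Gamma(1/3)$ identified in the derivation above.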

#### I Have More Questions, Where can I get Answers?

- M. Doi, *Introduction to Polymer Physics*, Chapters 4 & 5, translated by H. See, Clarendon Press, 1995.
- M. Doi and S. F. Edwards, *The Theory of Polymer Dynamics*, Chapters 3-5, Clarendon Press, 1988.
- M. Rubinstein and R. H. Colby, *Polymer Physics*, Chapter 8, Oxford University Press, 2003.
