# Introduction to Biological Systems and Soft Condensed Matter

Lecture notes for the course given by Prof. David Andelman, Tel-Aviv University 2009

Original author: Guy Cohen, Tel-Aviv University

## Introduction and Preliminaries

We will make several assumptions throughout the course:

1. The physics in question is generally in the classical regime, ${\displaystyle \hbar \rightarrow 0}$.
2. Materials are "soft": quantitatively, this implies that all relevant energy scales are of the order of ${\displaystyle k_{B}T}$.
3. Condensed matter physics deals with systems composed of ${\displaystyle O\left(10^{23}\right)}$ particles, and statistical mechanics applies. We are always interested in a reduced description, in terms of continuum mechanics and elasticity, hydrodynamics, macroscopic electrodynamics and so on.

We begin with an example from Chaikin & Lubensky, the story of an H2O molecule. This molecule is held together by chemical bonds of around ${\displaystyle 20k_{B}T}$ at room temperature, which are not easily broken under normal circumstances. What happens when we put ${\displaystyle \sim 10^{23}}$ water molecules in a container? First of all, with such large numbers we can safely discuss phases of matter: namely

${\displaystyle {\underset {({\mbox{amorphous/crystal ice)}}}{\mbox{solid}}}\leftrightarrow {\underset {\mbox{(water)}}{\mbox{fluid}}}\leftrightarrow {\underset {\mbox{(steam)}}{\mbox{gas}}}.}$

Gas is typical of low density, high temperature and low pressure. It readily changes shape and volume, and is homogeneous, isotropic, weakly interacting and insulating. This is the least ordered form of matter relevant to our scenario, and relatively easy to treat since order parameters are small.

The liquid phase is typical of intermediate temperatures. It flows but is not very compressible. It is homogeneous, isotropic, dense and strongly interacting. Its response to external forces depends on the rate of its deformation. Liquids are hard to treat theoretically, as their intermediate properties make simple approximations less effective.

The solid is a dense ordered phase with low entropy and strong interactions. It is anisotropic and does not flow; it strongly resists compression, and its response to forces depends on the amount of deformation they cause (elastic).

Transitions between these phases occur at specific values of the thermodynamic parameters (see diagram (1)). First-order transitions (where volume/density "jumps" at the transition, while pressure/temperature do not) occur on the lines; at the critical liquid/gas point, second order phase transitions occur; at the triple point, all three phases (solid/liquid/gas) coexist.

The systems we are interested in are characterized by several kinds of interactions between their constituent molecules: for example, Coulomb interactions of the form ${\displaystyle {\frac {q_{1}q_{2}}{r}}}$ when charged particles are present, fixed dipole interactions of the form ${\displaystyle {\frac {\mathbf {p} _{1}\cdot \mathbf {p} _{2}}{r^{3}}}}$ when permanent dipoles exist, and almost always the induced dipole/van der Waals interaction of the form ${\displaystyle {\frac {\Delta \mathbf {p} _{1}\cdot \Delta \mathbf {p} _{2}}{r^{6}}}}$. At close range we also have the "hard core" or steric repulsion, sometimes modeled by a ${\displaystyle \sim {\frac {1}{r^{12}}}}$ potential.
Simulations often use the so-called ${\displaystyle 12-6}$ Lennard-Jones potential ${\displaystyle U=4\varepsilon \left[\left({\frac {\sigma }{r}}\right)^{12}-\left({\frac {\sigma }{r}}\right)^{6}\right]}$ (as pictured in (2)), which with appropriate parameters correctly describes both condensation and crystallization in some cases.
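As a quick numerical aside (a sketch, not part of the original notes): the minimum of the ${\displaystyle 12-6}$ potential sits at ${\displaystyle r_{min}=2^{1/6}\sigma }$ with depth ${\displaystyle -\varepsilon }$, which is easy to confirm on a grid:

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential U = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Locate the potential minimum numerically on a fine grid.
r = np.linspace(0.9, 3.0, 100001)
u = lennard_jones(r)
r_min = r[np.argmin(u)]
print(r_min)    # close to 2**(1/6) ~ 1.122
print(u.min())  # close to -eps = -1.0
```

The hard-core ${\displaystyle r^{-12}}$ wall dominates below ${\displaystyle r_{min}}$, and the attractive ${\displaystyle r^{-6}}$ van der Waals tail dominates above it.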

Sidenote

When only the repulsive potential exists (for instance, for billiard balls), crystallization still takes place but no condensation/evaporation phase transition between the liquid and gas phases exists.

Starting from a classical Hamiltonian such as ${\displaystyle H=\sum _{i}\left({\frac {\mathbf {p} _{i}^{2}}{2m}}+V_{VdW}\right)}$, we can predict all three phases of matter and the transitions between them. In biological systems, this simple picture does not suffice: effects occur on many scales, from the nanometric through the mesoscopic and up to the macroscopic. Biological systems are mesoscopic in nature, and their properties cannot be described correctly by a coarse-graining that does not accurately account for mesoscopic properties.

A few examples follow:

#### Liquid crystals

The most basic assumption we need in order to model liquid crystals is that isotropy at the molecular level is broken: molecules are represented by rods rather than spheres. Such a description was suggested by Onsager and others, and leads to three phases as shown in (3).

#### Polymers

When molecules are interconnected at mesoscopic ranges, new phases and properties are encountered.

#### Soap/beer foam

This kind of substance is approximately 95% gas, with the remainder water – yet it behaves like a weak solid as long as its deformations are small. This is because a tight formation of ordered cells separated by thin liquid films is formed, and in order for the material to change shape the cells must be rearranged. This need for restructuring is the cause of such systems' solid-like resistance to change.

#### Structured fluids

Polymers or macromolecules in liquid state, liquid crystals, emulsions and colloidal solutions and gels display complex visco-elastic behavior as a result of mesoscopic super-structures within them.

#### Soft 2D membranes

Interfaces between fluids have interesting properties: they act as a 2D liquid within the interface, yet respond elastically to any bending of the surface. Surfactant molecules will spontaneously form membranes within the same fluid, which also have these properties at appropriate temperatures. Surfactants in solution also form lamellar structures - multilayered structures in which the basic units are the membranes rather than single molecules.

## Polymers

Books: Doi, de Gennes, Rubinstein, Doi & Edwards.

### Introduction

#### Brief history

Natural polymers like rubber have been known since the dawn of history, but not understood. The first artificial polymer was made ${\displaystyle \sim 1905}$. Staudinger was the first to understand that polymers are formed by molecular chains, and is considered to be the father of synthetic polymers. Most polymers are made by the petrochemical industry. Nylon was born in 1940. Various uses and unique properties (light, strong, thermally insulating; available in many different forms from strings and sheets to bulk; cheap, easy to process, shape and mass-produce...) have made them very attractive commercially. Later on, some leading scientists were Kuhn and Flory in chemistry (30's to 70's) and Stockmayer in physical chemistry (50's and 60's). The famous modern theory of polymers was first formulated by P.G. de Gennes and Sam Edwards.

#### What is a polymer?

Material composed of chains, having a repeating basic unit (monomer). Connections between monomers are made by chemical (covalent) bonds, and are strong at room temperature.

${\displaystyle \left[A\right]_{N}\equiv {\overset {N{\mbox{ times}}}{\overbrace {A-A-A-...-A} }}.}$

${\displaystyle N}$ is the polymerization index.

Sidenote

More generally, this kind of structure is called a homopolymer. Heteropolymers – which have several repeating constituent units - also exist. These can have a random structure (${\displaystyle A-B-B-A-B-A...}$) or a block structure (${\displaystyle \left[A\right]_{n}\left[B\right]_{m}\left[C\right]_{l}}$), in which case they are called block copolymers. These can self-assemble into complex ordered structures and are often very useful.

Sidenote

For an example, look up ester monomers and polyester, or polyethylene.

Polymerization is also the name of the process by which polymers are synthesized, which involves a chain reaction where a reactive site exists at the end of the chain. Some chemical reactions increase the chain length by one unit, while simultaneously moving the reactive site to the new end:

${\displaystyle \left[A\right]_{N}+\left[A\right]_{1}\rightarrow \left[A\right]_{N+1}.}$

There also exist condensation processes, by which chains unite:

${\displaystyle \left[A\right]_{N}+\left[A\right]_{M}\rightarrow \left[A\right]_{N+M},}$

where ${\displaystyle N,\,M\geq 1}$. A briefer notation, dropping the name of the monomer, is

${\displaystyle \left(N\right)+\left(M\right)\rightarrow \left(N+M\right).}$

Consider the example of hydrocarbon polymers, where the monomer is ${\displaystyle \mathrm {C_{2}H_{4}} }$ (ethylene). As a larger number of such units is joined together to form polyethylene molecules, the material composed of these molecules changes drastically in nature:

| ${\displaystyle N}$ | phase | type of material |
|---|---|---|
| 1-4 | gas | flammable gas |
| 5-15 | thin liquid | liquid fuel/organic solvents |
| 16-25 | thick liquid | motor oil |
| 20-50 | soft solid | wax, paraffin |
| 1000 | hard solid | plastic |

#### Types of polymer structures

Polymers can exist in different topologies, which affect the macroscopic properties of the material they form (see (4)):

• Linear chains (this is the simplest case, which we will be discussing).
• Rings (chains connected at the ends).
• Stars (several chain arms connected at a central point).
• Tree (connected stars).
• Comb (one main chain with side chains branching out).
• Dendrimer (ordered branching structure).

#### Polymer phases of matter

Depending on the environment and larger-scale structure, polymers can exist in many states:

• Gas of isolated chains (not very relevant).
• In solution (water or organic solvents). In dilute solutions, polymer chains float freely like gas molecules, but their length alters their behavior.
• In a liquid state of chains (called a melt).
• In solid state (plastic) – crystals, poly-crystals, amorphous/glassy materials.
• Liquid crystals formed by polymer chains (Polymeric Liquid Crystal or PLC).
• Gels and rubber: networks of chains tied together.

### Ideal Polymer Chains in Solution

#### Some basic models of polymer chains

The simplest model of an ideal polymer chain is the freely jointed chain (FJC), where each monomer performs a completely independent random rotation. Here, at equilibrium the end-to-end length of the chain is ${\displaystyle R_{0}\simeq N^{\frac {1}{2}}\ell =L^{\frac {1}{2}}\ell ^{\frac {1}{2}}}$, where ${\displaystyle L=N\ell }$ is the contour length.

A slightly more realistic model is the freely rotating chain (FRC), where monomers are locked at some chemically meaningful bond angle ${\displaystyle \vartheta }$ and rotate freely around it via the torsional angle ${\displaystyle \varphi }$. Here,

${\displaystyle R_{0}^{2}\simeq L\ell _{eff}\sim N,\qquad \ell _{eff}=\ell {\frac {1+\cos \vartheta }{1-\cos \vartheta }}.}$

Note that for ${\displaystyle \cos \vartheta =0}$ we find that ${\displaystyle \ell _{eff}=\ell }$ and this is identical to the FJC. For very small ${\displaystyle \vartheta \sim \varepsilon }$, we can expand the cosine and obtain

${\displaystyle \ell _{eff}{\underset {\scriptstyle \vartheta \rightarrow 0}{\longrightarrow }}{\frac {4\ell }{\varepsilon ^{2}}}\gg \ell .}$

This is the rigid rod limit (to be discussed later in detail).

A second possible improvement is the hindered rotation (HR) model. Here the angles ${\displaystyle \varphi _{i}}$ have a minimum-energy value, and are taken from an uncorrelated Boltzmann distribution with some

potential ${\displaystyle V\left(\varphi _{i}\right)}$. This gives

${\displaystyle R_{0}^{2}\simeq L\ell _{eff}\sim N,\qquad \ell _{eff}=\ell \left({\frac {1+\cos \vartheta }{1-\cos \vartheta }}\right)\left({\frac {1+\left\langle \cos \varphi \right\rangle }{1-\left\langle \cos \varphi \right\rangle }}\right).}$
Sidenote

See Flory's book for details.

Another option is called the rotational isomeric state model. Here, a finite number of angles are possible for each monomer junction and the state of the full chain is given in terms of these. Correlations are also taken into account and the solution is numeric, but aside from a complicated ${\displaystyle \ell _{eff}}$ this is still an ideal chain with ${\displaystyle R_{0}^{2}\simeq L\ell _{eff}\sim N}$.

#### Calculating the end-to-end radius

For the polymer chain of (5), obviously we will always have ${\displaystyle \left\langle \mathbf {R} _{N}\right\rangle =0}$. The variance, however, is generally not zero: using ${\displaystyle \mathbf {R} _{N}=\sum _{i=1}^{N}\mathbf {r} _{i}}$,

${\displaystyle \left\langle \mathbf {R} _{N}^{2}\right\rangle =\sum _{i,j=1}^{N}\left\langle \mathbf {r} _{i}\cdot \mathbf {r} _{j}\right\rangle =\sum _{i,j=1}^{N}\ell ^{2}\left\langle \cos \vartheta _{ij}\right\rangle .}$

FJC

In the freely jointed chain (FJC) model, there are neither correlations between different sites nor restrictions on the rotational angles. We therefore have ${\displaystyle \left\langle \cos \vartheta _{ij}\right\rangle ={\frac {1}{\ell ^{2}}}\left\langle \mathbf {r} _{i}\cdot \mathbf {r} _{j}\right\rangle =\delta _{ij}}$,

and

${\displaystyle \left\langle \mathbf {R} _{N}^{2}\right\rangle =\sum _{ij}\ell ^{2}\delta _{ij}=N\ell ^{2}=L\ell .}$
Sidenote

The mathematics are similar to that of a random walk or diffusion process, where in 1D ${\displaystyle {\sqrt {\left\langle x^{2}\right\rangle }}\sim t^{\frac {1}{2}}}$.

Therefore, ${\displaystyle R_{0}\equiv {\sqrt {\left\langle \mathbf {R} _{N}^{2}\right\rangle }}=N^{\frac {1}{2}}\ell }$.
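This scaling is easy to check by direct simulation. The following sketch (not from the original notes; it assumes unit bond length and a modest sample size) draws independent, uniformly oriented bond vectors and estimates ${\displaystyle \left\langle \mathbf {R} _{N}^{2}\right\rangle }$:

```python
import numpy as np

rng = np.random.default_rng(0)

def fjc_end_to_end_sq(N, ell=1.0, samples=2000):
    """Monte Carlo estimate of <R_N^2> for a freely jointed chain:
    each bond is an independent, uniformly oriented vector of length ell."""
    # Uniform directions on the unit sphere via normalized Gaussian vectors.
    steps = rng.normal(size=(samples, N, 3))
    steps *= ell / np.linalg.norm(steps, axis=2, keepdims=True)
    R = steps.sum(axis=1)                 # end-to-end vectors, one per sample
    return (R ** 2).sum(axis=1).mean()

N = 100
est = fjc_end_to_end_sq(N)
print(est)  # should be close to N * ell**2 = 100
```

Doubling ${\displaystyle N}$ doubles the estimate, confirming ${\displaystyle R_{0}\sim N^{1/2}}$.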

FRC

In the freely rotating chain model, the bond angles are held constant at angles ${\displaystyle \vartheta _{i}}$ while the torsion angles ${\displaystyle \varphi _{i}}$ are taken from a uniform distribution between ${\displaystyle -\pi }$ and ${\displaystyle \pi }$. This introduces some correlation between the angles: since (for one definition of the ${\displaystyle \varphi _{i}}$) ${\displaystyle \mathbf {r} _{i+1}=\cos \vartheta _{i}\mathbf {r} _{i}+\sin \vartheta _{i}(\sin \varphi _{i}\mathbf {\hat {y}} \times \mathbf {r} _{i}+\cos \varphi _{i}\mathbf {\hat {x}} \times \mathbf {r} _{i})}$,

and since the ${\displaystyle \varphi _{i}}$ are independent, any average over a sine or cosine of one or more of them vanishes; only the ${\displaystyle \varphi _{i}}$-independent terms survive, and by recursion this correlation has the simple form

${\displaystyle \left\langle \mathbf {r} _{i}\cdot \mathbf {r} _{j}\right\rangle =\ell ^{2}\left(\cos \vartheta \right)^{\left|i-j\right|}.}$

The end-to-end radius is

${\displaystyle {\begin{array}{lcl}R_{0}^{2}&=&\sum _{ij=1}^{N}\left\langle \mathbf {r} _{i}\cdot \mathbf {r} _{j}\right\rangle \\&=&\sum _{i=1}^{N}{\overset {\scriptstyle =\ell ^{2}}{\overbrace {\left\langle \mathbf {r} _{i}^{2}\right\rangle } }}+\ell ^{2}\sum _{i=1}^{N}{\overset {\scriptstyle k=i-j}{\overbrace {\sum _{j=1}^{i-1}\left(\cos \vartheta \right)^{i-j}} }}+\ell ^{2}\sum _{i=1}^{N}{\overset {\scriptstyle k=j-i}{\overbrace {\sum _{j=i+1}^{N}\left(\cos \vartheta \right)^{j-i}} }}\\&=&N\ell ^{2}+\ell ^{2}\sum _{i=1}^{N}\left[\sum _{k=1}^{i-1}\left(\cos \vartheta \right)^{k}+\sum _{k=1}^{N-i}\left(\cos \vartheta \right)^{k}\right].\end{array}}}$

At large ${\displaystyle N}$ we can approximate the two sums in ${\displaystyle k}$ by the series ${\displaystyle \sum _{k=1}^{\infty }\cos ^{k}\vartheta ={\frac {\cos \vartheta }{1-\cos \vartheta }}}$, giving

${\displaystyle R_{0}^{2}\simeq N\ell ^{2}+2\ell ^{2}\sum _{i=1}^{N}{\frac {\cos \vartheta }{1-\cos \vartheta }}=N\ell ^{2}{\frac {1+\cos \vartheta }{1-\cos \vartheta }}.}$

To extract the Kuhn length ${\displaystyle \ell _{eff}}$ from this expression, we rewrite it in the following way:

${\displaystyle R_{0}^{2}=N\left(\ell {\sqrt {\frac {1+\cos \vartheta }{1-\cos \vartheta }}}\right)^{2}\equiv N\ell \ell _{eff}=L\ell _{eff},}$
${\displaystyle \ell _{eff}=\ell {\frac {1+\cos \vartheta }{1-\cos \vartheta }}.}$
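The geometric-series approximation behind this Kuhn length can be checked against the exact double sum. A small numerical sketch (the 68° bond angle is just an illustrative choice, not a value from the notes):

```python
import numpy as np

def frc_end_to_end_sq(N, ell, theta):
    """Exact double sum R0^2 = ell^2 * sum_{ij} (cos theta)^|i-j| for the FRC."""
    c = np.cos(theta)
    i = np.arange(N)
    powers = c ** np.abs(i[:, None] - i[None, :])
    return ell ** 2 * powers.sum()

N, ell = 2000, 1.0
theta = np.deg2rad(68.0)      # illustrative bond angle
exact = frc_end_to_end_sq(N, ell, theta)
kuhn = ell * (1 + np.cos(theta)) / (1 - np.cos(theta))
print(exact / N, kuhn)        # the two agree up to O(1/N) end effects
```

The exact sum per monomer converges to ${\displaystyle \ell \ell _{eff}}$ as ${\displaystyle N}$ grows, since the neglected boundary terms are of order ${\displaystyle 1/N}$.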

To go back from this to the FJC limit, we would consider a chain with a random distribution of ${\displaystyle \vartheta }$ angles such that ${\displaystyle \left\langle \cos \vartheta \right\rangle =0}$.

Consider once again the polymer chain of (5). Define the radius of gyration:

${\displaystyle R_{g}^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(\mathbf {R} _{i}^{\prime }-\mathbf {R} _{CM}^{\prime }\right)^{2}\equiv {\frac {1}{N}}\sum _{i=1}^{N}\mathbf {R} _{i}^{2}.}$

The unprimed coordinate system is refocused on the center of mass, such that ${\displaystyle \sum _{i}\mathbf {R} _{i}=0}$. Now, it is easier to work with

the following expression:

${\displaystyle {\begin{array}{lcl}{\frac {1}{2N^{2}}}\sum _{ij}\left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)^{2}&=&{\frac {1}{2N^{2}}}\sum _{ij}\left(2\mathbf {R} _{i}\cdot \mathbf {R} _{i}-2\mathbf {R} _{i}\cdot \mathbf {R} _{j}\right)\\&=&{\frac {2N}{2N^{2}}}\sum _{i}\mathbf {R} _{i}^{2}-{\frac {1}{N^{2}}}{\overset {\scriptstyle =0}{\overbrace {\left(\sum _{i}\mathbf {R} _{i}\right)} }}\cdot {\overset {\scriptstyle =0}{\overbrace {\left(\sum _{j}\mathbf {R} _{j}\right)} }}\\&=&R_{g}^{2}.\end{array}}}$

We will calculate ${\displaystyle R_{g}}$ for a long FJC. For ${\displaystyle N\gg 1}$ we can replace the sums with integrals, obtaining

${\displaystyle {\begin{array}{lcl}\left\langle R_{g}^{2}\right\rangle &=&{\frac {1}{2N^{2}}}\sum _{ij}{\overset {\scriptstyle \left|i-j\right|\ell ^{2}}{\overbrace {\left\langle \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)^{2}\right\rangle } }}\\&=&{\frac {1}{2N^{2}}}\int _{0}^{N}\mathrm {d} u\int _{0}^{N}\mathrm {d} v\,\ell ^{2}\left|u-v\right|\\&=&{\frac {2}{2N^{2}}}\int _{0}^{N}\mathrm {d} u\int _{0}^{u}\mathrm {d} v\,\ell ^{2}\left(u-v\right)\\&=&{\frac {\ell ^{2}}{N^{2}}}\int _{0}^{N}\mathrm {d} u\,{\frac {u^{2}}{2}}\\&=&{\frac {1}{6}}N\ell ^{2}.\end{array}}}$

This gives the gyration radius for an FJC:

${\displaystyle R_{g}^{2}={\frac {1}{6}}N\ell ^{2}.}$
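The discrete double sum can also be evaluated exactly before taking the continuum limit: ${\displaystyle \sum _{ij}\left|i-j\right|=N\left(N^{2}-1\right)/3}$, so ${\displaystyle R_{g}^{2}=\ell ^{2}\left(N^{2}-1\right)/6N\rightarrow N\ell ^{2}/6}$. A short check (a sketch, not from the notes):

```python
import numpy as np

def rg_sq_fjc(N, ell=1.0):
    """R_g^2 = (1 / 2N^2) * sum_{ij} |i-j| * ell^2, evaluated exactly for an FJC."""
    i = np.arange(1, N + 1)
    s = np.abs(i[:, None] - i[None, :]).sum()   # equals N*(N^2 - 1)/3
    return ell ** 2 * s / (2 * N ** 2)

N = 1000
print(rg_sq_fjc(N), N / 6.0)  # (N^2-1)/(6N) vs N/6 -- equal up to O(1/N)
```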

#### Polymers and Gaussian distributions

An ideal chain is a Gaussian chain, in the sense that the end-to-end radius is taken from a Gaussian distribution. We will see two proofs of this.

Random walk proof

One way to show this (see Rubinstein, de Gennes) is to begin with a random walk. For one dimension, if we begin at ${\displaystyle x=0}$ and at each

time step ${\displaystyle i}$ move left or right with moves ${\displaystyle x_{i}=\pm \ell }$ and the final displacement

${\displaystyle x=\sum _{i}x_{i},}$

then

${\displaystyle x=\ell \left(N_{+}-N_{-}\right),\qquad N_{+}+N_{-}=N.}$

We define ${\displaystyle Z_{N}\left(x\right)}$ as the number of configurations of ${\displaystyle N}$ steps with a final displacement of ${\displaystyle x}$. ${\displaystyle P_{N}\left(x\right)}$

is the associated normalized probability.

${\displaystyle Z_{N}\left(x\right)={\frac {N!}{N_{+}!\,N_{-}!}}{\underset {\scriptstyle N\rightarrow \infty }{\longrightarrow }}{\frac {C}{\sqrt {N}}}e^{-{\frac {x^{2}}{2\left\langle x^{2}\right\rangle }}},\qquad \left\langle x^{2}\right\rangle =N\ell ^{2}.}$

In fact, for ${\displaystyle N\rightarrow \infty }$ the central limit theorem tells us that ${\displaystyle x=\sum _{i}x_{i}}$ will have a Gaussian distribution for any distribution of the ${\displaystyle x_{i}}$. This can be extended to ${\displaystyle d}$ dimensions

with a displacement ${\displaystyle \mathbf {R} =\sum _{i}\mathbf {x} _{i}}$:

${\displaystyle Z_{N}^{d}\left(\mathbf {R} \right)=\left({\frac {C}{\sqrt {N}}}\right)^{d}\exp \left\{-{\frac {dR^{2}}{2\left\langle R^{2}\right\rangle }}\right\},\qquad \left\langle R^{2}\right\rangle =d\left\langle \mathbf {x} _{i}^{2}\right\rangle =N\ell ^{2}.}$

To find the normalization constant ${\displaystyle C}$ we must integrate over all dimensions:

${\displaystyle {\begin{array}{lcl}1=\int \mathrm {d} \mathbf {R} Z_{N}\left(\mathbf {R} \right)&=&\left(\int \mathrm {d} xZ_{N}\left(x\right)\right)^{d}\\&=&\left({\frac {C}{\sqrt {N}}}{\sqrt {2\pi \left\langle x^{2}\right\rangle }}\right)^{d},\\&\Downarrow \end{array}}}$
${\displaystyle P_{N}^{d}\left(\mathbf {R} \right)=\left({\frac {d}{2\pi N\ell ^{2}}}\right)^{\frac {d}{2}}\exp \left\{-{\frac {dR^{2}}{2N\ell ^{2}}}\right\}.}$

Some notes:

• An ideal chain can now be redefined as one such that ${\displaystyle P_{N}^{d}\left(R\right)}$ is Gaussian in any dimension ${\displaystyle d\geq 1}$.
• This is also true for a long chain with local interactions only, such that ${\displaystyle R_{0}^{2}=N\ell \ell _{eff}=L\ell _{eff}\sim N}$.
• The probability of being in a spherical shell of radius ${\displaystyle R}$ is ${\displaystyle 4\pi R^{2}P_{N}\left(R\right)}$.
• The chance of returning to the origin ${\displaystyle P_{N}\left(R=0\right)}$ is ${\displaystyle \left({\frac {d}{2\pi \ell ^{2}N}}\right)^{\frac {d}{2}}\sim \left({\frac {1}{N}}\right)^{\frac {d}{2}}\equiv N^{-\gamma }}$. ${\displaystyle \gamma ={\frac {d}{2}}}$ is typical of an ideal chain.
• For any dimension ${\displaystyle d\geq 1}$, ${\displaystyle R_{0}={\sqrt {\left\langle R^{2}\right\rangle }}\sim N^{\frac {1}{2}}}$.
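The last two points can be checked exactly in one dimension, where the number of walks returning to the origin is the binomial count ${\displaystyle {\binom {N}{N/2}}}$. A brief sketch (not from the notes):

```python
from math import comb, pi, sqrt

def return_probability(N):
    """Probability that a 1D random walk of N steps (N even) ends at the origin."""
    return comb(N, N // 2) / 2 ** N

# The exact count approaches sqrt(2 / (pi*N)) ~ N**(-1/2), i.e. gamma = d/2 = 1/2.
for N in (100, 1000, 10000):
    exact = return_probability(N)
    gauss = sqrt(2 / (pi * N))
    print(N, exact, gauss)
```

The agreement improves as ${\displaystyle N}$ grows, with relative error of order ${\displaystyle 1/N}$.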

Formal proof

Another way to show this follows, which is also extensible to other distributions of the ${\displaystyle \left\{\mathbf {r} _{i}\right\}}$.

Sidenote

This proof can be found in Doi and Edwards.

In general, we can write

${\displaystyle P_{N}\left(\mathbf {R} \right)=\int \mathrm {d} \mathbf {r} _{1}\int \mathrm {d} \mathbf {r} _{2}...\int \mathrm {d} \mathbf {r} _{N}\Psi \left(\mathbf {r} _{1},...,\mathbf {r} _{N}\right)\delta \left(\mathbf {R} -\sum _{i=1}^{N}\mathbf {r} _{i}\right).}$

In the absence of correlations, we can factorize ${\displaystyle \Psi }$:

${\displaystyle \Psi \left(\mathbf {r} _{1},...,\mathbf {r} _{N}\right)=\psi \left(\mathbf {r} _{1}\right)...\psi \left(\mathbf {r} _{N}\right).}$

For example, for a freely jointed chain ${\displaystyle \psi \left(\mathbf {r} _{i}\right)=\alpha \delta \left(\left|\mathbf {r} _{i}\right|-\ell \right)}$. The normalization constant is found from ${\displaystyle \int \psi \left(r_{i}\right)4\pi r_{i}^{2}\mathrm {d} r_{i}=4\pi \alpha \ell ^{2}=1}$,

giving

${\displaystyle \psi \left(\mathbf {r} _{i}\right)={\frac {1}{4\pi \ell ^{2}}}\delta \left(\left|\mathbf {r} _{i}\right|-\ell \right).}$

We can replace the delta functions with ${\displaystyle \delta \left(\mathbf {r} \right)={\frac {1}{\left(2\pi \right)^{3}}}\int \mathrm {d} \mathbf {k} e^{i\mathbf {k} \cdot \mathbf {r} }}$,

leaving us with

${\displaystyle {\begin{array}{lcl}P_{N}\left(\mathbf {R} \right)&=&{\frac {1}{\left(2\pi \right)^{3}}}\int \mathrm {d} \mathbf {k} \,e^{i\mathbf {k} \cdot \mathbf {R} }\int \mathrm {d} \mathbf {r} _{1}...\int \mathrm {d} \mathbf {r} _{N}\prod _{i}\left[e^{-i\mathbf {k} \cdot \mathbf {r} _{i}}\psi \left(\mathbf {r} _{i}\right)\right]\\&=&{\frac {1}{\left(2\pi \right)^{3}}}\int \mathrm {d} \mathbf {k} \,e^{i\mathbf {k} \cdot \mathbf {R} }\left[\int \mathrm {d} \mathbf {r} \,e^{-i\mathbf {k} \cdot \mathbf {r} }\psi \left(\mathbf {r} \right)\right]^{N}.\end{array}}}$

In spherical coordinates,

${\displaystyle {\begin{array}{lcl}\int \mathrm {d} \mathbf {r} \,e^{-i\mathbf {k} \cdot \mathbf {r} }\psi \left(\mathbf {r} \right)&=&\int r^{2}\mathrm {d} r\,\mathrm {d} \vartheta \,\mathrm {d} \varphi \,\sin \vartheta \,e^{-ikr\cos \vartheta }{\frac {1}{4\pi \ell ^{2}}}\delta \left(r-\ell \right)\\&{\overset {\scriptscriptstyle \alpha =\cos \vartheta }{=}}&{\frac {1}{2}}\int _{-1}^{1}\mathrm {d} \alpha \,e^{-ik\ell \alpha }\\&=&{\frac {\sin k\ell }{k\ell }},\end{array}}}$

which gives

${\displaystyle P_{N}\left(\mathbf {R} \right)=\left({\frac {1}{2\pi }}\right)^{3}\int \mathrm {d} \mathbf {k} \,e^{i\mathbf {k} \cdot \mathbf {R} }\left({\frac {\sin k\ell }{k\ell }}\right)^{N}.}$

We are left with the task of evaluating the integral. This can be done analytically with the Laplace method for large ${\displaystyle N}$, since the largest contribution is around ${\displaystyle k\ell =0}$: we can approximate ${\displaystyle \left({\frac {\sin k\ell }{k\ell }}\right)^{N}}$ by ${\displaystyle \left(1-{\frac {\left(k\ell \right)^{2}}{6}}+...\right)^{N}\simeq e^{-{\frac {\left(k\ell \right)^{2}N}{6}}}}$.

The integral is then

${\displaystyle {\begin{array}{lcl}P_{N}\left(\mathbf {R} \right)&=&\left({\frac {1}{2\pi }}\right)^{3}\int \mathrm {d} \mathbf {k} \,e^{i\mathbf {k} \cdot \mathbf {R} }e^{-{\frac {k^{2}\ell ^{2}N}{6}}}\\&=&\left({\frac {1}{2\pi }}\right)^{3}\int \mathrm {d} k_{1}\mathrm {d} k_{2}\mathrm {d} k_{3}\exp \left[\sum _{\alpha }\left(ik_{\alpha }R_{\alpha }-{\frac {Nk_{\alpha }^{2}\ell ^{2}}{6}}\right)\right]\\&=&\left({\frac {1}{2\pi }}\right)^{3}\prod _{\alpha }\int \mathrm {d} k_{\alpha }\exp \left(ik_{\alpha }R_{\alpha }-{\frac {Nk_{\alpha }^{2}\ell ^{2}}{6}}\right)\\&=&\left({\frac {3}{2\pi N\ell ^{2}}}\right)^{\frac {3}{2}}\exp \left\{-{\frac {3R^{2}}{2N\ell ^{2}}}\right\}.\end{array}}}$

This is, of course, the same Gaussian form we have obtained from the random walk (we have done the special case of ${\displaystyle d=3}$, but once again this process can be repeated for a general dimension ${\displaystyle d\geq 1}$).
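The key step above was the Laplace-method replacement of ${\displaystyle \left({\frac {\sin k\ell }{k\ell }}\right)^{N}}$ by the Gaussian ${\displaystyle e^{-\left(k\ell \right)^{2}N/6}}$. A quick numerical sketch (parameter values are illustrative) confirms the two agree closely over the window ${\displaystyle k\lesssim 1/\left(\ell {\sqrt {N}}\right)}$ where the integrand carries its weight:

```python
import numpy as np

N, ell = 100, 1.0
# The integrand's weight concentrates where k*ell ~ 1/sqrt(N); sample that window.
k = np.linspace(1e-6, 3.0 / (ell * np.sqrt(N)), 200)
exact = (np.sin(k * ell) / (k * ell)) ** N
gauss = np.exp(-(k * ell) ** 2 * N / 6.0)
print(np.max(np.abs(exact - gauss)))  # small, and shrinking as N grows
```

The neglected quartic term in the exponent contributes at relative order ${\displaystyle 1/N}$ in this window.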

03/26/2009

### Rigid and Semi-Rigid Polymer Chains in Solution

#### Worm-like chain

In considering the ${\displaystyle \vartheta \rightarrow 0}$ limit of the freely rotating chain, we have seen that ${\displaystyle \ell _{eff}\sim {\frac {\ell }{\vartheta ^{2}}}\rightarrow \infty }$. This is of course unphysical, and this limit is actually important for many interesting cases of stiff chains (for instance, DNA). If we take the ${\displaystyle N\rightarrow \infty }$ limit along with ${\displaystyle \vartheta \rightarrow 0}$

and start over, we can make the following change of variables:

${\displaystyle \left\langle \mathbf {r} _{i}\cdot \mathbf {r} _{j}\right\rangle =\ell ^{2}\left\langle \cos \vartheta _{ij}\right\rangle =\ell ^{2}\left(\cos \vartheta \right)^{\left|i-j\right|}=\ell ^{2}\exp \left[-{\frac {\left|i-j\right|\ell }{\ell _{p}}}\right],}$

which defines the persistence length ${\displaystyle \ell _{p}}$. For the FRC

model,

${\displaystyle \ell _{p}=-{\frac {\ell }{\ln \cos \vartheta }}.}$

This is a useful concept in general, however: it defines the typical length scale over which correlations between chain angles die out, and is therefore an expression of the chain's rigidity.

At small ${\displaystyle \vartheta }$ we can expand the logarithm to get

${\displaystyle \ln \cos \vartheta \simeq \ln \left(1-{\frac {\vartheta ^{2}}{2}}\right)\approx -{\frac {\vartheta ^{2}}{2}},}$
${\displaystyle \ell _{p}\simeq {\frac {2\ell }{\vartheta ^{2}}}.}$
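A quick numerical comparison of the exact expression ${\displaystyle \ell _{p}=-\ell /\ln \cos \vartheta }$ with the small-angle form ${\displaystyle 2\ell /\vartheta ^{2}}$ (a sketch with arbitrary sample angles):

```python
import numpy as np

def persistence_length(ell, theta):
    """Exact FRC persistence length l_p = -ell / ln(cos theta)."""
    return -ell / np.log(np.cos(theta))

ell = 1.0
for theta in (0.3, 0.1, 0.03):   # radians; illustrative values
    print(theta, persistence_length(ell, theta), 2 * ell / theta ** 2)
```

The relative error of the small-angle form is of order ${\displaystyle \vartheta ^{2}/6}$, so the two converge rapidly as ${\displaystyle \vartheta \rightarrow 0}$.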

Taking the continuum limit carefully then requires us to consider ${\displaystyle N\rightarrow \infty }$ and ${\displaystyle \ell \rightarrow 0}$ such that ${\displaystyle R_{max}=N\ell \cos {\frac {\vartheta }{2}}\simeq N\ell }$ is constant. Now, we can calculate the end-to-end length ${\displaystyle R_{0}^{2}=\left\langle R_{N}^{2}\right\rangle }$

at the continuum limit using the new form for the correlations:

${\displaystyle R_{0}^{2}=\ell ^{2}\sum _{ij}\left(\cos \vartheta \right)^{\left|i-j\right|}=\ell ^{2}\sum _{ij}\exp \left\{-{\frac {\ell \left|i-j\right|}{\ell _{p}}}\right\}\rightarrow \ell ^{2}\int _{0}^{R_{m}}{\frac {\mathrm {d} u}{\ell }}\int _{0}^{R_{m}}{\frac {\mathrm {d} v}{\ell }}\exp \left\{-{\frac {\left|u-v\right|}{\ell _{p}}}\right\}.}$

To simplify the calculation, we can define the dimensionless variables ${\displaystyle u^{\prime }=u/\ell _{p}}$, ${\displaystyle v^{\prime }=v/\ell _{p}}$ and ${\displaystyle R_{m}^{\prime }=R_{m}/\ell _{p}}$.

With these replacements,

${\displaystyle {\begin{array}{lcr}{\frac {R_{0}^{2}}{\ell _{p}^{2}}}&=&\int _{0}^{R_{m}^{\prime }}\mathrm {d} u^{\prime }\int _{0}^{R_{m}^{\prime }}\mathrm {d} v^{\prime }e^{-\left|u^{\prime }-v^{\prime }\right|}\\&=&\int _{0}^{R_{m}^{\prime }}\mathrm {d} u^{\prime }\int _{0}^{u^{\prime }}\mathrm {d} v^{\prime }e^{-u^{\prime }}e^{v^{\prime }}+\int _{0}^{R_{m}^{\prime }}\mathrm {d} u^{\prime }\int _{u^{\prime }}^{R_{m}^{\prime }}\mathrm {d} v^{\prime }e^{u^{\prime }}e^{-v^{\prime }}\\&=&\int _{0}^{R_{m}^{\prime }}\mathrm {d} u^{\prime }e^{-u^{\prime }}\left(e^{u^{\prime }}-1\right)+\int _{0}^{R_{m}^{\prime }}\mathrm {d} u^{\prime }e^{u^{\prime }}\left(e^{-u^{\prime }}-e^{-R_{m}^{\prime }}\right)\\&=&R_{m}^{\prime }+\left(e^{-R_{m}^{\prime }}-1\right)+R_{m}^{\prime }-e^{-R_{m}^{\prime }}\left(e^{R_{m}^{\prime }}-1\right)\\&=&2R_{m}^{\prime }-2\left(1-e^{-R_{m}^{\prime }}\right).\end{array}}}$

The final result (known as the Kratky-Porod worm-like-chain or WLC)

is

${\displaystyle R_{0}^{2}=\left\langle R_{N}^{2}\right\rangle =2\ell _{p}R_{max}-2\ell _{p}^{2}\left(1-e^{-{\frac {R_{max}}{\ell _{p}}}}\right).}$

Importantly, it does not depend on ${\displaystyle \vartheta }$ or ${\displaystyle N}$ but only on the physically transparent persistence length and contour length.
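The Kratky-Porod formula interpolates between a rigid rod and an ideal chain. A small numerical sketch (arbitrary units, illustrative parameter values) evaluates it in both regimes:

```python
import numpy as np

def wlc_R0_sq(lp, Rmax):
    """Kratky-Porod WLC: R0^2 = 2*lp*Rmax - 2*lp^2*(1 - exp(-Rmax/lp))."""
    return 2 * lp * Rmax - 2 * lp ** 2 * (1.0 - np.exp(-Rmax / lp))

# Rigid-rod limit, lp >> Rmax: R0 approaches the contour length Rmax.
print(np.sqrt(wlc_R0_sq(lp=1e4, Rmax=10.0)))   # close to 10

# Flexible limit, lp << Rmax: R0^2 approaches 2*lp*Rmax (ideal chain, l_eff = 2*lp).
print(np.sqrt(wlc_R0_sq(lp=1.0, Rmax=1e4)))    # close to sqrt(2e4)
```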

We will consider the two limits where one parameter is much larger than the other. First, for ${\displaystyle \ell _{p}\gg R_{max}}$ we encounter the

rigid rod limit: we can expand the previous expression into

${\displaystyle {\begin{array}{lcl}R_{0}^{2}&=&2\ell _{p}R_{max}-2\ell _{p}^{2}\left(1-1+{\frac {R_{max}}{\ell _{p}}}-{\frac {1}{2}}\left({\frac {R_{max}}{\ell _{p}}}\right)^{2}+...\right)\\&=&R_{max}^{2}+O\left({\frac {R_{max}^{3}}{\ell _{p}}}\right),\end{array}}}$

so that ${\displaystyle R_{0}\sim N}$.

The fact that ${\displaystyle R_{0}\sim N}$ rather than ${\displaystyle R_{0}\sim N^{\frac {1}{2}}}$ is a result of the long-range correlations we have introduced, and is an indication that at this regime the material is in an essentially different phase. Somewhere between the ideal chain and the rigid rod, a crossover regime must exist.

Sidenote

While an ideal chain has ${\displaystyle \scriptstyle R_{0}\sim N^{\frac {1}{2}}}$ and a rigid rod has ${\displaystyle \scriptstyle R_{0}\sim N}$, in general polymer chains can have a scaling law ${\displaystyle \scriptstyle R_{0}\sim N^{\nu }}$. The power ${\displaystyle \scriptstyle \nu }$ need not be an integer.

For ${\displaystyle \ell _{p}\ll R_{max}}$ we can neglect the exponential, obtaining

${\displaystyle R_{0}^{2}\simeq 2\ell _{p}R_{max},\qquad \ell _{p}\simeq {\frac {2\ell }{\vartheta ^{2}}},\qquad R_{max}\simeq N\ell .}$

This therefore returns us to the ideal chain limit, with a Kuhn length ${\displaystyle \ell _{eff}=2\ell _{p}}$. The crossover phenomenon we discussed occurs on the chain itself here as we observe correlation between its pieces at differing length scales: at small scales (${\displaystyle \sim \ell _{p}}$) it behaves like a rigid rod, while at long scales we have an uncorrelated random walk. An interesting example is a DNA chain, which can be described by a worm-like chain with ${\displaystyle \ell _{p}\approx 500\mathrm {\AA} }$ and ${\displaystyle R_{max}\simeq 10\mu m\gg \ell _{p}}$: it will therefore typically cover a radius of ${\displaystyle R_{0}\sim 7000\mathrm {\AA} }$.

### Free Energy of the Ideal Chain and Entropic Springs

We have calculated distributions of ${\displaystyle \mathbf {R} }$ for Gaussian chains with ${\displaystyle N}$ components, ${\displaystyle Z_{N}\left(\mathbf {R} \right)}$. Let's consider

the entropy of such chains:

${\displaystyle S_{N}\left(\mathbf {R} \right)=k_{B}\ln Z_{N}\left(\mathbf {R} \right),\qquad P_{N}\left(\mathbf {R} \right)={\frac {Z_{N}\left(\mathbf {R} \right)}{\int \mathrm {d} \mathbf {R} \,Z_{N}\left(\mathbf {R} \right)}}=\left({\frac {3}{2\pi N\ell ^{2}}}\right)^{\frac {3}{2}}\exp \left(-{\frac {3R^{2}}{2N\ell ^{2}}}\right).}$

The logarithm of ${\displaystyle Z_{N}\left(\mathbf {R} \right)}$ is the same as that of ${\displaystyle P_{N}\left(\mathbf {R} \right)}$, aside from a factor which does

not depend on ${\displaystyle \mathbf {R} }$. Therefore,

${\displaystyle S_{N}\left(\mathbf {R} \right)={\overset {\scriptstyle =S_{N}\left(0\right)}{\overbrace {k_{B}\ln \left(\int Z_{N}\left(\mathbf {R} \right)\mathrm {d} \mathbf {R} \right)+{\frac {3}{2}}k_{B}\ln \left({\frac {3}{2\pi N\ell ^{2}}}\right)} }}-{\frac {3}{2}}k_{B}{\frac {R^{2}}{N\ell ^{2}}}=S_{N}\left(0\right)-{\frac {3}{2}}k_{B}{\frac {R^{2}}{N\ell ^{2}}}.}$

The free energy is

${\displaystyle {\begin{matrix}F_{N}\left(\mathbf {R} \right)&=&U_{N}\left(\mathbf {R} \right)-TS_{N}\left(\mathbf {R} \right)&=&{\frac {3}{2}}k_{B}T{\frac {R^{2}}{N\ell ^{2}}}\,\,+{\underset {\scriptscriptstyle _{U_{N}\left(0\right)-TS_{N}\left(0\right)}}{\underbrace {F_{N}\left(0\right)} }}\end{matrix}}}$

since ${\displaystyle U_{N}\left(\mathbf {R} \right)=U_{N}\left(0\right)}$ for an ideal chain.

What does ${\displaystyle F_{N}\left(\mathbf {R} \right)}$ mean? It represents the energy needed to stretch the polymer, and this energy is ${\displaystyle \sim R^{2}}$ like a harmonic spring (${\displaystyle U\sim {\frac {1}{2}}kx^{2}}$) with ${\displaystyle k={\frac {3k_{B}T}{N\ell ^{2}}}\sim {\frac {T}{N}}}$. Note that the polymer becomes less elastic (more rigid) as the temperature increases, unlike most solids. This is a physical result and can be verified experimentally: for instance, the spring constant of rubber (which is made of networks of polymer chains) increases linearly with temperature. Consider an experiment where instead of holding the chain at constant length, we apply a perturbatively weak force ${\displaystyle \pm \mathbf {f} }$ to its ends and measure its average length. We can perform a Legendre transform between distance and force: from equality of forces along the direction

in which they are applied,

${\displaystyle {\begin{array}{lcl}f_{x}&=&{\frac {\partial F_{N}}{\partial R_{x}}}={\frac {\partial }{\partial R_{x}}}\left({\frac {3k_{B}T}{2N\ell ^{2}}}R^{2}\right)={\overset {\scriptstyle \equiv k}{\overbrace {\frac {3k_{B}T}{N\ell ^{2}}} }}R_{x},\\&\Downarrow \\\mathbf {f} &=&k\mathbf {R} .\end{array}}}$

To be in this linear response (${\displaystyle \mathbf {f} \sim \mathbf {r} }$) region, we must demand that ${\displaystyle \mathbf {R} \sim \left|\mathbf {R} _{0}\right|\ll R_{max}=N\ell }$,

and to stress this we can write

${\displaystyle \mathbf {f} =\left({\frac {3k_{B}T}{\ell }}\right){\frac {\mathbf {R} }{R_{max}}}.}$
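As an order-of-magnitude sketch of the prefactor in this expression (the values ${\displaystyle \ell =1\,nm}$ and ${\displaystyle T=300\,K}$ are illustrative assumptions, not from the text):

```python
# Rough force scale of the linear entropic-spring regime.
# Assumed numbers: l = 1 nm, T = 300 K (room temperature).
kB_T = 1.380649e-23 * 300.0   # J
l = 1e-9                      # m
# f = (3 kT / l) * (R / R_max); the prefactor sets the force scale.
f_scale = 3 * kB_T / l
print(f"3 kT / l = {f_scale * 1e12:.1f} pN")  # ~12 pN
```

This is the origin of the picoNewton scale mentioned in the text.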

Numerically, with a nanometric ${\displaystyle \ell }$ and at room temperature the forces should be in the picoNewton range to meet this requirement. A more rigorous treatment which works at arbitrary forces can be carried out by considering an FJC with oppositely charged (${\displaystyle \pm q}$) ends in an electric field ${\displaystyle \mathbf {E} \parallel \mathbf {\hat {z}} }$. The chain's joints are at ${\displaystyle \mathbf {R} _{i}}$, with bond vectors ${\displaystyle \mathbf {r} _{i}=\mathbf {R} _{i}-\mathbf {R} _{i-1}}$ and end-to-end vector ${\displaystyle \mathbf {R} \equiv \mathbf {R} _{N}-\mathbf {R} _{0}}$.

The potential is

${\displaystyle {\begin{array}{lcl}U_{elec}&=&+q\mathbf {E} \cdot \mathbf {R} _{0}-q\mathbf {E} \cdot \mathbf {R} _{N}=-q\mathbf {E} \cdot \mathbf {R} =-\mathbf {f} \cdot \mathbf {R} ,\\&\Downarrow \\\mathbf {f} &=&q\mathbf {E} .\end{array}}}$

Since ${\displaystyle \mathbf {R} =\sum _{i}\mathbf {r} _{i}}$, we can write the potential as

${\displaystyle U_{elec}=-q\mathbf {E} \cdot \mathbf {R} =-q\mathbf {E} \cdot \left(\sum _{i}\mathbf {r} _{i}\right)=-f\ell \sum _{i}\cos \vartheta _{i},}$

with ${\displaystyle \cos \vartheta _{i}={\frac {\mathbf {\hat {z}} \cdot \mathbf {r} _{i}}{\ell }}}$. The

partition function is

${\displaystyle Z_{N}\left(\mathbf {f} \right)=\int \mathrm {d} \mathbf {r} _{1}...\int \mathrm {d} \mathbf {r} _{N}\,\Psi \left(\left\{\mathbf {r} _{i}\right\}\right)e^{-\beta U_{elec}\left(\left\{\mathbf {r} _{i}\right\}\right)}={\mbox{Tr}}_{\left\{\mathbf {r} _{i}\right\}}\,\Psi \,e^{-\beta U_{elec}}.}$

The function ${\displaystyle \Psi }$ is separable into a product of single-bond functions ${\displaystyle \psi \left(\mathbf {r} _{i}\right)={\frac {1}{4\pi \ell ^{2}}}\delta \left(\left|\mathbf {r} _{i}\right|-\ell \right)}$.

Now,

${\displaystyle \exp \left\{-{\frac {U_{elec}}{k_{B}T}}\right\}=\exp \left\{{\frac {f\ell }{k_{B}T}}\sum _{i}\cos \vartheta _{i}\right\}.}$

In spherical coordinates ${\displaystyle \mathbf {r} _{i}=\left(r_{i},\vartheta _{i},\varphi _{i}\right)}$

we can solve the integral:

${\displaystyle {\begin{array}{lcl}Z_{N}\left(\mathbf {f} \right)&=&\left[\int _{0}^{\infty }\mathrm {d} r{\frac {r^{2}}{4\pi \ell ^{2}}}\delta \left(r-\ell \right)\right]^{N}\times \left[\int _{0}^{2\pi }\mathrm {d} \varphi \right]^{N}\times \prod _{i}\int _{0}^{\pi }\mathrm {d} \vartheta _{i}\sin \vartheta _{i}e^{{\frac {f\ell }{k_{B}T}}\cos \vartheta _{i}}\\&{\underset {\scriptscriptstyle x=\cos \vartheta }{=}}&\left({\frac {1}{4\pi }}\right)^{N}\left(2\pi \right)^{N}\left[\int _{-1}^{1}\mathrm {d} xe^{{\frac {f\ell }{k_{B}T}}x}\right]^{N}\\&=&{\frac {1}{2^{N}}}\left[{\frac {2k_{B}T}{f\ell }}\sinh \left({\frac {f\ell }{k_{B}T}}\right)\right]^{N}\\&=&\left[{\frac {k_{B}T}{f\ell }}\sinh \left({\frac {f\ell }{k_{B}T}}\right)\right]^{N}.\end{array}}}$

The Gibbs free energy (Gibbs because the external force is fixed)

is then

${\displaystyle G_{N}\left(\mathbf {f} \right)=-k_{B}T\ln Z_{N}\left(\mathbf {f} \right)=-k_{B}TN\ln \left[\sinh \left({\frac {f\ell }{k_{B}T}}\right)\right]+k_{B}TN\ln \left({\frac {f\ell }{k_{B}T}}\right),}$

and the average extension

${\displaystyle \left\langle R\right\rangle _{f}=-{\frac {\partial G_{N}\left(f\right)}{\partial f}}=k_{B}TN{\frac {\ell }{k_{B}T}}\coth \left({\overset {\scriptstyle \equiv \alpha }{\overbrace {\frac {f\ell }{k_{B}T}} }}\right)-k_{B}TN{\frac {1}{f}}=N\ell \left[\coth \alpha -{\frac {1}{\alpha }}\right]\equiv N\ell {\mathcal {L}}\left(\alpha \right).}$

The Langevin function ${\displaystyle {\mathcal {L}}\left(\alpha \right)=\coth \alpha -{\frac {1}{\alpha }}}$ is also typical of spin magnetization in external magnetic fields and of dipoles in electric fields at finite temperatures. 04/02/2009
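The two limits of the Langevin function can be checked numerically; a minimal sketch (the small-argument series ${\displaystyle {\mathcal {L}}\left(\alpha \right)\approx \alpha /3}$ reproduces the entropic-spring linear response, while ${\displaystyle {\mathcal {L}}\rightarrow 1}$ gives full extension ${\displaystyle N\ell }$):

```python
import math

def langevin(a):
    """L(a) = coth(a) - 1/a, with a series fallback near a = 0."""
    if abs(a) < 1e-6:
        return a / 3.0 - a**3 / 45.0   # avoids catastrophic cancellation
    return 1.0 / math.tanh(a) - 1.0 / a

# Weak force: L(a) ~ a/3, i.e. <R> ~ (N l^2 / 3 kT) f -- the linear regime.
# Strong force: L(a) -> 1, i.e. <R> -> N l (full extension).
for a in (0.01, 1.0, 10.0, 100.0):
    print(a, langevin(a))
```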

### Polymers and Fractal Curves

#### Introduction to fractals

Book: B. Mandelbrot.

A fractal is an object with a fractal dimensionality ${\displaystyle D_{f}}$, also called the Hausdorff dimension. This implies a new definition of dimensionality, which we will discuss. Consider a sphere of radius ${\displaystyle R}$. It is considered three-dimensional because it has ${\displaystyle V={\frac {4\pi }{3}}R^{3}}$ and ${\displaystyle M=\rho V\sim R^{D}}$ for ${\displaystyle D=3}$. A plane has by the same reasoning ${\displaystyle M\sim R^{D}}$ for ${\displaystyle D=2}$, and is therefore a ${\displaystyle 2D}$ object. Fractals are mathematical objects such that by the same sort of calculation they have ${\displaystyle M\sim R^{D_{f}}}$, for a ${\displaystyle D_{f}}$ which is not necessarily an integer (this definition is due to Hausdorff). One example is the Koch curve (see (7)): in each of its iterations, we decrease the length of a segment by a factor

of 3 and decrease its mass by a factor of 4. We will therefore have

${\displaystyle {\begin{array}{lcl}M_{2}&=&{\frac {1}{4}}M_{1}=A\left(r_{2}\right)^{D_{f}}=A\left({\frac {1}{3}}r_{1}\right)^{D_{f}},\\M_{1}&=&A\left(r_{1}\right)^{D_{f}},\\&\Downarrow \\{\frac {1}{4}}&=&\left({\frac {1}{3}}\right)^{D_{f}}\Rightarrow D_{f}={\frac {\ln 4}{\ln 3}}\simeq 1.26,{\mbox{ and }}1<D_{f}<2.\end{array}}}$

Note that a fractal's "real" length is infinite, and its approximations will depend on the resolution. The structure exhibits self-similarity: namely, on different length scales it will look the same. This can be seen in the Koch snowflake: at any magnification, a part of the curve looks similar to the whole curve. There's a very nice animation of this in Wikipedia. The total length of the curve depends on the ruler used to measure it: the actual length at iteration ${\displaystyle n}$ is ${\displaystyle L_{0}\left({\frac {4}{3}}\right)^{n}}$.

Another definition for the fractal dimension is

${\displaystyle D_{f}={\frac {\ln {\frac {L_{0}}{\ell _{0}}}}{\ln {\frac {\ell _{1}}{\ell _{0}}}}}={\frac {\ln 4}{\ln 3}}.}$
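A short numerical sketch of both statements about the Koch curve (the fractal dimension and the resolution-dependent length):

```python
import math

# Koch curve: each iteration shrinks segments by a factor 3 while a
# third-size copy carries 1/4 of the mass, so M ~ r^Df with
# Df = ln 4 / ln 3.
Df = math.log(4) / math.log(3)
print(Df)  # ~1.2619, between 1 and 2

# Total length after n iterations of a unit segment: L_n = (4/3)^n,
# which diverges as the "ruler" gets finer.
for n in range(5):
    print(n, (4 / 3) ** n)
```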

#### Linking fractals to polymers

Sidenote

The Flory exponent is defined from ${\displaystyle \scriptstyle R\sim N^{\nu }}$ such that ${\displaystyle \scriptstyle \nu ={\frac {1}{D_{f}}}}$.

Consider the ideal Gaussian chain again. It has ${\displaystyle R_{0}^{2}=N\ell ^{2}\sim N}$. Since ${\displaystyle N}$ is proportional to the mass, we have an object with a fractal dimension of 2 no matter what the dimensionality of the actual space is. We can say that a polymer in ${\displaystyle d}$-space fills only ${\displaystyle D_{f}\leq d}$ dimensions of the space it occupies, where ${\displaystyle D_{f}}$ is 2 for an ideal Gaussian polymer and ${\displaystyle 2\leq D_{f}\leq d}$ in general. Flory showed that in some cases a non-ideal polymer can also have ${\displaystyle D_{f}<2}$, in particular when a self-avoiding walk (SAW) is accounted for. The SAW as opposed to the Gaussian walk (GW) is the defining property of a physical rather than ideal polymer, and gives a fractal dimension of ${\displaystyle D_{f}\approx 1.66}$. A collapsed polymer has ${\displaystyle D_{f}=3}$ and fills space completely. Note that two polymers with fractal dimensions ${\displaystyle D_{f}}$ and ${\displaystyle D_{f}^{*}}$ do not "feel" each other statistically if ${\displaystyle D_{f}+D_{f}^{*}<d}$.

### Polymers, Path Integrals and Green's Functions

Books: Doi & Edwards, F. Wiegel, or Feynman & Hibbs.

#### Local Gaussian chain model and the continuum limit

This model is also known as LGC. We start from an FJC in 3D where ${\displaystyle \Psi =\prod _{i}\psi \left(\mathbf {r} _{i}\right)}$ and ${\displaystyle \psi \left(\mathbf {r} _{i}\right)={\frac {1}{4\pi \ell ^{2}}}\delta \left(\left|\mathbf {r} _{i}\right|-\ell \right)}$. By the central limit theorem ${\displaystyle \mathbf {R} =\sum _{i}\mathbf {r} _{i}}$ will always be taken from a Gaussian distribution when the number of monomers is large (whatever the form of ${\displaystyle \psi }$, as long as it

is symmetrical around zero such that ${\displaystyle \left\langle \mathbf {r} _{i}\right\rangle =0}$):

${\displaystyle P_{N}\left(\mathbf {R} \right)=\left({\frac {3}{2\pi N\ell ^{2}}}\right)^{\frac {3}{2}}\exp \left(-{\frac {3R^{2}}{2N\ell ^{2}}}\right).}$

In the LGC approximation we exchange the rigid rods for Gaussian springs with ${\displaystyle \left\langle \mathbf {r} _{i}\right\rangle =0}$ and ${\displaystyle \left\langle \mathbf {r} _{i}^{2}\right\rangle =\ell ^{2}}$, by

setting

${\displaystyle \psi \left(\mathbf {r} _{i}\right)=\left({\frac {3}{2\pi \ell ^{2}}}\right)^{\frac {3}{2}}\exp \left(-{\frac {3r_{i}^{2}}{2\ell ^{2}}}\right).}$

We can then obtain for the full probability distribution

${\displaystyle {\begin{matrix}\Psi \left(\left\{\mathbf {r} _{i}\right\}\right)&=&\prod _{i}\psi \left(\mathbf {r} _{i}\right)&=&\left({\frac {3}{2\pi \ell ^{2}}}\right)^{\frac {3N}{2}}\exp \left(-\sum _{i=1}^{N}{\frac {3\left(\mathbf {R} _{i}-\mathbf {R} _{i-1}\right)^{2}}{2\ell ^{2}}}\right),\end{matrix}}}$

where ${\displaystyle \mathbf {r} _{i}=\mathbf {R} _{i}-\mathbf {R} _{i-1}}$. ${\displaystyle \Psi }$ describes ${\displaystyle N}$ harmonic springs with ${\displaystyle k={\frac {3k_{B}T}{\ell ^{2}}}}$ connected

in series:

${\displaystyle U_{0}\left(\left\{\mathbf {R} _{i}\right\}\right)={\frac {3}{2\ell ^{2}}}k_{B}T\sum _{i=1}^{N}\left(\mathbf {R} _{i}-\mathbf {R} _{i-1}\right)^{2},\qquad \Psi \sim e^{-{\frac {U_{0}}{k_{B}T}}}.}$

An exact property of the Gaussian distributions we have been using is that a sub-chain of ${\displaystyle \left|n-m\right|}$ monomers (say, the sub-chain starting at monomer ${\displaystyle n}$ and ending at monomer ${\displaystyle m}$) will also have a Gaussian distribution

of the end-to-end length:

${\displaystyle {\begin{array}{lcl}P\left(\mathbf {R} _{m}-\mathbf {R} _{n},m-n\right)&=&\left({\frac {3}{2\pi \left|n-m\right|\ell ^{2}}}\right)^{\frac {3}{2}}\exp \left(-{\frac {3\left(\mathbf {R} _{m}-\mathbf {R} _{n}\right)^{2}}{2\left|n-m\right|\ell ^{2}}}\right),\\\left\langle \left(\mathbf {R} _{m}-\mathbf {R} _{n}\right)^{2}\right\rangle &=&\ell ^{2}\left|n-m\right|.\end{array}}}$
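This sub-chain property is easy to verify by direct sampling; a minimal Monte-Carlo sketch (the values ${\displaystyle \ell =1}$ and the sample count are illustrative assumptions):

```python
import math
import random

# Monte-Carlo check of the sub-chain property: for Gaussian springs with
# <r_i^2> = l^2 per bond, a stretch of n bonds has <R^2> = l^2 * n.
random.seed(1)
l = 1.0
sigma = l / math.sqrt(3.0)   # per-component std so that <r_i^2> = l^2
samples = 4000

def mean_sq_end_to_end(n_bonds):
    acc = 0.0
    for _ in range(samples):
        # end-to-end vector = sum of n_bonds Gaussian bond vectors (3D)
        acc += sum(sum(random.gauss(0.0, sigma) for _ in range(n_bonds)) ** 2
                   for _ in range(3))
    return acc / samples

for n_bonds in (10, 50):
    print(n_bonds, mean_sq_end_to_end(n_bonds))  # ~ l^2 * n_bonds
```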

At the continuum limit, we obtain Wiener distributions: the correct way to calculate the limit is to take ${\displaystyle N\rightarrow \infty }$ and ${\displaystyle \ell \rightarrow 0}$ with ${\displaystyle N\ell =L}$ held constant. The length along the chain up to site ${\displaystyle n}$ is then described by ${\displaystyle n\ell \rightarrow s}$, ${\displaystyle 0\leq s\leq L}$. At this limit we can also substitute derivatives ${\displaystyle {\frac {\partial \mathbf {R} }{\partial s}}={\frac {1}{\ell }}{\frac {\partial \mathbf {R} }{\partial n}}}$ for the finite differences ${\displaystyle {\frac {\mathbf {R} _{i}-\mathbf {R} _{i-1}}{\ell }}}$,

such that

${\displaystyle {\begin{array}{lcl}\sum _{i=1}^{N}{\frac {1}{\ell ^{2}}}\left(\mathbf {R} _{i}-\mathbf {R} _{i-1}\right)^{2}&\rightarrow &\int _{0}^{L}{\frac {\mathrm {d} s}{\ell }}\left({\frac {\partial \mathbf {R} }{\partial s}}\right)^{2}=\int _{0}^{N}\mathrm {d} n\,{\frac {1}{\ell ^{2}}}\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2},\\\Psi \left(\left\{\mathbf {R} _{i}\right\}\right)&\rightarrow &{\mbox{const.}}\times \exp \left\{-{\frac {3}{2\ell ^{2}}}\int _{0}^{N}\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2}\mathrm {d} n\right\}.\end{array}}}$

If we add an external spatial potential ${\displaystyle U\left(\mathbf {R} _{i}\right)}$ (which is single-body), it contributes a factor of

${\displaystyle \exp \left\{-{\frac {1}{k_{B}T}}\sum _{i=1}^{N}U\left(\mathbf {R} _{i}\right)\right\}\rightarrow \exp \left\{-{\frac {1}{k_{B}T}}\int _{0}^{N}U\left(\mathbf {R} \left(n\right)\right)\mathrm {d} n\right\}.}$

to the Boltzmann factor. 04/23/2009

#### Functional path integrals and the continuum distribution function

Books: F. Wiegel, Doi & Edwards.

Consider what happens when we hold the ends of a chain defined by ${\displaystyle \left\{\mathbf {R} _{i}\right\}}$ in place, such that ${\displaystyle \mathbf {R} _{0}=\mathbf {R} ^{\prime }}$ and ${\displaystyle \mathbf {R} _{N}=\mathbf {R} }$. We can calculate the probability

of this configuration from

${\displaystyle P_{N}\left(\mathbf {R} _{0},\mathbf {R} _{N}\right)=\prod _{i=1}^{N-1}\int \mathrm {d} \mathbf {R} _{i}\Psi \left(\left\{\mathbf {R} _{i}\right\}\right).}$

At the continuum limit the definition of the chain configurations translates into a function ${\displaystyle \mathbf {R} \left(n\right)}$ and the product of integrals can be taken as a path integral according to ${\displaystyle \prod _{i=1}^{N-1}\int \mathrm {d} \mathbf {R} _{i}\rightarrow \int {\mathcal {D}}\mathbf {R} \left(n\right)}$. The probability for each configuration with our constraint is a functional

of ${\displaystyle \mathbf {R} \left(n\right)}$. The partition function is:

${\displaystyle Z_{N}\left(\mathbf {R} ,\mathbf {R} ^{\prime }\right)=\int _{\mathbf {R} _{0}=\mathbf {R} ^{\prime }}^{\mathbf {R} _{N}=\mathbf {R} }{\mathcal {D}}\mathbf {R} \left(n\right)\exp \left\{-{\frac {3}{2\ell ^{2}}}\int _{0}^{N}\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2}\mathrm {d} n-{\frac {1}{k_{B}T}}\int _{0}^{N}U\left(\mathbf {R} \left(n\right)\right)\mathrm {d} n\right\},}$

and we can normalize it to obtain a probability distribution function,

given in terms of this path integral:

${\displaystyle P_{N}\left(\mathbf {R} ,\mathbf {R} ^{\prime }\right)={\frac {Z_{N}\left(\mathbf {R} ,\mathbf {R} ^{\prime }\right)}{\int Z_{N}\left(\mathbf {R} ,\mathbf {R} ^{\prime }\right)\mathrm {d} \mathbf {R} \mathrm {d} \mathbf {R} ^{\prime }}}.}$

We now introduce the Green's function ${\displaystyle G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)}$, which as we will soon see describes the evolution from ${\displaystyle \mathbf {R} ^{\prime }}$

to ${\displaystyle \mathbf {R} }$ in ${\displaystyle N}$ steps. We define it as:

${\displaystyle G\left(\mathbf {R} _{N}=\mathbf {R} ,\mathbf {R} _{0}=\mathbf {R} ^{\prime };N\right)\equiv {\frac {\int _{\mathbf {R} _{0}=\mathbf {R} ^{\prime }}^{\mathbf {R} _{N}=\mathbf {R} }{\mathcal {D}}\mathbf {R} \left(n\right)\exp \left\{-{\frac {3}{2\ell ^{2}}}\int _{0}^{N}\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2}\mathrm {d} n-{\frac {1}{k_{B}T}}\int _{0}^{N}U\left(\mathbf {R} \left(n\right)\right)\mathrm {d} n\right\}}{\int \mathrm {d} \mathbf {R} \mathrm {d} \mathbf {R^{\prime }} \int _{\mathbf {R} _{0}=\mathbf {R} ^{\prime }}^{\mathbf {R} _{N}=\mathbf {R} }{\mathcal {D}}\mathbf {R} \left(n\right)\exp \left\{-{\frac {3}{2\ell ^{2}}}\int _{0}^{N}\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2}\mathrm {d} n\right\}}}.}$

Note that while the numerator is proportional to the probability ${\displaystyle P_{N}}$, the denominator does not include the external potential.

${\displaystyle G}$ has several important properties:

1. It is equal to the exact probability ${\displaystyle P_{N}}$ for Gaussian chains in the absence of external potential.
2. If we consider that the chain might be divided into one sub chain between step ${\displaystyle 0}$ and ${\displaystyle i}$ and a second sub chain from step ${\displaystyle i}$ to step ${\displaystyle N}$, then
${\displaystyle G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=\int \mathrm {d} \mathbf {R} ^{\prime \prime }G\left(\mathbf {R} ,\mathbf {R} ^{\prime \prime };N-i\right)G\left(\mathbf {R} ^{\prime \prime },\mathbf {R} ^{\prime };i\right).}$

We can use this property to compute expectation values of observables. If we have some function of a specific monomer ${\displaystyle A\left(\mathbf {R} _{i}\right)}$, for instance:

${\displaystyle \left\langle A\left(\mathbf {R} _{i}\right)\right\rangle ={\frac {\int \mathrm {d} \mathbf {R} _{N}\mathrm {d} \mathbf {R} _{0}\mathrm {d} \mathbf {R} _{i}G\left(\mathbf {R} _{N},\mathbf {R} _{i};N-i\right)A\left(\mathbf {R} _{i}\right)G\left(\mathbf {R} _{i},\mathbf {R} _{0};i\right)}{\int \mathrm {d} \mathbf {R} _{N}\mathrm {d} \mathbf {R} _{0}G\left(\mathbf {R} _{N},\mathbf {R} _{0};N\right)}}.}$
3. The Green's function is the solution of the differential equation (see proof in Doi & Edwards and in homework):
${\displaystyle {\frac {\partial G}{\partial N}}-{\frac {\ell ^{2}}{6}}{\frac {\partial ^{2}G}{\partial \mathbf {R} ^{2}}}+{\frac {U\left(\mathbf {R} \right)}{k_{B}T}}G=\delta \left(\mathbf {R} -\mathbf {R} ^{\prime }\right)\delta \left(N\right).}$
4. The Green's function is defined as 0 for ${\displaystyle N<0}$ and is equal to ${\displaystyle \delta \left(\mathbf {R} -\mathbf {R} ^{\prime }\right)}$ when ${\displaystyle N\rightarrow 0}$ in order to satisfy the boundary conditions.
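The composition property can be checked numerically for the free chain (${\displaystyle U=0}$), where ${\displaystyle G}$ is a Gaussian; a sketch for one Cartesian component (variance ${\displaystyle N\ell ^{2}/3}$ per component), with an illustrative quadrature grid:

```python
import math

# Free-chain Green's function, one Cartesian component: a Gaussian in
# (x - x') with variance N l^2 / 3.
def G(x, xp, N, l=1.0):
    var = N * l**2 / 3.0
    return math.exp(-(x - xp) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Composition (Chapman-Kolmogorov) check:
# G(x, x'; N) = integral dx'' G(x, x''; N - i) G(x'', x'; i)
N, i, x, xp = 100, 30, 2.0, -1.0
dx = 0.05
integral = sum(G(x, xpp, N - i) * G(xpp, xp, i) * dx
               for xpp in (k * dx - 40.0 for k in range(1600)))
print(integral, G(x, xp, N))  # the two numbers should agree
```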

#### Relationship to quantum mechanics

This equation for ${\displaystyle N>0}$, ${\displaystyle \mathbf {R} \neq \mathbf {R} ^{\prime }}$ is very similar in form to the Schrödinger equation. To see this, we

can rewrite it as:

${\displaystyle \left[{\frac {\partial }{\partial N}}-{\overset {\scriptstyle \equiv {\mathcal {L}}}{\overbrace {{\frac {\ell ^{2}}{6}}{\frac {\partial ^{2}}{\partial \mathbf {R} ^{2}}}-{\frac {U\left(\mathbf {R} \right)}{k_{B}T}}} }}\right]G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=\left[{\frac {\partial }{\partial N}}-{\mathcal {L}}\right]G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=0.}$

If we make the replacement ${\displaystyle N\rightarrow {\frac {it}{\hbar }}}$, ${\displaystyle {\mathcal {L}}\rightarrow {\mathcal {H}}}$ and ${\displaystyle {\frac {\ell ^{2}}{6}}\rightarrow {\frac {\hbar ^{2}}{2m}}}$ this is identical to ${\displaystyle -i\hbar {\frac {\partial }{\partial t}}G=\left[-{\frac {\hbar ^{2}\nabla ^{2}}{2m}}+V\left(\mathbf {R} \right)\right]G={\mathcal {H}}G}$. Like the quantum Hamiltonian, the Hermitian operator ${\displaystyle {\mathcal {L}}}$ has eigenfunctions such that ${\displaystyle {\mathcal {L}}\varphi _{k}=E_{k}\varphi _{k}}$, which according to Sturm-Liouville theory span the solution space (${\displaystyle \sum _{k}\varphi _{k}^{*}\left(\mathbf {r} ^{\prime }\right)\varphi _{k}\left(\mathbf {r} \right)=\delta \left(\mathbf {r} -\mathbf {r} ^{\prime }\right)}$) and can be orthonormalized (${\displaystyle \int \varphi _{k}^{*}\varphi _{m}\mathrm {d} \mathbf {r} =\delta _{km}}$).

The solution of the non-homogeneous problem is therefore

${\displaystyle G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=\sum _{k}\varphi _{k}^{*}\left(\mathbf {R} \right)\varphi _{k}\left(\mathbf {R} ^{\prime }\right)e^{-NE_{k}},}$

where the ${\displaystyle \varphi _{k}}$ are solutions of the homogeneous equation ${\displaystyle \left({\mathcal {L}}-E_{n}\right)\varphi _{n}=0}$.

Example: a polymer chain in a box of dimensions ${\displaystyle L_{x}\times L_{y}\times L_{z}}$. The potential ${\displaystyle U}$ is ${\displaystyle 0}$ within the box and ${\displaystyle \infty }$ on the edges. The boundary conditions are ${\displaystyle G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=0}$ if ${\displaystyle \mathbf {R} }$ or ${\displaystyle \mathbf {R} ^{\prime }}$ are on the boundary. The

function is also separable in Cartesian coordinates:

${\displaystyle G\left(\mathbf {R} ,\mathbf {R} ^{\prime };N\right)=\prod _{i=1}^{3}g_{i}\left(R_{i},R_{i}^{\prime };N\right).}$

Let's solve for ${\displaystyle g_{1}\equiv g_{x}}$ (the other ${\displaystyle g}$ functions are

similar):

${\displaystyle \left({\frac {\partial }{\partial N}}-{\frac {\ell ^{2}}{6}}{\frac {\partial ^{2}}{\partial R_{1}^{2}}}\right)u\left(R_{1},N\right)=0.}$

If we separate variables again with the ansatz ${\displaystyle u\left(R_{1},N\right)=\varphi \left(R_{1}\right)e^{-EN}}$

we obtain

${\displaystyle {\begin{array}{lcl}-E\varphi -{\frac {\ell ^{2}}{6}}\varphi ^{\prime \prime }&=&0,\\&\Downarrow \\\varphi \left(R_{1}\right)&=&A\sin k_{1}R_{1}+B\cos k_{1}R_{1}.\end{array}}}$

With the boundary condition

${\displaystyle {\begin{cases}\varphi \left(0\right)=0&\Rightarrow B=0,\\\varphi \left(L_{x}\right)=0&\Rightarrow k_{1}={\frac {n\pi }{L_{x}}}.\end{cases}}}$

This gives an expression for the energy and eigenfunctions:

${\displaystyle {\begin{array}{lcl}E_{n}&=&\left({\frac {\ell ^{2}\pi ^{2}}{6L_{x}^{2}}}\right)n^{2}=E_{1}n^{2},\\\varphi _{n}\left(R_{1}\right)&=&{\sqrt {\frac {2}{L_{x}}}}\sin \left({\frac {n\pi }{L_{x}}}R_{1}\right),\\u_{n}\left(R_{1}\right)&=&{\sqrt {\frac {2}{L_{x}}}}\sin \left({\frac {n\pi }{L_{x}}}R_{1}\right)e^{-E_{n}N}.\end{array}}}$

The Green's function can finally be written as

${\displaystyle g_{1}\left(R_{1},R_{1}^{\prime };N\right)=\sum _{n=1}^{\infty }{\frac {2}{L_{x}}}\sin \left({\frac {n\pi }{L_{x}}}R_{1}\right)\sin \left({\frac {n\pi }{L_{x}}}R_{1}^{\prime }\right)e^{-NE_{n}}.}$

With the Cartesian symmetry of the box, the partition function ${\displaystyle Z=\prod _{i=1}^{3}Z_{i}}$ is also separable. Using

${\displaystyle \int _{0}^{L_{x}}\sin \left({\frac {n\pi }{L_{x}}}x\right)\mathrm {d} x={\begin{cases}{\frac {2L_{x}}{n\pi }}&n=1,\,3,\,5,\,...\,{\mbox{(odd),}}\\0&n=2,\,4,\,6,\,...\,{\mbox{(even),}}\end{cases}}}$

we can calculate

${\displaystyle Z_{x}=\int \mathrm {d} x\int \mathrm {d} x^{\prime }g_{1}\left(x,x^{\prime };N\right)={\frac {2}{L_{x}}}\sum _{n=1,\,3,\,5...}\left({\frac {2L_{x}}{n\pi }}\right)^{2}\exp \left(-N{\frac {\ell ^{2}\pi ^{2}n^{2}}{6L_{x}^{2}}}\right)={\frac {8L_{x}}{\pi ^{2}}}\sum _{n=1,\,3,\,5...}{\frac {1}{n^{2}}}e^{-n^{2}E_{1}N}.}$

We can now go on to calculate ${\displaystyle F=-k_{B}T\ln Z_{x}Z_{y}Z_{z}}$, and we can for instance calculate the pressure on the box edges in the

${\displaystyle x}$ direction:

${\displaystyle P_{x}=-{\frac {1}{L_{y}L_{z}}}{\frac {\partial F}{\partial L_{x}}}.}$

Two limiting cases can be done analytically: first, if the box is much larger than the polymer, ${\displaystyle L_{i}\gg {\sqrt {N}}\ell }$ and

${\displaystyle {\begin{array}{lcl}{\frac {1}{n^{2}}}\exp \left\{-{\frac {\pi ^{2}\ell ^{2}N}{6L_{x}^{2}}}n^{2}\right\}&\approx &{\frac {1}{n^{2}}},\\\sum _{n=1,\,3,\,5...}{\frac {1}{n^{2}}}&=&{\frac {\pi ^{2}}{8}},\\&\Downarrow &\\Z_{x}&\approx &L_{x},\\P_{x}&=&-{\frac {1}{L_{y}L_{z}}}{\frac {\partial }{\partial L_{x}}}\left(-k_{B}T\ln Z_{x}Z_{y}Z_{z}\right)={\frac {k_{B}T}{L_{y}L_{z}}}{\frac {1}{Z_{x}}}{\frac {\partial Z_{x}}{\partial L_{x}}}={\frac {k_{B}T}{L_{x}L_{y}L_{z}}}={\frac {k_{B}T}{V}}.\end{array}}}$

This is equivalent to a dilute gas of polymers (done here for a single chain). At the opposite limit, ${\displaystyle L_{i}\ll {\sqrt {N}}\ell }$, the polymer should be "squeezed". The Gaussian approximation will be no good if we squeeze too hard, but at least for some intermediate regime

we can neglect all but the first term in the series:

${\displaystyle {\begin{array}{lcl}Z_{x}&=&{\frac {8L_{x}}{\pi ^{2}}}\sum _{n=1,\,3,\,5...}{\frac {1}{n^{2}}}e^{-{\frac {\pi ^{2}\ell ^{2}n^{2}}{6L_{x}^{2}}}N}\approx {\frac {8L_{x}}{\pi ^{2}}}e^{-{\frac {\pi ^{2}\ell ^{2}}{6L_{x}^{2}}}N},\\P_{x}&=&{\frac {k_{B}T}{L_{y}L_{z}}}{\frac {\partial \ln Z_{x}}{\partial L_{x}}}={\frac {k_{B}T}{L_{y}L_{z}}}\left\{{\frac {1}{L_{x}}}+{\frac {\pi ^{2}\ell ^{2}N}{3L_{x}^{3}}}\right\}={\frac {k_{B}T}{V}}\left\{1+{\frac {\pi ^{2}\ell ^{2}N}{3L_{x}^{2}}}\right\}.\end{array}}}$

There is a large extra pressure caused by the "squeezing" of the chain and the corresponding loss of its entropy.
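Both limits can be checked against the full series numerically; a sketch with illustrative values ${\displaystyle \ell =1}$, ${\displaystyle N=100}$ and ${\displaystyle k_{B}T=L_{y}=L_{z}=1}$ (not from the text):

```python
import math

# Z_x for a chain confined to width Lx:
# Z_x = (8 Lx / pi^2) * sum over odd n of n^-2 exp(-n^2 E1 N),
# with E1 = pi^2 l^2 / (6 Lx^2).
def Zx(Lx, N=100, l=1.0, nmax=401):
    E1 = math.pi**2 * l**2 / (6 * Lx**2)
    return (8 * Lx / math.pi**2) * sum(
        math.exp(-n * n * E1 * N) / n**2 for n in range(1, nmax, 2))

def pressure_x(Lx, N=100, l=1.0, kT=1.0, h=1e-5):
    # P_x = (kT / Ly Lz) d(ln Zx)/dLx, here with Ly = Lz = 1;
    # numerical central difference.
    return kT * (math.log(Zx(Lx + h, N, l))
                 - math.log(Zx(Lx - h, N, l))) / (2 * h)

# Wide box, Lx >> sqrt(N) l = 10: ideal-gas-like, P ~ kT / Lx.
print(pressure_x(100.0))
# Narrow box, Lx << 10: large extra pressure from squeezing the chain.
print(pressure_x(2.0))
```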

04/30/2009

The same formalism can be used to treat polymers near a wall or in a well near a wall, for instance (see the homework for details). In the well case, like in the similar quantum problem, we will have bound states for ${\displaystyle T<T_{c}}$ (where the critical temperature is defined by a critical value of ${\displaystyle \beta _{c}V_{0}={\frac {V_{0}}{k_{B}T_{c}}}}$, and describes the condition for the potential well to be "deep" enough to contain a bound state).

#### Dominant ground state

Note that since

${\displaystyle G=\sum _{n=0}^{\infty }\varphi _{n}\left(x\right)\varphi _{n}^{*}\left(x^{\prime }\right)e^{-NE_{n}},}$

where ${\displaystyle N}$ is positive and the ${\displaystyle E_{n}}$ are real and ordered (assuming no degeneracy, ${\displaystyle E_{0}<E_{1}<E_{2}<...}$), at large ${\displaystyle N}$ we can neglect

all but the leading terms (smallest energies) and

${\displaystyle G\approx \varphi _{0}\left(x\right)\varphi _{0}^{*}\left(x^{\prime }\right)e^{-NE_{0}}+\varphi _{1}\left(x\right)\varphi _{1}^{*}\left(x^{\prime }\right)e^{-NE_{1}}+....}$

This is possible because the exponent is decreasing rather than oscillating, as it is in the quantum mechanics case. Taking only the first term in this series is called the dominant ground state approximation.

### Polymers in Good Solutions and Self-Avoiding Walks

#### Virial expansion

So far, in treating Gaussian chains, we have neglected any long-ranged interactions. However, polymers in solution cannot self-intersect, and this introduces interactions ${\displaystyle V\left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}$ into the picture which are local in real-space, but are long ranged in terms of the contour spacing – that is, they are not limited to ${\displaystyle i\approx j}$. The importance of this effect depends on dimensionality: it is easy to imagine that intersections in 2D are more effective in restricting a polymer's shape than intersections in 3D.

The interaction potential ${\displaystyle V\left(\mathbf {r} \right)}$ can in general have both attractive and repulsive parts, and depends on the detailed properties of the solvent. If we consider it to be due to a long ranged attractive Van der-Waals interaction and a short ranged repulsive hard-core interaction, it might be modeled by a ${\displaystyle 6-12}$ Lennard-Jones potential. To treat interaction perturbatively within statistical mechanics, we can use a virial expansion (this is a statistical-mechanical expansion in powers of the density, useful for systematic perturbative corrections to non-interacting calculations when one wants to include

many-body interactions). The second virial coefficient is

${\displaystyle v_{2}=\int \mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right].}$

To make the calculation easy, consider a potential even simpler than

the 6-12 Lennard-Jones:

${\displaystyle V\left(\mathbf {r} \right)={\begin{cases}\infty &r<\sigma ,\\-\varepsilon &\sigma <r<2\sigma ,\\0&r>2\sigma .\end{cases}}}$

This gives

${\displaystyle v_{2}={\overset {\scriptstyle ={\frac {4\pi }{3}}\sigma ^{3}\equiv V_{0}}{\overbrace {\int _{r<\sigma }\mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right]} }}+{\overset {\scriptstyle ={\frac {4\pi }{3}}\left[\left(2\sigma \right)^{3}-\sigma ^{3}\right]\left(1-e^{\beta \varepsilon }\right)}{\overbrace {\int _{\sigma <r<2\sigma }\mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right]} }}=V_{0}\left(8-7e^{\beta \varepsilon }\right).}$

This can be positive (signifying net repulsion between the particles) at ${\displaystyle k_{B}T>{\frac {\varepsilon }{\ln {\frac {8}{7}}}}}$ or negative (signifying attraction) for ${\displaystyle k_{B}T<{\frac {\varepsilon }{\ln {\frac {8}{7}}}}}$. While the details of this calculation depend on our choice and parametrization of the potential, in general we will have some special temperature known as the ${\displaystyle \vartheta }$ temperature (in our case ${\displaystyle k_{B}\vartheta ={\frac {\varepsilon }{\ln {\frac {8}{7}}}}}$)

where

${\displaystyle v_{2}\left(\vartheta \right)=0.}$

This allows us to define a good solvent: such a solvent must have ${\displaystyle T>\vartheta }$ at our working temperature. This assures us (within the second Virial approximation, at least) that the interactions are repulsive and (as can be shown separately) the chain is swollen. A bad solvent for which ${\displaystyle T<\vartheta }$ will have attractive interactions, resulting in collapse. A solvent for which ${\displaystyle T=\vartheta }$ is called a ${\displaystyle \vartheta }$ solvent, and returns us to a Gaussian chain unless the next Virial coefficient is taken.
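A quick numerical sketch of the square-well result ${\displaystyle v_{2}=V_{0}\left(8-7e^{\beta \varepsilon }\right)}$ (units with ${\displaystyle \varepsilon =\sigma =k_{B}=1}$ are an illustrative choice):

```python
import math

# Second virial coefficient of the square-well potential from the text:
# V = inf (r < sigma), -eps (sigma < r < 2 sigma), 0 (r > 2 sigma),
# giving v2 = V0 * (8 - 7 exp(eps / kT)), V0 = (4 pi / 3) sigma^3.
def v2(kT, eps=1.0, sigma=1.0):
    V0 = 4 * math.pi / 3 * sigma**3
    return V0 * (8 - 7 * math.exp(eps / kT))

theta = 1.0 / math.log(8 / 7)    # theta temperature in units of eps/kB
print(theta, v2(theta))           # v2 vanishes at T = theta
print(v2(2 * theta) > 0)          # T > theta: repulsive, good solvent
print(v2(0.5 * theta) < 0)        # T < theta: attractive, bad solvent
```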

#### Lattice model

A common numerical treatment for this kind of system is to draw the polymer on a grid and make Monte-Carlo runs, where steps must be self-avoiding and their probability is taken from a thermal distribution while maintaining detailed balance. This gives in 3D ${\displaystyle R_{N}\simeq \ell N^{\nu }}$ where ${\displaystyle \nu \approx 0.588}$.
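Full Monte-Carlo sampling is beyond these notes, but for very short chains the lattice SAWs can simply be enumerated exactly; a minimal sketch (walk lengths 4 and 8 are an illustrative choice, far too short for a precise exponent):

```python
import math

# Exact enumeration of self-avoiding walks (SAWs) on the 2D square
# lattice, as a toy version of the lattice model described above.
def saw_stats(N):
    """Return (number of N-step SAWs, mean-square end-to-end distance)."""
    acc = [0, 0.0]
    def walk(x, y, visited, steps):
        if steps == N:
            acc[0] += 1
            acc[1] += x * x + y * y
            return
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                walk(nxt[0], nxt[1], visited, steps + 1)
                visited.remove(nxt)
    walk(0, 0, {(0, 0)}, 0)
    return acc[0], acc[1] / acc[0]

c4, r2_4 = saw_stats(4)
c8, r2_8 = saw_stats(8)
print(c4, c8)  # 100 and 5916 walks
# Crude effective exponent from <R^2> ~ N^(2 nu); in 2D the true nu = 3/4.
print(math.log(r2_8 / r2_4) / (2 * math.log(2)))
```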

#### Renormalization group

A connection between SAWs and critical phenomena was made by de Gennes in the 1970s. Some of the similarities are summarized in the table below. Using renormalization group methods, de Gennes showed by analogy

to a certain spin model that

${\displaystyle {\begin{matrix}\nu \left(\varepsilon \right)&=&{\frac {1}{2}}+{\frac {1}{16}}\varepsilon +{\frac {15}{512}}\varepsilon ^{2}+{\mathcal {O}}\left(\varepsilon ^{3}\right),\\\varepsilon &\equiv &4-d.\end{matrix}}}$

This gives in 3D a result very close to the SAW value: ${\displaystyle \nu _{RG}={\frac {1}{2}}+{\frac {1}{16}}+{\frac {15}{512}}+{\mathcal {O}}\left(\varepsilon ^{3}\right)\approx 0.592}$, compared with the numerical ${\displaystyle \nu \approx 0.588}$.

| Polymers | Magnetic Systems |
| --- | --- |
| ${\displaystyle N\rightarrow \infty }$: ${\displaystyle {\frac {1}{N}}\ll 1}$ is a small parameter. | ${\displaystyle T\rightarrow T_{c}}$ (critical temperature): ${\displaystyle T-T_{c}}$ is a small parameter. |
| ${\displaystyle R_{g}\approx \ell N^{\nu }=\ell \left({\frac {1}{N}}\right)^{-\nu }}$. | Correlation length ${\displaystyle \xi =\xi _{0}\left|T-T_{c}\right|^{-\nu }}$ – critical exponent ${\displaystyle \nu }$. |
| Gaussian chains (non-SAW). | Mean field theory. |
| ${\displaystyle \nu \left(d=3\right)\neq \nu _{Gaussian}={\frac {1}{2}}}$. | ${\displaystyle \nu \left(d=3\right)\neq \nu _{MFT}}$. |
| For ${\displaystyle d>d_{u}=4}$: ${\displaystyle \nu \left(d\right)=\nu _{Gaussian}}$. | MFT is accurate for ${\displaystyle d>d_{u}}$ (Ising model: ${\displaystyle d_{u}=4}$). |
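Evaluating the ε-expansion at ${\displaystyle d=3}$ (${\displaystyle \varepsilon =1}$) is a one-liner:

```python
# de Gennes' epsilon-expansion for nu, evaluated at d = 3 (eps = 4 - d = 1),
# truncated at second order.
eps = 1.0
nu = 0.5 + eps / 16 + 15 * eps**2 / 512
print(nu)  # ~0.592, close to the numerical SAW value 0.588
```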

#### Flory model

This is a very crude model which gives surprisingly good results. We write the free energy as ${\displaystyle F_{tot}\left(R\right)=F_{int}+F_{ent}}$. For the entropic part we take the expression for an ideal chain: ${\displaystyle S_{N}\left(R\right)=-{\frac {d}{2}}k_{B}{\frac {R^{2}}{N\ell ^{2}}}+S_{N}\left(0\right)}$, ${\displaystyle F_{ent}=-TS_{N}}$. For the interaction, we use the second virial

coefficient:

${\displaystyle {\frac {F_{int}\left(R\right)}{k_{B}T}}={\frac {1}{2}}v_{2}\int \left[c\left(r\right)\right]^{2}\mathrm {d} ^{d}r.}$

Here ${\displaystyle c\left(r\right)}$ is a local density such that its average value is ${\displaystyle \left\langle c\right\rangle ={\frac {N}{V}}\sim {\frac {N}{R^{d}}}}$.

If we neglect local fluctuations in ${\displaystyle c}$, then

${\displaystyle \int \left[c\left(r\right)\right]^{2}\mathrm {d} ^{d}r=V\left\langle c^{2}\left(r\right)\right\rangle \approx V\left\langle c\left(r\right)\right\rangle ^{2}=R^{d}\left({\frac {N}{R^{d}}}\right)^{2},\qquad {\frac {F_{int}}{k_{B}T}}\approx {\frac {1}{2}}v_{2}N^{2}R^{-d}.}$

The total free energy is then

${\displaystyle {\frac {F_{tot}}{k_{B}T}}\approx {\frac {d}{2}}{\frac {R^{2}}{N\ell ^{2}}}+{\frac {1}{2}}v_{2}N^{2}R^{-d}.}$

The free parameter here is ${\displaystyle R}$, but we do not know how it relates to ${\displaystyle N}$. For constant ${\displaystyle N}$ the minimum is at

${\displaystyle R_{F}=\left({\frac {v_{2}}{2}}\ell ^{2}\right)^{\frac {1}{d+2}}N^{\frac {3}{d+2}},}$

which gives the Flory exponent

${\displaystyle \nu _{F}={\frac {3}{d+2}}.}$

This exponent is exact in 1, 2 and 4 dimensions, and gives a very good approximation in 3 dimensions (${\displaystyle \nu _{F}={\frac {3}{5}}=0.6}$, versus the measured 0.588), but it fails above 4 dimensions, where the correct exponent remains the Gaussian ${\displaystyle \nu ={\frac {1}{2}}}$. For a numerical example, consider a polymer of ${\displaystyle \sim 10^{5}}$ monomers, each of which is about ${\displaystyle 5\mathrm {\AA} }$ in length.

From the expressions above,

${\displaystyle R={\begin{cases}1600\mathrm {\AA} &{\mbox{GW,}}\\5000\mathrm {\AA} &{\mbox{Flory,}}\\4400\mathrm {\AA} &{\mbox{SAW.}}\end{cases}}}$

This difference is large enough to be experimentally detectable by the scattering techniques to be explained next.
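These numbers follow directly from ${\displaystyle R\simeq \ell N^{\nu }}$ with the three exponents; a quick check (lengths in Å):

```python
# Reproduce the three estimates R = l * N^nu for N = 1e5 monomers of l = 5 A.
N, ell = 1e5, 5.0
radii = {label: ell * N**nu
         for label, nu in [("GW", 0.5), ("Flory", 3 / 5), ("SAW", 0.588)]}
for label, R in radii.items():
    print(f"{label:6s} R = {R:6.0f} A")
# GW ~ 1600 A, Flory = 5000 A, SAW ~ 4400 A, as quoted above
```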

The reason the Flory method provides such good results turns out to be a lucky cancellation between two errors, each of which is large: the entropy is overestimated, and the correlations between monomers are neglected, which overestimates the repulsion energy. This is discussed in detail in all the standard texts.
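The Flory minimization itself is easy to verify numerically: minimizing ${\displaystyle F_{tot}\left(R\right)}$ on a grid for two chain lengths and fitting the log-log slope recovers ${\displaystyle \nu _{F}={\frac {3}{5}}}$ in ${\displaystyle d=3}$ (a sketch in arbitrary units, with the illustrative choice ${\displaystyle v_{2}=\ell =1}$):

```python
# Numerical check of the Flory exponent: minimize F/kT over R for two
# chain lengths and fit R ~ N^nu. Units with v2 = l = 1 for simplicity.
import math

def flory_f(R, N, d=3, v2=1.0, ell=1.0):
    # F/kT = (d/2) R^2 / (N l^2) + (1/2) v2 N^2 R^(-d)
    return 0.5 * d * R * R / (N * ell * ell) + 0.5 * v2 * N * N * R**(-d)

def r_min(N, d=3):
    # crude log-spaced grid search over R in [1, 10^4); fine for a sketch
    grid = [10 ** (e / 400.0) for e in range(0, 1600)]
    return min(grid, key=lambda R: flory_f(R, N, d))

N1, N2 = 10**3, 10**5
nu_fit = math.log(r_min(N2) / r_min(N1)) / math.log(N2 / N1)
print(f"fitted exponent = {nu_fit:.3f}")   # close to 3/5
```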

#### Field Theory of SAW

Books: Doi & Edwards, Wiegel

The seminal article of S.F. Edwards in 1965 was the first application of field-theoretic methods to the physics of polymers. To insert interactions into the Wiener distribution, we take the sum over two-body interactions ${\displaystyle {\frac {1}{2}}\sum _{ij}V\left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}$ to its continuum limit, ${\displaystyle {\frac {1}{2}}\int _{0}^{N}\mathrm {d} n\int _{0}^{N}\mathrm {d} m\,V\left(\mathbf {R} \left(m\right)-\mathbf {R} \left(n\right)\right)}$.

This formalism is rather complicated and not much can be done by hand. One possible simplification is to consider an excluded-volume (or self-exclusion) interaction of Dirac delta function form, which prevents two monomers from occupying the same point in space:

${\displaystyle V\left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)=k_{B}Tv_{2}\delta \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right).}$

The advantage of this is that a simple form is obtained in which only the second virial coefficient ${\displaystyle v_{2}}$ is taken into account. The expression for the distribution is then

${\displaystyle \Psi \left(\left\{\mathbf {R} _{n}\right\}\right)\sim \exp \left\{-{\frac {3}{2\ell ^{2}}}\int _{0}^{N}\mathrm {d} n\left({\frac {\partial \mathbf {R} \left(n\right)}{\partial n}}\right)^{2}-{\frac {v_{2}}{2}}\int _{0}^{N}\mathrm {d} n\int _{0}^{N}\mathrm {d} m\,\delta \left(\mathbf {R} _{m}-\mathbf {R} _{n}\right)\right\}.}$

With expressions of this sort, one can apply standard field-theory/many-body methods to evaluate the Green's function and calculate observables. This is more advanced and we will not be going into it. 05/07/2009

### Scattering and Polymer Solutions

#### The form factor

Materials can be probed by scattering experiments, and for dilute polymer solutions this is one way to learn about the polymers within them. Laser scattering requires relatively little equipment and can be done in any lab, while x-ray scattering (SAXS) requires a synchrotron and neutron scattering (SANS) requires a nuclear reactor. We will discuss structural properties on the scale of chains rather than individual monomers, which means relatively small wavenumbers. It will also soon be clear that small angles are of interest.

Sidenote

Modeling the monomers as points is reasonable when considering probing on the scale of the complete chain.

If we assume that the individual monomers act as point scatterers (see (8)) and consider a process which scatters the incoming wave at ${\displaystyle \mathbf {k} _{i}}$ to ${\displaystyle \mathbf {k} _{f}}$, we can define a scattering angle ${\displaystyle \vartheta }$ and a scattering wave vector ${\displaystyle \mathbf {k} =\mathbf {k} _{f}-\mathbf {k} _{i}}$ (which becomes smaller in magnitude as the angle ${\displaystyle \vartheta }$ becomes smaller). We then measure the scattered waves at some outgoing angle for a given incoming angle, as illustrated in (9). In fact, many chains scatter at once, so we must take an ensemble average over the chain configurations (an incoherent average, since the chains are far apart compared with the typical coherence length). All this is discussed in more detail below.

Sidenote

For this kind of experiment to work with lasers or x-rays, there must be a contrast : the polymer and solvent must have different indices of refraction. X-Ray experiments rely on different electronic densities. In neutron scattering experiments, contrast is achieved artificially by labeling the polymers or solvent – that is, replacing hydrogen with deuterium.

Within a chain, scattering is mostly coherent, so that the scattered wavefunction is ${\displaystyle \Psi =\sum _{i=1}^{N}a_{i}e^{i\mathbf {k} \cdot \mathbf {R} _{i}}}$. The intensity or power is proportional to ${\displaystyle I=\left|\Psi \right|^{2}=\sum _{i,j=1}^{N}a_{i}a_{j}^{*}e^{i\mathbf {k} \cdot \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}}$.

If we specialize to homogeneous chains where ${\displaystyle a_{i}=a}$, then

${\displaystyle I=\left|a\right|^{2}\sum _{i,j=1}^{N}e^{i\mathbf {k} \cdot \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}.}$

This expression is suitable for a single static chain in a specific configuration ${\displaystyle \left\{\mathbf {R} _{i}\right\}}$. For an ensemble of chains in solution, we average over all chain configurations incoherently, defining the structure factor ${\displaystyle S\left(k\right)}$:

${\displaystyle \left\langle I\right\rangle =\left\langle \left|\Psi \right|^{2}\right\rangle ,\qquad S\left(k\right)\equiv {\frac {\left\langle \left|\Psi \left(k\right)\right|^{2}\right\rangle }{\left\langle \left|\Psi \left(0\right)\right|^{2}\right\rangle }}.}$

The normalization is with respect to the unscattered wave at ${\displaystyle k=0}$, ${\displaystyle \left|\Psi \left(0\right)\right|^{2}=a^{2}N^{2}}$. Note that in an isotropic system like the system of chain molecules in a solvent, the structure factor must depend only on the magnitude of ${\displaystyle k}$.

Inserting the expression for ${\displaystyle \Psi ^{2}}$ into the above equation gives

${\displaystyle S\left(k\right)={\frac {1}{N^{2}}}\left\langle \sum _{i,j=1}^{N}e^{i\mathbf {k} \cdot \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}\right\rangle .}$

We now switch to spherical coordinates with ${\displaystyle \mathbf {z} }$ parallel to ${\displaystyle \mathbf {k} }$, with the added notation ${\displaystyle \mathbf {R} _{ij}=\mathbf {R} _{i}-\mathbf {R} _{j}}$. Since in these coordinates ${\displaystyle \mathbf {k} \cdot \mathbf {R} _{ij}=kR_{ij}\cos \vartheta }$, we can write

${\displaystyle \left\langle e^{i\mathbf {k} \cdot \mathbf {R} _{ij}}\right\rangle ={\frac {1}{4\pi }}\int _{0}^{2\pi }\mathrm {d} \varphi \int _{0}^{\pi }\mathrm {d} \vartheta \sin \vartheta \,e^{ikR_{ij}\cos \vartheta }={\frac {1}{2}}\int _{-1}^{1}\mathrm {d} x\,e^{ikR_{ij}x}={\frac {\sin \left(kR_{ij}\right)}{kR_{ij}}},}$
${\displaystyle S\left(k\right)={\frac {1}{N^{2}}}\sum _{ij}\left\langle {\frac {\sin \left(kR_{ij}\right)}{kR_{ij}}}\right\rangle _{\mathrm {configurations} }.}$

#### The gyration radius and small angle scattering

For small ${\displaystyle k}$ (which at least in the elastic case implies small ${\displaystyle \vartheta }$), we can expand the above expression for ${\displaystyle S\left(k\right)}$ in powers of ${\displaystyle kR_{ij}}$ to obtain

${\displaystyle S\left(k\right)\simeq {\frac {1}{N^{2}}}\sum _{ij}\left\langle 1-{\frac {\left(kR_{ij}\right)^{2}}{3!}}\right\rangle ={\frac {1}{N^{2}}}N^{2}-{\frac {1}{6}}{\frac {k^{2}}{N^{2}}}\sum _{ij}\left\langle \mathbf {R} _{ij}^{2}\right\rangle =1-{\frac {1}{3}}k^{2}R_{g}^{2}.}$

The last equality is due to the fact that ${\displaystyle R_{g}^{2}={\frac {1}{2N^{2}}}\sum _{ij}\left\langle \mathbf {R} _{ij}^{2}\right\rangle }$. If the scattering is elastic, ${\displaystyle \left|\mathbf {k} _{i}\right|=\left|\mathbf {k} _{f}\right|={\frac {2\pi }{\lambda }}}$ and

${\displaystyle k=\left|\mathbf {k} _{i}-\mathbf {k} _{f}\right|={\sqrt {\mathbf {k} _{i}^{2}+\mathbf {k} _{f}^{2}-2\mathbf {k} _{i}\cdot \mathbf {k} _{f}}}=\left|\mathbf {k} _{i}\right|{\sqrt {1+1-2\cos \vartheta }}={\frac {2\pi }{\lambda }}\cdot 2\sin {\frac {\vartheta }{2}}.}$

With this expression for ${\displaystyle k}$ in terms of the angle ${\displaystyle \vartheta }$, the structure factor is then

${\displaystyle S\left(k\right)\simeq 1-{\frac {1}{3}}k^{2}R_{g}^{2}=1-{\frac {16\pi ^{2}}{3}}{\frac {\sin ^{2}{\frac {\vartheta }{2}}}{\lambda ^{2}}}R_{g}^{2}.}$

From an experimental point of view, we can plot ${\displaystyle S}$ as a function of ${\displaystyle k^{2}\sim \sin ^{2}{\frac {\vartheta }{2}}}$ and determine the polymer's gyration radius ${\displaystyle R_{g}}$ from the slope.

The approximation we have made is good when ${\displaystyle kR_{g}\sim {\frac {\sin {\frac {\vartheta }{2}}}{\lambda }}R_{g}\ll 1}$, and this determines the range of angles that should be taken into account: we must have ${\displaystyle \sin {\frac {\vartheta }{2}}\sim {\frac {\vartheta }{2}}\lesssim {\frac {\lambda }{R_{g}}}}$. For laser scattering usually ${\displaystyle \lambda \sim 500\mathrm {nm} }$ (about enough to measure ${\displaystyle R_{g}}$) while for neutron scattering ${\displaystyle \lambda \sim 0.3\mathrm {nm} }$ (meaning we must take only very small angles into account to measure ${\displaystyle R_{g}}$, but also allowing for more detailed information about correlations within the chain to be collected).
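A sketch of this data analysis: synthesize ideal "data" from the exact Debye function (introduced in the next subsection) for a chain with ${\displaystyle R_{g}=130\mathrm {\AA} }$, fit ${\displaystyle S}$ linearly against ${\displaystyle k^{2}}$ at small ${\displaystyle k}$, and recover ${\displaystyle R_{g}}$ from the slope ${\displaystyle -R_{g}^{2}/3}$. The numbers and ${\displaystyle k}$ range here are illustrative choices.

```python
# Sketch: recovering R_g from the small-k slope of S(k) plotted against k^2.
# "Data" are synthesized from the exact Debye function for R_g = 130 A.
import math

def debye(x):
    return 2.0 / (x * x) * (x - 1.0 + math.exp(-x))

Rg = 130.0                                   # Angstrom
ks = [i * 1e-4 for i in range(1, 11)]        # small k, so k*Rg <= 0.13

# least-squares line of S against k^2; the slope should be -Rg^2/3
xs = [k * k for k in ks]
ys = [debye((k * Rg) ** 2) for k in ks]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
Rg_fit = math.sqrt(-3.0 * slope)
print(f"fitted R_g = {Rg_fit:.1f} A")        # ~130 A
```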

#### Debye scattering function

Around 1947, Debye gave an exact result (the Debye function) for Gaussian chains:

${\displaystyle \left\langle e^{i\mathbf {k} \cdot \left(\mathbf {R} _{i}-\mathbf {R} _{j}\right)}\right\rangle =e^{-{\frac {\ell ^{2}k^{2}}{6}}\left|i-j\right|},\qquad S_{D}\left(k\right)={\frac {2}{\left(k^{2}R_{g}^{2}\right)^{2}}}\left(k^{2}R_{g}^{2}-1+e^{-k^{2}R_{g}^{2}}\right),}$

or, with ${\displaystyle x\equiv k^{2}R_{g}^{2}}$,
${\displaystyle S_{D}\left(x\right)={\frac {2}{x^{2}}}\left(x-1+e^{-x}\right).}$

In the limit ${\displaystyle x\ll 1}$ we can expand ${\displaystyle S_{D}\left(x\right)}$ around ${\displaystyle x=0}$: with ${\displaystyle e^{-x}\simeq 1-x+{\frac {x^{2}}{2}}-{\frac {x^{3}}{6}}}$ we get ${\displaystyle S_{D}\left(x\right)\simeq 1-{\frac {x}{3}}}$, the ${\displaystyle k\rightarrow 0}$ limit we have encountered earlier. For ${\displaystyle x\gg 1}$, ${\displaystyle S_{D}\left(x\right)={\frac {2}{x^{2}}}\left(x-1+e^{-x}\right)\simeq {\frac {2}{x}}}$.
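Both limits are easy to confirm numerically (the small-${\displaystyle x}$ form matches the ${\displaystyle 1-{\frac {1}{3}}k^{2}R_{g}^{2}}$ result derived above):

```python
# Check the two limits of the Debye function S_D(x) = 2(x - 1 + e^-x)/x^2.
import math

def debye(x):
    return 2.0 * (x - 1.0 + math.exp(-x)) / (x * x)

small, large = 1e-3, 1e3
err_small = abs(debye(small) - (1.0 - small / 3.0))   # Guinier regime
err_large = abs(debye(large) - 2.0 / large)           # large-k tail
print(err_small < 1e-6, err_large < 1e-5)             # both True
```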

Sidenote

Another way to observe GW behavior is to use a ${\displaystyle \vartheta }$-solvent.

This also works very well for non-Gaussian chains in non-dilute solutions, where a small percentage of the chains is replaced by isotopic variants. This gives an effectively dilute solution of isotopic chains, which can be distinguished from the rest, and these chains are effectively Gaussian for reasons which we will mention later. An example from Rubinstein is neutron scattering from PMMA as done by R. Kirste et al. (1975), which fits very nicely to the Debye function for ${\displaystyle R_{g}\approx 130\mathrm {\AA} }$. In general, however, a SAW in a dilute solution modifies the tail of the Debye function, since ${\displaystyle \rho \left(k\right)\sim k^{-D_{f}}}$ and ${\displaystyle D_{f}={\frac {5}{3}}}$ for a SAW.

#### The structure factor and monomer correlations

Consider the full distribution function of the distances ${\displaystyle \mathbf {R} _{ij}=\mathbf {R} _{i}-\mathbf {R} _{j}}$. This is related to the correlation function for monomer ${\displaystyle i}$:

${\displaystyle g_{i}\left(\mathbf {r} \right)={\frac {1}{N}}\sum _{j=1}^{N}\left\langle \delta \left(\mathbf {r} -\mathbf {R} _{ij}\right)\right\rangle .}$

This function is evaluated by fixing a certain monomer ${\displaystyle i}$