Advanced Mathematics for Engineers and Scientists/Parallel Plate Flow: Realistic IC
Parallel Plate Flow: Realistic IC
The Steady State
The initial velocity profile chosen in the last problem agreed with intuition but honestly came out of thin air. A more realistic development follows.
The problem stated that (to come up with an IC) the fluid was under a pressure difference for some time, so that the flow became steady. "Steady" is another way of saying "not changing with time", and "not changing with time" is another way of saying that:

$$\frac{\partial u}{\partial t} = 0$$
Putting this into the PDE from the previous section:

$$0 = -\frac{1}{\rho} \frac{d p}{d x} + \nu \frac{d^2 u}{d y^2}$$
Independent of $t$, the PDE became an ODE with variables separated, and thus we can integrate (twice), using $\nu = \mu / \rho$:

$$\frac{d^2 u}{d y^2} = \frac{1}{\mu} \frac{d p}{d x}$$
$$\frac{d u}{d y} = \frac{1}{\mu} \frac{d p}{d x} y + C_1$$
$$u = \frac{1}{2 \mu} \frac{d p}{d x} y^2 + C_1 y + C_2$$
The no slip condition results in the following BCs: $u = 0$ at $y = 0$ and $y = \ell$. We can plug the BC values into the integrated ODE and resolve the $C$s:

$$C_2 = 0 \qquad C_1 = -\frac{\ell}{2 \mu} \frac{d p}{d x}$$
Inserting $C_1$ and $C_2$ and simplifying yields:

$$u(y) = \frac{1}{2 \mu} \frac{d p}{d x} \left( y^2 - \ell y \right)$$
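This result can be sanity checked numerically. The Python sketch below (with $\mu$, $\ell$, and $dp/dx$ set to arbitrary illustrative values, an assumption for demonstration only) confirms that the integrated profile satisfies both no slip BCs and has the constant second derivative the ODE demands:

```python
import math

# Arbitrary illustrative values (assumptions, not from the problem statement)
mu, ell, dpdx = 1.0, 1.0, -8.0

def u_steady(y):
    # u(y) = (1/(2*mu)) * (dp/dx) * (y^2 - ell*y)
    return dpdx / (2.0 * mu) * (y * y - ell * y)

def d2u_dy2(y, h=1e-5):
    # central finite difference approximation of the second derivative
    return (u_steady(y + h) - 2.0 * u_steady(y) + u_steady(y - h)) / (h * h)

bc_err = max(abs(u_steady(0.0)), abs(u_steady(ell)))  # no slip at both plates
ode_err = max(abs(d2u_dy2(y) - dpdx / mu) for y in (0.2, 0.5, 0.8))
```

Both `bc_err` and `ode_err` come out near roundoff level, as expected for a quadratic profile.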
For the sake of example, take $dp/dx = -8 \mu / \ell^2$ (recall that a negative pressure gradient causes left to right flow). Also note that this is a constant gradient or slope. This gives the parabola

$$u(y) = \frac{4}{\ell^2} y (\ell - y)$$

which starts at $u = 0$ at $y = 0$, increases to a maximum of $u = 1$ at $y = \ell / 2$, and returns to $u = 0$ at $y = \ell$.
This parabola looks pretty much identical to the sinusoid $\sin(\pi y / \ell)$ previously used (you must zoom in to see a difference). Appearances on the narrow domain of interest notwithstanding, the two are very different functions (look at their Taylor expansions, for example). Using the parabola instead of the sine function results in a much more involved solution.
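To back up the "zoom in to see a difference" claim, here is a small Python sketch (using a normalized $\ell = 1$, an assumption for illustration) that measures the largest gap between the parabola and the sinusoid on the domain:

```python
import math

ell = 1.0  # normalized plate separation, chosen for illustration

def parabola(y):
    # steady state profile scaled so the maximum is 1
    return 4.0 / ell**2 * y * (ell - y)

def sinusoid(y):
    # the IC used in the previous section
    return math.sin(math.pi * y / ell)

# sample both profiles densely and record the worst disagreement
max_gap = max(abs(parabola(k * ell / 400.0) - sinusoid(k * ell / 400.0))
              for k in range(401))
```

The gap peaks at roughly 0.06, about 6% of the maximum velocity, which is hard to see on an unmagnified plot.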
So this derives the steady state flow, which we will use as an improved, realistic IC. Recall that the problem is about a fluid initially in motion that is coming to a stop due to the absence of a driving force. The IBVP (Initial Boundary Value Problem) is now subtly different:

$$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2} \qquad 0 < y < \ell, \quad t > 0$$
$$u(0, t) = 0 \qquad u(\ell, t) = 0$$
$$u(y, 0) = \frac{4}{\ell^2} y (\ell - y)$$
Since the only difference from the problem in the last section is the IC, the variables may be separated and the BCs applied with no difference, giving:

$$u = B e^{-\nu \left( \frac{n \pi}{\ell} \right)^2 t} \sin \frac{n \pi y}{\ell}$$
But now we're stuck (after applying the BCs)! Applying the IC by setting $t = 0$ makes the exponential term go away. However, the IC function then can't be made to match for any choice of $B$ and $n$:

$$B \sin \frac{n \pi y}{\ell} \ne \frac{4}{\ell^2} y (\ell - y)$$
What went wrong? It was the assumption that $u(y, t) = Y(y) T(t)$. The fact that the IC couldn't be fulfilled means that the assumption was wrong. It should be apparent now why the IC was chosen to be $\sin(\pi y / \ell)$ in the previous section.
We can proceed, however, thanks to the linearity of the problem. Another detour is necessary, and it gets long.
Linearity (the superposition principle specifically) says that if $u_1$ is a solution to the BVP (not the whole IBVP, only the BVP, the Boundary Value Problem, the BCs applied) and so is another $u_2$, then a linear combination, $c_1 u_1 + c_2 u_2$, is also a solution.
Let's take a step back and suppose that the IC was

$$u(y, 0) = 3 \sin \frac{\pi y}{\ell} + \sin \frac{2 \pi y}{\ell}$$
This is no longer a realistic flow problem, but it contains the first two terms of what is called a Fourier sine expansion, see these examples of Fourier sine expansions. We are going to generalize this below. Let's now use this expression and equate it to the halfway solution (BCs applied) with the exponential eliminated by setting $t = 0$:

$$B \sin \frac{n \pi y}{\ell} = 3 \sin \frac{\pi y}{\ell} + \sin \frac{2 \pi y}{\ell}$$
And it still can't match. However, observe that the individual terms in the IC can. We simply set the constants to values making both sides match, term by term:

$$B_1 = 3, \; n_1 = 1 \qquad B_2 = 1, \; n_2 = 2$$
Note that subscripts are used to identify each term: they reflect the integer $n$ from the separation constant. Solutions may be obtained for each individual term of the IC, identified with $n = 1$ and $n = 2$:

$$u_1 = 3 e^{-\nu \left( \frac{\pi}{\ell} \right)^2 t} \sin \frac{\pi y}{\ell} \qquad u_2 = e^{-\nu \left( \frac{2 \pi}{\ell} \right)^2 t} \sin \frac{2 \pi y}{\ell}$$
Linearity states that the sum of these two solutions is also a solution to the BVP (no need for new constants):

$$u = u_1 + u_2 = 3 e^{-\nu \left( \frac{\pi}{\ell} \right)^2 t} \sin \frac{\pi y}{\ell} + e^{-\nu \left( \frac{2 \pi}{\ell} \right)^2 t} \sin \frac{2 \pi y}{\ell}$$
So we added the solutions and got a new solution... what is this good for? Try setting $t = 0$:

$$u(y, 0) = 3 \sin \frac{\pi y}{\ell} + \sin \frac{2 \pi y}{\ell}$$
Each component solution satisfies the BVP, and the sum of these just happens to satisfy our surrogate IC. The IBVP with this IC is now solved. It would work the same way for any linear combination of sine functions whose frequencies are integer multiples of $\pi / \ell$. "Linear combination" means a sum of terms, each multiplied by a constant. The sum is assumed to converge and be term by term differentiable.
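A short numeric check (a Python sketch, with $\nu$ and $\ell$ normalized to 1 purely for illustration) confirms that the two term sum vanishes at both plates for all sampled times and reproduces the surrogate IC at $t = 0$:

```python
import math

nu, ell = 1.0, 1.0  # illustrative normalized values

def u(y, t):
    # superposition of the n = 1 and n = 2 component solutions
    u1 = 3.0 * math.exp(-nu * (math.pi / ell)**2 * t) * math.sin(math.pi * y / ell)
    u2 = math.exp(-nu * (2.0 * math.pi / ell)**2 * t) * math.sin(2.0 * math.pi * y / ell)
    return u1 + u2

def ic(y):
    # the surrogate IC: first two terms of a Fourier sine expansion
    return 3.0 * math.sin(math.pi * y / ell) + math.sin(2.0 * math.pi * y / ell)

ic_err = max(abs(u(k / 100.0 * ell, 0.0) - ic(k / 100.0 * ell)) for k in range(101))
bc_err = max(abs(u(0.0, t)) + abs(u(ell, t)) for t in (0.0, 0.1, 1.0))
```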
Let's do what we just did in a more generalized fashion. First, we make our IC a linear combination of sines (with the exponential eliminated by setting $t = 0$), in fact, infinitely many of them. The successive terms have to let the sum converge; it can't stray wildly all over the place:

$$u(y, 0) = \sum_{n = 1}^{\infty} B_n \sin \frac{n \pi y}{\ell}$$
Second, find the $n$ and $B_n$ for each term assuming $t = 0$ (the IC), then plug them back into each term making no assumptions about $t$, leaving $t$ as is:

$$u_n = B_n e^{-\nu \left( \frac{n \pi}{\ell} \right)^2 t} \sin \frac{n \pi y}{\ell}$$
Third, sum up all the terms with their individual $n$ and $B_n$s:

$$u(y, t) = \sum_{n = 1}^{\infty} B_n e^{-\nu \left( \frac{n \pi}{\ell} \right)^2 t} \sin \frac{n \pi y}{\ell}$$
Fourth, plug $t = 0$ into the sum of terms and recover the IC from the first step:

$$u(y, 0) = \sum_{n = 1}^{\infty} B_n \sin \frac{n \pi y}{\ell}$$
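The four step recipe can be expressed as a small Python sketch. `build_solution` is a hypothetical helper name; $\nu$ and $\ell$ are normalized to 1 for illustration. It takes an IC given as a list of $(n, B_n)$ pairs and returns the corresponding solution:

```python
import math

nu, ell = 1.0, 1.0  # illustrative normalized values

def build_solution(ic_terms):
    """ic_terms: list of (n, B) pairs, meaning IC = sum of B*sin(n*pi*y/ell)."""
    def u(y, t):
        # each IC term gets its own decaying exponential (steps 2 and 3)
        return sum(B * math.exp(-nu * (n * math.pi / ell)**2 * t) *
                   math.sin(n * math.pi * y / ell)
                   for n, B in ic_terms)
    return u

# step 4: at t = 0 the sum of terms recovers the IC
u = build_solution([(1, 3.0), (2, 1.0)])
recovered = u(0.25, 0.0)
expected = 3.0 * math.sin(math.pi * 0.25) + math.sin(2.0 * math.pi * 0.25)
```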
So we went full circle on this example, but we found the $n$s and $B_n$s because we were able to equate/satisfy each term with the IC. Now we can solve the problem if the IC is a linear combination of sine functions. But the IC for this problem isn't such a sum, it's just a stupid parabola. Or is it?
In the 19th century, a man named Joseph Fourier took a break from helping Napoleon take over the world to ask an important question while studying this same BVP (concerning heat flow): can a function be expressed as a sum of sinusoids, similar to a Taylor series? The short answer is yes, if a few reasonable conditions apply, as we have already indicated. The long answer follows, and the rest of this section is a longer answer.
A function meeting certain criteria may indeed be expanded into a sum of sines, cosines, or both. In our case, all that is needed to accomplish this expansion is to find the coefficients $B_n$. A little trick involving an integral makes this possible.
The sine function has a very important property called orthogonality. There are many flavors of this, which will be served in the next chapter. Relevant to this problem is the following, for positive integers $m$ and $n$:

$$\int_0^{\ell} \sin \frac{m \pi y}{\ell} \sin \frac{n \pi y}{\ell} \, dy = \begin{cases} 0, & m \ne n \\ \ell / 2, & m = n \end{cases}$$
A quick hint may help. Orthogonality literally means two lines at a right angle to each other. These lines could be vectors, each with its own tuple of coordinates. If two vectors are at a right angle to each other, multiplying their coordinate tuples entry by entry and summing always yields zero (in Euclidean space). The analogous method of multiplying and summing (the integral above) is also used to determine whether two functions are orthogonal. Using this definition, our multiplied and integrated sine functions above are orthogonal most of the time ($m \ne n$), but not always.
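The orthogonality relation is easy to check numerically. This Python sketch (with an arbitrary $\ell = 2$, chosen to show the result is not special to $\ell = 1$) integrates the product of two sines with a simple midpoint rule:

```python
import math

ell = 2.0  # arbitrary interval length for illustration

def sine_product_integral(m, n, steps=20000):
    # midpoint rule for the integral of sin(m*pi*y/ell)*sin(n*pi*y/ell) over 0..ell
    h = ell / steps
    return h * sum(math.sin(m * math.pi * (k + 0.5) * h / ell) *
                   math.sin(n * math.pi * (k + 0.5) * h / ell)
                   for k in range(steps))

# off-diagonal (m != n) integrals vanish; diagonal ones equal ell/2
off_diag = max(abs(sine_product_integral(m, n))
               for m in range(1, 5) for n in range(1, 5) if m != n)
diag_err = max(abs(sine_product_integral(n, n) - ell / 2.0) for n in range(1, 5))
```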
Let's call the IC $f(y)$ to generalize it. We equate the IC with its expansion, meaning the linear combination of sines, and then apply some craftiness: multiply both sides by $\sin(m \pi y / \ell)$ and integrate. Remember that our goal is to reproduce a parabolic function from linearly combined sines:

$$f(y) = \sum_{n = 1}^{\infty} B_n \sin \frac{n \pi y}{\ell}$$
$$\int_0^{\ell} f(y) \sin \frac{m \pi y}{\ell} \, dy = \sum_{n = 1}^{\infty} B_n \int_0^{\ell} \sin \frac{n \pi y}{\ell} \sin \frac{m \pi y}{\ell} \, dy = B_m \frac{\ell}{2}$$
In the last step, all of the terms in the sum became $0$ except for the term where $n = m$, the only case where we get $\ell / 2$ for the otherwise orthogonal sine functions. This isolates and explicitly defines $B_m$, which is the same as $B_n$ since $m = n$:

$$B_n = \frac{2}{\ell} \int_0^{\ell} f(y) \sin \frac{n \pi y}{\ell} \, dy$$

The expansion for $f(y)$ is then:

$$f(y) = \sum_{n = 1}^{\infty} \left( \frac{2}{\ell} \int_0^{\ell} f(\bar{y}) \sin \frac{n \pi \bar{y}}{\ell} \, d\bar{y} \right) \sin \frac{n \pi y}{\ell}$$
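The coefficient formula can be exercised directly. The Python sketch below ($\ell$ normalized to 1, an assumption for illustration) computes the $B_n$ of the parabolic IC by numerical integration and checks that a truncated sum of sines lands back on the parabola:

```python
import math

ell = 1.0  # normalized for illustration

def f(y):
    # the parabolic IC, maximum 1 at y = ell/2
    return 4.0 / ell**2 * y * (ell - y)

def B(n, steps=20000):
    # B_n = (2/ell) * integral of f(y)*sin(n*pi*y/ell) dy, midpoint rule
    h = ell / steps
    return 2.0 / ell * h * sum(f((k + 0.5) * h) *
                               math.sin(n * math.pi * (k + 0.5) * h / ell)
                               for k in range(steps))

coeffs = [B(n) for n in range(1, 6)]  # B_1 .. B_5; even entries come out ~0

def partial_sum(y, terms=40):
    # truncated sine expansion of f
    return sum(B(n) * math.sin(n * math.pi * y / ell) for n in range(1, terms + 1))

expansion_err = abs(partial_sum(0.3) - f(0.3))
```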
Many important details have been left out for later in a devoted chapter; one noteworthy detail is that this expansion reproduces the parabola only on the interval $0 \le y \le \ell$, not, say, from $-\infty$ to $\infty$.
This expansion may finally be combined with the sum of sines solution to the BVP developed previously. Note that the last equation looks very similar to $u(y, 0) = \sum B_n \sin(n \pi y / \ell)$. Following from this:

$$u(y, 0) = f(y) \quad \text{provided that} \quad B_n = \frac{2}{\ell} \int_0^{\ell} f(y) \sin \frac{n \pi y}{\ell} \, dy$$
So the expansion will satisfy the IC given as $u(y, 0) = f(y)$ (surprised?). The full solution for the problem with arbitrary IC is then:

$$u(y, t) = \sum_{n = 1}^{\infty} B_n e^{-\nu \left( \frac{n \pi}{\ell} \right)^2 t} \sin \frac{n \pi y}{\ell} \qquad B_n = \frac{2}{\ell} \int_0^{\ell} f(y) \sin \frac{n \pi y}{\ell} \, dy$$
In this problem specifically, the IC is $f(y) = \frac{4}{\ell^2} y (\ell - y)$, so:

$$B_n = \frac{8}{\ell^3} \int_0^{\ell} \left( \ell y - y^2 \right) \sin \frac{n \pi y}{\ell} \, dy$$
Sines and cosines of $n \pi$ appear from the integration, dependent only on $n$. Since $n$ is an integer, these can be made more aesthetic using $\sin(n \pi) = 0$ and $\cos(n \pi) = (-1)^n$:

$$B_n = \frac{16}{n^3 \pi^3} \left( 1 - (-1)^n \right)$$
Note that for even $n$, $B_n = 0$. Putting everything together finally completes the solution to the IBVP:

$$u(y, t) = \sum_{n = 1, 3, 5, \ldots} \frac{32}{n^3 \pi^3} e^{-\nu \left( \frac{n \pi}{\ell} \right)^2 t} \sin \frac{n \pi y}{\ell}$$
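As a final check, a truncated series with coefficients $B_n = 32 / (n^3 \pi^3)$ for odd $n$ (zero for even $n$) can be evaluated numerically. The Python sketch below ($\nu$ and $\ell$ normalized to 1 for illustration) verifies that it reproduces the parabolic IC at $t = 0$ and decays at later times:

```python
import math

nu, ell = 1.0, 1.0  # illustrative normalized values

def u(y, t, terms=50):
    # truncated odd-n series with B_n = 32/(n^3*pi^3)
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        Bn = 32.0 / (n**3 * math.pi**3)
        total += (Bn * math.exp(-nu * (n * math.pi / ell)**2 * t) *
                  math.sin(n * math.pi * y / ell))
    return total

def ic(y):
    # the parabolic IC
    return 4.0 / ell**2 * y * (ell - y)

ic_err = max(abs(u(j / 100.0 * ell, 0.0) - ic(j / 100.0 * ell)) for j in range(101))
center_now = u(0.5 * ell, 0.0)    # roughly 1, the initial maximum
center_later = u(0.5 * ell, 0.2)  # noticeably decayed
```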
There are many interesting things to observe. To begin with, $u(y, t)$ is not a product of a function of $t$ and a function of $y$. Such a solution was assumed in the beginning, proved to be wrong, but eventually happened to yield a solution anyway thanks to linearity and what is called a Fourier sine expansion.
A careful look at the procedure reveals something that may be disturbing: this lengthy solution is strictly valid only for the given BCs. Thanks to the definition of $B_n$, the solution is generic as far as the IC is concerned (the IC doesn't even need to match the BCs); however, a slight change in either BC would mandate starting over almost from the beginning.
The parabolic IC, which looks very similar to the sine function used in the previous section, is wholly to blame (or thank, once you understand the beauty of a Fourier series!) for the infinite sum. It is interesting to approximate the first several numeric values of the sequence $B_n$:

$$B_1 \approx 1.032 \qquad B_3 \approx 0.0382 \qquad B_5 \approx 0.00826 \qquad B_7 \approx 0.00301$$
Recall that the even terms are all $0$. The first term by far dominates; this makes sense since the first term already looks very, very similar to the parabola. Recall also that $n^2$ appears in an exponential, making the higher terms even smaller for time not too close to $t = 0$.
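Using the closed form $B_n = \frac{16}{n^3 \pi^3} (1 - (-1)^n)$, these values and the dominance of the first term are easy to reproduce (a Python sketch with no free parameters):

```python
import math

def B(n):
    # B_n = 16/(n^3*pi^3) * (1 - (-1)^n): zero for even n
    return 16.0 / (n**3 * math.pi**3) * (1.0 - (-1.0)**n)

odd_values = [B(n) for n in (1, 3, 5, 7)]  # ~1.032, ~0.0382, ~0.00826, ~0.00301
even_values = [B(n) for n in (2, 4, 6)]    # all exactly zero
dominance = B(1) / B(3)                    # the n = 1 term is 27 times larger
```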