#%config InlineBackend.figure_format = 'svg'
from pylab import *
from matplotlib import animation
from IPython.display import HTML
Consider a partition of
The points
i.e.,
Note that
:::{prf:definition}
We say that
In general, no restrictions on the continuity of
(sec:pwlininterp)=
Let us now demand a continuous approximation. We are given the function values
Clearly, such a function is piecewise linear and continuous in the whole domain. Here is the approximation with $N = 10$.
fun = lambda x: sin(4*pi*x)
xmin, xmax = 0.0, 1.0
N = 10
x = linspace(xmin,xmax,N+1)
f = fun(x)
xe = linspace(xmin,xmax,100) # for plotting only
fe = fun(xe)
fig, ax = subplots()
line1, = ax.plot(xe,fe,'-',linewidth=2,label='Function')
line2, = ax.plot(x,f,'or--',linewidth=2,label='Interpolant')
ax.set_xticks(x), ax.grid()
ax.set_xlabel('x'), ax.legend()
ax.set_title('Approximation with N = ' + str(N));
The vertical grid lines show the boundaries of the sub-intervals.
From the error estimate of polynomial interpolation,
we get the local interpolation error estimate
from which we get the global error
Clearly, we have convergence as
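As a quick numerical check (a sketch that uses `numpy.interp` to evaluate the piecewise linear interpolant), halving the spacing $h$ should reduce the maximum error by roughly a factor of four, consistent with the $O(h^2)$ estimate:

```python
import numpy as np

# Sketch: measure the max error of piecewise linear interpolation of
# f(x) = sin(4*pi*x) on [0,1] as the grid is refined.
fun = lambda x: np.sin(4 * np.pi * x)
xe = np.linspace(0.0, 1.0, 5000)       # fine grid to sample the error
errors = []
for N in [10, 20, 40, 80]:
    x = np.linspace(0.0, 1.0, N + 1)
    p = np.interp(xe, x, fun(x))       # piecewise linear interpolant
    errors.append(np.abs(p - fun(xe)).max())
# Convergence rate: should approach 2 as h -> 0
for e1, e2 in zip(errors, errors[1:]):
    print('rate =', np.log2(e1 / e2))
```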
:::{prf:remark}
- The polynomial degree is fixed and convergence is achieved because the spacing between the grid points becomes smaller. Note that we only require a condition on the second derivative of the function $f$, while in the global interpolation problem we required conditions on higher derivatives, which can be difficult to satisfy in some situations.
- The points $\{ x_i \}$ need not be uniformly distributed. We can exploit this freedom to adaptively choose the points to reduce the error in the approximation.
:::
We have written the polynomial in a piecewise manner, but we can also write it in a global form by defining some basis functions. Since
and similarly
Let us define
These are triangular hat functions with compact support. Then the piecewise linear approximation is given by
For
:tags: hide-input
def basis1(x,i,h):
    # Hat function phi_i centered at x_i = i*h on a uniform grid
    xi = i*h; xl,xr = xi-h,xi+h
    y = empty_like(x)
    for j in range(len(x)):
        if x[j]>xr or x[j] < xl:
            y[j] = 0                # outside the support [xl,xr]
        elif x[j]>xi:
            y[j] = (xr - x[j])/h    # decreasing on [xi,xr]
        else:
            y[j] = (x[j] - xl)/h    # increasing on [xl,xi]
    return y
N = 6 # Number of elements
xg = linspace(0.0, 1.0, N+1)
h = 1.0/N
# Finer grid for plotting purpose
x = linspace(0.0,1.0,1000)
fig = figure()
ax = fig.add_subplot(111)
ax.set_xlabel('$x$'); ax.set_ylabel('$\\phi$')
line1, = ax.plot(xg,0*xg,'o-')
line2, = ax.plot(x,0*x,'r-',linewidth=2)
for i in range(N+1):
node = '$x_'+str(i)+'$'
ax.text(xg[i],-0.02,node,ha='center',va='top')
ax.axis([-0.1,1.1,-0.1,1.1])
ax.set_xticks(xg)
grid(True), close()
def fplot(i):
y = basis1(x,i,h)
line2.set_ydata(y)
t = '$\\phi_'+str(i)+'$'
ax.set_title(t)
return line2,
anim = animation.FuncAnimation(fig, fplot, frames=N+1, repeat=False)
HTML(anim.to_jshtml())
Note each
similar to the Lagrange polynomials.
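As a sketch, we can check that the expansion in hat functions reproduces the piecewise linear interpolant; the `hat` helper below is an illustrative closed-form version of the triangular basis, and the result is compared against `numpy.interp`:

```python
import numpy as np

def hat(x, i, h):
    # Triangular hat function centered at x_i = i*h, support [x_i-h, x_i+h]
    return np.maximum(0.0, 1.0 - np.abs(x - i * h) / h)

fun = lambda x: np.sin(4 * np.pi * x)
N = 10
h = 1.0 / N
xg = np.linspace(0.0, 1.0, N + 1)
x = np.linspace(0.0, 1.0, 500)
# Global form: sum of nodal values times the hat basis functions
u = sum(fun(i * h) * hat(x, i, h) for i in range(N + 1))
err = np.abs(u - np.interp(x, xg, fun(xg))).max()
print('max difference from np.interp =', err)
```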
:::{warning} In practice, we should never use the global form, since most terms in the sum are zero due to the compact support of the basis functions.
If
:::
How do we choose the number of elements
We have to first estimate the error in an interval
by some finite difference formula
If in some interval, it happens that
divide
The second derivative in each interval can be estimated by fitting a cubic with `polyfit` and finding the maximum value of its second derivative at the end points of the interval. `P = polyfit(x,f,3)` returns a polynomial of the form $p(x) = P_0 x^3 + P_1 x^2 + P_2 x + P_3$
whose second derivative is $p''(x) = 6 P_0 x + 2 P_1$.
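As a sanity check of this formula (an illustrative sketch), fitting a cubic exactly through four of its own values should recover the second derivative $6 P_0 x + 2 P_1$:

```python
import numpy as np

# Sketch: verify that polyfit's leading coefficients give the second
# derivative of a cubic via 6*P[0]*x + 2*P[1].
f = lambda x: x**3 - 2 * x**2 + x
x = np.array([0.0, 0.3, 0.7, 1.0])
P = np.polyfit(x, f(x), 3)          # 4 points, degree 3: exact fit
d2 = 6 * P[0] * x + 2 * P[1]        # estimated second derivative at nodes
exact = 6 * x - 4                   # f''(x) for this cubic
err = np.abs(d2 - exact).max()
print('max error in f'' estimate =', err)
```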
We can start with a uniform grid of
:::{prf:algorithm}
Given:
Return:
1. Make a uniform grid of $N$ intervals.
2. In each interval $[x_{i-1},x_i]$, compute the error estimate $e_i = \frac{1}{8} h_i^2 H_i$.
3. Find the interval with maximum error, $e_k = \max_i e_i$.
4. If $e_k > \epsilon$, divide $[x_{k-1},x_k]$ into two sub-intervals and go to step 2.
5. Return $\{x_i, f_i\}$.
:::
+++
:::{prf:example}
Let us try to approximate
with piecewise linear approximation.
xmin, xmax = 0.0, 1.0
fun = lambda x: exp(-100*(x-0.5)**2) * sin(4*pi*x)
Here is the initial approximation.
N = 10 # number of initial intervals
x = linspace(xmin,xmax,N+1)
f = fun(x)
ne = 100
xe = linspace(xmin,xmax,ne)
fe = fun(xe)
fig, ax = subplots()
line1, = ax.plot(xe,fe,'-',linewidth=2)
line2, = ax.plot(x,f,'or--',linewidth=2)
ax.set_title('Initial approximation, N = ' + str(N));
The next function performs uniform and adaptive refinement.
def adapt(x,f,nadapt=100,err=1.0e-2,mode=1):
N = len(x) # Number of intervals = N-1
for n in range(nadapt): # Number of adaptive refinement steps
h = x[1:] - x[0:-1]
H = zeros(N-1)
# First element
P = polyfit(x[0:4], f[0:4], 3)
H[0] = max(abs(6*P[0]*x[0:2] + 2*P[1]))
for j in range(1,N-2): # Interior elements
P = polyfit(x[j-1:j+3], f[j-1:j+3], 3)
H[j] = max(abs(6*P[0]*x[j:j+2] + 2*P[1]))
# Last element
P = polyfit(x[N-4:N], f[N-4:N], 3)
H[-1] = max(abs(6*P[0]*x[N-2:N] + 2*P[1]))
elem_err = (1.0/8.0) * h**2 * abs(H)
i = argmax(elem_err); current_err = elem_err[i]
if current_err < err:
print('Satisfied error tolerance, N = ' + str(N))
return x,f
if mode == 0:
# N-1 intervals doubled to 2*(N-1), then there are
# 2*(N-1)+1 = 2*N-1 points
x = linspace(xmin,xmax,2*N-1)
f = fun(x)
else:
x = concatenate([x[0:i+1], [0.5*(x[i]+x[i+1])], x[i+1:]])
f = concatenate([f[0:i+1], [fun(x[i+1])], f[i+1:]])
N = len(x)
return x,f
Let us first try uniform refinement, i.e., we sub-divide every interval.
adapt(x, f, mode=0);
and adaptive refinement.
adapt(x, f, mode=1);
Here is an animation of adaptive refinement.
def fplot(frame_number,x,f):
x1, f1 = adapt(x,f,nadapt=frame_number,mode=1)
line2.set_data(x1,f1)
ax.set_title('N = '+str(len(x1)))
return line2,
anim = animation.FuncAnimation(fig, fplot, frames=22, fargs=[x,f], repeat=False)
HTML(anim.to_jshtml())
The adaptive algorithm puts new points in regions of large gradient, where the resolution is not sufficient, and does not add anything in other regions. :::
+++
:::{exercise}
Modify the program by changing the error estimate as follows. Estimate second derivative at
+++
:::{exercise}
In each step, we divided only one interval in which error
+++
Consider a grid
and define the mid-points of the intervals
We are given the information
\begin{align} y_i &= f(x_i), \quad & i=0,1,\ldots,N \\ y_\imh &= f(x_\imh), \quad & i=1,2,\ldots,N \end{align}
In the interval
and we can determine a quadratic polynomial that interpolates these values
$$ \label{eq:p2locint} p(x) = y_{i-1} \phi_0\left(\frac{x-x_{i-1}}{h_i}\right) + y_\imh \phi_1\left(\frac{x- x_{i-1}}{h_i}\right) + y_i \phi_2\left(\frac{x-x_{i-1}}{h_i}\right) $$
where
Note that the quadratic interpolation is continuous but may not be differentiable.
Here is the piecewise quadratic approximation of
:tags: hide-input
fun = lambda x: sin(4*pi*x)
xmin, xmax = 0.0, 1.0
N = 7 # no of elements
m = 2*N + 1 # no of data
x = linspace(xmin,xmax,m)
f = fun(x)
xe = linspace(xmin,xmax,500)
fe = fun(xe)
plot(x,f,'o',label='Data')
plot(x,0*f,'o')
plot(xe,fe,'k--',lw=1,label='Function')
# Local coordinates
xi = linspace(0,1,10)
phi0 = lambda x: 2*(x - 0.5)*(x - 1.0)
phi1 = lambda x: 4*x*(1.0 - x)
phi2 = lambda x: 2*x*(x - 0.5)
j = 0
for i in range(N):
z = x[j:j+3]
h = z[-1] - z[0]
y = f[j:j+3]
fu = y[0]*phi0(xi) + y[1]*phi1(xi) + y[2]*phi2(xi)
plot(xi*h + z[0], fu, lw=2)
j += 2
title('Piecewise quadratic with '+str(N)+' elements')
legend(), xticks(x[::2]), grid(True), xlabel('x');
The vertical grid lines show the elements. The quadratic interpolant is shown in a different color inside each element.
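As a small check (an illustrative sketch), the quadratic shape functions `phi0`, `phi1`, `phi2` used above interpolate at the reference nodes $0, \frac{1}{2}, 1$ and form a partition of unity:

```python
import numpy as np

# Sketch: properties of the quadratic shape functions on the
# reference interval [0,1].
phi0 = lambda x: 2 * (x - 0.5) * (x - 1.0)
phi1 = lambda x: 4 * x * (1.0 - x)
phi2 = lambda x: 2 * x * (x - 0.5)
xi = np.linspace(0.0, 1.0, 101)
# Partition of unity: the three functions sum to one everywhere
dev = np.abs(phi0(xi) + phi1(xi) + phi2(xi) - 1.0).max()
print('max |phi0+phi1+phi2 - 1| =', dev)
# Kronecker delta property at the nodes 0, 1/2, 1
nodes = np.array([0.0, 0.5, 1.0])
print(phi0(nodes), phi1(nodes), phi2(nodes))
```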
We can write the approximation in the form
where
but this form should not be used in computations. For any
:::{exercise}
What are the basis functions
Consider a grid
In each interval
Then the degree
where
with the property
Since we choose the nodes in each interval such that
:::{prf:remark}
- Note that we have mapped the interval $[x_{i-1},x_i]$ to the reference interval $[0,1]$ and then constructed the interpolating polynomial.
- The points $\xi_l$ can be uniformly distributed in $[0,1]$, but this may lead to Runge phenomenon at high degrees. We can also choose them to be Chebyshev or Gauss points.
:::
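For instance, Chebyshev (Gauss–Lobatto) points on $[-1,1]$ can be mapped to the reference interval $[0,1]$; a minimal sketch, with an illustrative choice of degree:

```python
import numpy as np

# Sketch: Chebyshev-Lobatto points on [-1,1] mapped to the reference
# interval [0,1]; they cluster near the ends, which mitigates the
# Runge phenomenon at high degrees.
d = 8                                        # polynomial degree (illustrative)
t = np.cos(np.pi * np.arange(d + 1) / d)     # Chebyshev points on [-1,1]
xi = 0.5 * (1.0 - t)                         # mapped to [0,1], increasing
print(xi)
```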
+++
:::{prf:remark}
- In each element, the approximation can be constructed using only the data located inside that element. This lends a local character to the approximation, which is useful in Finite Element Methods.
- If the function is not smooth in some region, the approximation will still be good away from these non-smooth regions. And as we have seen, we can improve the approximation by local grid adaptation.
:::
+++
:::{exercise}
Write a function which plots the piecewise degree approximation. Use `scipy.interpolate.barycentric_interpolate` to evaluate the polynomial inside each element.
:::
+++
The estimates we derived so far required the function to be sufficiently differentiable. Here we derive estimates under weaker smoothness assumptions.
:::{prf:theorem}
Let
$$ \norm{f' - p'}_{L^2(0,1)} \le C h \norm{f''}_{L^2(0,1)} $$ :::
:::{prf:proof}
Since
(1) If
Squaring both sides and using
Integrating on
Adding such an inequality from all the intervals
we get the first error estimate.
(2) The error in derivative is
Squaring both sides
Integrating with respect to
Adding such inequalities from each interval, we obtain the second error estimate. :::
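As a rough numerical check of both estimates (a sketch; a discrete root-mean-square over a fine uniform grid stands in for the $L^2$ norm), the function error should decrease like $h^2$ and the derivative error like $h$:

```python
import numpy as np

# Sketch: L2 errors of piecewise linear interpolation of f(x) = sin(4*pi*x)
# and of its derivative, on a sequence of refined grids.
fun  = lambda x: np.sin(4 * np.pi * x)
dfun = lambda x: 4 * np.pi * np.cos(4 * np.pi * x)
xe = np.linspace(0.0, 1.0, 20001)
e0, e1 = [], []
for N in [10, 20, 40]:
    x = np.linspace(0.0, 1.0, N + 1)
    p = np.interp(xe, x, fun(x))
    # p' is piecewise constant: the slope of the interval containing each xe[j]
    slopes = np.diff(fun(x)) / np.diff(x)
    idx = np.clip(np.searchsorted(x, xe) - 1, 0, N - 1)
    # Discrete RMS approximations of the L2 norms on [0,1]
    e0.append(np.sqrt(np.mean((p - fun(xe))**2)))
    e1.append(np.sqrt(np.mean((slopes[idx] - dfun(xe))**2)))
print('||f-p|| :', e0)   # drops roughly 4x per halving of h
print("||f'-p'||:", e1)  # drops roughly 2x per halving of h
```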