Example: Coefficient field inversion in an elliptic partial differential equation

We consider the estimation of a coefficient in an elliptic partial differential equation as a first model problem. Depending on the interpretation of the unknowns and the type of measurements, this model problem arises, for instance, in inversion for groundwater flow or heat conductivity. It can also be interpreted as finding a membrane with a certain spatially varying stiffness. Let $\Omega\subset\mathbb{R}^n$, $n\in\{1,2,3\}$ be an open, bounded domain and consider the following problem:

$$ \min_{a} J(a):=\frac{1}{2}\int_\Omega (u-u_d)^2\, dx + \frac{\gamma}{2}\int_\Omega|\nabla a|^2\,dx, $$

where $u$ is the solution of

$$ \begin{split} \quad -\nabla\cdot(a\nabla u) &= f \text{ in }\Omega,\\ u &= 0 \text{ on }\partial\Omega. \end{split} $$

Here $a\in U_{ad}:=\{a\in L^{\infty}(\Omega)\}$ denotes the unknown coefficient field, $u_d$ denotes (possibly noisy) data, $f\in H^{-1}(\Omega)$ is a given force, and $\gamma\ge 0$ is the regularization parameter.

The variational (or weak) form of the state equation:

Find $u\in H_0^1(\Omega)$ such that $(a\nabla u,\nabla v) - (f,v) = 0, \text{ for all } v\in H_0^1(\Omega),$ where $H_0^1(\Omega)$ is the space of functions vanishing on $\partial\Omega$ with square-integrable derivatives. Here, $(\cdot\,,\cdot)$ denotes the $L^2$-inner product, i.e., for scalar functions $u,v$ defined on $\Omega$ we denote $(u,v) := \int_\Omega u(x) v(x) \,dx$.
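To make the weak form concrete, here is a minimal hand-rolled sketch (pure NumPy, not part of the notebook's FEniCS code) that assembles and solves the 1D analogue $-(a u')' = f$, $u(0)=u(1)=0$, with piecewise-linear elements and constant $a$ and $f$:

```python
import numpy as np

def solve_1d_poisson(a_coef=1.0, f_val=1.0, n=64):
    """Piecewise-linear FEM for -(a u')' = f on (0, 1), u(0) = u(1) = 0,
    with constant coefficient a and constant load f."""
    h = 1.0 / n
    m = n - 1                                    # number of interior nodes
    # stiffness matrix: (a u', v') with hat functions gives (a/h) * tridiag(-1, 2, -1)
    K = (a_coef / h) * (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    # load vector: (f, v) integrates each interior hat function to f * h
    b = f_val * h * np.ones(m)
    return np.linalg.solve(K, b)

u = solve_1d_poisson()
# for a = f = 1 the solution is u(x) = x(1 - x)/2, and linear FEM is nodally exact
assert abs(u.max() - 0.125) < 1e-8
```

For constant data the piecewise-linear FEM solution is exact at the nodes, so the computed maximum matches the analytic value $u(1/2) = 1/8$.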

Optimality System:

The Lagrangian functional $\mathscr{L}:L^\infty(\Omega)\times H_0^1(\Omega)\times H_0^1(\Omega)\rightarrow \mathbb{R}$, which we use as a tool to derive the optimality system, is given by

$$ \mathscr{L}(a,u,p):= \frac{1}{2}(u-u_d,u-u_d) + \frac{\gamma}{2}(\nabla a, \nabla a) + (a\nabla u,\nabla p) - (f,p). $$

The Lagrange multiplier theory shows that, at a solution, all variations of the Lagrangian functional with respect to all variables must vanish. These variations of $\mathscr{L}$ with respect to $(p,u,a)$ in directions $(\tilde{p}, \tilde{u}, \tilde{a})$ are given by

$$ \begin{alignat}{2} \mathscr{L}_p(a,u,p)(\tilde{p}) &= (a\nabla u, \nabla \tilde{p}) - (f,\tilde{p}) &&= 0,\\ \mathscr{L}_u(a,u,p)(\tilde{u}) &= (a\nabla p, \nabla \tilde{u}) + (u-u_d,\tilde{u}) && = 0,\\ \mathscr{L}_a(a,u,p)(\tilde{a}) &= \gamma(\nabla a, \nabla \tilde{a}) + (\tilde{a}\nabla u, \nabla p) &&= 0, \end{alignat} $$

where the variations $(\tilde{u}, \tilde{p}, \tilde{a})$ are taken from the same spaces as $(u,p,a)$.

The gradient of the cost functional $J(a)$ is therefore

$$ \mathcal{G}(a)(\tilde a) = \gamma(\nabla a, \nabla \tilde{a}) + (\tilde{a}\nabla u, \nabla p). $$
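The adjoint recipe above (one state solve, one adjoint solve, then assemble the gradient) can be sanity-checked against finite differences on a small discrete analogue. The sketch below is pure NumPy: the hypothetical parametrization $K(a) = K_0 + \mathrm{diag}(a)$ and the plain $\ell^2$ regularization stand in for the PDE operator and the $\gamma(\nabla a,\nabla\tilde a)$ term.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 1e-2

# discrete stand-in for the PDE constraint: K(a) u = f with K(a) = K0 + diag(a)
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
ud = rng.normal(size=n)                    # made-up "data"

def state(a):
    return np.linalg.solve(K0 + np.diag(a), f)

def J(a):
    u = state(a)
    return 0.5 * np.sum((u - ud) ** 2) + 0.5 * gamma * np.sum(a ** 2)

def gradient(a):
    u = state(a)
    K = K0 + np.diag(a)
    p = np.linalg.solve(K.T, -(u - ud))    # adjoint solve
    return gamma * a + p * u               # g_k = gamma*a_k + p_k*u_k

# verify the adjoint gradient against central finite differences
a0 = np.ones(n)
g = gradient(a0)
eps = 1e-6
g_fd = np.array([(J(a0 + eps * e) - J(a0 - eps * e)) / (2 * eps) for e in np.eye(n)])
assert np.allclose(g, g_fd, atol=1e-5)
```

The gradient costs two linear solves regardless of the parameter dimension, which is the point of the adjoint method; the finite-difference check would instead require $2n$ state solves.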

Goals:

By the end of this notebook, you should be able to:

  • solve the forward and adjoint Poisson equations
  • understand the inverse method framework
  • visualise and understand the results
  • modify the problem and code

Mathematical tools used:

  • Finite element method
  • Derivation of the gradient via the adjoint method
  • Armijo line search

List of software used:

  • FEniCS, a parallel finite element library for the discretization of partial differential equations
  • PETSc, for scalable and efficient linear algebra operations and solvers
  • Matplotlib, a Python package used for plotting the results
  • NumPy, a Python package for linear algebra

Set up

Import dependencies

In [1]:
from dolfin import *

import numpy as np
import time
import logging

import matplotlib.pyplot as plt
%matplotlib inline
import nb

start = time.clock()

logging.getLogger('FFC').setLevel(logging.WARNING)
logging.getLogger('UFL').setLevel(logging.WARNING)
set_log_active(False)

np.random.seed(seed=1)

Model set up:

As in the introduction, the first thing we need to do is to set up the numerical model.

In this cell, we define the mesh mesh and the finite element spaces Va and Vu corresponding to the parameter space and the state/adjoint space, respectively. In particular, we use linear finite elements for the parameter space and quadratic elements for the state/adjoint space.

The true parameter atrue is the finite element interpolant of the function $$ a_{\rm true} = \begin{cases} 4 & \text{if } \sqrt{(x-0.5)^2 + (y-0.5)^2} \leq 0.2,\\ 8 & \text{otherwise}. \end{cases} $$

The forcing term f and the boundary conditions u0 for the forward problem are $$ f = 1 \; \forall {\bf x} \in \Omega, \quad u = 0 \; \forall {\bf x} \in \partial \Omega. $$
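As a quick cross-check, the piecewise definition of $a_{\rm true}$ can be evaluated pointwise in plain NumPy (a sketch mirroring the logic of the Expression string used in the cell below, which encodes the inclusion via a boolean indicator):

```python
import numpy as np

def a_true(x, y):
    """a = 4 inside the disc of radius 0.2 centred at (0.5, 0.5), 8 elsewhere."""
    inside = (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.2 ** 2
    return 8.0 - 4.0 * inside

assert a_true(0.5, 0.5) == 4.0    # centre of the disc: low-coefficient inclusion
assert a_true(0.0, 0.0) == 8.0    # background value
```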

In [2]:
# create mesh and define function spaces
nx = 32
ny = 32
mesh = UnitSquareMesh(nx, ny)
Va = FunctionSpace(mesh, 'Lagrange', 1)
Vu = FunctionSpace(mesh, 'Lagrange', 2)

# The true and inverted parameter
atrue = interpolate(Expression('8. - 4.*(pow(x[0] - 0.5,2) + pow(x[1] - 0.5,2) < pow(0.2,2))'), Va)
a = interpolate(Expression("4."),Va)

# define function for state and adjoint
u = Function(Vu)
p = Function(Vu)

# define Trial and Test Functions
u_trial, p_trial, a_trial = TrialFunction(Vu), TrialFunction(Vu), TrialFunction(Va)
u_test, p_test, a_test = TestFunction(Vu), TestFunction(Vu), TestFunction(Va)

# initialize input functions
f = Constant("1.0")
u0 = Constant("0.0")

# plot
plt.figure(figsize=(15,5))
nb.plot(mesh,subplot_loc=121, mytitle="Mesh", show_axis='on')
nb.plot(atrue,subplot_loc=122, mytitle="True parameter field")
plt.show()
In [3]:
# set up dirichlet boundary conditions
def boundary(x,on_boundary):
    return on_boundary

bc_state = DirichletBC(Vu, u0, boundary)
bc_adj = DirichletBC(Vu, Constant(0.), boundary)

The cost functional evaluation:

$$ J(a):=\underbrace{\frac{1}{2}\int_\Omega (u-u_d)^2\, dx}_{\text{misfit}} + \underbrace{\frac{\gamma}{2}\int_\Omega|\nabla a|^2\,dx}_{\text{reg}} $$

In the code below, $W$ and $R$ are symmetric positive definite matrices that stem from finite element discretization of the misfit and regularization component of the cost functional, respectively.

In [4]:
# Regularization parameter
gamma = 1e-10

# weak forms for setting up the misfit and regularization components of the cost
W_equ   = inner(u_trial, u_test) * dx
R_equ   = gamma * inner(nabla_grad(a_trial), nabla_grad(a_test)) * dx

W = assemble(W_equ)
R = assemble(R_equ)

# Define cost function
def cost(u, ud, a, W, R):
    diff = u.vector() - ud.vector()
    reg = 0.5 * a.vector().inner(R*a.vector() ) 
    misfit = 0.5 * diff.inner(W * diff)
    return [reg + misfit, misfit, reg]
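The cost routine is just a pair of quadratic forms. On small hypothetical SPD stand-ins for $W$ and $R$ (not the assembled FEniCS matrices) it can be exercised directly:

```python
import numpy as np

def cost_np(u, ud, a, W, R):
    """NumPy mirror of cost(): J = 0.5*(u-ud)^T W (u-ud) + 0.5*a^T R a."""
    diff = u - ud
    misfit = 0.5 * diff @ W @ diff
    reg = 0.5 * a @ R @ a
    return reg + misfit, misfit, reg

# hypothetical SPD stand-ins for the mass and regularization matrices
W = 2.0 * np.eye(3)
R = 1e-2 * np.eye(3)

total, misfit, reg = cost_np(np.array([1.0, 2.0, 3.0]),
                             np.array([1.0, 1.0, 1.0]),
                             np.array([1.0, 0.0, 0.0]), W, R)
assert np.isclose(misfit, 5.0)    # 0.5 * 2 * (0 + 1 + 4)
assert np.isclose(reg, 5e-3)      # 0.5 * 1e-2 * 1
assert np.isclose(total, misfit + reg)
```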

Set up synthetic observations:

To generate the synthetic observations, we first solve the PDE for the state variable utrue corresponding to the true parameter atrue. More specifically, we solve the variational problem

Find $u\in H_0^1(\Omega)$ such that $\underbrace{(a_{\rm true} \nabla u,\nabla v)}_{\; := \; a_{\rm goal}} - \underbrace{(f,v)}_{\; := \;L_{\rm goal}} = 0, \text{ for all } v\in H_0^1(\Omega)$.

Then we perturb the true state variable and write the observation ud as $$ u_{d} = u_{\rm true} + \eta, \quad {\rm where} \; \eta \sim \mathcal{N}(0, \sigma^2).$$ Here the standard deviation $\sigma$ is proportional to noise_level.
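Stripped of FEniCS, the perturbation step amounts to scaling standard normal noise by $\sigma$ = noise_level times the maximum of the state (the code below uses the $\ell^\infty$ norm of the state vector for this scale). A NumPy sketch with a made-up state vector:

```python
import numpy as np

rng = np.random.default_rng(1)

noise_level = 0.01
u_true = np.sin(np.pi * np.linspace(0.0, 1.0, 101))   # made-up state vector

# sigma proportional to the noise level via the max (l-infinity norm) of the state
sigma = noise_level * np.max(np.abs(u_true))
u_d = u_true + sigma * rng.standard_normal(u_true.shape)

assert u_d.shape == u_true.shape
assert np.max(np.abs(u_d - u_true)) < 5.0 * sigma     # noise stays at the expected scale
```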

In [5]:
# noise level
noise_level = 0.01

# weak form for setting up the synthetic observations
a_goal = inner( atrue * nabla_grad(u_trial), nabla_grad(u_test)) * dx
L_goal = f * u_test * dx

# solve the forward/state problem to generate synthetic observations
goal_A, goal_b = assemble_system(a_goal, L_goal, bc_state)

utrue = Function(Vu)
solve(goal_A, utrue.vector(), goal_b)

ud = Function(Vu)
ud.assign(utrue)

# perturb state solution and create synthetic measurements ud
# ud = u + ||u||/SNR * random.normal
MAX = ud.vector().norm("linf")
noise = Vector()
goal_A.init_vector(noise,1)
noise.set_local( noise_level * MAX * np.random.normal(0, 1, len(ud.vector().array())) )
bc_adj.apply(noise)

ud.vector().axpy(1., noise)

# plot
nb.multi1_plot([utrue, ud], ["State solution with atrue", "Synthetic observations"])
plt.show()

Setting up the state equations, right hand side for the adjoint and the necessary matrices:

$$ \begin{alignat}{2} \mathscr{L}_p(a,u,p)(\tilde{p}) &= (a\nabla u, \nabla \tilde{p}) - (f,\tilde{p}) &&= 0,\\ \mathscr{L}_u(a,u,p)(\tilde{u}) &= (a\nabla p, \nabla \tilde{u}) + (u-u_d,\tilde{u}) && = 0,\\ \mathscr{L}_a(a,u,p)(\tilde{a}) &= \gamma(\nabla a, \nabla \tilde{a}) + (\tilde{a}\nabla u, \nabla p) &&= 0, \end{alignat} $$
In [6]:
# weak form for setting up the state equation
a_state = inner( a * nabla_grad(u_trial), nabla_grad(u_test)) * dx
L_state = f * u_test * dx

# weak form for setting up the adjoint equations
a_adj = inner( a * nabla_grad(p_trial), nabla_grad(p_test) ) * dx
L_adjoint = -inner(u - ud, p_test) * dx


# weak form for setting up matrices
CT_equ   = inner(a_test * nabla_grad(u), nabla_grad(p_trial)) * dx
M_equ   = inner(a_trial, a_test) * dx


# assemble matrices M
M = assemble(M_equ)

Initial guess

We solve the state equation and compute the cost functional for the initial guess of the parameter a_ini.

In [7]:
# solve state equation
A, state_b = assemble_system (a_state, L_state, bc_state)
solve (A, u.vector(), state_b)

# evaluate cost
[cost_old, misfit_old, reg_old] = cost(u, ud, a, W, R)

# plot
plt.figure(figsize=(15,5))
nb.plot(a,subplot_loc=121, mytitle="a_ini", vmin=atrue.vector().min(), vmax=atrue.vector().max())
nb.plot(u,subplot_loc=122, mytitle="u(a_ini)")
plt.show()

The steepest descent with Armijo line search:

We solve the constrained optimization problem using the steepest descent method with Armijo line search.

The stopping criterion is based on a relative reduction of the norm of the gradient (i.e. $\frac{\|g_{n}\|}{\|g_{0}\|} \leq \tau$).

The gradient is computed by solving the state and adjoint equations for the current parameter $a$, and then substituting the current state $u$, parameter $a$ and adjoint $p$ variables in the weak form expression of the gradient: $$ (g, \tilde{a}) = \gamma(\nabla a, \nabla \tilde{a}) +(\tilde{a}\nabla u, \nabla p).$$
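Note that the assembled vector $MG$ represents the gradient as a linear functional; the gradient function $g$ used for the descent step is its Riesz representative with respect to the parameter mass matrix $M$, obtained by solving $Mg = MG$ (the solve(M, g, MG) step in the loop). A toy NumPy illustration with a hypothetical 2-by-2 mass matrix:

```python
import numpy as np

# hypothetical SPD stand-in for the parameter mass matrix
M = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
MG = np.array([0.5, 0.25])        # assembled gradient functional (made-up values)

g = np.linalg.solve(M, MG)        # Riesz representative: M g = MG

# the pairing g.MG equals the squared M-weighted norm of g,
# which is the quantity used as ||grad||^2 in the descent loop
assert np.isclose(g @ MG, g @ M @ g)
```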

The Armijo line search uses backtracking to find $\alpha$ such that a sufficient reduction in the cost functional is achieved. More specifically, we use backtracking to find $\alpha$ such that: $$J( a - \alpha g ) \leq J(a) - \alpha c_{\rm armijo} (g,g). $$
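Stripped of the PDE solves, the descent loop has the following shape. This is a generic sketch on a 2D strictly convex quadratic (the quadratic, starting point, and tolerances are made up for illustration), using the same Armijo condition and the relative gradient-norm stopping test:

```python
import numpy as np

def steepest_descent_armijo(J, grad, x0, c_armijo=1e-5, tol=1e-6, maxiter=200):
    """Steepest descent; accept alpha once J(x - alpha*g) <= J(x) - alpha*c_armijo*||g||^2."""
    x = np.asarray(x0, dtype=float)
    g0_norm = np.linalg.norm(grad(x))
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) <= tol * g0_norm:     # relative gradient-norm test
            break
        Jx, g2, alpha = J(x), g @ g, 1.0
        for _ in range(20):                        # backtracking
            if J(x - alpha * g) <= Jx - alpha * c_armijo * g2:
                break
            alpha *= 0.5
        x = x - alpha * g
    return x

# strictly convex quadratic J(x) = 0.5 x^T A x - b^T x with minimizer [1, 2]
A = np.array([[3.0, 0.5], [0.5, 3.0]])
b = A @ np.array([1.0, 2.0])
x_opt = steepest_descent_armijo(lambda x: 0.5 * x @ A @ x - b @ x,
                                lambda x: A @ x - b, np.zeros(2))
assert np.allclose(x_opt, [1.0, 2.0], atol=1e-3)
```

In the notebook's loop the only structural differences are that each J evaluation requires a forward PDE solve, each gradient requires an adjoint solve plus the mass-matrix solve, and the initial step length is taken much larger than 1.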

In [8]:
# define parameters for the optimization
tol = 1e-4
maxiter = 1000
plot_any = 30
c_armijo = 1e-5

# initialize iter counters
iter = 1
converged = False

# initializations
g = Vector()
R.init_vector(g,0)

a_prev = Function(Va)

print "Nit  cost          misfit        reg         ||grad||       alpha  N backtrack"

while iter <  maxiter and not converged:

    # assemble matrix C
    CT =  assemble(CT_equ)

    # solve the adjoint problem
    adj_A, adjoint_RHS = assemble_system(a_adj, L_adjoint, bc_adj)
    solve(adj_A, p.vector(), adjoint_RHS)

    # evaluate the gradient
    MG = CT*p.vector() + R * a.vector()
    solve(M, g, MG)

    # calculate the norm of the gradient
    grad_norm2 = g.inner(MG)
    gradnorm = sqrt(grad_norm2)
    
    if iter == 1:
        gradnorm0 = gradnorm

    # linesearch
    it_backtrack = 0
    a_prev.assign(a)
    alpha = 8.e5
    backtrack_converged = False
    for it_backtrack in range(20):
        
        a.vector().axpy(-alpha, g )

        # solve the state/forward problem
        state_A, state_b = assemble_system(a_state, L_state, bc_state)
        solve(state_A, u.vector(), state_b)

        # evaluate cost
        [cost_new, misfit_new, reg_new] = cost(u, ud, a, W, R)

        # check if Armijo conditions are satisfied
        if cost_new < cost_old - alpha * c_armijo * grad_norm2:
            cost_old = cost_new
            backtrack_converged = True
            break
        else:
            alpha *= 0.5
            a.assign(a_prev)  # reset a
            
    if not backtrack_converged:
        print "Backtracking failed. A sufficient descent direction was not found"
        converged = False
        break

    sp = ""
    print "%3d %1s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %3d" % \
        (iter, sp, cost_new, sp, misfit_new, sp, reg_new, sp, \
        gradnorm, sp, alpha, sp, it_backtrack)

    if (iter % plot_any)==0 :
        nb.multi1_plot([a,u,p], ["a","u","p"], same_colorbar=False)
        plt.show()
    
    # check for convergence
    if gradnorm < tol*gradnorm0 and iter > 1:
        converged = True
        print "Steepest descent converged in ",iter,"  iterations"
        
    iter += 1
    
if not converged:
    print "Steepest descent did not converge in ", maxiter, " iterations"

print "Time elapsed: ", time.clock()-start
Nit  cost          misfit        reg         ||grad||       alpha  N backtrack
  1   4.12912e-06   3.92307e-06   2.06047e-07   1.52436e-05   8.00000e+05     0
  2   3.91185e-06   3.71829e-06   1.93568e-07   5.19644e-07   8.00000e+05     0
  3   3.68081e-06   3.49382e-06   1.86995e-07   5.31748e-07   8.00000e+05     0
  4   3.42693e-06   3.24444e-06   1.82488e-07   5.55925e-07   8.00000e+05     0
  5   3.14133e-06   2.96148e-06   1.79850e-07   5.87562e-07   8.00000e+05     0
  6   2.80841e-06   2.62878e-06   1.79630e-07   6.30743e-07   8.00000e+05     0
  7   2.40676e-06   2.22305e-06   1.83714e-07   6.94667e-07   8.00000e+05     0
  8   2.34319e-06   2.15417e-06   1.89016e-07   7.08385e-07   4.00000e+05     1
  9   2.31098e-06   2.11357e-06   1.97411e-07   3.75179e-06   1.00000e+05     3
 10   2.13356e-06   1.93649e-06   1.97071e-07   7.18427e-07   4.00000e+05     1
 11   2.11625e-06   1.92502e-06   1.91230e-07   4.93037e-07   8.00000e+05     0
 12   2.05497e-06   1.78969e-06   2.65277e-07   1.78443e-04   3.12500e+03     8
 13   1.92720e-06   1.71949e-06   2.07711e-07   8.13006e-07   8.00000e+05     0
 14   1.70543e-06   1.50941e-06   1.96016e-07   6.80580e-07   8.00000e+05     0
 15   1.66465e-06   1.48760e-06   1.77047e-07   1.07661e-06   4.00000e+05     1
 16   1.47612e-06   1.30469e-06   1.71430e-07   5.77832e-07   8.00000e+05     0
 17   1.36002e-06   1.19914e-06   1.60884e-07   4.45571e-07   8.00000e+05     0
 18   1.24872e-06   1.08875e-06   1.59972e-07   4.21676e-07   8.00000e+05     0
 19   1.15024e-06   9.99942e-07   1.50295e-07   3.94140e-07   8.00000e+05     0
 20   1.05586e-06   9.07913e-07   1.47947e-07   3.71410e-07   8.00000e+05     0
 21   9.70218e-07   8.29733e-07   1.40485e-07   3.44479e-07   8.00000e+05     0
 22   8.90357e-07   7.54311e-07   1.36046e-07   3.31161e-07   8.00000e+05     0
 23   8.16142e-07   6.85712e-07   1.30430e-07   3.19565e-07   8.00000e+05     0
 24   7.47075e-07   6.21787e-07   1.25289e-07   3.10626e-07   8.00000e+05     0
 25   6.82912e-07   5.62863e-07   1.20049e-07   3.02190e-07   8.00000e+05     0
 26   6.23367e-07   5.08512e-07   1.14855e-07   2.94583e-07   8.00000e+05     0
 27   5.68215e-07   4.58481e-07   1.09734e-07   2.87751e-07   8.00000e+05     0
 28   5.17271e-07   4.12554e-07   1.04718e-07   2.81753e-07   8.00000e+05     0
 29   4.70373e-07   3.70538e-07   9.98351e-08   2.76704e-07   8.00000e+05     0
 30   4.27359e-07   3.32249e-07   9.51099e-08   2.72797e-07   8.00000e+05     0
 31   3.88066e-07   2.97485e-07   9.05809e-08   2.70270e-07   8.00000e+05     0
 32   3.52335e-07   2.66043e-07   8.62920e-08   2.69387e-07   8.00000e+05     0
 33   3.20014e-07   2.37711e-07   8.23029e-08   2.70432e-07   8.00000e+05     0
 34   2.90956e-07   2.12280e-07   7.86755e-08   2.73711e-07   8.00000e+05     0
 35   2.65025e-07   1.89536e-07   7.54887e-08   2.79549e-07   8.00000e+05     0
 36   2.42096e-07   1.69271e-07   7.28250e-08   2.88285e-07   8.00000e+05     0
 37   2.22062e-07   1.51267e-07   7.07950e-08   3.00272e-07   8.00000e+05     0
 38   2.04845e-07   1.35325e-07   6.95204e-08   3.15880e-07   8.00000e+05     0
 39   1.90406e-07   1.21230e-07   6.91760e-08   3.35524e-07   8.00000e+05     0
 40   1.78769e-07   1.08800e-07   6.99697e-08   3.59695e-07   8.00000e+05     0
 41   1.70056e-07   9.78351e-08   7.22214e-08   3.89027e-07   8.00000e+05     0
 42   1.64532e-07   8.81857e-08   7.63467e-08   4.24380e-07   8.00000e+05     0
 43   1.62689e-07   7.96695e-08   8.30192e-08   4.66956e-07   8.00000e+05     0
 44   1.09608e-07   7.57331e-08   3.38748e-08   5.18458e-07   4.00000e+05     1
 45   9.99636e-08   6.86609e-08   3.13027e-08   1.18019e-07   8.00000e+05     0
 46   9.14369e-08   6.24236e-08   2.90134e-08   1.15172e-07   8.00000e+05     0
 47   8.39581e-08   5.69038e-08   2.70543e-08   1.14751e-07   8.00000e+05     0
 48   7.75046e-08   5.20159e-08   2.54887e-08   1.17878e-07   8.00000e+05     0
 49   7.21219e-08   4.76686e-08   2.44533e-08   1.26083e-07   8.00000e+05     0
 50   6.79608e-08   4.38072e-08   2.41536e-08   1.41234e-07   8.00000e+05     0
 51   6.53431e-08   4.03538e-08   2.49893e-08   1.65438e-07   8.00000e+05     0
 52   6.48721e-08   3.72823e-08   2.75898e-08   2.01039e-07   8.00000e+05     0
 53   5.29760e-08   3.58666e-08   1.71095e-08   2.50847e-07   4.00000e+05     1
 54   4.93072e-08   3.32488e-08   1.60584e-08   7.86109e-08   8.00000e+05     0
 55   4.61339e-08   3.08946e-08   1.52393e-08   8.10389e-08   8.00000e+05     0
 56   4.34895e-08   2.87804e-08   1.47090e-08   8.73400e-08   8.00000e+05     0
 57   4.14649e-08   2.68696e-08   1.45952e-08   9.89440e-08   8.00000e+05     0
 58   4.02444e-08   2.51518e-08   1.50926e-08   1.17402e-07   8.00000e+05     0
 59   4.01684e-08   2.35901e-08   1.65782e-08   1.44415e-07   8.00000e+05     0
 60   3.39598e-08   2.28733e-08   1.10865e-08   1.82015e-07   4.00000e+05     1