
PINNs_Discovery

Similar to the previous post, accurately trained neural networks are also useful when the structure of the governing equations is already known but some of their parameters are not.

Data-driven discovery for inverse problems

For inverse problems, the PINNs paper [1] again gives both continuous-time and discrete-time examples. In the Navier-Stokes example, the x- and y-components of the velocity field $V(x,y,t)$, the pressure field $P(x,y,t)$, and their gradients are represented by neural networks, which are trained to high accuracy on data sampled from experiments. The 2D Navier-Stokes equations are written as follows; the two unknown parameters $\lambda_{1}$ and $\lambda_{2}$ are then identified jointly with the simulated $u(x,y,t)$, $v(x,y,t)$, and $P(x,y,t)$.

\[\begin{align*} f: u_{t}+ \lambda_{1}(uu_{x}+vu_{y})+&P_{x}-\lambda_{2}(u_{xx}+u_{yy}) \\ g: v_{t}+ \lambda_{1}(uv_{x}+vv_{y})+&P_{y}-\lambda_{2}(v_{xx}+v_{yy}) \\ u_{x}+v_{y}&=0\\ \end{align*}\]

where $u(x,y,t)$ and $v(x,y,t)$ denote the x- and y-components of $V(x,y,t)$, and the networks are constrained to satisfy the third (continuity) equation.
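The code below reads the unknown parameters from self.lambda_1 and self.lambda_2; a natural way to set these up is as trainable TensorFlow variables, so that a single optimizer identifies the parameters and trains the network simultaneously. A minimal sketch in the same TensorFlow 1.x style as the code below (the zero initial values are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x graph mode, as in the code below

# The unknown PDE parameters are ordinary trainable variables, so the same
# optimizer that fits the network weights also identifies them from data.
lambda_1 = tf.Variable([0.0], dtype=tf.float32)  # illustrative initial guess
lambda_2 = tf.Variable([0.0], dtype=tf.float32)  # illustrative initial guess
```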

Unlike the loss function in PINNs_Inference, which combines losses on the initial and boundary conditions as well as the PDE residual, the loss function here is written as follows: it simultaneously minimizes the misfit of the velocity fields and the two PDE residuals $f$ and $g$. Although the form differs slightly, the core idea of the loss function is the same: make the simulation as accurate as possible.

\[\begin{align} MSE&=\frac{1}{N}\sum^{N}_{i=1}\left(\lvert u_{pred}^{i} - u^{i} \rvert^{2} + \lvert v_{pred}^{i}-v^{i} \rvert^{2}\right) \\ &+\frac{1}{N}\sum^{N}_{i=1}\left(\lvert f_{pred}^{i}\rvert^{2}+\lvert g_{pred}^{i}\rvert^{2}\right) \end{align}\]
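As a sketch of how this loss maps to code (names follow the net_NS function below; u_tf and v_tf are assumed placeholders holding the $N$ measured velocity samples, and u_pred, v_pred, f_u_pred, f_v_pred are the corresponding outputs of net_NS):

```python
import tensorflow as tf  # TensorFlow 1.x, matching the code below

# Placeholders for the measured velocity components (assumed names).
u_tf = tf.placeholder(tf.float32, shape=[None, 1])
v_tf = tf.placeholder(tf.float32, shape=[None, 1])

# Given u_pred, v_pred, f_u_pred, f_v_pred from net_NS, the loss is the
# data misfit plus the PDE residuals; tf.reduce_mean implements the 1/N
# averaging in the MSE formula above.
loss = tf.reduce_mean(tf.square(u_tf - u_pred)) + \
       tf.reduce_mean(tf.square(v_tf - v_pred)) + \
       tf.reduce_mean(tf.square(f_u_pred)) + \
       tf.reduce_mean(tf.square(f_v_pred))
```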

Notes

A technique used here is to approximate two scalar fields jointly with a single neural network with two outputs, as seen in the net_NS function: the last layer of neural_net has width 2. The two outputs are the latent stream function $\psi(x,y,t)$ and the pressure $P(x,y,t)$; the velocity components are then recovered as $u=\psi_{y}$ and $v=-\psi_{x}$.

```python
def net_NS(self, x, y, t):
    """Predict velocities, pressure, and the PDE residuals f_u, f_v."""
    lambda_1 = self.lambda_1  # unknown parameter, trained with the weights
    lambda_2 = self.lambda_2  # unknown parameter, trained with the weights

    # A single network with two outputs: the latent stream function psi
    # and the pressure p (the last layer of neural_net has width 2).
    psi_and_p = self.neural_net(tf.concat([x, y, t], 1), self.weights, self.biases)
    psi = psi_and_p[:, 0:1]
    p = psi_and_p[:, 1:2]

    # Velocities derived from the stream function automatically satisfy
    # the continuity equation u_x + v_y = 0.
    u = tf.gradients(psi, y)[0]
    v = -tf.gradients(psi, x)[0]

    # First- and second-order derivatives of u via automatic differentiation.
    u_t = tf.gradients(u, t)[0]
    u_x = tf.gradients(u, x)[0]
    u_y = tf.gradients(u, y)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_yy = tf.gradients(u_y, y)[0]

    # Derivatives of v.
    v_t = tf.gradients(v, t)[0]
    v_x = tf.gradients(v, x)[0]
    v_y = tf.gradients(v, y)[0]
    v_xx = tf.gradients(v_x, x)[0]
    v_yy = tf.gradients(v_y, y)[0]

    # Pressure gradients.
    p_x = tf.gradients(p, x)[0]
    p_y = tf.gradients(p, y)[0]

    # Residuals of the two momentum equations (f and g in the text).
    f_u = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
    f_v = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

    return u, v, p, f_u, f_v
```
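A detail worth making explicit: because $u=\psi_{y}$ and $v=-\psi_{x}$, the divergence $u_{x}+v_{y}=\psi_{yx}-\psi_{xy}$ vanishes identically by the symmetry of mixed partial derivatives, so the continuity equation never needs its own loss term. An illustrative check that could be appended inside net_NS:

```python
# (Illustrative continuation inside net_NS.) The divergence of the predicted
# velocity field is zero by construction, up to floating-point error,
# because mixed partials of the smooth network output psi commute.
div = tf.gradients(u, x)[0] + tf.gradients(v, y)[0]  # ~ 0 everywhere
```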

Discrete-time equations are handled similarly to what was done in the PINNs_Inference chapter.

Personal views

  • To identify $\lambda_{1}$ and $\lambda_{2}$, the structure of the Navier-Stokes equations (the PDEs themselves) must be given. Can we learn every component of the equation, including its parameters, purely from data? If we do not know the structure a priori, would the equation discovered by the networks match the prior knowledge we have built up through theory and experiment?

References

  1. Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707. https://doi.org/10.1016/j.jcp.2018.10.045

This post is licensed under CC BY 4.0 by the author.