Graduate Study Notes
Hasegawa (1969) Equation 1 Explained, Part 4 (Final)
$$\rho\partial_{t}^{2}u_{i}=\sum_{j}{\partial_{j}}\left(\lambda \delta_{ij}\sum_{k}{\varepsilon_{kk}}+\mu(\partial_{i}u_{j}+\partial_{j}u_{i})\right)$$
Since the equation is complicated, let's expand it and then split the right-hand side into two parts.
Here \(\lambda\) and \(\mu\) are constants, so they can be moved freely in front of the derivatives.
$$\lambda \sum_{j}{\partial_{j}\delta_{ij}}\sum_{k}{\varepsilon_{kk}}+\mu\sum_{j}{\partial_{j}(\partial_{i}u_{j}+\partial_{j}u_{i})}$$
The expression is still complicated, so let's expand it a bit further and examine it in three parts.
$$\lambda\sum_{j}{\partial_{j}\delta_{ij}}\sum_{k}{\varepsilon_{kk}}+\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
At this point the structure of the equation starts to become clear.
First, let's apply the Kronecker delta to our equation.
$$\delta_{ij}=\begin{cases}1~~(i=j)\\0~~(i\neq j)\end{cases}$$
Since \(\delta_{ij}\) vanishes unless \(j=i\), the Kronecker delta collapses the sum over \(j\) in the first term:
$$\lambda\,\partial_{i}\sum_{k}{\varepsilon_{kk}}+\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
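This collapsing behavior of the Kronecker delta is easy to check concretely; a minimal Python sketch (the vector `a` is an arbitrary example, not from the paper):

```python
# Kronecker delta: 1 if the indices match, 0 otherwise
def delta(i, j):
    return 1 if i == j else 0

a = [2.0, -3.0, 5.0]  # arbitrary example vector

# Contracting delta_ij with a_j over j simply picks out a_i
for i in range(3):
    contracted = sum(delta(i, j) * a[j] for j in range(3))
    assert contracted == a[i]
```

The sum over `j` survives only where `j == i`, which is exactly why the derivative index above turns into \(\partial_{i}\).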
At this point, let's bring in the concepts of gradient and divergence.
#1 Gradient
: maps a scalar function to a vector function
$${\rm{grad}}\,f=\nabla f=\frac{\partial f}{\partial x}\mathbf{i}+\frac{\partial f}{\partial y}\mathbf{j}+\frac{\partial f}{\partial z}\mathbf{k}$$
$$\nabla=\frac{\partial}{\partial x}\mathbf{i}+\frac{\partial}{\partial y}\mathbf{j}+\frac{\partial}{\partial z}\mathbf{k}$$
In index notation, the \(k\)-th component of the gradient is
$$(\nabla f)_{k}=\partial_{k}f$$
#2 Divergence
: maps a vector function to a scalar function
$${\rm{div}}\mathbf{F}=\nabla\cdot\mathbf{F}=\left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)\cdot\left(F_{x},F_{y},F_{z}\right)=\frac{\partial F_{x}}{\partial x}+\frac{\partial F_{y}}{\partial y}+\frac{\partial F_{z}}{\partial z}$$
Writing \(\mathbf{u}=(u,v,w)\) for the displacement field,
$$\nabla\cdot\mathbf{u}=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=\sum_{k}{\partial_{k}u_{k}}$$
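Both operators can be sanity-checked numerically with central differences; a small pure-Python sketch with illustrative example fields (the names `partial`, `f`, `u` are my own, not from the paper):

```python
h = 1e-6

def partial(f, k, p):
    # central-difference approximation of d f / d x_k at point p
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(q1) - f(q2)) / (2 * h)

# Example scalar field f(x, y, z) = x*y + z^2; grad f = (y, x, 2z)
f = lambda p: p[0] * p[1] + p[2] ** 2
p = [1.0, 2.0, 3.0]
grad_f = [partial(f, k, p) for k in range(3)]

# Example vector field u = (x^2, y^2, z^2); div u = 2x + 2y + 2z
u = [lambda p: p[0] ** 2, lambda p: p[1] ** 2, lambda p: p[2] ** 2]
div_u = sum(partial(u[k], k, p) for k in range(3))

assert all(abs(g - e) < 1e-6 for g, e in zip(grad_f, [2.0, 1.0, 6.0]))
assert abs(div_u - 12.0) < 1e-6
```

Note how the divergence is literally the index expression \(\sum_{k}\partial_{k}u_{k}\) turned into a loop.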
Before applying gradient and divergence with Einstein notation, let's recall the strain definition we used earlier.
$$\varepsilon_{ij}=\frac{1}{2}\left(\partial_{i}u_{j}+\partial_{j}u_{i}\right)$$
Setting \(i=j\) and relabeling the repeated index as \(k\), we can write it like below.
$$\varepsilon_{kk}=\frac{1}{2}\left(\partial_{k}u_{k}+\partial_{k}u_{k}\right)=\partial_{k}u_{k}$$
$$\sum_{k}{\varepsilon_{kk}}=\sum_{k}{\partial_{k}u_{k}}=\nabla\cdot\mathbf{u}$$
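The claim that the trace of the strain tensor equals the divergence of the displacement can also be checked numerically; a sketch with an example displacement field (all names here are illustrative):

```python
h = 1e-6

def partial(f, k, p):
    # central-difference approximation of d f / d x_k at point p
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(q1) - f(q2)) / (2 * h)

# Example displacement field u = (x*y, y*z, z*x)
u = [lambda p: p[0] * p[1], lambda p: p[1] * p[2], lambda p: p[2] * p[0]]
p = [1.0, 2.0, 3.0]

def strain(i, j, q):
    # epsilon_ij = (1/2)(d_i u_j + d_j u_i)
    return 0.5 * (partial(u[j], i, q) + partial(u[i], j, q))

trace = sum(strain(k, k, p) for k in range(3))       # sum_k epsilon_kk
div_u = sum(partial(u[k], k, p) for k in range(3))   # div u
assert abs(trace - div_u) < 1e-8
```

The symmetric cross terms of the strain drop out of the trace, leaving exactly \(\sum_k \partial_k u_k\).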
$$\lambda\,\partial_{i}\,{\color{blue}\sum_{k}{\varepsilon_{kk}}}+\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
$$\lambda\,\partial_{i}\,{\color{blue}\left(\nabla\cdot\mathbf{u}\right)}+\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
Since \(\partial_{i}(\nabla\cdot\mathbf{u})\) is the \(i\)-th component of \(\nabla(\nabla\cdot\mathbf{u})\), the first term is
$$\lambda\nabla(\nabla\cdot\mathbf{u})+\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
Now let's rearrange the second term.
(Though I am still not sure whether this is the correct way to work it out.)
$$\mu\sum_{j}{\partial_{j}\partial_{i}u_{j}}=\mu\sum_{j}{\partial_{i}\partial_{j}u_{j}}$$
$$\sum_{j}{\partial_{j}u_{j}}=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=\nabla\cdot\mathbf{u}$$
$$=\mu\nabla(\nabla\cdot\mathbf{u})$$
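The index swap \(\partial_{j}\partial_{i}=\partial_{i}\partial_{j}\) used above relies on the symmetry of mixed partial derivatives (Clairaut's theorem), which holds for the smooth displacement fields assumed here. A small numeric check on an example function:

```python
h = 1e-4

def partial(f, k, p):
    # central-difference approximation of d f / d x_k at point p
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(q1) - f(q2)) / (2 * h)

def second(f, i, j, p):
    # d_i d_j f via nested central differences
    return partial(lambda q: partial(f, j, q), i, p)

# Example smooth function; its mixed partials should agree
f = lambda p: p[0] ** 2 * p[1] + p[1] * p[2] ** 3
p = [1.0, 2.0, 3.0]

dxy = second(f, 0, 1, p)  # d_x d_y f (analytically 2x here)
dyx = second(f, 1, 0, p)  # d_y d_x f
assert abs(dxy - dyx) < 1e-4
```

So as long as the displacement field is twice continuously differentiable, the order of differentiation can be exchanged freely.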
Then we can rearrange the equation:
$$\lambda\nabla(\nabla\cdot\mathbf{u})+\mu\nabla(\nabla\cdot\mathbf{u})+\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}$$
Then the third term can be handled the same way.
(I am also not sure whether this expansion of the third term is correct.)
$$\sum_{j}{\partial_{j}\partial_{j}}=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}$$
$$\mu\sum_{j}{\partial_{j}\partial_{j}u_{i}}=\mu\nabla^{2}\mathbf{u}$$
$$\lambda\nabla(\nabla\cdot\mathbf{u})+\mu\nabla(\nabla\cdot\mathbf{u})+\mu\nabla^{2}\mathbf{u}$$
Now let's apply a vector calculus identity:
$$\nabla^{2}\mathbf{F}=\nabla(\nabla\cdot\mathbf{F})-\nabla\times(\nabla\times\mathbf{F})$$
$$\lambda\nabla(\nabla\cdot\mathbf{u})+\mu\nabla(\nabla\cdot\mathbf{u})+\mu\nabla^{2}\mathbf{u}$$
$$=\lambda\nabla(\nabla\cdot\mathbf{u})+\mu\nabla(\nabla\cdot\mathbf{u})+\mu\left[\nabla(\nabla\cdot\mathbf{u})-\nabla\times(\nabla\times\mathbf{u})\right]$$
$$=\left(\lambda+2\mu\right)\nabla(\nabla\cdot\mathbf{u})-\mu\nabla\times(\nabla\times\mathbf{u})$$
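The vector identity used in this step can be spot-checked numerically, component by component; a pure-Python sketch with an example field (the helper names `partial`, `second`, `eps`, `curl` are my own):

```python
h = 1e-4

def partial(f, k, p):
    # central-difference approximation of d f / d x_k at point p
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(q1) - f(q2)) / (2 * h)

def second(f, i, j, p):
    # d_i d_j f via nested central differences
    return partial(lambda q: partial(f, j, q), i, p)

def eps(i, j, k):
    # Levi-Civita symbol: +1/-1 for even/odd permutations, 0 otherwise
    return (i - j) * (j - k) * (k - i) // 2

# Example smooth vector field F = (x^2 y, y^2 z, z^2 x)
F = [lambda p: p[0] ** 2 * p[1],
     lambda p: p[1] ** 2 * p[2],
     lambda p: p[2] ** 2 * p[0]]
p = [1.0, 2.0, 3.0]

def curl(F, i, p):
    # (curl F)_i = eps_ijk d_j F_k
    return sum(eps(i, j, k) * partial(F[k], j, p)
               for j in range(3) for k in range(3))

# Check laplacian(F) = grad(div F) - curl(curl F), component by component
for i in range(3):
    laplacian = sum(second(F[i], j, j, p) for j in range(3))
    grad_div = partial(lambda q: sum(partial(F[k], k, q) for k in range(3)), i, p)
    curl_curl = sum(eps(i, j, k) * partial(lambda q, k=k: curl(F, k, q), j, p)
                    for j in range(3) for k in range(3))
    assert abs(laplacian - (grad_div - curl_curl)) < 1e-3
```

A numeric check of course proves nothing in general, but it is a quick guard against sign errors when rearranging the three terms.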
$$\rho\,\partial_{t}^{2}\mathbf{u}=\left(\lambda+2\mu\right)\nabla(\nabla\cdot\mathbf{u})-\mu\nabla\times(\nabla\times\mathbf{u})$$
This equation can also be written in the following way:
$$\left(\lambda+2\mu\right)\,{\rm{grad}}\,{\rm{div}}\,\mathbf{u}-\mu\,{\rm{rot}}\left({\rm{rot}}\mathbf{u}\right)=\rho^{*}\partial^{2}\mathbf{u}/\partial t^{2}$$
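As a final sanity check, the original index form of the right-hand side and the grad-div/curl-curl form can be compared numerically for an example displacement field and made-up Lamé constants (a sketch of my own, not the paper's method):

```python
h = 1e-4
lam, mu = 2.0, 1.5  # made-up example Lame constants

def partial(f, k, p):
    # central-difference approximation of d f / d x_k at point p
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(q1) - f(q2)) / (2 * h)

def levi(i, j, k):
    # Levi-Civita symbol
    return (i - j) * (j - k) * (k - i) // 2

# Example smooth displacement field u = (x^2 y, y^2 z, z^2 x)
u = [lambda p: p[0] ** 2 * p[1],
     lambda p: p[1] ** 2 * p[2],
     lambda p: p[2] ** 2 * p[0]]
p = [1.0, 2.0, 3.0]

def div_u(q):
    return sum(partial(u[k], k, q) for k in range(3))

def stress(i, j, q):
    # sigma_ij = lam delta_ij (div u) + mu (d_i u_j + d_j u_i)
    kron = 1.0 if i == j else 0.0
    return lam * kron * div_u(q) + mu * (partial(u[j], i, q) + partial(u[i], j, q))

def curl_u(i, q):
    return sum(levi(i, j, k) * partial(u[k], j, q)
               for j in range(3) for k in range(3))

for i in range(3):
    # LHS: original index form, sum_j d_j sigma_ij
    lhs = sum(partial(lambda q, j=j: stress(i, j, q), j, p) for j in range(3))
    # RHS: (lam + 2 mu) grad(div u) - mu curl(curl u), i-th component
    curl_curl = sum(levi(i, j, k) * partial(lambda q, k=k: curl_u(k, q), j, p)
                    for j in range(3) for k in range(3))
    rhs = (lam + 2 * mu) * partial(div_u, i, p) - mu * curl_curl
    assert abs(lhs - rhs) < 1e-3
```

The two forms agree componentwise for this field, which is consistent with the rearrangement carried out above.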
While writing this post, I realized that I need to study suffix notation, the Kronecker delta, and the Levi-Civita tensor further.
Reference
Wikipedia, "S wave"
Wikipedia, "Einstein notation"
John Crimaldi, "A Primer on Index Notation," August 28, 2006
공부가 싫은 사람 (Naver blog), "Gradient, Divergence and Curl in Suffix Notation"