From 67d5a4d1a7aab0dd0bbd256fa0f542767945f834 Mon Sep 17 00:00:00 2001
From: Benjamin_Doerich
Date: Mon, 21 Jul 2025 16:35:24 +0200
Subject: [PATCH 1/2] N10 ueberarbeitet

---
 .../10-appendix-convergence-testing.ipynb     | 100 ++++++++++++++----
 1 file changed, 82 insertions(+), 18 deletions(-)

diff --git a/docs/notebooks/10-appendix-convergence-testing.ipynb b/docs/notebooks/10-appendix-convergence-testing.ipynb
index 327a018..17c1806 100644
--- a/docs/notebooks/10-appendix-convergence-testing.ipynb
+++ b/docs/notebooks/10-appendix-convergence-testing.ipynb
@@ -27,23 +27,38 @@
     "- The method of manufactured solutions (MMS) can be used to test the order of time integration and compare various discretizations of the same initial-boundary-value problem."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "a1413cf5",
+   "metadata": {},
+   "source": [
+    "# Method of Manufactured Solutions (MMS)\n",
+    "\n",
+    "The main idea of this approach is to construct test cases where the solution to a problem is known a priori. For example, when testing the discretization of a spatial derivative, one considers test functions whose continuous derivative can be computed analytically. We make this more precise in the following sections.\n",
+    "A detailed introduction to MMS can be found [here](https://www.osti.gov/servlets/purl/759450-wLI4Ux/native/).\n"
+   ]
+  },
  {
   "cell_type": "markdown",
   "id": "eb058f7c",
   "metadata": {},
   "source": [
-    "## Testing spatial order of convergence\n",
+    "## 1. Testing spatial order of convergence\n",
     "\n",
     "In the following we show some examples which are all part of the CI testing of the codebase. They can be found in the test folder in the files `test_laplace.py` and `test_rhs.py`.\n",
     "\n",
-    "In a nutshell, it is as simple as this\n",
+    "In a nutshell, it is as simple as this (for the leading example $-\\Delta u = rhs$)\n",
     "\n",
-    "1. Define a test function $u(x,y,z)$ which is given analytically in sympy logic\n",
-    "2. Compute the exact solution to the right-hand side and evaluate it on the defined grid $$\\text{rhs}_\\text{analytic}=\\text{rhs}_\\text{analytic}(x_i, y_j, z_k),$$ where $(x_i, y_j, z_k)$ are the discrete positions in the regular meshgrid.\n",
-    "3. Compute the discrete initial field $u_\\text{numeric}(x_i, y_j, z_k)$ and the corresponding numerical right-hand side $$\\text{rhs}_\\text{numeric}=\\text{rhs}_\\text{numeric}(u_\\text{numeric})$$\n",
+    "1. Define a test function $u_\\text{analytic}(x,y,z)$ which is given analytically in sympy logic\n",
+    "2. Compute the exact solution to the right-hand side (e.g. $rhs_\\text{analytic} = -\\Delta u_\\text{analytic}$ via sympy) and evaluate it on the defined grid $$\\text{rhs}_\\text{analytic}=\\text{rhs}_\\text{analytic}(x_i, y_j, z_k),$$ where $(x_i, y_j, z_k)$ are the discrete positions in the regular meshgrid.\n",
+    "3. Compute the discrete initial field $u_\\text{numeric}(x_i, y_j, z_k)$ and the corresponding numerical right-hand side (e.g. $\\text{rhs}_\\text{numeric} = - \\Delta_h u_\\text{numeric}$ for a stencil $\\Delta_h$) $$\\text{rhs}_\\text{numeric}=\\text{rhs}_\\text{numeric}(u_\\text{numeric})$$\n",
     "4. Compute the relative $L_2$ error i.e. 
the discrete $L_2$–norm of the difference $\\text{rhs}_\\text{numeric}-\\text{rhs}_\\text{analytic}$ divided by the $L_2$–norm of the analytical solution $$\\epsilon = \\sqrt{\\sum_{i,j,k} (\\text{rhs}_\\text{numeric}-\\text{rhs}_\\text{analytic})^2} \\bigg/ \\sqrt{\\sum_{i,j,k} (\\text{rhs}_\\text{analytic})^2}$$\n", "\n", - "Repeat the scheme for various spatial discretizations $\\Delta x$ on the same physical domain with the same analytical reference." + "Repeat the scheme for various spatial discretizations $\\Delta x$ on the same physical domain with the same analytical reference. Since it is known from the theory that the error $\\epsilon$ scales like a constant times $h^p$ for the mesh width $h = \\max\\{\\Delta x, \\Delta y, \\Delta z \\}$ and some integer $p$, we obtain the relation\n", + "\\begin{equation}\n", + "\\log \\epsilon \\sim \\log C + p \\log h ,\n", + "\\end{equation}\n", + "which corresponds in a log-log-plot to a straight line with slope $p$. This slope $p$ (computed via linear regression) is then compared to the theoretically expected exponent to validate the implementation." ] }, { @@ -67,7 +82,19 @@ "execution_count": 1, "id": "dc303630", "metadata": {}, - "outputs": [], + "outputs": [ + { + "ename": "ModuleNotFoundError", + "evalue": "No module named 'evoxels'", + "output_type": "error", + "traceback": [ + "\u001b[31m---------------------------------------------------------------------------\u001b[39m", + "\u001b[31mModuleNotFoundError\u001b[39m Traceback (most recent call last)", + "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 2\u001b[39m\n\u001b[32m 1\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mnumpy\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mas\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mnp\u001b[39;00m\n\u001b[32m----> \u001b[39m\u001b[32m2\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mevoxels\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mas\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mevo\u001b[39;00m\n\u001b[32m 4\u001b[39m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34mgrid_convergence_test\u001b[39m(\n\u001b[32m 5\u001b[39m test_function,\n\u001b[32m 6\u001b[39m rhs_analytic,\n\u001b[32m (...)\u001b[39m\u001b[32m 10\u001b[39m powers = np.array([\u001b[32m3\u001b[39m,\u001b[32m4\u001b[39m,\u001b[32m5\u001b[39m,\u001b[32m6\u001b[39m,\u001b[32m7\u001b[39m]),\n\u001b[32m 11\u001b[39m ):\n\u001b[32m 12\u001b[39m dx = np.zeros(\u001b[38;5;28mlen\u001b[39m(powers))\n", + "\u001b[31mModuleNotFoundError\u001b[39m: No module named 'evoxels'" + ] + } + ], "source": [ "import numpy as np\n", "import evoxels as evo\n", @@ -122,7 +149,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, "id": "e6aee0c6", "metadata": {}, "outputs": [ @@ -197,7 +224,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "id": "380354eb", "metadata": {}, "outputs": [ @@ -275,7 +302,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "id": "1e1ba86c", "metadata": {}, "outputs": [], @@ -295,7 +322,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": null, "id": "845aeff1", "metadata": {}, "outputs": [ @@ -342,6 +369,8 @@ "source": [ "### Testing ODE right-hand sides\n", "\n", + "The ODEs are all given in the form u'(t) = rhs(t,u), e.g. 
for Allen-Cahn in the form $rhs(t,u) = \\Delta u + g(t,u)$.\n",
+    "\n",
     "The above procedure has been adapted to automatically test all implementations of ODE classes by comparing the ``rhs_analytic`` to the results produced by ``rhs``. This ensures two things\n",
     "1. Does the ``rhs`` function implement a discretized version of the envisioned PDE?\n",
     "2. Does the chosen discretization converge at the expected spatial order of convergence?\n",
@@ -351,7 +380,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": null,
   "id": "32618cd8",
   "metadata": {},
   "outputs": [
@@ -415,16 +444,51 @@
   "id": "da5b50fd",
   "metadata": {},
   "source": [
-    "## Method of Manufactured Solution (MMS)\n",
-    "\n",
-    "A detailed introduction to MMS can be found [here](https://www.osti.gov/servlets/purl/759450-wLI4Ux/native/).\n",
-    "More tests tbd."
+    "## 2. Time-dependent problems"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "8b19ecef",
+   "metadata": {},
+   "source": [
+    "The idea of manufactured solutions can further be generalized to problems of the form\n",
+    "\\begin{equation}\n",
+    "\\partial_t u (t,x) = rhs(t,x,u)\n",
+    "\\end{equation}\n",
+    "where for example in Allen-Cahn $rhs(t,x,u) = \\Delta u + g(t,u)$. For a general problem it is often difficult to test the problem with a known solution in order to determine if the code produces the correct solution. One can therefore simply study the modified problem\n",
+    "\\begin{equation}\n",
+    "\\partial_t u (t,x) = rhs(t,x,u) + f_{\\text{ex}}(t,x) \\qquad (+)\n",
+    "\\end{equation}\n",
+    "with an artificial forcing term $f_{\\text{ex}}$. This allows the following construction:\n",
+    "1. Choose some analytically given function $u (t,x)$.\n",
+    "2. Define $f_{\\text{ex}}(t,x) = \\partial_t u (t,x) - rhs(t,x,u)$ (again computable via sympy)\n",
+    "3. Then $u$ is by construction the exact solution to (+).\n",
+    "\n",
+    "If we now pass problem (+) to the solver (i.e. the right-hand side as well as the artificial forcing), then we can compare its approximations at time $t_n$ denoted by $u_{h,n}$ to the exact solution $u(t_n,x_i, y_j, z_k)$. Thus, we can determine if the error behaves asymptotically (i.e. as the time-step size and the mesh width tend to zero) with the correct order. \n",
+    "\n",
+    "\n",
+    "TODO: mention that the modified problem can easily be built into the code?! Or should we just do that?"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "758f6eab",
+   "metadata": {},
+   "source": [
+    "### More tests tbd."
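+    "\n",
+    "One candidate is an automated temporal-order test built on the forcing-term construction above. Below is a minimal sketch of how $f_{\\text{ex}}$ could be assembled with sympy; the manufactured solution and the Allen-Cahn-like nonlinearity $g(u) = u - u^3$ are arbitrary illustrative choices, and none of the names are part of the evoxels API:\n",
+    "\n",
+    "```python\n",
+    "import sympy as sp\n",
+    "\n",
+    "t, x, y, z = sp.symbols('t x y z')\n",
+    "\n",
+    "# manufactured solution (arbitrary smooth choice)\n",
+    "u = sp.exp(-t) * sp.sin(x) * sp.sin(y) * sp.sin(z)\n",
+    "\n",
+    "# illustrative right-hand side: Laplacian plus Allen-Cahn-like nonlinearity\n",
+    "rhs = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2) + u - u**3\n",
+    "\n",
+    "# forcing term such that u solves du/dt = rhs + f_ex exactly, by construction\n",
+    "f_ex = sp.simplify(sp.diff(u, t) - rhs)\n",
+    "\n",
+    "# fast callables for the time integrator and for evaluating the error on the grid\n",
+    "u_exact = sp.lambdify((t, x, y, z), u, 'numpy')\n",
+    "forcing = sp.lambdify((t, x, y, z), f_ex, 'numpy')\n",
+    "```\n",
+    "\n",
+    "With `u_exact` and `forcing` in hand, the relative error and the log-log fit from section 1 can be reused to check the temporal order of the integrator as well."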
+ ] + }, + { + "cell_type": "markdown", + "id": "ffd6fc90", + "metadata": {}, + "source": [] } ], "metadata": { "kernelspec": { - "display_name": "voxenv", + "display_name": "vox-doe", "language": "python", "name": "python3" }, @@ -438,7 +502,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.12.11" + "version": "3.13.3" } }, "nbformat": 4, From 2d103d2a62060066a032bca4a98170353d044747 Mon Sep 17 00:00:00 2001 From: daubners Date: Wed, 20 Aug 2025 17:31:19 +0200 Subject: [PATCH 2/2] clean up --- .../10-appendix-convergence-testing.ipynb | 62 +++++++------------ 1 file changed, 22 insertions(+), 40 deletions(-) diff --git a/docs/notebooks/10-appendix-convergence-testing.ipynb b/docs/notebooks/10-appendix-convergence-testing.ipynb index 17c1806..4ac9ef2 100644 --- a/docs/notebooks/10-appendix-convergence-testing.ipynb +++ b/docs/notebooks/10-appendix-convergence-testing.ipynb @@ -47,17 +47,17 @@ "\n", "In the following we show some examples which are all part of the CI testing of the codebase. They can be found in the test folder in the files `test_laplace.py` and `test_rhs.py`.\n", "\n", - "In a nutshell, it is as simple as this (for the leading example $-\\Delta u = rhs$)\n", + "In a nutshell, it is as simple as this (for the leading example $-\\Delta u = \\text{rhs}$)\n", "\n", "1. Define a test function $u_\\text{analytic}(x,y,z)$ which is given analytically in sympy logic\n", - "2. Compute the exact solution to the right-hand side (e.g. $rhs_\\text{analytic} = -\\Delta u_\\text{analytic}$ via sympy) and evaluate it on the defined grid $$\\text{rhs}_\\text{analytic}=\\text{rhs}_\\text{analytic}(x_i, y_j, z_k),$$ where $(x_i, y_j, z_k)$ are the discrete positions in the regular meshgrid.\n", + "2. Compute the exact solution to the right-hand side (e.g. $\\text{rhs}_\\text{analytic} = -\\Delta u_\\text{analytic}$ via sympy) and evaluate it on the defined grid $$\\text{rhs}_\\text{analytic}=\\text{rhs}_\\text{analytic}(x_i, y_j, z_k),$$ where $(x_i, y_j, z_k)$ are the discrete positions in the regular meshgrid.\n", "3. Compute the discrete initial field $u_\\text{numeric}(x_i, y_j, z_k)$ and the corresponding numerical right-hand side (e.g. $\\text{rhs}_\\text{numeric} = - \\Delta_h u_\\text{numeric}$ for a stencil $\\Delta_h$) $$\\text{rhs}_\\text{numeric}=\\text{rhs}_\\text{numeric}(u_\\text{numeric})$$\n", "4. Compute the relative $L_2$ error i.e. the discrete $L_2$–norm of the difference $\\text{rhs}_\\text{numeric}-\\text{rhs}_\\text{analytic}$ divided by the $L_2$–norm of the analytical solution $$\\epsilon = \\sqrt{\\sum_{i,j,k} (\\text{rhs}_\\text{numeric}-\\text{rhs}_\\text{analytic})^2} \\bigg/ \\sqrt{\\sum_{i,j,k} (\\text{rhs}_\\text{analytic})^2}$$\n", "\n", "Repeat the scheme for various spatial discretizations $\\Delta x$ on the same physical domain with the same analytical reference. Since it is known from the theory that the error $\\epsilon$ scales like a constant times $h^p$ for the mesh width $h = \\max\\{\\Delta x, \\Delta y, \\Delta z \\}$ and some integer $p$, we obtain the relation\n", - "\\begin{equation}\n", + "$$\n", "\\log \\epsilon \\sim \\log C + p \\log h ,\n", - "\\end{equation}\n", + "$$\n", "which corresponds in a log-log-plot to a straight line with slope $p$. This slope $p$ (computed via linear regression) is then compared to the theoretically expected exponent to validate the implementation." 
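+    "\n",
+    "As a minimal sketch of this last step (the arrays below are synthetic stand-ins rather than evoxels output, and a second-order scheme is assumed purely for illustration), the slope $p$ could be estimated with `numpy.polyfit`:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "# synthetic stand-in data: mesh widths and relative L2 errors of a second-order scheme\n",
+    "h   = np.array([1/8, 1/16, 1/32, 1/64, 1/128])\n",
+    "eps = 0.5 * h**2\n",
+    "\n",
+    "# linear fit of log(eps) = log(C) + p*log(h); the slope is the observed order p\n",
+    "p, log_C = np.polyfit(np.log(h), np.log(eps), 1)\n",
+    "\n",
+    "# compare against the theoretically expected order (2 for this stand-in data)\n",
+    "assert abs(p - 2.0) < 0.1\n",
+    "```\n",
+    "\n",
+    "Such a fit condenses the whole refinement study into a single number that can be asserted against the expected order in the CI tests."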
] }, @@ -82,19 +82,7 @@ "execution_count": 1, "id": "dc303630", "metadata": {}, - "outputs": [ - { - "ename": "ModuleNotFoundError", - "evalue": "No module named 'evoxels'", - "output_type": "error", - "traceback": [ - "\u001b[31m---------------------------------------------------------------------------\u001b[39m", - "\u001b[31mModuleNotFoundError\u001b[39m Traceback (most recent call last)", - "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 2\u001b[39m\n\u001b[32m 1\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mnumpy\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mas\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mnp\u001b[39;00m\n\u001b[32m----> \u001b[39m\u001b[32m2\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mevoxels\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mas\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mevo\u001b[39;00m\n\u001b[32m 4\u001b[39m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34mgrid_convergence_test\u001b[39m(\n\u001b[32m 5\u001b[39m test_function,\n\u001b[32m 6\u001b[39m rhs_analytic,\n\u001b[32m (...)\u001b[39m\u001b[32m 10\u001b[39m powers = np.array([\u001b[32m3\u001b[39m,\u001b[32m4\u001b[39m,\u001b[32m5\u001b[39m,\u001b[32m6\u001b[39m,\u001b[32m7\u001b[39m]),\n\u001b[32m 11\u001b[39m ):\n\u001b[32m 12\u001b[39m dx = np.zeros(\u001b[38;5;28mlen\u001b[39m(powers))\n", - "\u001b[31mModuleNotFoundError\u001b[39m: No module named 'evoxels'" - ] - } - ], + "outputs": [], "source": [ "import numpy as np\n", "import evoxels as evo\n", @@ -149,7 +137,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "e6aee0c6", "metadata": {}, "outputs": [ @@ -224,7 +212,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "380354eb", "metadata": {}, "outputs": [ @@ -302,7 +290,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "1e1ba86c", "metadata": {}, "outputs": [], @@ -322,7 +310,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "845aeff1", "metadata": {}, "outputs": [ @@ -369,7 +357,7 @@ "source": [ "### Testing ODE right-hand sides\n", "\n", - "The ODEs are all given in the form u'(t) = rhs(t,u), e.g. for Allen-Cahn in the form $rhs(t,u) = \\Delta u + g(t,u)$.\n", + "The ODEs are all given in the form $\\partial_t u = \\text{rhs}(t,u)$, e.g. for Allen-Cahn in the form $\\text{rhs}(t,u) = \\Delta u + g(t,u)$.\n", "\n", "The above procedure has been adapted to automatically test all implementations of ODE classes by comparing the ``rhs_analytic`` to the results produced by ``rhs``. This ensures two things\n", "1. Does the ``rhs`` function implement a discretized version of the envisioned PDE?\n", @@ -380,7 +368,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "32618cd8", "metadata": {}, "outputs": [ @@ -453,22 +441,22 @@ "metadata": {}, "source": [ "The idea of manufactured solutions can further be generalized to problems of the form\n", - "\\begin{equation}\n", - "\\partial_t u (t,x) = rhs(t,x,u)\n", - "\\end{equation}\n", - "where for example in Allen-Cahn $rhs(t,x,u) = \\Delta u + g(t,u)$. For a general problem it is often difficult to test the problem with a known solution in order to determine if the code produces the correct solution. 
One can therefore simply study the modified problem\n", - "\\begin{equation}\n", - "\\partial_t u (t,x) = rhs(t,x,u) + f_{\\text{ex}}(t,x) \\qquad (+)\n", - "\\end{equation}\n", + "$$\n", + "\\partial_t u (t,x) = \\text{rhs}(t,x,u)\n", + "$$\n", + "where for example in Allen-Cahn $\\text{rhs}(t,x,u) = \\Delta u + g(t,u)$. For a general problem it is often difficult to test the problem with a known solution in order to determine if the code produces the correct solution. One can therefore simply study the modified problem\n", + "$$\n", + "\\partial_t u (t,x) = \\text{rhs}(t,x,u) + f_{\\text{ex}}(t,x) \\qquad (+)\n", + "$$\n", "with an artificial forcing term $f_{\\text{ex}}$. This allows the following construction:\n", "1. Choose some analytically given function $u (t,x)$.\n", - "2. Define $f_{\\text{ex}}(t,x) = \\partial_t u (t,x) - rhs(t,x,u)$ (again computable via sympy)\n", + "2. Define $f_{\\text{ex}}(t,x) = \\partial_t u (t,x) - \\text{rhs}(t,x,u)$ (again computable via sympy)\n", "3. Then $u$ is by construction the exact solution to (+).\n", "\n", "If we now pass problem (+) to the solver (i.e. the right-hand side as well as the artificial forcing), then we can compare its approximations at time $t_n$ denoted by $u_{h,n}$ to the exact solution $u(t_n,x_i, y_j, z_k)$. Thus, we can determine if the error behaves asymptotically (i.e. as the time-step size and the mesh width tend to zero) with the correct order. \n", "\n", "\n", - "TODO: sagen, dass man das modifizierte Problem leicht in den Code einbauen kann?! oder wir das eben machen?" + "TODO: automate temporal and spatial testing via pre-defined test function which constructs the forcing term." ] }, { @@ -478,17 +466,11 @@ "source": [ "### More tests tbd." ] - }, - { - "cell_type": "markdown", - "id": "ffd6fc90", - "metadata": {}, - "source": [] } ], "metadata": { "kernelspec": { - "display_name": "vox-doe", + "display_name": "voxenv", "language": "python", "name": "python3" }, @@ -502,7 +484,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.13.3" + "version": "3.12.11" } }, "nbformat": 4,