/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Vortex in Cell 3D ring with ringlets # {#vic_ringlets}
 *
 * In this example we solve the Navier-Stokes equations in the vorticity formulation
 * in 3D for an incompressible fluid (bold symbols are vector quantities).
 *
 * ## Numerical method ## {#num_vic_mt}
 *
 * In this code we solve the Navier-Stokes equations for an incompressible fluid in the
 * vorticity formulation. We first recall the Navier-Stokes equations in vorticity form:
 *
 * \f$ \nabla \times \boldsymbol u = - \boldsymbol w \f$
 *
 * \f$ \frac{\displaystyle D \boldsymbol w}{\displaystyle dt} = ( \boldsymbol w \cdot \vec \nabla) \boldsymbol u + \nu \nabla^{2} \boldsymbol w \f$   (5)
 *
 * where \f$ \boldsymbol w \f$ is the vorticity and \f$ \boldsymbol u \f$ is the velocity of the fluid.
 * At high Reynolds number \f$ Re = \frac{uL}{\nu} \f$, with the product \f$ uL \f$ fixed, the
 * viscosity \f$ \nu \f$ is small and the diffusion term \f$ \nu \nabla^{2} \boldsymbol w \f$ is
 * nearly negligible. The algorithm can be expressed with the following pseudo code.
 *
 * \verbatim
	1) Initialize the vortex ring on the grid
	2) Do a Helmholtz-Hodge projection to make the vorticity divergence-free
	3) Initialize particles at the same positions as the grid points (remesh)

	while (t < t_end) do
		4) Interpolate the vorticity from the particles to the mesh
		5) Calculate the velocity u from the vorticity w
		6) Calculate the right-hand side on the grid and interpolate it to the particles
		7) Interpolate the velocity u to the particles
		8) Move the particles according to the velocity
		9) Interpolate the vorticity onto the mesh and reinitialize the particles
		   at grid-like positions
	end while
   \endverbatim
 *
 * This pseudo code shows how to solve the equations above using Euler time integration.
 * In case of Runge-Kutta of order two, the pseudo code changes into
 *
 * \verbatim
	1) Initialize the vortex ring on the grid
	2) Do a Helmholtz-Hodge projection to make the vorticity divergence-free
	3) Initialize particles at the same positions as the grid points (remesh)

	while (t < t_end) do
		4) Interpolate the vorticity from the particles to the mesh
		5) Calculate the velocity u from the vorticity w
		6) Calculate the right-hand side on the grid and interpolate it to the particles
		7) Interpolate the velocity u to the particles
		8) Move the particles according to the velocity and save the old position in x_old
		9) Interpolate the vorticity from the particles to the mesh
	   10) Calculate the velocity u from the vorticity w
	   11) Calculate the right-hand side on the grid and interpolate it to the particles
	   12) Interpolate the velocity u to the particles
	   13) Move the particles according to the velocity, starting from x_old
	   14) Interpolate the vorticity onto the mesh and reinitialize the particles
		   at grid-like positions
	end while
   \endverbatim
 *
 * In the following we explain how each step is implemented in the code.
 *
 * ## Inclusion ## {#num_vic_inc}
 *
 * This example code needs several components. First, because it is a particle-mesh
 * example, we have to include **grid_dist_id.hpp** and **vector_dist.hpp**.
 * Because we use a finite-difference scheme and linear algebra to calculate the
 * velocity out of the vorticity, we have to include **FDScheme.hpp** to produce,
 * from the finite-difference scheme, a matrix that represents the linear system
 * to solve. **SparseMatrix.hpp** is the sparse matrix that will contain the linear
 * system to solve in order to get the velocity out of the vorticity.
 * **Vector.hpp** is the data structure that contains the solution of the
 * linear system. **petsc_solver.hpp** is the library used to solve the linear system.
 * Because we have to interpolate between particles and grid we have to include
 * **interpolation.hpp**; as interpolation kernel we use the M'4, so we also include
 * **mp4_kernel.hpp**.
 *
 * For convenience we also define the particle type, the grid type and some
 * convenient constants.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp inclusion
 *
 *
 *
 */

//#define SE_CLASS1
//#define PRINT_STACKTRACE
//! \cond [inclusion] \endcond

#include "Grid/grid_dist_id.hpp"
#include "Vector/vector_dist.hpp"
#include "Matrix/SparseMatrix.hpp"
#include "Vector/Vector.hpp"
#include "FiniteDifference/FDScheme.hpp"
#include "Solvers/petsc_solver.hpp"
#include "interpolation/mp4_kernel.hpp"
#include "interpolation/interpolation.hpp"

constexpr int x = 0;
constexpr int y = 1;
constexpr int z = 2;

// The type of the grids (one vector property: vorticity / velocity)
typedef grid_dist_id<3,float,aggregate<float[3]>> grid_type;

// The type of a scalar grid, used by the matrix-free CG operator below
typedef grid_dist_id<3,float,aggregate<float>> grid_type_scal;

// The type of the particles (vorticity, velocity, right-hand side,
// old vorticity, old position)
typedef vector_dist<3,float,aggregate<float[3],float[3],float[3],float[3],float[3]>> particles_type;

#include "CG.hpp"

// radius of the torus
float ringr1 = 5.0/4.0;
// radius of the core of the torus
float ringr2 = 0.5;
// Reynolds number
float tgtre = 10000.0;
// Noise factor for the ring vorticity on z
float ringnz = 0.00009;
// Kinematic viscosity
float nu = 0.0001535;
// Time step
float dt = 0.006;

// All the properties by index
constexpr unsigned int vorticity = 0;
constexpr unsigned int velocity = 0;
constexpr unsigned int p_vel = 1;
constexpr int rhs_part = 2;
constexpr unsigned int old_vort = 3;
constexpr unsigned int old_pos = 4;
//! \cond [inclusion] \endcond

template<typename grid> void calc_and_print_max_div_and_int(grid & g_vort)
{
	//! \cond [sanity_int_div] \endcond

	g_vort.template ghost_get<vorticity>();
	auto it5 = g_vort.getDomainIterator();

	double max_vort = 0.0;
	double int_vort[3] = {0.0,0.0,0.0};

	while (it5.isNext())
	{
		auto key = it5.get();

		double tmp;

		// Central-difference divergence of the vorticity field
		tmp = 1.0f/g_vort.spacing(x)/2.0f*(g_vort.template get<vorticity>(key.move(x,1))[x] - g_vort.template get<vorticity>(key.move(x,-1))[x] ) +
			  1.0f/g_vort.spacing(y)/2.0f*(g_vort.template get<vorticity>(key.move(y,1))[y] - g_vort.template get<vorticity>(key.move(y,-1))[y] ) +
			  1.0f/g_vort.spacing(z)/2.0f*(g_vort.template get<vorticity>(key.move(z,1))[z] - g_vort.template get<vorticity>(key.move(z,-1))[z] );

		int_vort[x] += g_vort.template get<vorticity>(key)[x];
		int_vort[y] += g_vort.template get<vorticity>(key)[y];
		int_vort[z] += g_vort.template get<vorticity>(key)[z];

		if (tmp > max_vort)
			max_vort = tmp;

		++it5;
	}

	Vcluster & v_cl = create_vcluster();
	v_cl.max(max_vort);
	v_cl.sum(int_vort[0]);
	v_cl.sum(int_vort[1]);
	v_cl.sum(int_vort[2]);
	v_cl.execute();

	if (v_cl.getProcessUnitID() == 0)
	{std::cout << "Max div for vorticity " << max_vort << "   Integral: " << int_vort[0] << "  " << int_vort[1] << "  " << int_vort[2] << std::endl;}

	//! \cond [sanity_int_div] \endcond
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 1: Initialization of the vortex ring # {#vic_ring_init}
 *
 * In this function we initialize the vortex ring. The vortex ring is
 * initialized according to these formulas:
 *
 * \f$ w(t = 0) = \frac{\Gamma}{\pi \sigma^{2}} e^{-(s/ \sigma)^2} \f$
 *
 * \f$ s^2 = (z-z_c)^{2} + ((x-x_c)^2 + (y-y_c)^2 - R^2) \f$
 *
 * \f$ \Gamma = \nu Re \f$
 *
 * With this initialization the vortex ring looks like the one in the figure
 *
 * \image html int_vortex_arrow_small.jpg "Vortex ring initialization: the arrows indicate the direction the vortex points to, while the colors indicate the magnitude from blue (low) to red (high)"
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp init_vort
 *
 *
 *
 */

//! \cond [init_vort] \endcond

/*
 * gr is the grid where we are initializing the vortex ring
 * domain is the simulation domain
 *
 */
void init_ring(grid_type & gr, const Box<3,float> & domain)
{
	// To add some noise to the vortex ring we create two random
	// vectors
	constexpr int nk = 32;

	float ak[nk];
	float bk[nk];

	for (size_t i = 0 ; i < nk ; i++)
	{
		ak[i] = 0.0/*rand()/RAND_MAX*/;
		bk[i] = 0.0/*rand()/RAND_MAX*/;
	}

	// We calculate the circulation gamma
	float gamma = nu * tgtre;
	float rinv2 = 1.0f/(ringr2*ringr2);
	float max_vorticity = gamma*rinv2/M_PI;

	// We go through the grid to initialize the vortex
	auto it = gr.getDomainIterator();

	while (it.isNext())
	{
		auto key_d = it.get();
		auto key = it.getGKey(key_d);

		float tx = (key.get(x)-2)*gr.spacing(x) + domain.getLow(x);
		float ty = (key.get(y)-2)*gr.spacing(y) + domain.getLow(y);
		float tz = (key.get(z)-2)*gr.spacing(z) + domain.getLow(z);
		float theta1 = atan2((ty-2.5f),(tz-2.5f));

		float noise = 0.0f;
		for (int kk=1 ; kk < nk; kk++)
			noise = noise + sin(kk*(theta1+2.0f*M_PI*ak[kk])) + cos(kk*(theta1+2.0f*M_PI*bk[kk]));

		float rad1r = sqrt((ty-2.5f)*(ty-2.5f) + (tz-2.5f)*(tz-2.5f)) - ringr1*(1.0f + ringnz * noise);
		float rad1t = tx - 2.5f;
		float rad1sq = rad1r*rad1r + rad1t*rad1t;
		float radstr = -exp(-rad1sq*rinv2)*rinv2*gamma/M_PI;
		gr.template get<vorticity>(key_d)[x] = 0.0f;
		gr.template get<vorticity>(key_d)[y] = -radstr * cos(theta1);
		gr.template get<vorticity>(key_d)[z] = radstr * sin(theta1);

/*		theta1 = atan2((ty-2.5f),(tz-2.5f));

		rad1r = sqrt((ty-2.5f)*(ty-2.5f) + (tz-2.5f)*(tz-2.5f)) + ringr1*(1.0f + ringnz * noise);
		rad1t = tx - 2.5f;
		rad1sq = rad1r*rad1r + rad1t*rad1t;
		float rad1sqTILDA = rad1sq*rinv2;
		radstr = exp(-rad1sqTILDA)*rinv2*gamma/M_PI;
		gr.template get<vorticity>(key_d)[x] = 0.0f;
		gr.template get<vorticity>(key_d)[y] = gr.template get<vorticity>(key_d)[y] + radstr * cos(theta1);
		gr.template get<vorticity>(key_d)[z] = gr.template get<vorticity>(key_d)[z] - radstr * sin(theta1);*/

		++it;
	}
}

//! \cond [init_vort] \endcond

//! \cond [poisson_syseq] \endcond

// Specification of the Poisson equation for the Helmholtz-Hodge projection:
// 3D (dims = 3), the field is a scalar value (nvar = 1), boundary conditions are
// periodic, the type of the space is float. b_grid is the grid that will store
// \phi, the result. The other typedefs indicate which kind of matrix to use to
// construct the linear system and which kind of vector to construct for the
// right-hand side. Here we use a PETSc sparse matrix and a PETSc vector.
// NORMAL_GRID indicates that this is a standard (non-staggered) grid.
struct poisson_nn_helm
{
	static const unsigned int dims = 3;
	static const unsigned int nvar = 1;
	static const bool boundary[];
	typedef float stype;
	typedef grid_dist_id<3,float,aggregate<float>> b_grid;
	typedef SparseMatrix<double,int,PETSC_BASE> SparseMatrix_type;
	typedef Vector<double,PETSC_BASE> Vector_type;
	static const int grid_type = NORMAL_GRID;
};

const bool poisson_nn_helm::boundary[] = {PERIODIC,PERIODIC,PERIODIC};

//! \cond [poisson_syseq] \endcond

// Matrix-free application of the discretized Laplacian (central differences
// with doubled stencil spacing), used by the hand-written CG solver
void OpA(grid_type_scal & in, grid_type_scal & out)
{
	float fac1 = 0.25f/(in.spacing(0)*in.spacing(0));
	float fac2 = 0.25f/(in.spacing(1)*in.spacing(1));
	float fac3 = 0.25f/(in.spacing(2)*in.spacing(2));

	in.ghost_get<0>();

	auto it = in.getDomainIterator();

	while (it.isNext())
	{
		auto p = it.get();

		out.template getProp<0>(p) = fac1*(in.template get<0>(p.move(x,2))+in.template get<0>(p.move(x,-2)))+
									 fac2*(in.template get<0>(p.move(y,2))+in.template get<0>(p.move(y,-2)))+
									 fac3*(in.template get<0>(p.move(z,2))+in.template get<0>(p.move(z,-2)))-
									 2.0f*(fac1+fac2+fac3)*in.template get<0>(p);

		++it;
	}
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 2: Helmholtz-Hodge projection # {#vic_hlm_proj}
 *
 * The Helmholtz-Hodge projection is required in order to make the vorticity
 * divergence-free. It works in the following way: a field can be decomposed into
 * a curl-free part and a divergence-free part,
 *
 * \f$ w = w_{rot} + w_{div} \f$
 *
 * with
 *
 * \f$ \vec \nabla \times w_{rot} = 0 \f$
 *
 * \f$ \nabla \cdot w_{div} = 0 \f$
 *
 * To have a divergence-free vorticity we have to extract the component (3) \f$ w_{div} = w - w_{rot} \f$.
 * In particular it holds
 *
 * \f$ \nabla \cdot w = \nabla \cdot w_{rot} \f$
 *
 * Because \f$ \vec \nabla \times w_{rot} = 0 \f$ we can introduce a field \f$ \psi \f$
 * such that
 *
 * (2) \f$ w_{rot} = \vec \nabla \psi \f$
 *
 * Taking \f$ \nabla \cdot \vec \nabla \psi \f$ we obtain
 *
 * \f$ \nabla \cdot \vec \nabla \psi = \nabla^{2} \psi = \nabla \cdot w_{rot} = \nabla \cdot w \f$
 *
 * which leads to the equation
 *
 * (1) \f$ \nabla^{2} \psi = \nabla \cdot w \f$
 *
 * Solving this equation for \f$ \psi \f$ we can obtain \f$ w_{rot} \f$ by taking the gradient of \f$ \psi \f$,
 * and finally correct \f$ w \f$, obtaining \f$ w_{div} \f$.
 *
 * The **helmotz_hodge_projection** function applies this correction to the vorticity:
 * it solves equation (1), calculates \f$ w_{rot} \f$ using (2) and corrects the
 * vorticity using (3).
 *
 *
 * ## Poisson equation ##
 *
 * To solve a Poisson equation on a grid using finite differences, we need to create
 * an object that carries information about the system of equations
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_syseq
 *
 * Once this object is created we can define the equation we are trying to solve.
 * In particular, the code below defines the left-hand side of the equation \f$ \nabla^{2} \psi \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_obj_eq
 *
 * Before constructing the linear system we also calculate the divergence of the
 * vorticity \f$ \nabla \cdot w \f$, which will be the right-hand side
 * of the equation
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp calc_div_vort
 *
 * Finally we can create the FDScheme object, using **poisson_nn_helm**
 * as template parameter. In addition, the constructor requires the maximum
 * extension of the stencil, the domain and the grid that will store the result.
 * At this point we can impose an equation to construct our SparseMatrix. In this
 * example we are imposing the Poisson equation with the right-hand side equal to
 * the divergence of the vorticity (note: to avoid creating another field we use
 * \f$ \psi \f$ to temporarily store the divergence of the vorticity). Imposing the
 * equations produces an invertible SparseMatrix **A** and a right-hand-side Vector **b**.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp create_fdscheme
 *
 * Because we need \f$ x = A^{-1}b \f$, we have to solve a linear system.
 * In this case we use the conjugate-gradient method, an iterative solver. This method
 * is controlled by two parameters: one is the tolerance that determines when the
 * method has converged, the second one is the maximum number of iterations, which
 * prevents the method from looping forever. After we set the parameters of the solver
 * we can get the solution **x**. Finally we copy the solution **x** back into the grid \f$ \psi \f$.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp solve_petsc
 *
 * ### Note ###
 *
 * Because we are solving the Poisson equation with periodic boundary conditions, the matrix has
 * determinant equal to zero. This means that \f$ \psi \f$ has no unique solution (if it has one at all).
 * In order to recover one, we have to ensure that the integral of the right-hand side, i.e. of the
 * divergence of the vorticity, is zero (in our case it is). We also have to ensure that across time
 * the integral of the vorticity is conserved. (In our case it is: if we consider \f$ \nu = 0 \f$ and
 * \f$ \nabla \cdot w = 0 \f$ we can rewrite (5) in the conservative form
 * \f$ \frac{Dw}{dt} = div(w \otimes v) \f$.)
 * It is also good to notice that the solution we get is the one with \f$ \int w = 0 \f$.
 *
 * ## Correction ## {#vort_correction}
 *
 * After we have the solution for \f$ \psi \f$ we can calculate the correction of the vorticity
 * by taking the gradient of \f$ \psi \f$.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp vort_correction
 *
 * As a sanity check we also verify that the vorticity remains
 * divergence-free, getting the maximum value of the divergence and printing out
 * its value.
 *
 */

/*
 * gr vorticity grid where we apply the correction
 * domain simulation domain
 *
 */
void helmotz_hodge_projection(grid_type & gr, const Box<3,float> & domain)
{
	///////////////////////////////////////////////////////////////
	//! \cond [poisson_obj_eq] \endcond

	// Convenient constant
	constexpr unsigned int phi = 0;

	// We define a field phi_f
	typedef Field<phi,poisson_nn_helm> phi_f;

	// We assemble the Poisson equation as the Laplacian of phi using the
	// central scheme (where both derivatives use the central scheme, not
	// forward composed with backward as usual)
	typedef Lap<phi_f,poisson_nn_helm,CENTRAL_SYM> poisson;

	//! \cond [poisson_obj_eq] \endcond

	//! \cond [calc_div_vort] \endcond

	// ghost get
	gr.template ghost_get<vorticity>();

	// ghost size of the psi function
	Ghost<3,long int> g(2);

	// Here we create a distributed grid to store the result of the Helmholtz projection
	grid_dist_id<3,float,aggregate<float>> psi(gr.getDecomposition(),gr.getGridInfo().getSize(),g);

	// Calculate the divergence of the vortex field
	auto it = gr.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		psi.template get<phi>(key) = 1.0f/gr.spacing(x)/2.0f*(gr.template get<vorticity>(key.move(x,1))[x] - gr.template get<vorticity>(key.move(x,-1))[x] ) +
									 1.0f/gr.spacing(y)/2.0f*(gr.template get<vorticity>(key.move(y,1))[y] - gr.template get<vorticity>(key.move(y,-1))[y] ) +
									 1.0f/gr.spacing(z)/2.0f*(gr.template get<vorticity>(key.move(z,1))[z] - gr.template get<vorticity>(key.move(z,-1))[z] );

		++it;
	}
	//! \cond [calc_div_vort] \endcond

	calc_and_print_max_div_and_int(gr);

	//! \cond [create_fdscheme] \endcond

	// In order to create a matrix that represents the Poisson equation we have to
	// indicate the maximum extension of the stencil; we also need an extra layer
	// of points in case we have to impose particular boundary conditions.
	Ghost<3,long int> stencil_max(2);

	// Here we define our finite-difference discretization scheme object
	FDScheme<poisson_nn_helm> fd(stencil_max, domain, psi);

	// Impose the Poisson equation, using the right-hand side psi, on the full
	// domain (psi.getDomainIterator). The template parameter defines which
	// property of psi carries the right-hand side; in this case psi has only
	// one property, so property 0 carries the right-hand side.
	fd.template impose_dit<0>(poisson(),psi,psi.getDomainIterator());

	//! \cond [create_fdscheme] \endcond
	//! \cond [solve_petsc] \endcond

	// Create a PETSc solver to get the solution x
	petsc_solver<double> solver;

	// Set the conjugate-gradient method as the linear solver
	solver.setSolver(KSPCG);

	// Set the absolute tolerance that determines when the method has converged
	solver.setAbsTol(0.1);

	// Set the maximum number of iterations
	solver.setMaxIter(500);

	solver.log_monitor();

	timer tm_solve;
	tm_solve.start();

	// Give A and b to the solver; it returns x, the solution
	auto x_ = solver.solve(fd.getA(),fd.getB());

	tm_solve.stop();

	//////////////// CG Call //////////////////////////

	// Here we create a distributed grid to store the solution of the
	// hand-written CG solver
	grid_dist_id<3,float,aggregate<float>> sol(gr.getDecomposition(),gr.getGridInfo().getSize(),g);

	// Zero the initial guess
	auto zit = sol.getDomainIterator();

	while (zit.isNext())
	{
		auto p = zit.get();

		sol.template getProp<0>(p) = 0.0;

		++zit;
	}

	timer tm_solve2;
	tm_solve2.start();

	CG(OpA,sol,psi);

	tm_solve2.stop();

	std::cout << "Time to solve: " << tm_solve2.getwct() << "   " << tm_solve.getwct() << std::endl;

	///////////////////////////////////////////////////

	// copy the solution x to the grid psi
	fd.template copy<phi>(x_,psi);

	//! \cond [solve_petsc] \endcond

	//! \cond [vort_correction] \endcond

	psi.template ghost_get<phi>();

	// Correct the vorticity to make it divergence-free
	auto it2 = gr.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		gr.template get<vorticity>(key)[x] -= 1.0f/2.0f/psi.spacing(x)*(psi.template get<phi>(key.move(x,1))-psi.template get<phi>(key.move(x,-1)));
		gr.template get<vorticity>(key)[y] -= 1.0f/2.0f/psi.spacing(y)*(psi.template get<phi>(key.move(y,1))-psi.template get<phi>(key.move(y,-1)));
		gr.template get<vorticity>(key)[z] -= 1.0f/2.0f/psi.spacing(z)*(psi.template get<phi>(key.move(z,1))-psi.template get<phi>(key.move(z,-1)));

		++it2;
	}

	//! \cond [vort_correction] \endcond

	calc_and_print_max_div_and_int(gr);
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 3: Remeshing vorticity # {#vic_remesh_vort}
 *
 * After we have initialized the vorticity on the grid, we initialize the particles
 * at grid-like positions and interpolate the vorticity onto the particles. Because
 * the particle positions are grid-like and the interpolation kernels are symmetric,
 * the remeshing step simply reduces to initializing the particles at grid-like
 * positions and assigning the vorticity property of the particles from the
 * grid vorticity.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp remesh_part
 *
 */

//! \cond [remesh_part] \endcond

void remesh(particles_type & vd, grid_type & gr, Box<3,float> & domain)
{
	// Remove all particles because we reinitialize them at grid-like positions
	vd.clear();

	// Get a grid iterator
	auto git = vd.getGridIterator(gr.getGridInfo().getSize());

	// For each grid point
	while (git.isNext())
	{
		// p is the grid point in global coordinates, key is in local coordinates
		auto p = git.get();
		auto key = git.get_dist();

		// Add the particle
		vd.add();

		// Assign the position to the particle in a grid-like way
		vd.getLastPos()[x] = gr.spacing(x)*p.get(x) + domain.getLow(x);
		vd.getLastPos()[y] = gr.spacing(y)*p.get(y) + domain.getLow(y);
		vd.getLastPos()[z] = gr.spacing(z)*p.get(z) + domain.getLow(z);

		// Initialize the vorticity
		vd.template getLastProp<vorticity>()[x] = gr.template get<vorticity>(key)[x];
		vd.template getLastProp<vorticity>()[y] = gr.template get<vorticity>(key)[y];
		vd.template getLastProp<vorticity>()[z] = gr.template get<vorticity>(key)[z];

		// next grid point
		++git;
	}

	// redistribute the particles
	vd.map();
}

//! \cond [remesh_part] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 5: Compute velocity from vorticity # {#vic_vel_from_vort}
 *
 * Computing the velocity from the vorticity is done in the following way.
 * Given
 *
 * \f$ \vec \nabla \times u = -w \f$
 *
 * we introduce the stream function \f$ \phi \f$ defined by
 *
 * \f$ \nabla \times \phi = u \f$ (7)
 *
 * \f$ \nabla \cdot \phi = 0 \f$
 *
 * We obtain
 *
 * \f$ \nabla \times \nabla \times \phi = -w = \vec \nabla (\nabla \cdot \phi) - \nabla^{2} \phi \f$
 *
 * Because the divergence of \f$ \phi \f$ is defined to be zero, we have
 *
 * \f$ \nabla^{2} \phi = w \f$
 *
 * The velocity can then be recovered from equation (7).
 *
 * Putting what was explained above into code, we again create a poisson
 * object
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_obj_eq
 *
 * In order to calculate the velocity from the vorticity, we solve a Poisson
 * equation like we did for the Helmholtz projection, but we do it for each
 * component \f$ i \f$ of the vorticity. Once we have the solution in **phi_s**
 * we copy the result back into the grid **gr_ps**. We then assess the
 * quality of the solution by printing the infinity norm of the residual, and
 * finally we save component \f$ i \f$ in the grid vector field **phi_v**
 * (copying from phi_s to phi_v is necessary because phi_s is not a grid
 * and cannot be used as a grid-like object).
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp solve_poisson_comp
 *
 * We save the component \f$ i \f$ of \f$ \phi \f$ into **phi_v**
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp copy_to_phi_v
 *
 * Once we have filled phi_v we can implement (7) and calculate the curl of **phi_v**
 * to recover the velocity u
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp curl_phi_v
 *
 *
 */
void comp_vel(Box<3,float> & domain, grid_type & g_vort, grid_type & g_vel, petsc_solver<double>::return_type (& phi_s)[3])
{
	//! \cond [poisson_obj_eq] \endcond

	// Convenient constant
	constexpr unsigned int phi = 0;

	// We define a field phi_f
	typedef Field<phi,poisson_nn_helm> phi_f;

	// We assemble the Poisson equation as the Laplacian of phi using the
	// central scheme (where both derivatives use the central scheme, not
	// forward composed with backward as usual)
	typedef Lap<phi_f,poisson_nn_helm,CENTRAL_SYM> poisson;

	//! \cond [poisson_obj_eq] \endcond

	// Maximum size of the stencil
	Ghost<3,long int> g(2);

	// Here we create the distributed grids to store the result
	grid_dist_id<3,float,aggregate<float>> gr_ps(g_vort.getDecomposition(),g_vort.getGridInfo().getSize(),g);
	grid_dist_id<3,float,aggregate<float[3]>> phi_v(g_vort.getDecomposition(),g_vort.getGridInfo().getSize(),g);

	// We calculate and print the maximum divergence of the vorticity
	// and its integral
	calc_and_print_max_div_and_int(g_vort);

	// For each component solve a Poisson equation
	for (size_t i = 0 ; i < 3 ; i++)
	{
		\cond [solve_poisson_comp] \endcond

		// Copy the vorticity component i into gr_ps
		auto it2 = gr_ps.getDomainIterator();

		while (it2.isNext())
		{
			auto key = it2.get();

			// copy
			gr_ps.get<phi>(key) = g_vort.template get<vorticity>(key)[i];

			++it2;
		}

		// Maximum size of the stencil
		Ghost<3,long int> stencil_max(2);

		// Finite difference scheme
		FDScheme<poisson_nn_helm> fd(stencil_max, domain, gr_ps);

		// Impose the Poisson equation, using gr_ps = vorticity as the
		// right-hand-side (on the full domain)
		fd.template impose_dit<phi>(poisson(),gr_ps,gr_ps.getDomainIterator());

		// Create a PETSc solver
		petsc_solver<double> solver;
		solver.setSolver(KSPCG);
		solver.setAbsTol(0.1);
		solver.setMaxIter(500);
		solver.log_monitor();

		// Get the sparse matrix that represents the left-hand-side
		// of the equation
		SparseMatrix<double,int,PETSC_BASE> & A = fd.getA();

		// Get the vector that represents the right-hand-side
		Vector<double,PETSC_BASE> & b = fd.getB();

		timer tm_solve;
		tm_solve.start();

		// Give A and b to the solver; in this case we also give
		// an initial guess phi_s calculated in the previous
		// time step
		solver.solve(A,phi_s[i],b);

		tm_solve.stop();

		//////////////// CG Call //////////////////////////

		// Here we create a distributed grid to store the result of the Helmholtz projection
		grid_dist_id<3,float,aggregate<float>> sol(gr_ps.getDecomposition(),gr_ps.getGridInfo().getSize(),g);

		auto zit = sol.getDomainIterator();

		while (zit.isNext())
		{
			auto p = zit.get();

			sol.template getProp<0>(p) = 0.0;

			++zit;
		}

		timer tm_solve2;
		tm_solve2.start();

		CG(OpA,sol,gr_ps);

		tm_solve2.stop();

		std::cout << "Time to solve: " << tm_solve2.getwct() << "    " << tm_solve.getwct() << std::endl;

		///////////////////////////////////////////////////

		// Calculate the residual
		solError serr;
		serr = solver.get_residual_error(A,phi_s[i],b);

		Vcluster & v_cl = create_vcluster();
		if (v_cl.getProcessUnitID() == 0)
		{std::cout << "Solved component " << i << "  Error: " << serr.err_inf << std::endl;}

		// copy the solution to the grid
		fd.template copy<phi>(phi_s[i],gr_ps);

		//! \cond [solve_poisson_comp] \endcond

		//!
		\cond [copy_to_phi_v] \endcond

		auto it3 = gr_ps.getDomainIterator();

		// copy the solution in gr_ps into the component i of phi_v
		while (it3.isNext())
		{
			auto key = it3.get();

			phi_v.get<phi>(key)[i] = gr_ps.get<phi>(key);

			++it3;
		}

		//! \cond [copy_to_phi_v] \endcond
	}

	//! \cond [curl_phi_v] \endcond

	phi_v.ghost_get<phi>();

	float inv_dx = 0.5f / g_vort.spacing(0);
	float inv_dy = 0.5f / g_vort.spacing(1);
	float inv_dz = 0.5f / g_vort.spacing(2);

	auto it3 = phi_v.getDomainIterator();

	// calculate the velocity from the curl of phi
	while (it3.isNext())
	{
		auto key = it3.get();

		float phi_xy = (phi_v.get<phi>(key.move(y,1))[x] - phi_v.get<phi>(key.move(y,-1))[x])*inv_dy;
		float phi_xz = (phi_v.get<phi>(key.move(z,1))[x] - phi_v.get<phi>(key.move(z,-1))[x])*inv_dz;
		float phi_yx = (phi_v.get<phi>(key.move(x,1))[y] - phi_v.get<phi>(key.move(x,-1))[y])*inv_dx;
		float phi_yz = (phi_v.get<phi>(key.move(z,1))[y] - phi_v.get<phi>(key.move(z,-1))[y])*inv_dz;
		float phi_zx = (phi_v.get<phi>(key.move(x,1))[z] - phi_v.get<phi>(key.move(x,-1))[z])*inv_dx;
		float phi_zy = (phi_v.get<phi>(key.move(y,1))[z] - phi_v.get<phi>(key.move(y,-1))[z])*inv_dy;

		g_vel.template get<velocity>(key)[x] = phi_zy - phi_yz;
		g_vel.template get<velocity>(key)[y] = phi_xz - phi_zx;
		g_vel.template get<velocity>(key)[z] = phi_yx - phi_xy;

		++it3;
	}

	//!
	\cond [curl_phi_v] \endcond
}

template<unsigned int prop> void set_zero(grid_type & gr)
{
	auto it = gr.getDomainGhostIterator();

	// set the property prop to zero on the grid (domain + ghost)
	while (it.isNext())
	{
		auto key = it.get();

		gr.template get<prop>(key)[0] = 0.0;
		gr.template get<prop>(key)[1] = 0.0;
		gr.template get<prop>(key)[2] = 0.0;

		++it;
	}
}

template<unsigned int prop> void set_zero(particles_type & vd)
{
	auto it = vd.getDomainIterator();

	// set the property prop to zero on the particles
	while (it.isNext())
	{
		auto key = it.get();

		vd.template getProp<prop>(key)[0] = 0.0;
		vd.template getProp<prop>(key)[1] = 0.0;
		vd.template getProp<prop>(key)[2] = 0.0;

		++it;
	}
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 6: Compute the right-hand-side # {#vic_rhs_calc}
 *
 * Computing the right-hand-side amounts to calculating the term
 * \f$ (\boldsymbol w \cdot \nabla) \boldsymbol u \f$. For the nabla operator we use a
 * second-order finite difference central scheme. The commented part is the
 * term \f$ \nu \nabla^{2} \boldsymbol w \f$ that we said we neglect
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp calc_rhs
 *
 */

//!
\cond [calc_rhs] \endcond

// Calculate the right-hand-side of the vorticity formulation
template<typename grid> void calc_rhs(grid & g_vort, grid & g_vel, grid & g_dwp)
{
	// useful constant
	constexpr int rhs = 0;

	// calculate several pre-factors for the finite
	// difference stencil
	float fac1 = 2.0f*nu/(g_vort.spacing(0));
	float fac2 = 2.0f*nu/(g_vort.spacing(1));
	float fac3 = 2.0f*nu/(g_vort.spacing(2));

	float fac4 = 0.5f/(g_vort.spacing(0));
	float fac5 = 0.5f/(g_vort.spacing(1));
	float fac6 = 0.5f/(g_vort.spacing(2));

	auto it = g_dwp.getDomainIterator();

	g_vort.template ghost_get<vorticity>();

	while (it.isNext())
	{
		auto key = it.get();

		g_dwp.template get<rhs>(key)[x] =
			/*fac1*(g_vort.template get<vorticity>(key.move(x,1))[x]+g_vort.template get<vorticity>(key.move(x,-1))[x])+
			fac2*(g_vort.template get<vorticity>(key.move(y,1))[x]+g_vort.template get<vorticity>(key.move(y,-1))[x])+
			fac3*(g_vort.template get<vorticity>(key.move(z,1))[x]+g_vort.template get<vorticity>(key.move(z,-1))[x])-
			2.0f*(fac1+fac2+fac3)*g_vort.template get<vorticity>(key)[x]+*/
			fac4*g_vort.template get<vorticity>(key)[x]*
			(g_vel.template get<velocity>(key.move(x,1))[x] - g_vel.template get<velocity>(key.move(x,-1))[x])+
			fac5*g_vort.template get<vorticity>(key)[y]*
			(g_vel.template get<velocity>(key.move(y,1))[x] - g_vel.template get<velocity>(key.move(y,-1))[x])+
			fac6*g_vort.template get<vorticity>(key)[z]*
			(g_vel.template get<velocity>(key.move(z,1))[x] - g_vel.template get<velocity>(key.move(z,-1))[x]);

		g_dwp.template get<rhs>(key)[y] =
			/*fac1*(g_vort.template get<vorticity>(key.move(x,1))[y]+g_vort.template get<vorticity>(key.move(x,-1))[y])+
			fac2*(g_vort.template get<vorticity>(key.move(y,1))[y]+g_vort.template get<vorticity>(key.move(y,-1))[y])+
			fac3*(g_vort.template get<vorticity>(key.move(z,1))[y]+g_vort.template get<vorticity>(key.move(z,-1))[y])-
			2.0f*(fac1+fac2+fac3)*g_vort.template get<vorticity>(key)[y]+*/
			fac4*g_vort.template get<vorticity>(key)[x]*
			(g_vel.template get<velocity>(key.move(x,1))[y] - g_vel.template get<velocity>(key.move(x,-1))[y])+
			fac5*g_vort.template get<vorticity>(key)[y]*
			(g_vel.template get<velocity>(key.move(y,1))[y] - g_vel.template get<velocity>(key.move(y,-1))[y])+
			fac6*g_vort.template get<vorticity>(key)[z]*
			(g_vel.template get<velocity>(key.move(z,1))[y] - g_vel.template get<velocity>(key.move(z,-1))[y]);

		g_dwp.template get<rhs>(key)[z] =
			/*fac1*(g_vort.template get<vorticity>(key.move(x,1))[z]+g_vort.template get<vorticity>(key.move(x,-1))[z])+
			fac2*(g_vort.template get<vorticity>(key.move(y,1))[z]+g_vort.template get<vorticity>(key.move(y,-1))[z])+
			fac3*(g_vort.template get<vorticity>(key.move(z,1))[z]+g_vort.template get<vorticity>(key.move(z,-1))[z])-
			2.0f*(fac1+fac2+fac3)*g_vort.template get<vorticity>(key)[z]+*/
			fac4*g_vort.template get<vorticity>(key)[x]*
			(g_vel.template get<velocity>(key.move(x,1))[z] - g_vel.template get<velocity>(key.move(x,-1))[z])+
			fac5*g_vort.template get<vorticity>(key)[y]*
			(g_vel.template get<velocity>(key.move(y,1))[z] - g_vel.template get<velocity>(key.move(y,-1))[z])+
			fac6*g_vort.template get<vorticity>(key)[z]*
			(g_vel.template get<velocity>(key.move(z,1))[z] - g_vel.template get<velocity>(key.move(z,-1))[z]);

		++it;
	}
}

//! \cond [calc_rhs] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 8: Runge-Kutta # {#vic_runge_kutta1}
 *
 * Here we do the first step of the Runge-Kutta update. In particular we
 * update the vorticity and the position of the particles. The right-hand-side
 * of the vorticity update is calculated on the grid and interpolated
 * onto the particles. The Runge-Kutta of order two
 * requires the following update for the vorticity and position as first step
 *
 * \f$ \boldsymbol w = \boldsymbol w + \frac{1}{2} \boldsymbol{rhs} \delta t \f$
 *
 * \f$ \boldsymbol x = \boldsymbol x + \frac{1}{2} \boldsymbol u \delta t \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp runge_kutta_1
 *
 */

//!
\cond [runge_kutta_1] \endcond

void rk_step1(particles_type & particles)
{
	auto it = particles.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		// Save the old vorticity
		particles.getProp<old_vort>(key)[x] = particles.getProp<vorticity>(key)[x];
		particles.getProp<old_vort>(key)[y] = particles.getProp<vorticity>(key)[y];
		particles.getProp<old_vort>(key)[z] = particles.getProp<vorticity>(key)[z];

		// First-stage vorticity update with half time step
		particles.getProp<vorticity>(key)[x] = particles.getProp<vorticity>(key)[x] + 0.5f * dt * particles.getProp<rhs_part>(key)[x];
		particles.getProp<vorticity>(key)[y] = particles.getProp<vorticity>(key)[y] + 0.5f * dt * particles.getProp<rhs_part>(key)[y];
		particles.getProp<vorticity>(key)[z] = particles.getProp<vorticity>(key)[z] + 0.5f * dt * particles.getProp<rhs_part>(key)[z];

		++it;
	}

	auto it2 = particles.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		// Save the old position
		particles.getProp<old_pos>(key)[x] = particles.getPos(key)[x];
		particles.getProp<old_pos>(key)[y] = particles.getPos(key)[y];
		particles.getProp<old_pos>(key)[z] = particles.getPos(key)[z];

		// First-stage position update with half time step
		particles.getPos(key)[x] = particles.getPos(key)[x] + 0.5f * dt * particles.getProp<p_vel>(key)[x];
		particles.getPos(key)[y] = particles.getPos(key)[y] + 0.5f * dt * particles.getProp<p_vel>(key)[y];
		particles.getPos(key)[z] = particles.getPos(key)[z] + 0.5f * dt * particles.getProp<p_vel>(key)[z];

		++it2;
	}

	particles.map();
}

//! \cond [runge_kutta_1] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 13: Runge-Kutta # {#vic_runge_kutta2}
 *
 * Here we do the second step of the Runge-Kutta update. In particular we
 * update the vorticity and the position of the particles. The right-hand-side
 * of the vorticity update is calculated on the grid and interpolated
 * onto the particles.
The Runge-Kutta of order two
 * requires the following update for the vorticity and position as second step,
 * starting from the values saved in the first step
 *
 * \f$ \boldsymbol w = \boldsymbol w_{old} + \boldsymbol{rhs} \delta t \f$
 *
 * \f$ \boldsymbol x = \boldsymbol x_{old} + \boldsymbol u \delta t \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp runge_kutta_2
 *
 */

//! \cond [runge_kutta_2] \endcond

void rk_step2(particles_type & particles)
{
	auto it = particles.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		// Second-stage vorticity update with full time step from the old vorticity
		particles.getProp<vorticity>(key)[x] = particles.getProp<old_vort>(key)[x] + dt * particles.getProp<rhs_part>(key)[x];
		particles.getProp<vorticity>(key)[y] = particles.getProp<old_vort>(key)[y] + dt * particles.getProp<rhs_part>(key)[y];
		particles.getProp<vorticity>(key)[z] = particles.getProp<old_vort>(key)[z] + dt * particles.getProp<rhs_part>(key)[z];

		++it;
	}

	auto it2 = particles.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		// Second-stage position update with full time step from the old position
		particles.getPos(key)[x] = particles.getProp<old_pos>(key)[x] + dt * particles.getProp<p_vel>(key)[x];
		particles.getPos(key)[y] = particles.getProp<old_pos>(key)[y] + dt * particles.getProp<p_vel>(key)[y];
		particles.getPos(key)[z] = particles.getProp<old_pos>(key)[z] + dt * particles.getProp<p_vel>(key)[z];

		++it2;
	}

	particles.map();
}

//! \cond [runge_kutta_2] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 4-5-6-7: Do step # {#vic_do_step}
 *
 * The do_step function assembles multiple steps, some of them already explained.
* First we interpolate the vorticity from the particles to the mesh
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp do_step_p2m
 *
 * then we calculate the velocity out of the vorticity and the right-hand-side,
 * recalling steps 5 and 6
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp step_56
 *
 * Finally we interpolate velocity and right-hand-side back to the particles
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp inte_m2p
 *
 */

template<typename grid, typename vector> void do_step(vector & particles,
		                                      grid & g_vort,
		                                      grid & g_vel,
		                                      grid & g_dvort,
		                                      Box<3,float> & domain,
		                                      interpolate<particles_type,grid_type,mp4_kernel<float>> & inte,
		                                      petsc_solver<double>::return_type (& phi_s)[3])
{
	constexpr int rhs = 0;

	//! \cond [do_step_p2m] \endcond

	set_zero<vorticity>(g_vort);
	inte.template p2m<vorticity,vorticity>(particles,g_vort);
	g_vort.template ghost_put<add_,vorticity>();

	//! \cond [do_step_p2m] \endcond

	//! \cond [step_56] \endcond

	// Calculate velocity from vorticity
	comp_vel(domain,g_vort,g_vel,phi_s);
	calc_rhs(g_vort,g_vel,g_dvort);

	//! \cond [step_56] \endcond

	//! \cond [inte_m2p] \endcond

	g_dvort.template ghost_get<rhs>();
	g_vel.template ghost_get<velocity>();
	set_zero<rhs_part>(particles);
	set_zero<p_vel>(particles);
	inte.template m2p<rhs,rhs_part>(g_dvort,particles);
	inte.template m2p<velocity,p_vel>(g_vel,particles);

	//!
\cond [inte_m2p] \endcond
}

template<typename grid, typename vector> void check_point_and_save(vector & particles,
		                                                   grid & g_vort,
		                                                   grid & g_vel,
		                                                   grid & g_dvort,
		                                                   size_t i)
{
	Vcluster & v_cl = create_vcluster();

	if (v_cl.getProcessingUnits() < 24)
	{
		particles.write("part_out_" + std::to_string(i),VTK_WRITER | FORMAT_BINARY);
		g_vort.template ghost_get<vorticity>();
		g_vel.template ghost_get<velocity>();
		g_vel.write("grid_velocity_" + std::to_string(i), VTK_WRITER | FORMAT_BINARY);
		g_vort.write("grid_vorticity_" + std::to_string(i), VTK_WRITER | FORMAT_BINARY);
	}

	// In order to reduce the size of the saved data we apply a threshold:
	// we only save particles with a vorticity magnitude higher than 0.1
	vector_dist<3,float,aggregate<float>> part_save(particles.getDecomposition(),0);

	auto it_s = particles.getDomainIterator();

	while (it_s.isNext())
	{
		auto p = it_s.get();

		float vort_magn = sqrt(particles.template getProp<vorticity>(p)[0] * particles.template getProp<vorticity>(p)[0] +
						 particles.template getProp<vorticity>(p)[1] * particles.template getProp<vorticity>(p)[1] +
						 particles.template getProp<vorticity>(p)[2] * particles.template getProp<vorticity>(p)[2]);

		if (vort_magn < 0.1)
		{
			++it_s;
			continue;
		}

		part_save.add();

		part_save.getLastPos()[0] = particles.getPos(p)[0];
		part_save.getLastPos()[1] = particles.getPos(p)[1];
		part_save.getLastPos()[2] = particles.getPos(p)[2];

		part_save.template getLastProp<0>() = vort_magn;

		++it_s;
	}

	part_save.map();

	// Save an HDF5 file for checkpoint restart
	particles.save("check_point");
}

/*!
\page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Main # {#vic_main}
 *
 * The main function, as usual, calls **openfpm_init**. It defines
 * the domain where the simulation takes place, the ghost size of the grid,
 * the size of the grid in grid units in each direction and the periodicity
 * of the domain, in this case PERIODIC in each dimension, and it creates
 * our basic data structures for the simulation: a grid for the vorticity
 * **g_vort**, a grid for the velocity **g_vel**, a grid for the right-hand-side
 * of the vorticity update, and the **particles** vector. Additionally
 * we define the data structure **phi_s[3]** that stores the velocity solution
 * of the previous time-step. Keeping track of the previous solution helps
 * the iterative solver to find the solution more quickly: using the old
 * velocity configuration as initial guess, the solver converges in a few
 * iterations refining the old one.
 *
 */
int main(int argc, char* argv[])
{
	// Initialize
	openfpm_init(&argc,&argv);

	// Domain, a rectangle
	Box<3,float> domain({0.0,0.0,0.0},{22.0,5.0,5.0});

	// Ghost (not important in this case but required)
	Ghost<3,long int> g(2);

	// Grid points on x=512 y=128 z=128
	long int sz[] = {512,128,128};
	size_t szu[] = {(size_t)sz[0],(size_t)sz[1],(size_t)sz[2]};

	periodicity<3> bc = {{PERIODIC,PERIODIC,PERIODIC}};

	grid_type g_vort(szu,domain,g,bc);
	grid_type g_vel(g_vort.getDecomposition(),szu,g);
	grid_type g_dvort(g_vort.getDecomposition(),szu,g);
	particles_type particles(g_vort.getDecomposition(),0);

	// It stores the solution used to compute the velocity.
	// It is used as initial guess every time we call the solver
	Vector<double,PETSC_BASE> phi_s[3];

	// Parallel object
	Vcluster & v_cl = create_vcluster();

	// print out the spacing of the grid
	if (v_cl.getProcessUnitID() == 0)
	{std::cout << "SPACING " << g_vort.spacing(0) << " " << g_vort.spacing(1) << " " << g_vort.spacing(2) << std::endl;}

	// initialize the ring, step 1
	init_ring(g_vort,domain);

	// Do the Helmholtz-Hodge projection, step 2
	helmotz_hodge_projection(g_vort,domain);

	// initialize the particles on a mesh, step 3
	remesh(particles,g_vort,domain);

	// Get the total number of particles
	size_t tot_part = particles.size_local();
	v_cl.sum(tot_part);
	v_cl.execute();

	// Now we set the size of phi_s
	phi_s[0].resize(tot_part,particles.size_local());
	phi_s[1].resize(tot_part,particles.size_local());
	phi_s[2].resize(tot_part,particles.size_local());

	// And we initially set it to zero
	phi_s[0].setZero();
	phi_s[1].setZero();
	phi_s[2].setZero();

	// We create the interpolation object outside the loop
	// to avoid recreating it at every cycle. Internally this object
	// creates some data structures to optimize the conversion from
	// particle position to sub-domain.
	// If there is no re-balancing it is safe to reuse it
	interpolate<particles_type,grid_type,mp4_kernel<float>> inte(particles,g_vort);

	// With more than 24 cores we rely on the HDF5 checkpoint-restart file
	if (v_cl.getProcessingUnits() < 24)
	{g_vort.write("grid_vorticity_init", VTK_WRITER | FORMAT_BINARY);}

	// Before entering the loop we check whether we want to restart from
	// a previous simulation
	size_t i = 0;

	// if we have an argument
	if (argc > 1)
	{
		// take that argument
		std::string restart(argv[1]);

		// convert it into a number
		i = std::stoi(restart);

		// load the file
		particles.load("check_point_" + std::to_string(i));

		// Print to inform that we are restarting from a
		// particular step
		if (v_cl.getProcessUnitID() == 0)
		{std::cout << "Restarting from " << i << std::endl;}
	}

	// Time Integration
	for ( ; i < 10001 ; i++)
	{
		// do step 4-5-6-7
		do_step(particles,g_vort,g_vel,g_dvort,domain,inte,phi_s);

		// do step 8
		rk_step1(particles);

		// do step 9-10-11-12
		do_step(particles,g_vort,g_vel,g_dvort,domain,inte,phi_s);

		// do step 13
		rk_step2(particles);

		// do step 14
		set_zero<vorticity>(g_vort);
		inte.template p2m<vorticity,vorticity>(particles,g_vort);
		g_vort.template ghost_put<add_,vorticity>();
		remesh(particles,g_vort,domain);

		// print the step number
		if (v_cl.getProcessUnitID() == 0)
		{std::cout << "Step " << i << std::endl;}

		// every 100 steps write the output
		if (i % 100 == 0)
		{check_point_and_save(particles,g_vort,g_vel,g_dvort,i);}
	}

	openfpm_finalize();
}