// main_vic_petsc.cpp
/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Vortex in Cell 3D ring with ringlets # {#vic_ringlets}
 *
 * In this example we solve the Navier-Stokes equations in the vorticity formulation in 3D
 * for an incompressible fluid (bold symbols are vector quantities).
 *
 * \htmlonly
 * Video 1
 *
 * Video 2
 * \endhtmlonly
 *
 * ## Numerical method ## {#num_vic_mt}
 *
 * In this code we solve the Navier-Stokes equations for an incompressible fluid in the
 * vorticity formulation. We first recall the Navier-Stokes equations in vorticity form:
 *
 * \f$ \nabla \times \boldsymbol u = - \boldsymbol w \f$
 *
 * \f$ \frac{\displaystyle D \boldsymbol w}{\displaystyle dt} = ( \boldsymbol w \cdot \vec \nabla) \boldsymbol u + \nu \nabla^{2} \boldsymbol w \f$   (5)
 *
 * where \f$\boldsymbol w\f$ is the vorticity and \f$\boldsymbol u\f$ is the velocity of
 * the fluid, and the Reynolds number is defined as \f$Re = \frac{uL}{\nu}\f$.
 * The algorithm can be expressed with the following pseudo code.
 *
 * \verbatim

1) Initialize the vortex ring on the grid
2) Do a Helmholtz-Hodge projection to make the vorticity divergence-free
3) Initialize particles at the same positions as the grid nodes, or remesh

while (t < t_end) do
    4) Interpolate the vorticity from the particles to the mesh
    5) Calculate the velocity u from the vorticity w
    6) Calculate the right-hand side on the grid and interpolate it onto the particles
    7) Interpolate the velocity u onto the particles
    8) Move the particles according to the velocity
    9) Interpolate the vorticity onto the mesh and reinitialize the particles
       at grid-like positions
end while

\endverbatim
 *
 * This pseudo code shows how to solve the equations above using Euler time integration.
 * With a Runge-Kutta scheme of order two the pseudo code changes into:
 *
 * \verbatim

1) Initialize the vortex ring on the grid
2) Do a Helmholtz-Hodge projection to make the vorticity divergence-free
3) Initialize particles at the same positions as the grid nodes, or remesh

while (t < t_end) do
    4) Interpolate the vorticity from the particles to the mesh
    5) Calculate the velocity u from the vorticity w
    6) Calculate the right-hand side on the grid and interpolate it onto the particles
    7) Interpolate the velocity u onto the particles
    8) Move the particles according to the velocity and save the old positions in x_old
    9) Interpolate the vorticity from the particles onto the mesh
    10) Calculate the velocity u from the vorticity w
    11) Calculate the right-hand side on the grid and interpolate it onto the particles
    12) Interpolate the velocity u onto the particles
    13) Move the particles according to the velocity, starting again from x_old
    14) Interpolate the vorticity onto the mesh and reinitialize the particles
        at grid-like positions
end while

\endverbatim
 *
 * In the following we explain how each step is implemented in the code.
 *
 * ## Inclusion ## {#num_vic_inc}
 *
 * This example needs several components. First, because it is a particle-mesh
 * example, we have to include **grid_dist_id.hpp** and **vector_dist.hpp**.
 * Because we use a finite-difference scheme and linear algebra to calculate the
 * velocity from the vorticity, we include **FDScheme.hpp**, which produces from
 * the finite-difference scheme a matrix that represents the linear system to solve.
 * **SparseMatrix.hpp** provides the sparse matrix that will contain the linear
 * system to solve in order to get the velocity from the vorticity.
 * **Vector.hpp** is the data structure that contains the solution of the
 * linear system, and **petsc_solver.hpp** is the library used to invert the linear
 * system. Because we have to interpolate between particles and grid we include
 * **interpolation.hpp**; as interpolation kernel we use mp4, so we also include
 * **mp4_kernel.hpp**.
 *
 * For convenience we also define the particle type, the grid type and some
 * convenient constants.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp inclusion
 *
 */

//#define SE_CLASS1
//#define PRINT_STACKTRACE

//! \cond [inclusion] \endcond

#include "interpolation/interpolation.hpp"
#include "Grid/grid_dist_id.hpp"
#include "Vector/vector_dist.hpp"
#include "Matrix/SparseMatrix.hpp"
#include "Vector/Vector.hpp"
#include "FiniteDifference/FDScheme.hpp"
#include "Solvers/petsc_solver.hpp"
#include "interpolation/mp4_kernel.hpp"
#include "Solvers/petsc_solver_AMG_report.hpp"

constexpr int x = 0;
constexpr int y = 1;
constexpr int z = 2;

// The type of the grid (one float[3] property)
typedef grid_dist_id<3,float,aggregate<float[3]>> grid_type;

// The type of the particles (five float[3] properties, see the property list below)
typedef vector_dist<3,float,aggregate<float[3],float[3],float[3],float[3],float[3]>> particles_type;

// radius of the torus
float ringr1 = 1.0;
// radius of the core of the torus
float sigma = 1.0/3.523;
// Reynolds number (if you want to use 7500.0 you have to use a 1600x400x400 grid)
//float tgtre = 7500.0;
float tgtre = 3000.0;

// Kinematic viscosity
float nu = 1.0/tgtre;

// Time step
// float dt = 0.0025 for Re 7500
float dt = 0.0125;

#ifdef TEST_RUN
const unsigned int nsteps = 10;
#else
const unsigned int nsteps = 10001;
#endif
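/* The order-two Runge-Kutta loop in the pseudo code above reduces, for a single
 * particle in a fixed velocity field, to: save x_old, take a trial step,
 * re-evaluate the velocity, and take the final step from x_old. The sketch below
 * is a stand-alone illustration in plain C++, independent of OpenFPM: the
 * rotational field u = (-y, x) is a made-up test case, and the explicit midpoint
 * variant of second-order Runge-Kutta is used.
 *
```cpp
#include <cmath>
#include <cassert>

// Made-up test velocity field u = (-y, x): pure rotation around the origin.
static void vel(const double p[2], double u[2])
{
    u[0] = -p[1];
    u[1] =  p[0];
}

// Runge-Kutta 2 (explicit midpoint): save x_old, move to the midpoint with
// the velocity at x_old, then take the full step with the midpoint velocity.
void advect_rk2(double p[2], double dt, int nsteps)
{
    for (int s = 0 ; s < nsteps ; s++)
    {
        double x_old[2] = {p[0], p[1]};
        double u[2];

        vel(x_old, u);                         // velocity at the old position
        double mid[2] = {x_old[0] + 0.5*dt*u[0],
                         x_old[1] + 0.5*dt*u[1]};

        vel(mid, u);                           // velocity at the midpoint
        p[0] = x_old[0] + dt*u[0];
        p[1] = x_old[1] + dt*u[1];
    }
}
```
 *
 * For this field the exact trajectories are circles: after t = 0.1 a particle
 * started at (1,0) sits at (cos 0.1, sin 0.1) up to O(dt^2) global error.
 */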
// All the properties by index
constexpr unsigned int vorticity = 0;
constexpr unsigned int velocity = 0;
constexpr unsigned int p_vel = 1;
constexpr int rhs_part = 2;
constexpr unsigned int old_vort = 3;
constexpr unsigned int old_pos = 4;

//! \cond [inclusion] \endcond

template<typename grid> void calc_and_print_max_div_and_int(grid & g_vort)
{
	//! \cond [sanity_int_div] \endcond

	g_vort.template ghost_get<vorticity>();
	auto it5 = g_vort.getDomainIterator();

	double max_vort = 0.0;
	double int_vort[3] = {0.0,0.0,0.0};

	while (it5.isNext())
	{
		auto key = it5.get();

		double tmp;

		tmp = 1.0f/g_vort.spacing(x)/2.0f*(g_vort.template get<vorticity>(key.move(x,1))[x] - g_vort.template get<vorticity>(key.move(x,-1))[x] ) +
			  1.0f/g_vort.spacing(y)/2.0f*(g_vort.template get<vorticity>(key.move(y,1))[y] - g_vort.template get<vorticity>(key.move(y,-1))[y] ) +
			  1.0f/g_vort.spacing(z)/2.0f*(g_vort.template get<vorticity>(key.move(z,1))[z] - g_vort.template get<vorticity>(key.move(z,-1))[z] );

		int_vort[x] += g_vort.template get<vorticity>(key)[x];
		int_vort[y] += g_vort.template get<vorticity>(key)[y];
		int_vort[z] += g_vort.template get<vorticity>(key)[z];

		if (tmp > max_vort)
			max_vort = tmp;

		++it5;
	}

	Vcluster & v_cl = create_vcluster();
	v_cl.max(max_vort);
	v_cl.sum(int_vort[0]);
	v_cl.sum(int_vort[1]);
	v_cl.sum(int_vort[2]);
	v_cl.execute();

	if (v_cl.getProcessUnitID() == 0)
	{std::cout << "Max div for vorticity " << max_vort << "  Integral: " << int_vort[0] << " " << int_vort[1] << " " << int_vort[2] << std::endl;}

	//! \cond [sanity_int_div] \endcond
}

/*!
 \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 1: Initialization of the vortex ring # {#vic_ring_init}
 *
 * In this function we initialize the vortex ring. The vortex ring is
 * initialized according to these formulas:
 *
 * \f$ w(t = 0) = \frac{\Gamma}{\pi \sigma^{2}} e^{-(s/ \sigma)^2} \f$
 *
 * \f$ s^2 = (z-z_c)^{2} + ((x-x_c)^2 + (y-y_c)^2 - R^2) \f$
 *
 * \f$ \Gamma = \nu Re \f$
 *
 * With this initialization the vortex ring looks like the one in the figure below.
 *
 * \image html int_vortex_arrow_small.jpg "Vortex ring initialization: the arrows indicate the direction in which the vortex points, while the colors indicate the magnitude from blue (low) to red (high)"
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp init_vort
 *
 */

//! \cond [init_vort] \endcond

/*
 * gr is the grid where we are initializing the vortex ring
 * domain is the simulation domain
 *
 */
void init_ring(grid_type & gr, const Box<3,float> & domain)
{
	// To add some noise to the vortex ring we create two random
	// vectors
	constexpr int nk = 32;

	float ak[nk];
	float bk[nk];

	for (size_t i = 0 ; i < nk ; i++)
	{
		ak[i] = rand()/(float)RAND_MAX;
		bk[i] = rand()/(float)RAND_MAX;
	}

	// We calculate the circulation gamma
	float gamma = nu * tgtre;
	float rinv2 = 1.0f/(sigma*sigma);
	float max_vorticity = gamma*rinv2/M_PI;

	// We go through the grid to initialize the vortex
	auto it = gr.getDomainIterator();

	while (it.isNext())
	{
		auto key_d = it.get();
		auto key = it.getGKey(key_d);

		float tx = (key.get(x)-2)*gr.spacing(x) + domain.getLow(x);
		float ty = (key.get(y)-2)*gr.spacing(y) + domain.getLow(y);
		float tz = (key.get(z)-2)*gr.spacing(z) + domain.getLow(z);
		float theta1 = atan2((ty-domain.getHigh(1)/2.0f),(tz-domain.getHigh(2)/2.0f));

		float rad1r = sqrt((ty-domain.getHigh(1)/2.0f)*(ty-domain.getHigh(1)/2.0f) + (tz-domain.getHigh(2)/2.0f)*(tz-domain.getHigh(2)/2.0f)) - ringr1;
		float rad1t = tx - 1.0f;
		float rad1sq = rad1r*rad1r + rad1t*rad1t;
		float radstr = -exp(-rad1sq*rinv2)*rinv2*gamma/M_PI;
		gr.template get<vorticity>(key_d)[x] = 0.0f;
		gr.template get<vorticity>(key_d)[y] = -radstr * cos(theta1);
		gr.template get<vorticity>(key_d)[z] = radstr * sin(theta1);

		// kill the axis term

		float rad1r_ = sqrt((ty-domain.getHigh(1)/2.0f)*(ty-domain.getHigh(1)/2.0f) + (tz-domain.getHigh(2)/2.0f)*(tz-domain.getHigh(2)/2.0f)) + ringr1;
		float rad1sqTILDA = rad1sq*rinv2;
		radstr = exp(-rad1sq*rinv2)*rinv2*gamma/M_PI;
		gr.template get<vorticity>(key_d)[x] = 0.0f;
		gr.template get<vorticity>(key_d)[y] = -radstr * cos(theta1);
		gr.template get<vorticity>(key_d)[z] = radstr * sin(theta1);

		++it;
	}
}

//! \cond [init_vort] \endcond

//! \cond [poisson_syseq] \endcond

// Specification of the poisson equation for the helmotz-hodge projection.
// 3D (dims = 3). The field is a scalar value (nvar = 1), boundaries are periodic,
// the type of the space is float. b_grid is the grid that stores \psi.
// The others indicate which kind of matrix to use to construct the linear system and
// which kind of vector to construct for the right-hand side. Here we use a PETSC
// sparse matrix and a PETSC vector. NORMAL_GRID indicates a standard (non-staggered) grid
struct poisson_nn_helm
{
	//! 3D system
	static const unsigned int dims = 3;
	//! We are solving for \psi, which is a scalar
	static const unsigned int nvar = 1;
	//! Boundary conditions
	static const bool boundary[];
	//! type of the spatial coordinates
	typedef float stype;
	//! grid that stores \psi
	typedef grid_dist_id<3,float,aggregate<float>> b_grid;
	//! Sparse matrix used to solve the linear system (we use PETSC)
	typedef SparseMatrix<double,int,PETSC_BASE> SparseMatrix_type;
	//! Vector to solve the system (PETSC)
	typedef Vector<double,PETSC_BASE> Vector_type;
	//! It is a normal grid
	static const int grid_type = NORMAL_GRID;
};

//! boundary conditions are PERIODIC
const bool poisson_nn_helm::boundary[] = {PERIODIC,PERIODIC,PERIODIC};

//! \cond [poisson_syseq] \endcond


/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 2: Helmholtz-Hodge projection # {#vic_hlm_proj}
 *
 * The Helmholtz-Hodge projection is required in order to make the vorticity
 * divergence-free. It works in the following way. A field can be divided into
 * a curl-free part and a divergence-free part:
 *
 * \f$ w = w_{rot} + w_{div} \f$
 *
 * with
 *
 * \f$ \vec \nabla \times w_{rot} = 0 \f$
 *
 * \f$ \nabla \cdot w_{div} = 0 \f$
 *
 * To make the vorticity divergence-free we have to extract the component
 * (3) \f$w_{div} = w - w_{rot}\f$. In particular it holds
 *
 * \f$ \nabla \cdot w = \nabla \cdot w_{rot} \f$
 *
 * Because \f$\vec \nabla \times w_{rot} = 0 \f$ we can introduce a field \f$\psi \f$
 * such that
 *
 * (2) \f$ w_{rot} = \vec \nabla \psi \f$
 *
 * Taking \f$\nabla \cdot \vec \nabla \psi \f$ we obtain
 *
 * \f$ \nabla \cdot \vec \nabla \psi = \nabla^{2} \psi = \nabla \cdot w_{rot} = \nabla \cdot w \f$
 *
 * so we arrive at the equation
 *
 * (1) \f$ \nabla^{2} \psi = \nabla \cdot w \f$
 *
 * Solving this equation for \f$\psi \f$, we can obtain \f$w_{rot} \f$ by taking the
 * gradient of \f$\psi \f$ and finally correct \f$w \f$, obtaining \f$w_{div} \f$.
 *
 * The **helmotz_hodge_projection** function applies this correction to the vorticity:
 * it solves equation (1), calculates \f$w_{rot} \f$ using (2), and corrects the
 * vorticity using (3).
 *
 * ## Poisson equation ##
 *
 * To solve a Poisson equation on a grid using finite differences, we need to create
 * an object that carries information about the system of equations
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_syseq
 *
 * Once this object is created, we can define the equation we are trying to solve.
 * In particular, the code below defines the left-hand side of the equation \f$\nabla^{2} \psi \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_obj_eq
 *
 * Before constructing the linear system we also calculate the divergence of the
 * vorticity \f$\nabla \cdot w \f$, which will be the right-hand side
 * of the equation
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp calc_div_vort
 *
 * Finally we can create the object FDScheme using the object **poisson_nn_helm**
 * as template parameter. In addition, in the constructor we have to specify the maximum
 * extension of the stencil, the domain, and the grid that will store the result.
 * At this point we can impose an equation to construct our SparseMatrix. In this
 * example we are imposing the Poisson equation with right-hand side equal to the
 * divergence of the vorticity (note: to avoid creating another field, we use
 * \f$\psi \f$ to temporarily store the divergence of the vorticity). Imposing the
 * equations produces an invertible SparseMatrix **A** and a right-hand-side Vector **b**.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp create_fdscheme
 *
 * Because we need \f$x = A^{-1}b \f$, we have to invert and solve the linear system.
 * In this case we use the conjugate-gradient method, an iterative solver. Such a method
 * is controlled by two parameters: one is the tolerance that determines when the
 * method has converged, the second one is the maximum number of iterations, to avoid
 * the method going into an infinite loop. After we set the parameters of the solver
 * we can get the solution **x**. Finally we copy the solution **x** back into the grid \f$\psi \f$.
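 *
 * The full chain of this step (divergence of \f$w\f$ as right-hand side, a Poisson
 * solve on a periodic box, then the gradient correction) can be checked stand-alone.
 * The code below is a toy 2D version in plain C++, independent of OpenFPM and PETSc:
 * a Jacobi iteration stands in for the Krylov solver, the central-composed (wide)
 * Laplacian stencil mirrors the one used here, and the field
 * \f$w = (\sin x, \cos y)\f$ on a periodic square is a made-up test case.
 *
```cpp
#include <algorithm>
#include <cmath>
#include <vector>
#include <cassert>

static const int    N  = 16;                    // grid points per side, periodic
static const double PI = 3.14159265358979323846;
static const double H  = 2.0*PI/N;              // grid spacing

static int pm(int i) {return (i % N + N) % N;}          // periodic wrap
static int id(int i, int j) {return pm(i)*N + pm(j);}   // linear index

// central-difference divergence of the field (wx,wy)
double max_div(const std::vector<double> & wx, const std::vector<double> & wy)
{
    double m = 0.0;
    for (int i = 0 ; i < N ; i++)
    for (int j = 0 ; j < N ; j++)
    {
        double d = (wx[id(i+1,j)] - wx[id(i-1,j)])/(2.0*H) +
                   (wy[id(i,j+1)] - wy[id(i,j-1)])/(2.0*H);
        m = std::max(m, std::fabs(d));
    }
    return m;
}

// Helmholtz-Hodge correction: solve lap(psi) = div(w) by Jacobi iteration
// on the central-composed (wide) stencil, then subtract grad(psi) from w
void project_div_free(std::vector<double> & wx, std::vector<double> & wy)
{
    std::vector<double> rhs(N*N), psi(N*N, 0.0), nxt(N*N);

    // right-hand side: divergence of w
    for (int i = 0 ; i < N ; i++)
    for (int j = 0 ; j < N ; j++)
    {rhs[id(i,j)] = (wx[id(i+1,j)] - wx[id(i-1,j)])/(2.0*H) +
                    (wy[id(i,j+1)] - wy[id(i,j-1)])/(2.0*H);}

    // Jacobi sweeps for (psi(i+2)+psi(i-2)+psi(j+2)+psi(j-2)-4 psi)/(4 H^2) = rhs
    for (int it = 0 ; it < 1000 ; it++)
    {
        for (int i = 0 ; i < N ; i++)
        for (int j = 0 ; j < N ; j++)
        {nxt[id(i,j)] = (psi[id(i+2,j)] + psi[id(i-2,j)] +
                         psi[id(i,j+2)] + psi[id(i,j-2)] -
                         4.0*H*H*rhs[id(i,j)]) / 4.0;}
        psi.swap(nxt);
    }

    // correction: w -= grad(psi)
    for (int i = 0 ; i < N ; i++)
    for (int j = 0 ; j < N ; j++)
    {
        wx[id(i,j)] -= (psi[id(i+1,j)] - psi[id(i-1,j)])/(2.0*H);
        wy[id(i,j)] -= (psi[id(i,j+1)] - psi[id(i,j-1)])/(2.0*H);
    }
}
```
 *
 * Because this Laplacian is exactly the central divergence composed with the central
 * gradient, the corrected field is divergence-free up to the residual of the
 * iterative solve.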
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp solve_petsc
 *
 * ### Note ###
 *
 * Because we are solving the Poisson equation with periodic boundary conditions, the
 * matrix has determinant equal to zero. This means that \f$\psi \f$ has no unique
 * solution (if it has one at all). In order to recover one, we have to ensure that the
 * integral of the right-hand side, i.e. of the divergence of the vorticity, is zero
 * (in our case it is). We also have to ensure that across time the integral of the
 * vorticity is conserved. (In our case it is: if we consider \f$\nu = 0 \f$ and
 * \f$\nabla \cdot w = 0 \f$ we can rewrite (5) in the conservative form
 * \f$\frac{Dw}{dt} = div(w \otimes v) \f$.)
 * It is also good to notice that the solution we get is the one with \f$\int w = 0 \f$.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp solve_petsc
 *
 * ## Correction ## {#vort_correction}
 *
 * After we got our solution for \f$\psi \f$, we can calculate the correction of the
 * vorticity by taking the gradient of \f$\psi \f$.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp vort_correction
 *
 * We also do a sanity check and control that the vorticity remains
 * divergence-free: we get the maximum value of the divergence and print out
 * its value.
 *
 */

/*
 * gr vorticity grid where we apply the correction
 * domain simulation domain
 *
 */
void helmotz_hodge_projection(grid_type & gr, const Box<3,float> & domain, petsc_solver<double> & solver, petsc_solver<double>::return_type & x_ , bool init)
{
	///////////////////////////////////////////////////////////////

	//! \cond [poisson_obj_eq] \endcond

	// Convenient constant
	constexpr unsigned int phi = 0;

	// We define a field phi_f
	typedef Field<phi,poisson_nn_helm> phi_f;

	// We assemble the poisson equation as the Laplacian of phi using the
	// central scheme (where both derivatives use the central scheme, not
	// forward composed with backward as usual)
	typedef Lap<phi_f,poisson_nn_helm,CENTRAL_SYM> poisson;

	//! \cond [poisson_obj_eq] \endcond

	//! \cond [calc_div_vort] \endcond

	// ghost get
	gr.template ghost_get<vorticity>();

	// ghost size of the psi function
	Ghost<3,long int> g(2);

	// Here we create a distributed grid to store the result of the helmotz projection
	grid_dist_id<3,float,aggregate<float>> psi(gr.getDecomposition(),gr.getGridInfo().getSize(),g);

	// Calculate the divergence of the vortex field
	auto it = gr.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		psi.template get<phi>(key) = 1.0f/gr.spacing(x)/2.0f*(gr.template get<vorticity>(key.move(x,1))[x] - gr.template get<vorticity>(key.move(x,-1))[x] ) +
									 1.0f/gr.spacing(y)/2.0f*(gr.template get<vorticity>(key.move(y,1))[y] - gr.template get<vorticity>(key.move(y,-1))[y] ) +
									 1.0f/gr.spacing(z)/2.0f*(gr.template get<vorticity>(key.move(z,1))[z] - gr.template get<vorticity>(key.move(z,-1))[z] );

		++it;
	}

	//! \cond [calc_div_vort] \endcond

	calc_and_print_max_div_and_int(gr);

	//! \cond [create_fdscheme] \endcond

	// In order to create a matrix that represents the poisson equation we have to
	// indicate the maximum extension of the stencil; we may also need an extra layer
	// of points in case we have to impose particular boundary conditions.
	Ghost<3,long int> stencil_max(2);

	// Here we define our finite-difference discretization scheme object
	FDScheme<poisson_nn_helm> fd(stencil_max, domain, psi);

	// impose the poisson equation, using right-hand side psi, on the full domain (psi.getDomainIterator)
	// the template parameter instead defines which property of psi carries the right-hand side;
	// in this case psi has only one property, so property 0 carries the right-hand side
	fd.template impose_dit<0>(poisson(),psi,psi.getDomainIterator());

	//! \cond [create_fdscheme] \endcond

	//! \cond [solve_petsc] \endcond

	timer tm_solve;
	if (init == true)
	{
		// Set the BiCGStab Krylov method to solve the linear system
		solver.setSolver(KSPBCGS);

		// Set the absolute tolerance that determines when the method has converged
		solver.setAbsTol(0.0001);

		// Set the maximum number of iterations
		solver.setMaxIter(500);

#ifdef USE_MULTIGRID

		solver.setPreconditioner(PCHYPRE_BOOMERAMG);
		solver.setPreconditionerAMG_nl(6);
		solver.setPreconditionerAMG_maxit(1);
		solver.setPreconditionerAMG_relax("SOR/Jacobi");
		solver.setPreconditionerAMG_cycleType("V",0,4);
		solver.setPreconditionerAMG_coarsenNodalType(0);
		solver.setPreconditionerAMG_coarsen("HMIS");

#endif
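	/* The solver configured above is PETSc's (BiCGStab, optionally with AMG
	 * preconditioning). The two controls just set, an absolute tolerance and a
	 * cap on iterations, can be illustrated on a dependency-free Krylov loop.
	 * The sketch below is a plain conjugate gradient on a made-up SPD system
	 * (a small 1D Dirichlet Laplacian), not the example's actual solver.
	 *
```cpp
#include <cmath>
#include <vector>
#include <cassert>

// Matrix-free action of a small SPD model matrix: the 1D Dirichlet
// Laplacian stencil (2,-1), standing in for the assembled matrix A.
static void matvec(const std::vector<double> & v, std::vector<double> & out)
{
    int n = v.size();
    for (int i = 0 ; i < n ; i++)
    {
        out[i] = 2.0*v[i];
        if (i > 0)   {out[i] -= v[i-1];}
        if (i < n-1) {out[i] -= v[i+1];}
    }
}

// Conjugate gradient with the two stopping controls discussed above:
// an absolute tolerance on the residual norm and a maximum iteration count.
int cg_solve(const std::vector<double> & b, std::vector<double> & sol,
             double abstol, int maxit)
{
    int n = b.size();
    std::vector<double> r(b), p(b), Ap(n);
    sol.assign(n, 0.0);

    double rr = 0.0;
    for (int i = 0 ; i < n ; i++) {rr += r[i]*r[i];}

    int it = 0;
    while (it < maxit && std::sqrt(rr) > abstol)
    {
        matvec(p, Ap);
        double pAp = 0.0;
        for (int i = 0 ; i < n ; i++) {pAp += p[i]*Ap[i];}
        double alpha = rr / pAp;
        for (int i = 0 ; i < n ; i++) {sol[i] += alpha*p[i]; r[i] -= alpha*Ap[i];}
        double rr_new = 0.0;
        for (int i = 0 ; i < n ; i++) {rr_new += r[i]*r[i];}
        for (int i = 0 ; i < n ; i++) {p[i] = r[i] + (rr_new/rr)*p[i];}
        rr = rr_new;
        it++;
    }
    return it;   // iterations actually used
}
```
	 *
	 * On an n x n SPD system, CG reaches the exact solution in at most n steps
	 * in exact arithmetic, so the iteration cap mainly guards against stagnation.
	 */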

		tm_solve.start();

		// Give to the solver A and b; it returns x, the solution
		solver.solve(fd.getA(),x_,fd.getB());

		tm_solve.stop();
	}
	else
	{
		tm_solve.start();
		solver.solve(x_,fd.getB());
		tm_solve.stop();
	}

	// copy the solution x to the grid psi
	fd.template copy<phi>(x_,psi);

	//! \cond [solve_petsc] \endcond

	//! \cond [vort_correction] \endcond

	psi.template ghost_get<phi>();

	// Correct the vorticity to make it divergence-free
	auto it2 = gr.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		gr.template get<vorticity>(key)[x] -= 1.0f/2.0f/psi.spacing(x)*(psi.template get<phi>(key.move(x,1))-psi.template get<phi>(key.move(x,-1)));
		gr.template get<vorticity>(key)[y] -= 1.0f/2.0f/psi.spacing(y)*(psi.template get<phi>(key.move(y,1))-psi.template get<phi>(key.move(y,-1)));
		gr.template get<vorticity>(key)[z] -= 1.0f/2.0f/psi.spacing(z)*(psi.template get<phi>(key.move(z,1))-psi.template get<phi>(key.move(z,-1)));

		++it2;
	}

	//! \cond [vort_correction] \endcond

	calc_and_print_max_div_and_int(gr);
}


/*!
 \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 3: Remeshing vorticity # {#vic_remesh_vort}
 *
 * After we have initialized the vorticity on the grid, we initialize the particles
 * at grid-like positions and interpolate the vorticity onto the particles. Because
 * the particle positions are grid-like and the interpolation kernels are symmetric,
 * the remeshing step simply reduces to initializing the particles at grid-like
 * positions and assigning the vorticity property of the particles from the
 * grid vorticity.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp remesh_part
 *
 */

//! \cond [remesh_part] \endcond

void remesh(particles_type & vd, grid_type & gr, Box<3,float> & domain)
{
	// Remove all particles because we reinitialize them at grid-like positions
	vd.clear();

	// Get a grid iterator
	auto git = vd.getGridIterator(gr.getGridInfo().getSize());

	// For each grid point
	while (git.isNext())
	{
		// Get the grid point in global coordinates (key); p is in local coordinates
		auto p = git.get();
		auto key = git.get_dist();

		// Add the particle
		vd.add();

		// Assign the position to the particle in a grid-like way
		vd.getLastPos()[x] = gr.spacing(x)*p.get(x) + domain.getLow(x);
		vd.getLastPos()[y] = gr.spacing(y)*p.get(y) + domain.getLow(y);
		vd.getLastPos()[z] = gr.spacing(z)*p.get(z) + domain.getLow(z);

		// Initialize the vorticity
		vd.template getLastProp<vorticity>()[x] = gr.template get<vorticity>(key)[x];
		vd.template getLastProp<vorticity>()[y] = gr.template get<vorticity>(key)[y];
		vd.template getLastProp<vorticity>()[z] = gr.template get<vorticity>(key)[z];

		// next grid point
		++git;
	}

	// redistribute the particles
	vd.map();
}

//! \cond [remesh_part] \endcond


/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 5: Compute velocity from vorticity # {#vic_vel_from_vort}
 *
 * Computing the velocity from the vorticity is done in the following way. Given
 *
 * \f$ \vec \nabla \times u = -w \f$
 *
 * we introduce the stream function \f$\phi \f$ defined by
 *
 * \f$ \nabla \times \phi = u \f$ (7)
 *
 * \f$ \nabla \cdot \phi = 0 \f$
 *
 * We obtain
 *
 * \f$ \nabla \times \nabla \times \phi = -w = \vec \nabla (\nabla \cdot \phi) - \nabla^{2} \phi \f$
 *
 * Because the divergence of \f$\phi \f$ is defined to be zero, we have
 *
 * \f$ \nabla^{2} \phi = w \f$
 *
 * The velocity can then be recovered from equation (7).
 *
 * Putting into code what is explained above, we again create a poisson
 * object
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp poisson_obj_eq
 *
 * In order to calculate the velocity from the vorticity, we solve a Poisson
 * equation like we did in the Helmholtz projection, but we do it for each
 * component \f$i \f$ of the vorticity. Once we have the solution in **psi_s**,
 * we copy the result back into the grid **gr_ps**.
We than calculate the * quality of the solution printing the norm infinity of the residual and * finally we save in the grid vector vield **phi_v** the compinent \f$i \f$ * (Copy from phi_s to phi_v is necessary because in phi_s is not a grid * and cannot be used as a grid like object)  incardon committed Jun 13, 2017 671 672 673  * * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp solve_poisson_comp *  incardon committed Jun 15, 2017 674  * We save the component \f$i \f$ of \f$\phi \f$ into **phi_v**  incardon committed Jun 13, 2017 675  *  incardon committed Jun 15, 2017 676  * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp copy_to_phi_v  incardon committed Jun 13, 2017 677  *  incardon committed Jun 15, 2017 678 679  * Once we filled phi_v we can implement (7) and calculate the curl of **phi_v** * to recover the velocity v  incardon committed Jun 13, 2017 680 681 682 683 684 685  * * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp curl_phi_v * * */  incardon committed Jul 26, 2017 686 void comp_vel(Box<3,float> & domain, grid_type & g_vort,grid_type & g_vel, petsc_solver::return_type (& phi_s)[3],petsc_solver & solver)  incardon committed Jun 08, 2017 687 {  incardon committed Jun 13, 2017 688 689 690  //! \cond [poisson_obj_eq] \endcond // Convenient constant  incardon committed Jun 08, 2017 691 692  constexpr unsigned int phi = 0;  incardon committed Jun 13, 2017 693  // We define a field phi_f  incardon committed Jun 08, 2017 694 695  typedef Field phi_f;  incardon committed Jun 13, 2017 696 697 698 699  // We assemble the poisson equation doing the // poisson of the Laplacian of phi using Laplacian // central scheme (where the both derivative use central scheme, not // forward composed backward like usually)  incardon committed Jun 08, 2017 700 701  typedef Lap poisson;  incardon committed Jun 13, 2017 702  //! 
\cond [poisson_obj_eq] \endcond

	// Maximum size of the stencil
	Ghost<3,long int> g(2);

	// Here we create a distributed grid to store the result
	grid_dist_id<3,float,aggregate> gr_ps(g_vort.getDecomposition(),g_vort.getGridInfo().getSize(),g);
	grid_dist_id<3,float,aggregate> phi_v(g_vort.getDecomposition(),g_vort.getGridInfo().getSize(),g);

	// We calculate and print the maximum divergence of the vorticity
	// and its integral
	calc_and_print_max_div_and_int(g_vort);

	// For each component solve a Poisson equation
	for (size_t i = 0 ; i < 3 ; i++)
	{
		//! \cond [solve_poisson_comp] \endcond

		// Copy the vorticity component i into gr_ps
		auto it2 = gr_ps.getDomainIterator();

		while (it2.isNext())
		{
			auto key = it2.get();

			// copy
			gr_ps.get(key) = g_vort.template get(key)[i];

			++it2;
		}

		// Maximum size of the stencil
		Ghost<3,long int> stencil_max(2);

		// Finite difference scheme
		FDScheme fd(stencil_max, domain, gr_ps);

		// Impose the Poisson equation using gr_ps (the vorticity) as the
		// right-hand side, on the full domain
		fd.template impose_dit(poisson(),gr_ps,gr_ps.getDomainIterator());
		solver.setAbsTol(0.01);

		// Get the vector that represents the right-hand side
		Vector & b = fd.getB();

		timer tm_solve;
		tm_solve.start();

		// Give A and b to the solver; in this case we are also giving
		// an initial guess phi_s calculated in the previous
		// time step
		solver.solve(phi_s[i],b);

		tm_solve.stop();

		// Calculate the residual
		solError serr;
		serr = solver.get_residual_error(phi_s[i],b);

		Vcluster & v_cl = create_vcluster();
		if (v_cl.getProcessUnitID() == 0)
		{std::cout << "Solved component " << i << " Error: " << serr.err_inf << std::endl;}

		// copy the solution to the grid
		fd.template copy(phi_s[i],gr_ps);

		//! \cond [solve_poisson_comp] \endcond

		//! \cond [copy_to_phi_v] \endcond

		// Copy the solution of component i into phi_v
		auto it3 = gr_ps.getDomainIterator();

		while (it3.isNext())
		{
			auto key = it3.get();

			phi_v.get(key)[i] = gr_ps.get(key);

			++it3;
		}

		//! \cond [copy_to_phi_v] \endcond
	}

	//!
\cond [curl_phi_v] \endcond

	phi_v.ghost_get();

	float inv_dx = 0.5f / g_vort.spacing(0);
	float inv_dy = 0.5f / g_vort.spacing(1);
	float inv_dz = 0.5f / g_vort.spacing(2);

	auto it3 = phi_v.getDomainIterator();

	// calculate the velocity from the curl of phi
	while (it3.isNext())
	{
		auto key = it3.get();

		float phi_xy = (phi_v.get(key.move(y,1))[x] - phi_v.get(key.move(y,-1))[x])*inv_dy;
		float phi_xz = (phi_v.get(key.move(z,1))[x] - phi_v.get(key.move(z,-1))[x])*inv_dz;
		float phi_yx = (phi_v.get(key.move(x,1))[y] - phi_v.get(key.move(x,-1))[y])*inv_dx;
		float phi_yz = (phi_v.get(key.move(z,1))[y] - phi_v.get(key.move(z,-1))[y])*inv_dz;
		float phi_zx = (phi_v.get(key.move(x,1))[z] - phi_v.get(key.move(x,-1))[z])*inv_dx;
		float phi_zy = (phi_v.get(key.move(y,1))[z] - phi_v.get(key.move(y,-1))[z])*inv_dy;

		g_vel.template get(key)[x] = phi_zy - phi_yz;
		g_vel.template get(key)[y] = phi_xz - phi_zx;
		g_vel.template get(key)[z] = phi_yx - phi_xy;

		++it3;
	}

	//!
\cond [curl_phi_v] \endcond
}

template void set_zero(grid_type & gr)
{
	auto it = gr.getDomainGhostIterator();

	// Set the grid (domain + ghost) to zero
	while (it.isNext())
	{
		auto key = it.get();

		gr.template get(key)[0] = 0.0;
		gr.template get(key)[1] = 0.0;
		gr.template get(key)[2] = 0.0;

		++it;
	}
}

template void set_zero(particles_type & vd)
{
	auto it = vd.getDomainIterator();

	// Set the property to zero on all particles
	while (it.isNext())
	{
		auto key = it.get();

		vd.template getProp(key)[0] = 0.0;
		vd.template getProp(key)[1] = 0.0;
		vd.template getProp(key)[2] = 0.0;

		++it;
	}
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 6: Compute right hand side # {#vic_rhs_calc}
 *
 * Computing the right-hand side amounts to calculating the stretching term
 * \f$(w \cdot \nabla) u \f$ plus the diffusion term \f$\nu \nabla^{2} w \f$.
 * For both operators we use a second-order central finite-difference scheme.
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp calc_rhs
 *
 */

//!
\cond [calc_rhs] \endcond

// Calculate the right-hand side of the vorticity formulation
template void calc_rhs(grid & g_vort, grid & g_vel, grid & g_dwp)
{
	// useful constant
	constexpr int rhs = 0;

	// calculate several pre-factors for the finite-difference stencil
	float fac1 = 1.0f*nu/(g_vort.spacing(0)*g_vort.spacing(0));
	float fac2 = 1.0f*nu/(g_vort.spacing(1)*g_vort.spacing(1));
	float fac3 = 1.0f*nu/(g_vort.spacing(2)*g_vort.spacing(2));

	float fac4 = 0.5f/(g_vort.spacing(0));
	float fac5 = 0.5f/(g_vort.spacing(1));
	float fac6 = 0.5f/(g_vort.spacing(2));

	auto it = g_dwp.getDomainIterator();

	g_vort.template ghost_get();

	while (it.isNext())
	{
		auto key = it.get();

		g_dwp.template get(key)[x] = fac1*(g_vort.template get(key.move(x,1))[x]+g_vort.template get(key.move(x,-1))[x])+
		                             fac2*(g_vort.template get(key.move(y,1))[x]+g_vort.template get(key.move(y,-1))[x])+
		                             fac3*(g_vort.template get(key.move(z,1))[x]+g_vort.template get(key.move(z,-1))[x])-
		                             2.0f*(fac1+fac2+fac3)*g_vort.template get(key)[x]+
		                             fac4*g_vort.template get(key)[x]*
		                             (g_vel.template get(key.move(x,1))[x] - g_vel.template get(key.move(x,-1))[x])+
		                             fac5*g_vort.template get(key)[y]*
		                             (g_vel.template get(key.move(y,1))[x] - g_vel.template get(key.move(y,-1))[x])+
		                             fac6*g_vort.template get(key)[z]*
		                             (g_vel.template get(key.move(z,1))[x] - g_vel.template get(key.move(z,-1))[x]);

		g_dwp.template get(key)[y] = fac1*(g_vort.template get(key.move(x,1))[y]+g_vort.template get(key.move(x,-1))[y])+
		                             fac2*(g_vort.template get(key.move(y,1))[y]+g_vort.template get(key.move(y,-1))[y])+
		                             fac3*(g_vort.template get(key.move(z,1))[y]+g_vort.template get(key.move(z,-1))[y])-
		                             2.0f*(fac1+fac2+fac3)*g_vort.template get(key)[y]+
		                             fac4*g_vort.template get(key)[x]*
		                             (g_vel.template get(key.move(x,1))[y] - g_vel.template get(key.move(x,-1))[y])+
		                             fac5*g_vort.template get(key)[y]*
		                             (g_vel.template get(key.move(y,1))[y] - g_vel.template get(key.move(y,-1))[y])+
		                             fac6*g_vort.template get(key)[z]*
		                             (g_vel.template get(key.move(z,1))[y] - g_vel.template get(key.move(z,-1))[y]);

		g_dwp.template get(key)[z] = fac1*(g_vort.template get(key.move(x,1))[z]+g_vort.template get(key.move(x,-1))[z])+
		                             fac2*(g_vort.template get(key.move(y,1))[z]+g_vort.template get(key.move(y,-1))[z])+
		                             fac3*(g_vort.template get(key.move(z,1))[z]+g_vort.template get(key.move(z,-1))[z])-
		                             2.0f*(fac1+fac2+fac3)*g_vort.template get(key)[z]+
		                             fac4*g_vort.template get(key)[x]*
		                             (g_vel.template get(key.move(x,1))[z] - g_vel.template get(key.move(x,-1))[z])+
		                             fac5*g_vort.template get(key)[y]*
		                             (g_vel.template get(key.move(y,1))[z] - g_vel.template get(key.move(y,-1))[z])+
		                             fac6*g_vort.template get(key)[z]*
		                             (g_vel.template get(key.move(z,1))[z] - g_vel.template get(key.move(z,-1))[z]);

		++it;
	}
}

//! \cond [calc_rhs] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 8: Runge-Kutta # {#vic_runge_kutta1}
 *
 * Here we do the first step of the Runge-Kutta update.
In particular we
 * update the vorticity and position of the particles. The right-hand-side
 * of the vorticity update is calculated on the grid and interpolated
 * onto the particles. The Runge-Kutta of order two
 * requires the following update for the vorticity and position as a first step
 *
 * \f$\boldsymbol w = \boldsymbol w + \frac{1}{2} \boldsymbol {rhs} \delta t \f$
 *
 * \f$\boldsymbol x = \boldsymbol x + \frac{1}{2} \boldsymbol u \delta t \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp runge_kutta_1
 *
 */

//! \cond [runge_kutta_1] \endcond

void rk_step1(particles_type & particles)
{
	auto it = particles.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		// Save the old vorticity
		particles.getProp(key)[x] = particles.getProp(key)[x];
		particles.getProp(key)[y] = particles.getProp(key)[y];
		particles.getProp(key)[z] = particles.getProp(key)[z];

		particles.getProp(key)[x] = particles.getProp(key)[x] + 0.5f * dt * particles.getProp(key)[x];
		particles.getProp(key)[y] = particles.getProp(key)[y] + 0.5f * dt * particles.getProp(key)[y];
		particles.getProp(key)[z] = particles.getProp(key)[z] + 0.5f * dt * particles.getProp(key)[z];

		++it;
	}

	auto it2 = particles.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		// Save the old position
		particles.getProp(key)[x] = particles.getPos(key)[x];
		particles.getProp(key)[y] = particles.getPos(key)[y];
		particles.getProp(key)[z] = particles.getPos(key)[z];

		particles.getPos(key)[x] = particles.getPos(key)[x] + 0.5f * dt * particles.getProp(key)[x];
		particles.getPos(key)[y] = particles.getPos(key)[y] + 0.5f * dt * particles.getProp(key)[y];
		particles.getPos(key)[z] = particles.getPos(key)[z] + 0.5f * dt * particles.getProp(key)[z];

		++it2;
	}

	particles.map();
}
//! \cond [runge_kutta_1] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 13: Runge-Kutta # {#vic_runge_kutta2}
 *
 * Here we do the second step of the Runge-Kutta update. In particular we
 * update the vorticity and position of the particles. The right-hand-side
 * of the vorticity update is calculated on the grid and interpolated
 * onto the particles. The Runge-Kutta of order two
 * requires the following update for the vorticity and position as a second step,
 * starting from the values saved in the first step
 *
 * \f$\boldsymbol w = \boldsymbol w_{old} + \boldsymbol {rhs} \delta t \f$
 *
 * \f$\boldsymbol x = \boldsymbol x_{old} + \boldsymbol u \delta t \f$
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp runge_kutta_2
 *
 */

//! \cond [runge_kutta_2] \endcond

void rk_step2(particles_type & particles)
{
	auto it = particles.getDomainIterator();

	while (it.isNext())
	{
		auto key = it.get();

		particles.getProp(key)[x] = particles.getProp(key)[x] + dt * particles.getProp(key)[x];
		particles.getProp(key)[y] = particles.getProp(key)[y] + dt * particles.getProp(key)[y];
		particles.getProp(key)[z] = particles.getProp(key)[z] + dt * particles.getProp(key)[z];

		++it;
	}

	auto it2 = particles.getDomainIterator();

	while (it2.isNext())
	{
		auto key = it2.get();

		particles.getPos(key)[x] = particles.getProp(key)[x] + dt * particles.getProp(key)[x];
		particles.getPos(key)[y] = particles.getProp(key)[y] + dt * particles.getProp(key)[y];
		particles.getPos(key)[z] = particles.getProp(key)[z] + dt * particles.getProp(key)[z];

		++it2;
	}

	particles.map();
}

//!
\cond [runge_kutta_2] \endcond

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Step 4-5-6-7: Do step # {#vic_do_step}
 *
 * The do step function assembles multiple steps, some of which were already explained.
 * First we interpolate the vorticity from particles to mesh
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp do_step_p2m
 *
 * then we calculate the velocity out of the vorticity and the right-hand-side,
 * recalling steps 5 and 6
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp step_56
 *
 * Finally we interpolate velocity and right-hand-side back to the particles
 *
 * \snippet Numerics/Vortex_in_cell/main_vic_petsc.cpp inte_m2p
 *
 */

template void do_step(vector & particles,
		              grid & g_vort,
		              grid & g_vel,
		              grid & g_dvort,
		              Box<3,float> & domain,
		              interpolate> & inte,
		              petsc_solver::return_type (& phi_s)[3],
		              petsc_solver & solver)
{
	constexpr int rhs = 0;

	//! \cond [do_step_p2m] \endcond

	set_zero(g_vort);
	inte.template p2m(particles,g_vort);

	g_vort.template ghost_put();

	//! \cond [do_step_p2m] \endcond

	//! \cond [step_56] \endcond

	// Calculate velocity from vorticity
	comp_vel(domain,g_vort,g_vel,phi_s,solver);
	calc_rhs(g_vort,g_vel,g_dvort);

	//! \cond [step_56] \endcond

	//!
\cond [inte_m2p] \endcond

	g_dvort.template ghost_get();
	g_vel.template ghost_get();
	set_zero(particles);
	set_zero(particles);
	inte.template m2p(g_dvort,particles);
	inte.template m2p(g_vel,particles);

	//! \cond [inte_m2p] \endcond
}

template void check_point_and_save(vector & particles,
		                           grid & g_vort,
		                           grid & g_vel,
		                           grid & g_dvort,
		                           size_t i)
{
	Vcluster & v_cl = create_vcluster();

	if (v_cl.getProcessingUnits() < 24)
	{
		particles.write("part_out_" + std::to_string(i),VTK_WRITER | FORMAT_BINARY);
		g_vort.template ghost_get();
		g_vel.template ghost_get();

		g_vel.write_frame("grid_velocity",i, VTK_WRITER | FORMAT_BINARY);
		g_vort.write_frame("grid_vorticity",i, VTK_WRITER | FORMAT_BINARY);
	}

	// In order to reduce the size of the saved data we apply a threshold.
// We only save particles with vorticity magnitude higher than 0.1
	vector_dist<3,float,aggregate> part_save(particles.getDecomposition(),0);

	auto it_s = particles.getDomainIterator();

	while (it_s.isNext())
	{
		auto p = it_s.get();

		float vort_magn = sqrt(particles.template getProp(p)[0] * particles.template getProp(p)[0] +
				               particles.template getProp(p)[1] * particles.template getProp(p)[1] +
				               particles.template getProp(p)[2] * particles.template getProp(p)[2]);

		if (vort_magn < 0.1)
		{
			++it_s;
			continue;
		}

		part_save.add();

		part_save.getLastPos()[0] = particles.getPos(p)[0];
		part_save.getLastPos()[1] = particles.getPos(p)[1];
		part_save.getLastPos()[2] = particles.getPos(p)[2];

		part_save.template getLastProp<0>() = vort_magn;

		++it_s;
	}

	part_save.map();

	// Save an HDF5 file for checkpoint restart
	particles.save("check_point");
}

/*! \page Vortex_in_cell_petsc Vortex in Cell 3D
 *
 * # Main # {#vic_main}
 *
 * The main function, as usual, calls **openfpm_init** and defines the domain
 * where the simulation takes place: the ghost size of the grid, the size of
 * the grid in grid units in each direction, and the periodicity of the domain,
 * in this case PERIODIC in each dimension. We then create our basic data
 * structures for the simulation: a grid for the vorticity **g_vort**, a grid
 * for the velocity **g_vel**, a grid for the right-hand-side of the vorticity
 * update, and the **particles** vector. Additionally we define the data
 * structure **phi_s[3]** that stores the velocity solution
 * of the previous time-step.
Keeping track of the previous solution for
 * the velocity helps the iterative solver find the solution more quickly.
 * Using the old velocity configuration as an initial guess, the solver will
 * converge in a few iterations, refining the old solution.
 *
 */

int main(int argc, char* argv[])
{
	// Initialize
	openfpm_init(&argc,&argv);
	{
	// Domain, a box
	// For the grid 1600x400x400 use
	// Box<3,float> domain({0.0,0.0,0.0},{22.0,5.57,5.57});
	Box<3,float> domain({0.0,0.0,0.0},{3.57,3.57,3.57});

	// Ghost (Not important in this case but required)
	Ghost<3,long int> g(2);

	// Grid points in each direction
	// if we use Re = 7500
	// long int sz[] = {1600,400,400};
	long int sz[] = {96,96,96};
	size_t szu[] = {(size_t)sz[0],(size_t)sz[1],(size_t)sz[2]};

	periodicity<3> bc = {{PERIODIC,PERIODIC,PERIODIC}};

	grid_type g_vort(szu,domain,g,bc);
	grid_type g_vel(g_vort.getDecomposition(),szu,g);
	grid_type g_dvort(g_vort.getDecomposition(),szu,g);
	particles_type particles(g_vort.getDecomposition(),0);

	// It stores the solution used to compute the velocity
	// It is used as initial guess every time we call the solver
	Vector phi_s[3];
	Vector x_;

	// Parallel object
	Vcluster & v_cl = create_vcluster();

	// print out the spacing of the grid
	if (v_cl.getProcessUnitID() == 0)
	{std::cout << "SPACING " << g_vort.spacing(0) << " "
<< g_vort.spacing(1) << " " << g_vort.spacing(2) << std::endl;}

	// initialize the ring, step 1
	init_ring(g_vort,domain);

	x_.resize(g_vort.size(),g_vort.getLocalDomainSize());
	x_.setZero();

	// Create a PETSc solver to get the solution x
	petsc_solver solver;

	// Do the Helmholtz projection, step 2
	helmotz_hodge_projection(g_vort,domain,solver,x_,true);

	// initialize the particles on a mesh, step 3
	remesh(particles,g_vort,domain);

	// Get the total number of particles
	size_t tot_part = particles.size_local();
	v_cl.sum(tot_part);
	v_cl.execute();

	// Now we set the size of phi_s
	phi_s[0].resize(g_vort.size(),g_vort.getLocalDomainSize());
	phi_s[1].resize(g_vort.size(),g_vort.getLocalDomainSize());
	phi_s[2].resize(g_vort.size(),g_vort.getLocalDomainSize());

	// And we initially set it to zero
	phi_s[0].setZero();
	phi_s[1].setZero();
	phi_s[2].setZero();

	// We create the interpolation object outside the loop
	// to avoid recreating it every cycle. Internally this object
	// creates some data structures to optimize the conversion of particle
	// position to sub-domain.
If there is no re-balancing it is safe
	// to reuse it
	interpolate> inte(particles,g_vort);

	// With more than 24 cores we rely on the HDF5 checkpoint-restart file
	if (v_cl.getProcessingUnits() < 24)
	{g_vort.write("grid_vorticity_init", VTK_WRITER | FORMAT_BINARY);}

	// Before entering the loop we check if we want to restart from
	// a previous simulation
	size_t i = 0;

	// if we have an argument
	if (argc > 1)
	{
		// take that argument
		std::string restart(argv[1]);