Commit cc8365cf authored by incardon's avatar incardon

Adding missing testing files

parent 81970c3e
# Change Log
All notable changes to this project will be documented in this file.
## [1.1.0] February 2018
### Added
- Interface for Multi-vector dynamic load balancing
- Added Verlet list with balanced and memory-wise memory variants
- Increased performance of the grid ghost get
- Introduced forms to increase the performance of the grid iterator for stencil codes (see example 5_GrayScott)
- EMatrix: wrapped Eigen matrices compatible with vector_dist_id
- General tuning for high-dimensional vector_dist_id (up to 50 dimensions)
- Added Discrete Element Method example (8_DEM)
### Fixed
- Installation/detection of PETSC
- Fixed 2D IO in binary for vector
- Fixed 1D grid writer in ASCII mode
### Changed
- VerletList<3, double, FAST, shift<3, double> > is now VerletList<3, double, Mem_fast<>, shift<3, double> >
## [1.0.0] 13 September 2017 (Codename: Vortex)
......
......@@ -7,7 +7,7 @@ LDIR =
OBJ = main.o
%.o: %.cpp
$(CC) -O3 -g -c --std=c++11 -o $@ $< $(INCLUDE_PATH)
ps_cma_es: $(OBJ)
$(CC) -o $@ $^ $(CFLAGS) $(LIBS_PATH) $(LIBS)
......
/*!
* \page PS_CMA_ES Particle swarm CMA-ES Evolution strategy
*
* [TOC]
*
* # Optimization {#Opti_cma_es}
*
*
 * In this example we show how to code PS-CMA-ES. It is a simple variation of
 * CMA-ES in which multiple CMA-ES instances run in parallel, and the best
 * solution across them is used to produce a drift velocity toward that point.
*
* ## Introduction {#ps_cme_es}
*
 * In this example we try to find the global optimum of a function. In particular we
 * use the function F15 from the CEC 2005 benchmark test to validate that PS-CMA-ES works.
 * This example contains multiple files:
 *
 * * f15_cec_const.hpp definitions of constants for the F15 function
 * * f15_cec_fun.hpp the function itself
 *
 * The function is quite complicated; for reference please refer to the function
 * F15 "Hybrid Composition" in the CEC 2005 test. The function can be called with
 * hybrid_composition<dim>(x), where dim is the dimensionality and x is the point
 * where the function is evaluated. The dimensionality can go from 1 to 50.
*
 * Considering a function \f$ f \f$ from \f$ \mathbb{R}^{dim} \f$ to \f$ \mathbb{R} \f$,
 * the algorithm uses a set of particles to find the global optimum of the function in parallel.
 * Rather than sampling points randomly in the space, each particle carries a Gaussian
 * sampling distribution \f$ e^{\sigma \cdot x^tCx} \f$, with C a \f$ dim \cdot dim \f$ matrix.
 * At each step, for each particle p, **lambda** points are sampled using the sampling
 * distribution centered on the particle position. The covariance matrix and sigma are
 * subsequently adjusted to favor sampling around the best sampled points. To do this the
 * algorithm needs the eigenvalue decomposition \f$ C = B^{t}DB \f$, where \f$ D \f$ is a
 * diagonal matrix and \f$ B \f$ is the matrix of eigenvectors. Sigma is instead used to
 * shrink or enlarge the sampling area. The algorithm uses the vector **path_s** to
 * detect stagnation of the particle movement, and uses **path_c** (a transformed version
 * of **path_s**) to refine the sampling covariance matrix based on the direction the
 * particle is "moving" toward. PS-CMA-ES is just a variation in which, every **N_pso**
 * CMA-ES steps, the sampling distribution and position are biased toward the best point
 * found across all independent CMA-ES instances.
*
 * Explaining the CMA-ES algorithm in detail is beyond the scope of this tutorial example.
 * We will briefly go over the main steps. For a full reference on the CMA-ES
 * algorithm please refer to <a href="https://arxiv.org/abs/1604.00772">this paper</a>,
 * and for PS-CMA-ES refer to <a href="http://mosaic.mpi-cbg.de/docs/Mueller2009a.pdf">this paper</a>.
*
*
* ## Inclusions {#inclusions_and_constants}
*
 * In this example we use a set of particles, so we will use **vector_dist**, together
 * with Eigen dense matrices. Because the standard dense matrices are not compatible with
 * vector_dist, we use **EMatrix**, a simple wrapper around Eigen::Matrix that is
 * compatible with vector_dist. Because EMatrix is compatible with all the Eigen
 * functions, we can use it anywhere an Eigen matrix is expected. For the CMA-ES algorithm
 * we also need the eigenvalue/eigenvector decomposition and Jacobi rotations for the
 * particle-swarm part.
*
* \snippet Numerics/PS-CMA-ES/main.cpp ps_cma_es_inclusion
*
 * PS-CMA-ES requires several properties to be stored on the particles; some have already
 * been explained. Here we explain the others.
 *
 * * **Zeta** contains the lambda sampled points (before applying the covariance matrix
 * transformation), so it contains points sampled from a Gaussian with sigma = 1 centered at zero
 *
 * * **ord** contains the ordering of the lambda generated points from the best to
 * the worst
 *
 * * **stop** is a flag that indicates that the CMA-ES reached some stop criterion
 *
 * * **fithist** contains historical information about the particles, used to penalize them
 * in case they go out of the boundary. It is taken 1:1 from cmaes.m (production version),
 * see <a href="https://www.lri.fr/~hansen/cmaes_inmatlab.html">this page</a> (or Google it)
 *
 * * **weight** same concept as fithist: other information to penalize particles going out of
 * the boundary
 *
 * * **validfit** same concept as fithist: other information to penalize particles going out of
 * the boundary
 *
 * * **xold** contains the previous position of the particles, used in several calculations
 *
 * * **last_restart** when the CMA-ES sigma becomes very small, the CMA-ES has converged.
 * At this point we can do two things: one is to stop the CMA-ES, the other is to restart
 * it to explore the space better. In case of restart, this parameter indicates at which
 * iteration the last restart happened.
 *
 * * **iniphase** same concept as fithist: other information to penalize particles going out of
 * the boundary
 *
 * * **xmean_st** contains the new position of the particle; it will be stored as the
 * particle position at the end of the CMA-ES step
 *
 * * **meanz_st** contains the new position of the particle in a space where we do not apply
 * the covariance transformation (in practice a weighted sum of the Zeta sampled points)
*
* \snippet Numerics/PS-CMA-ES/main.cpp def_part_set
*
* ## Parameters {#ps_cma_par}
*
 * CMA-ES, and in turn PS-CMA-ES, require some parameters in order to work. Refer to the
 * papers above for a full explanation; here is a short one:
 *
 * * **dim** dimensionality of the test function
 *
 * * **lambda** number of sample points taken at each iteration by CMA-ES;
 * the suggested value is \f$ 4+floor(3*log(dim)) \f$
 *
 * * **mu** only the mu best points are considered to adapt the covariance matrix
 *
 * * **psoWeight** how much the PSO step biases the particle positions
 *
 * * **N_pso** number of CMA-ES steps before doing a PSO step (200 gives the CMA-ES the
 * possibility to explore the neighborhood and narrow down at least a funnel)
 *
 * * **stopTolX** stop criterion for CMA-ES: stop when the sampling area is small enough
 *
 * * **stopTolUpX** stop criterion for when the sampling area becomes too big
 *
 * * **restart_cma** if the CMA-ES reaches a stop criterion, reinitialize and restart
 *
 * * **hist_size** size of the array fithist (the default should generally be fine)
*
*/
//! [ps_cma_es_inclusion]
#define EIGEN_USE_LAPACKE
#include "Vector/vector_dist.hpp"
#include "Eigen/Dense"
#include "DMatrix/EMatrix.hpp"
#include <Eigen/Eigenvalues>
#include <Eigen/Jacobi>
#include <limits>
......@@ -26,33 +140,25 @@
#include <f15_cec_fun.hpp>
#include <boost/math/special_functions/sign.hpp>
//! [ps_cma_es_inclusion]
//! [parameters]
// PARAMETERS
constexpr int dim = 10;
// when you set dim, also set lambda to 4+std::floor(3*log(dim))
constexpr int lambda = 7;
constexpr int mu = lambda/2;
constexpr int hist_size = 21;
const double c_m = 1.0;
// Convenient global variables (their values are set later)
double mu_eff = 1.0;
double cs = 1.0;
double cc = 1.0;
......@@ -63,16 +169,32 @@ double stop_fitness = 1.0;
int eigeneval = 0;
double t_c = 0.1;
double b = 0.1;
double psoWeight = 0.7;
// number of cma-step before pso step
int N_pso = 200;
double stopTolX = 2e-11;
double stopTolUpX = 2000.0;
int restart_cma = 1;
size_t max_fun_eval = 1000000;
//! [parameters]
//! [def_part_set]
//////////// definitions of the particle set
constexpr int sigma = 0;
constexpr int Cov_m = 1;
constexpr int B = 2;
constexpr int D = 3;
constexpr int Zeta = 4;
constexpr int path_s = 5;
constexpr int path_c = 6;
constexpr int ord = 7;
constexpr int stop = 8;
constexpr int fithist = 9;
constexpr int weight = 10;
constexpr int validfit = 11;
constexpr int xold = 12;
constexpr int last_restart = 13;
constexpr int iniphase = 14;
constexpr int xmean_st = 15;
constexpr int meanz_st = 16;
typedef vector_dist<dim,double, aggregate<double,
Eigen::VectorXd[lambda],
Eigen::MatrixXd,
Eigen::MatrixXd,
Eigen::DiagonalMatrix<double,Eigen::Dynamic>,
......@@ -90,6 +212,8 @@ typedef vector_dist<dim,double, aggregate<double,
Eigen::VectorXd,
Eigen::VectorXd> > particle_type;
//! [def_part_set]
double generateGaussianNoise(double mu, double sigma)
{
......@@ -622,8 +746,6 @@ double adjust_sigma(double sigma, Eigen::MatrixXd & C)
{sigma = 5.0/sqrt(C(i,i));}
}
//if(any(vd.getProp<sigma>(p)*sqrt(diag) > maxdx)) sigma = minval(maxdx/sqrt(diag));
return sigma;
}
......@@ -668,15 +790,8 @@ void cma_step(particle_type & vd, int step, double & best,
for (size_t j = 0 ; j < lambda ; j++)
{
vd.getProp<Zeta>(p)[j] = generateGaussianVector<dim>();
arx[j] = xmean + vd.getProp<sigma>(p)*vd.getProp<B>(p)*vd.getProp<D>(p)*vd.getProp<Zeta>(p)[j];
// sample point has to be inside -5.0 and 5.0
for (size_t i = 0 ; i < dim ; i++)
{
......@@ -803,12 +918,8 @@ void cma_step(particle_type & vd, int step, double & best,
{vd.getProp<sigma>(p) = smaller;}
//Adapt step-size sigma
vd.getProp<sigma>(p) = vd.getProp<sigma>(p)*exp((cs/d_amps)*(vd.getProp<path_s>(p).norm()/chiN - 1));
// Update B and D from C
if (calc_bd)
......@@ -834,36 +945,11 @@ void cma_step(particle_type & vd, int step, double & best,
}
Eigen::MatrixXd tmp = vd.getProp<B>(p).transpose();
}
// Copy the new mean as position of the particle
for (size_t i = 0 ; i < dim ; i++)
{
// Check we do not go out of bound
vd.getPos(p)[i] = xmean(i);
}
vd.getProp<sigma>(p) = adjust_sigma(vd.getProp<sigma>(p),vd.getProp<Cov_m>(p));
......@@ -872,8 +958,6 @@ void cma_step(particle_type & vd, int step, double & best,
bool stop_tolX = true;
for (size_t i = 0 ; i < dim ; i++)
{
stop_tol &= (vd.getProp<sigma>(p)*std::max(fabs(vd.getProp<path_c>(p)(i)),sqrt(vd.getProp<Cov_m>(p)(i,i)))) < stopTolX;
stop_tolX &= vd.getProp<sigma>(p)*sqrt(vd.getProp<D>(p).diagonal()[i]) > stopTolUpX;
}
......@@ -891,9 +975,7 @@ void cma_step(particle_type & vd, int step, double & best,
}
if (vd.getProp<stop>(p) == true)
{
std::cout << "Stopped" << std::endl;
}
if (restart_cma && vd.getProp<stop>(p) == true)
{
......@@ -1040,6 +1122,9 @@ int main(int argc, char* argv[])
++it;
}
if (v_cl.rank() == 0)
{std::cout << "Starting PS-CMA-ES" << std::endl;}
double best = 0.0;
int best_i = 0;
......
openfpm_data @ d4dd1167
Subproject commit 92aa87ad50e32a2999b856387e70a0c161a36945
Subproject commit d4dd11672cd49676ec60d500bc55c26202142ca6
......@@ -41,6 +41,50 @@
#define CARTDEC_ERROR 2000lu
/*! \brief It spreads the sub-sub-domains on a regular cartesian grid of size dim
 *
 * \warning this function only guarantees that the number of divisions in each
 *          direction is 2^n for some n; it does not guarantee that the number of
 *          sub-sub-domains is preserved
 *
 * \param div number of divisions in each direction as output
 * \param n_sub number of sub-domains
 * \param dim_r dimension reduction
 *
 */
template<unsigned int dim> static void nsub_to_div2(size_t (& div)[dim], size_t n_sub, size_t dim_r)
{
for (size_t i = 0; i < dim; i++)
{
if (i < dim_r)
{div[i] = openfpm::math::round_big_2(pow(n_sub, 1.0 / dim_r));}
else
{div[i] = 1;}
}
}
/*! \brief It spreads the sub-sub-domains on a regular cartesian grid of size dim
 *
 * \warning this function does not guarantee that the number of
 *          sub-sub-domains is preserved
 *
 * \param div number of divisions in each direction as output
 * \param n_sub number of sub-domains
 * \param dim_r dimension reduction
 *
 */
template<unsigned int dim> static void nsub_to_div(size_t (& div)[dim], size_t n_sub, size_t dim_r)
{
for (size_t i = 0; i < dim; i++)
{
if (i < dim_r)
{div[i] = std::floor(pow(n_sub, 1.0 / dim_r));}
else
{div[i] = 1;}
}
}
#define COMPUTE_SKIN_SUB 1
/**
......@@ -352,13 +396,13 @@ public:
// calculate the sub-divisions
size_t div[dim];
for (size_t i = 0; i < dim; i++)
{div[i] = (size_t) ((bound.getHigh(i) - bound.getLow(i)) / cd.getCellBox().getP2()[i]);}
// Initialize the geo_cell structure
ie_ghost<dim,T>::Initialize_geo_cell(bound,div);
// Initialize shift vectors
ie_ghost<dim,T>::generateShiftVectors(domain,bc);
}
}
......@@ -512,9 +556,7 @@ public:
\endverbatim
*
*
*
* \param ghost margins for each dimensions (p1 negative part) (p2 positive part)
*
*
\verbatim
......@@ -1045,6 +1087,77 @@ public:
}
}
/*! \brief Set the best parameters for the decomposition
 *
 * Based on the number of processors and the dimensionality, it finds a "good" parameter setting
 *
 * \param domain_ domain to decompose
 * \param bc boundary conditions
 * \param ghost ghost size
 * \param sec_dist distribution grid. The distribution grid helps in reducing the underlying
 *        distribution problem, simplifying the decomposition problem. This is done in order
 *        to reduce the dynamic load-balancing problem
 *
 * \param dec_gran number of sub-sub-domains for each processor
 *
 */
void setGoodParameters(::Box<dim,T> domain_,
const size_t (& bc)[dim],
const Ghost<dim,T> & ghost,
size_t dec_gran,
const grid_sm<dim,void> & sec_dist = grid_sm<dim,void>())
{
size_t div[dim];
// Create a valid decomposition of the space
// Get the number of processor and calculate the number of sub-domain
// for decomposition
size_t n_proc = v_cl.getProcessingUnits();
size_t n_sub = n_proc * dec_gran;
// Calculate the maximum number (before merging) of sub-domain on
// each dimension
nsub_to_div2(div,n_sub,dim);
/* for (size_t i = 0; i < dim; i++)
{
div[i] = openfpm::math::round_big_2(pow(n_sub, 1.0 / dim));
}*/
if (dim > 3)
{
long int dim_r = dim-1;
do
{
// Check for adjustment
size_t tot_size = 1;
for (size_t i = 0 ; i < dim ; i++)
{tot_size *= div[i];}
// if each processor already keeps enough sub-sub-domains, the granularity is fine
if (tot_size / n_proc > 0.75*dec_gran )
{break;}
nsub_to_div(div,n_sub,dim_r);
dim_r--;
} while(dim_r > 0);
}
setParameters(div,domain_,bc,ghost,sec_dist);
}
/*! \brief return the parameters of the decomposition
*
* \param div_ number of divisions in each dimension
*
*/
void getParameters(size_t (& div_)[dim])
{
for (size_t i = 0 ; i < dim ; i++)
{div_[i] = this->gr.size(i);}
}
/*! \brief Set the parameter of the decomposition
*
......@@ -1057,7 +1170,11 @@ public:
* reduce the load/balancing dynamic load balancing problem
*
*/
void setParameters(const size_t (& div_)[dim], ::Box<dim,T> domain_, const size_t (& bc)[dim] ,const Ghost<dim,T> & ghost, const grid_sm<dim,void> & sec_dist = grid_sm<dim,void>())
void setParameters(const size_t (& div_)[dim],
::Box<dim,T> domain_,
const size_t (& bc)[dim],
const Ghost<dim,T> & ghost,
const grid_sm<dim,void> & sec_dist = grid_sm<dim,void>())
{
// set the boundary conditions
for (size_t i = 0 ; i < dim ; i++)
......
......@@ -272,9 +272,7 @@ public:
/*! \brief return number of moved vertices in all iterations so far
*
* \return number of moved vertices
*
*/
size_t getMaxMovedV()
......
......@@ -422,11 +422,6 @@ BOOST_AUTO_TEST_CASE( Space_distribution_test)
//! [refine with dist_parmetis the decomposition]
}
void print_test_v(std::string test, size_t sz)
{
if (create_vcluster().getProcessUnitID() == 0)
std::cout << test << " " << sz << "\n";
}
BOOST_AUTO_TEST_SUITE_END()
......
......@@ -304,6 +304,7 @@ public:
*
*/
ParMetisDistribution(ParMetisDistribution<dim,T> && pm)
:v_cl(pm.v_cl)
{
this->operator=(pm);
}
......
......@@ -10,6 +10,7 @@
#include "util/mathutil.hpp"
#include "NN/CellList/CellDecomposer.hpp"
#include "Grid/grid_key_dx_iterator_hilbert.hpp"
/*! \brief Class that distribute sub-sub-domains across processors using an hilbert curve
* to divide the space
......@@ -63,6 +64,7 @@ public:
*
*/
SpaceDistribution(SpaceDistribution<dim,T> && pm)
:v_cl(pm.v_cl)
{
this->operator=(pm);
}
......
......@@ -10,6 +10,8 @@
#include "common.hpp"
#include "nn_processor.hpp"
#include "Decomposition/shift_vect_converter.hpp"
/*! \brief structure that store and compute the internal and external local ghost box
*
......@@ -49,6 +51,8 @@ class ie_ghost
//! Temporal buffers to return temporal information
openfpm::vector<size_t> ids;
//! shift converter
shift_vect_converter<dim,T> sc_convert;
/*! \brief Given a local sub-domain i, it give the id of such sub-domain in the sent list
* for the processor p_id
......@@ -153,35 +157,9 @@ protected:
* \param domain box that describe the domain
*
*/
void generateShiftVectors(const Box<dim,T> & domain, size_t (& bc)[dim])
{
sc_convert.generateShiftVectors(domain,bc,shifts);
}
/*! \brief Initialize the geo cell list structure
......@@ -196,7 +174,7 @@ protected:
void Initialize_geo_cell(const Box<dim,T> & domain, const size_t (&div)[dim])
{
// Initialize the geo_cell structure
geo_cell.Initialize(domain,div,0);
}
/*! \brief Create the box_nn_processor_int (bx part) structure
......@@ -371,7 +349,7 @@ protected:
b_int.lc_proc = lc_proc;
// fill the shift id
b_int.shift_id = convertShift(nn_p_box_pos.get(k));
//
// Updating
......@@ -531,6 +509,20 @@ public:
return shifts;
}
/*! \brief It returns the id of the converted shift vector
 *
 * In high dimensions the number of shift vectors explodes exponentially, so we
 * expect some of the boundaries to be non-periodic in order to reduce the number
 * of shift vectors
 *
 * \param cmb combination
 *
 * \return the shift vector id
 *
 */
size_t convertShift(const comb<dim> & cmb)
{
return sc_convert.linId(cmb);
}
/*! \brief Get the number of Internal ghost boxes for one processor
*
* \param id near processor list id (the id go from 0 to getNNProcessor())
......
......@@ -159,7 +159,7 @@ class ie_loc_ghost
// that must be adjusted, each of this boxes define a shift in case of periodic boundary condition
for (long int i = dim-1 ; i >= 0 ; i--)
{
std::vector<comb<dim>> cmbs = hyp.getCombinations_R_bc(i,bc);
for (size_t j = 0 ; j < cmbs.size() ; j++)
{
......
......@@ -178,7 +178,7 @@ class nn_prcs
// that must be adjusted, each of this boxes define a shift in case of periodic boundary condition
for (long int i = dim-1 ; i >= 0 ; i--)
{
std::vector<comb<dim>> cmbs = hyp.getCombinations_R_bc(i,bc);
for (size_t j = 0 ; j < cmbs.size() ; j++)
{
......