-
Use the same seed. See hoomd-blue/hoomd/RandomNumbers.h, lines 115 to 131 (commit daf7303).
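A minimal illustration of the same-seed idea (generic C++ using only the standard library, not the `hoomd::RandomGenerator` API in RandomNumbers.h): derive the per-particle random stream from values that are identical on every rank, such as a user seed, the timestep, and the particle's global tag, so the noise drawn for a particle does not depend on which rank owns it.

```cpp
#include <cstdint>
#include <random>

// Sketch only: hash the (seed, timestep, tag) triple into one 64-bit value and
// use it to seed a generator, so every rank reproduces the same draw for a
// given particle at a given step.
double particle_noise(uint64_t user_seed, uint64_t timestep, uint64_t tag)
{
    uint64_t s = user_seed;
    s ^= timestep + 0x9E3779B97F4A7C15ULL + (s << 6) + (s >> 2); // mix in the step
    s ^= tag + 0x9E3779B97F4A7C15ULL + (s << 6) + (s >> 2);      // mix in the tag
    std::mt19937_64 rng(s);
    std::normal_distribution<double> gauss(0.0, 1.0);
    return gauss(rng); // identical on every rank for a given (seed, step, tag)
}
```

RandomNumbers.h achieves the same effect with a counter-based generator, so the stream is a pure function of the seed and the counters passed in rather than of any rank-local state.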
I have no idea.
-
I am writing code that computes hydrodynamic interactions between particles with the Rotne-Prager mobility matrix (which I will call M here). The particle velocities are computed as U = M \cdot F + M^{1/2} \cdot P, where U is the velocity (unknown), F is the force (known), and P is a random vector. The computation is split into two parts using Ewald summation; the wave-space part largely follows the procedure of PPPMForceCompute. The simulation runs correctly on a single CPU, but fails on multiple CPUs with MPI.
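In display form (restating the paragraph above; the real/wave superscripts are just my labels for the two Ewald parts):

$$
U = M \cdot F + M^{1/2} \cdot P, \qquad M = M^{\mathrm{real}} + M^{\mathrm{wave}},
$$

where the wave-space contribution is evaluated on a grid following PPPMForceCompute.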
The issue most likely lies in the computation of M^{1/2} \cdot P. I have two major problems.
I have copied and pasted the Lanczos code below. The goal is to find `u`; `psi` is the random vector, and `mobilityRealUF(vj, v)` performs the matrix-vector multiplication.
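For context, the quantity such an iteration approximates is the standard Lanczos estimate (my notation; the details of the code may differ):

$$
M^{1/2} \, \psi \;\approx\; \|\psi\| \, V_m \, T_m^{1/2} \, e_1,
$$

where $V_m = [v_1, \dots, v_m]$ collects the Lanczos vectors and $T_m$ is the tridiagonal matrix built from the $\alpha_j$ and $\beta_j$ coefficients (which appear as `alpha[j]` and `beta_temp` in the code below).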
```cpp
void TwoStepRPY::brownianLanczos(Scalar* psi,
                                 Scalar* iter_ff_v,
                                 Scalar* iter_ff_vj,
                                 Scalar* iter_ff_vjm1,
                                 Scalar* iter_ff_V,
                                 Scalar* iter_ff_uold,
                                 Scalar* u)
{
    unsigned int group_size = m_group->getNumMembers();
    unsigned int numel = group_size * 6;

    // ... (local setup and the rank-local sum of squares that initializes
    //      vnorm are not shown in this excerpt) ...

#ifdef ENABLE_MPI
    if (m_pdata->getDomainDecomposition())
    {
        MPI_Barrier(m_exec_conf->getMPICommunicator());
        // sum the rank-local partial value of vnorm across ranks
        MPI_Allreduce(MPI_IN_PLACE,
                      &vnorm,
                      1,
                      MPI_FLOAT,
                      MPI_SUM,
                      m_exec_conf->getMPICommunicator());
    }
#endif
    vnorm = sqrt(vnorm);
    psinorm = vnorm;

    // ... (start of the Lanczos loop over j: mobilityRealUF(vj, v) and the
    //      rank-local dot product that forms alpha_temp are not shown) ...

#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            // sum the rank-local partial value of alpha_temp across ranks
            MPI_Allreduce(MPI_IN_PLACE,
                          &alpha_temp,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        // store alpha_{j}
        alpha[j] = alpha_temp;

        // ... (orthogonalization and the rank-local sum that forms vnorm are
        //      not shown) ...

#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            MPI_Allreduce(MPI_IN_PLACE,
                          &vnorm,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        vnorm = sqrt(vnorm);
        beta_temp = vnorm;

        // ... (early-termination branch; the `if (...)` that opens it is not
        //      shown in this excerpt) ...
#ifdef ENABLE_MPI
            MPI_Barrier(m_exec_conf->getMPICommunicator());
#endif
            m = j + 1;
            break;
        }

#ifdef ENABLE_MPI
        MPI_Barrier(m_exec_conf->getMPICommunicator());
#endif
    } // for j

    // ... (the same pass is repeated inside a convergence loop; its opening
    //      `while (...)` and the repeated setup are not shown) ...

#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            MPI_Allreduce(MPI_IN_PLACE,
                          &alpha_temp,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        // store alpha_{j}
        alpha[j] = alpha_temp;

#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            MPI_Allreduce(MPI_IN_PLACE,
                          &vnorm,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        vnorm = sqrt(vnorm);
        beta_temp = vnorm;
#ifdef ENABLE_MPI
        MPI_Barrier(m_exec_conf->getMPICommunicator());
#endif
        m = j + 1;
        break;
        }

        // ... (rank-local sums that form stepnorm and fnorm are not shown) ...

#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            MPI_Allreduce(MPI_IN_PLACE,
                          &stepnorm,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        stepnorm = sqrt(stepnorm);
#ifdef ENABLE_MPI
        if (m_pdata->getDomainDecomposition())
        {
            MPI_Barrier(m_exec_conf->getMPICommunicator());
            MPI_Allreduce(MPI_IN_PLACE,
                          &fnorm,
                          1,
                          MPI_FLOAT,
                          MPI_SUM,
                          m_exec_conf->getMPICommunicator());
        }
#endif
        fnorm = sqrt(fnorm);
        stepnorm = stepnorm / fnorm;
#ifdef ENABLE_MPI
        MPI_Barrier(m_exec_conf->getMPICommunicator());
#endif
    } // while
```
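For reference, here is the reduction pattern repeated throughout the excerpt, written as a standalone helper (a sketch of the pattern, not code from the class above): each rank sums the contribution of its local entries, the partial sums are combined with MPI_Allreduce, and only then is the square root taken. The MPI datatype passed to the reduction must match the C type being reduced (MPI_DOUBLE for double, MPI_FLOAT for float).

```cpp
#include <mpi.h>
#include <cmath>

// Sketch of the norm-via-Allreduce pattern used above: sum of squares over
// the rank-local entries, global sum across ranks, then sqrt.
double global_norm(const double* v, int n_local, MPI_Comm comm)
{
    double sumsq = 0.0;
    for (int i = 0; i < n_local; ++i)
        sumsq += v[i] * v[i]; // rank-local contribution
    // Combine the partial sums from all ranks; every rank receives the total.
    MPI_Allreduce(MPI_IN_PLACE, &sumsq, 1, MPI_DOUBLE, MPI_SUM, comm);
    return std::sqrt(sumsq);
}
```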