Parallel computation of large covariance matrices

computational-science · matrices · parallel-computing · gpu
2021-11-30 07:05:15

We need to compute covariance matrices of sizes ranging from 10000×10000 to 100000×100000. We have access to GPUs and clusters, and we wonder what the best parallel approach is for speeding up these computations.

2 Answers

The first thing to recognize is that you can do this with BLAS. If your data matrix is $X = [x_1\; x_2\; x_3\; \ldots] \in \mathbb{R}^{m \times n}$ (each $x$ is a column vector corresponding to one measurement; the rows are trials), then you can write the covariance as:

$$C_{ij} = E[x_i x_j] - E[x_i]\,E[x_j] = \frac{1}{n}\sum_k x_{ik} x_{jk} - \frac{1}{n^2}\Big(\sum_k x_{ik}\Big)\Big(\sum_k x_{jk}\Big)$$
which we can write as:
$$C = \frac{1}{n} X^T X - \frac{1}{n^2} (1^T X)^T (1^T X)$$
where $1^T$ is the row vector of all ones, so $(1^T X)$ is the row vector of the column sums of $X$. This can be written entirely in BLAS: $X^T X$ is either a GEMM or, better, a SYRK/HERK; you can get $b = 1^T X$ with a GEMV; $b^T b$ with GEMM or SYRK/HERK again; and the prefactors can be applied with SCAL.

Your data and result matrices can be around 64 GB, so they won't fit in the memory of a single node or of that node's GPUs. For a non-GPU cluster you might want to look at PBLAS, which feels a lot like ScaLAPACK. For GPUs, multi-node libraries don't quite exist yet. MAGMA has some form of underlying parallel BLAS implementation, but it may not be user-friendly. I don't think CULA does multi-node yet, but it is worth keeping an eye on. CUBLAS is single-node.

I would also suggest that you strongly consider implementing the parallelism yourself, especially if you are familiar with MPI and have to hook this into an existing codebase. That way, you can switch between CPU and GPU BLAS easily and begin and end with the data exactly where you want it. You should need no more than a few MPI_ALLREDUCE calls.

I implemented the formula given by @Max Hutchinson with CUBLAS and CUDA Thrust and compared the results against online covariance calculation tools; mine appear to be good. The code below is intended for QDA Bayes, so the given matrix may contain more than one class, and several covariance matrices are computed. I hope it is useful to someone.

#include <algorithm>
#include <numeric>
#include <vector>

#include <cublas_v2.h>
#include <thrust/device_vector.h>

//! Calculates one or more covariance matrices from the given data.
//  There can be many classes, hence many covariance matrices.
/*!
    \param inMatrix This vector contains the matrix data, with each trial stored contiguously.
    For example, if inMatrix=[1 2 3 4 5 6] and trialSizes=[2], we will work on the matrix:
        |1 4 |
        |2 5 |
        |3 6 | -> 2 trials, 3 features. Each column is one trial (sample); each row is one feature.
    \param trialSizes There can be many classes, hence many covariance matrices. The samples of
    all classes are given via inMatrix, but we need to know how many trials (samples) each class has.
    For example, if inMatrix=[1 2 3 4 5 6 7 8 9 10 11 12] and trialSizes=[2,2],
    we will work on the matrices:
        |1 4 |  |7 10 |
        |2 5 |  |8 11 |
        |3 6 |  |9 12 |  --> The total number of trials (samples) is 2 + 2 = 4,
                             so dimensionSize = inMatrix.size()/4 = 3 (feature vector size).
                         --> There are two entries in trialSizes, so each class has two samples.
*/
void multiQDACovianceCalculator(std::vector<float>& inMatrix, std::vector<int>& trialSizes)
{
    cublasHandle_t handle; // CUBLAS context
    int classCount = trialSizes.size();
    int rowSize = std::accumulate(trialSizes.begin(), trialSizes.end(), 0);
    int dimensionSize = inMatrix.size() / rowSize;
    float alpha = 1.0f;
    float beta = 0.0f;

    thrust::device_vector<float> d_cov1(dimensionSize * dimensionSize);
    thrust::device_vector<float> d_cov2(dimensionSize * dimensionSize);
    thrust::device_vector<float> d_covResult(dimensionSize * dimensionSize);

    thrust::device_vector<float> d_wholeMatrix(inMatrix);
    thrust::device_vector<float> d_meansVec(dimensionSize); // holds the per-feature sums over trials (1^T X); the 1/N factors are folded into alpha below
    float *meanVecPtr = thrust::raw_pointer_cast(d_meansVec.data());
    float *device2DMatrixPtr = thrust::raw_pointer_cast(d_wholeMatrix.data());
    auto maxTrialNumber = *std::max_element(trialSizes.begin(), trialSizes.end());
    thrust::device_vector<float> deviceVector(maxTrialNumber, 1.0f); // vector of ones for the column-sum GEMV

    cublasCreate(&handle);
    // Each iteration of this loop computes the covariance matrix of one class
    for (int i = 0; i < trialSizes.size(); i++)
    {
        beta = 0.0f; // reset: the geam step below sets beta to -1.0f
        // X * transpose(X) / N
        alpha = 1.0f / trialSizes[i];
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, dimensionSize, dimensionSize, trialSizes[i], &alpha,
            device2DMatrixPtr, dimensionSize, device2DMatrixPtr, dimensionSize, &beta,
            thrust::raw_pointer_cast(d_cov1.data()), dimensionSize);

        // Mean vector of each column
        alpha = 1.0f;
        cublasSgemv(handle, CUBLAS_OP_N, dimensionSize, trialSizes[i], &alpha, device2DMatrixPtr,
            dimensionSize, thrust::raw_pointer_cast(deviceVector.data()), 1, &beta, meanVecPtr, 1);

        // transpose(SumVec) * SumVec / (N*N)
        alpha = 1.0f / (trialSizes[i] * trialSizes[i]);
        cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N, dimensionSize, dimensionSize, 1, &alpha,
            meanVecPtr, 1, meanVecPtr, 1, &beta,
            thrust::raw_pointer_cast(d_cov2.data()), dimensionSize);

        alpha = 1.0f;
        beta = -1.0f;
        //  (X*transpose(X) / N) -  (MeanVec * transpose(MeanVec) / N*N)
        cublasSgeam(handle, CUBLAS_OP_N, CUBLAS_OP_N, dimensionSize, dimensionSize, &alpha,
            thrust::raw_pointer_cast(d_cov1.data()), dimensionSize, &beta, thrust::raw_pointer_cast(d_cov2.data()), 
            dimensionSize, thrust::raw_pointer_cast(d_covResult.data()), dimensionSize);

        // Go to other class and calculate its covarianceMatrix
        device2DMatrixPtr += trialSizes[i] * dimensionSize;
    }
    printVector(d_covResult); // user-defined helper (copies to host and prints); holds the last class's covariance
    cublasDestroy(handle);
}