WASM backend for TensorFlow.js throws "Unhandled Rejection (RuntimeError): index out of bounds" in a React app

javascript reactjs webassembly tensorflow.js
2021-05-21 11:40:11

I am trying to set up the WASM backend for the blazeface face-detection model in a React app. The vanilla JS demo runs it for hours without any error, but in my app, after the camera has been open for about 3-5 minutes, it throws "Unhandled Rejection (RuntimeError): index out of bounds".

The whole app crashes with this error. Judging from the error log below, my guess is that it is related to the disposeData() and disposeTensor() functions, which are involved in garbage collection, but I don't know whether this is a bug in the WASM library itself. Do you know why this happens?
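For context, the backend is selected with the public tf.setBackend/tf.ready API before the render loop starts. A minimal sketch (the helper name and the injected `tf` parameter are mine, used so the snippet stays self-contained; in the app `tf` is the imported @tensorflow/tfjs module with @tensorflow/tfjs-backend-wasm registered):

```javascript
// Minimal sketch: select the WASM backend once, before the render loop.
async function initWasmBackend(tf) {
  const ok = await tf.setBackend("wasm"); // resolves true if the switch succeeded
  if (!ok) throw new Error("WASM backend could not be initialized");
  await tf.ready(); // wait until the backend (and its .wasm binary) is ready
  return tf.getBackend(); // "wasm" on success
}
```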

My renderPrediction function is below.

  renderPrediction = async () => {
    const model = await blazeface.load({ maxFaces: 1, scoreThreshold: 0.95 });
    if (this.play) {
      const canvas = this.refCanvas.current;
      const ctx = canvas.getContext("2d");
      const returnTensors = false;
      const flipHorizontal = true;
      const annotateBoxes = true;
      const predictions = await model.estimateFaces(
        this.refVideo.current,
        returnTensors,
        flipHorizontal,
        annotateBoxes
      );

      if (predictions.length > 0) {
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        for (let i = 0; i < predictions.length; i++) {
          if (returnTensors) {
            predictions[i].topLeft = predictions[i].topLeft.arraySync();
            predictions[i].bottomRight = predictions[i].bottomRight.arraySync();
            if (annotateBoxes) {
              predictions[i].landmarks = predictions[i].landmarks.arraySync();
            }
          }
          const start = predictions[i].topLeft;
          const end = predictions[i].bottomRight;
          const size = [end[0] - start[0], end[1] - start[1]];
          if (annotateBoxes) {
            const landmarks = predictions[i].landmarks;

            ctx.fillStyle = "blue";
            for (let j = 0; j < landmarks.length; j++) {
              const x = landmarks[j][0];
              //console.log(typeof x) // number
              const y = landmarks[j][1];
              ctx.fillRect(x, y, 5, 5);
            }
          }
        }
      }
      requestAnimationFrame(this.renderPrediction);
    }
  };

Full error log:

Unhandled Rejection (RuntimeError): index out of bounds
(anonymous function)
unknown
./node_modules/@tensorflow/tfjs-backend-wasm/dist/tf-backend-wasm.esm.js/</tt</r</r._dispose_data
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/wasm-out/tfjs-backend-wasm.js:9



disposeData
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/backend_wasm.ts:115

  112 | 
  113 | disposeData(dataId: DataId) {
  114 |   const data = this.dataIdMap.get(dataId);
> 115 |   this.wasm._free(data.memoryOffset);
      | ^  116 |   this.wasm.tfjs.disposeData(data.id);
  117 |   this.dataIdMap.delete(dataId);
  118 | }

disposeTensor
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:838

  835 |     'tensors');
  836 | let res;
  837 | const inputMap = {};
> 838 | inputs.forEach((input, i) => {
      | ^  839 |     inputMap[i] = input;
  840 | });
  841 | return this.runKernelFunc((_, save) => {

dispose
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/tensor.ts:388
endScope
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:983
tidy/<
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431

  428 | if (kernel != null) {
  429 |     kernelFunc = () => {
  430 |         const numDataIdsBefore = this.backend.numDataIds();
> 431 |         out = kernel.kernelFunc({ inputs, attrs, backend: this.backend });
      | ^  432 |         const outInfos = Array.isArray(out) ? out : [out];
  433 |         if (this.shouldCheckForMemLeaks()) {
  434 |             this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos);

scopedRun
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:448

  445 | // inputsToSave and outputsToSave. Currently this is the set of ops
  446 | // with kernel support in the WASM backend. Once those ops and
  447 | // respective gradients are modularised we can remove this path.
> 448 | if (outputsToSave == null) {
      | ^  449 |     outputsToSave = [];
  450 | }
  451 | const outsToSave = outTensors.filter((_, i) => outputsToSave[i]);

tidy
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431

  428 | if (kernel != null) {
  429 |     kernelFunc = () => {
  430 |         const numDataIdsBefore = this.backend.numDataIds();
> 431 |         out = kernel.kernelFunc({ inputs, attrs, backend: this.backend });
      | ^  432 |         const outInfos = Array.isArray(out) ? out : [out];
  433 |         if (this.shouldCheckForMemLeaks()) {
  434 |             this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos);

tidy
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/globals.ts:190

  187 |     const tensors = getTensorsInContainer(container);
  188 |     tensors.forEach(tensor => tensor.dispose());
  189 | }
> 190 | /**
  191 |  * Keeps a `tf.Tensor` generated inside a `tf.tidy` from being disposed
  192 |  * automatically.
  193 |  */

estimateFaces
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/node_modules/@tensorflow-models/blazeface/dist/blazeface.esm.js:17
Camera/this.renderPrediction
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:148

  145 | const returnTensors = false;
  146 | const flipHorizontal = true;
  147 | const annotateBoxes = true;
> 148 | const predictions = await model.estimateFaces(
      | ^  149 |   this.refVideo.current,
  150 |   returnTensors,
  151 |   flipHorizontal,

async*Camera/this.renderPrediction
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:399

  396 |         // }
  397 |       }
  398 |     }
> 399 |     requestAnimationFrame(this.renderPrediction);
      | ^  400 |   }
  401 | };
  402 | 



1 Answer

After running a prediction, you need to release the tensors from device memory; otherwise they accumulate and eventually cause errors like the one you are seeing. The simplest fix is to call tf.dispose() manually on the tensors you want to free, right after the prediction is made:

const predictions = await model.estimateFaces(
  this.refVideo.current,
  returnTensors,
  flipHorizontal,
  annotateBoxes
);

tf.dispose(this.refVideo.current);

You can also use tf.tidy(), which does this automatically: wrap the function that handles the image tensors for the prediction in it. This GitHub issue covers it well, but I am not sure it fits your case, because tf.tidy() only works with synchronous function calls.
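A minimal sketch of that restriction (the `preprocessFrame` helper and the injected `tf` parameter are hypothetical, added for illustration): tf.tidy() disposes every tensor created inside its synchronous callback except the one it returns, so it suits a synchronous preprocessing step, not the async estimateFaces() call itself.

```javascript
// Sketch: tidy() cleans up intermediates created in the callback and
// keeps only the returned tensor alive.
function preprocessFrame(tf, video) {
  return tf.tidy(() => {
    const frame = tf.browser.fromPixels(video); // intermediate: disposed by tidy
    return frame.expandDims(0);                 // returned: kept alive
  });
}
```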

Alternatively, you can wrap the code that handles the image tensors as follows, which will also clean up any unused tensors:

tf.engine().startScope()
// handling image tensors function
tf.engine().endScope()
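Unlike tf.tidy(), explicit engine scopes work around async calls. A minimal sketch (the helper name and injected `tf`/`model` parameters are mine, for illustration); try/finally guarantees endScope() runs even if estimateFaces() rejects, and with returnTensors = false the predictions are plain arrays, so they remain usable after endScope() disposes the leftover tensors:

```javascript
// Sketch: bracket the async prediction in an explicit engine scope so
// tensors allocated during the call are released every frame.
async function estimateFacesScoped(tf, model, video) {
  tf.engine().startScope();
  try {
    return await model.estimateFaces(video, false, true, true);
  } finally {
    tf.engine().endScope(); // disposes tensors allocated since startScope()
  }
}
```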