Real-time classification with TensorFlow and OpenCV

Tags: data mining, Python, classification, TensorFlow, Inception
2021-09-22 17:27:34

I'm testing the machine-learning waters and used the TF Inception model to retrain the network to classify the objects I want.

Initially my predictions were run on locally stored images, and I realized that unpersisting the graph from file takes 2-5 seconds, with the actual prediction taking roughly the same amount of time.

I have since adapted my code to incorporate a camera feed from OpenCV, but with the times above, video lag is unavoidable.

A time hit is expected during the initial graph load; that is why initialSetup() is run ahead of time, but 2-5 seconds is just absurd. I feel that for my current application, real-time classification, this is not the best way to load the graph. Is there another way to do this? I know that for the mobile version, TensorFlow recommends slimming the graph down. Would slimming it down be the way to go here? In case it matters, my graph is currently 87.4MB.

Aside from that, is there a way to speed up the prediction process?

import os
import cv2
import timeit
import numpy as np
import tensorflow as tf

camera = cv2.VideoCapture(0)

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
               in tf.gfile.GFile('retrained_labels.txt')]

def grabVideoFeed():
    grabbed, frame = camera.read()
    return frame if grabbed else None

def initialSetup():
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    start_time = timeit.default_timer()

    # This takes 2-5 seconds to run
    # Unpersists graph from file
    with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    print('Took {} seconds to unpersist the graph'.format(timeit.default_timer() - start_time))

def classify(image_data):
    print('********* Session Start *********')

    with tf.Session() as sess:
        start_time = timeit.default_timer()

        # Feed the image_data as input to the graph and get first prediction
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

        print('Tensor', softmax_tensor)

        print('Took {} seconds to feed data to graph'.format(timeit.default_timer() - start_time))

        start_time = timeit.default_timer()

        # This takes 2-5 seconds as well
        predictions = sess.run(softmax_tensor, {'Mul:0': image_data})

        print('Took {} seconds to perform prediction'.format(timeit.default_timer() - start_time))

        start_time = timeit.default_timer()

        # Sort to show labels of first prediction in order of confidence
        top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]

        print('Took {} seconds to sort the predictions'.format(timeit.default_timer() - start_time))

        for node_id in top_k:
            human_string = label_lines[node_id]
            score = predictions[0][node_id]
            print('%s (score = %.5f)' % (human_string, score))

        print('********* Session Ended *********')

initialSetup()

while True:
    frame = grabVideoFeed()

    if frame is None:
        raise SystemError('Issue grabbing the frame')

    frame = cv2.resize(frame, (299, 299), interpolation=cv2.INTER_CUBIC)

    # adhere to TF graph input structure (1 x 299 x 299 x 3, values scaled to [-0.5, 0.5])
    numpy_frame = np.asarray(frame)
    numpy_frame = cv2.normalize(numpy_frame.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
    numpy_final = np.expand_dims(numpy_frame, axis=0)

    classify(numpy_final)

    cv2.imshow('Main', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

camera.release()
cv2.destroyAllWindows()

Edit 1

After debugging my code, I realized that creating a session is both a resource- and time-consuming operation.

In the previous code, a new session was created for every OpenCV frame, on top of running the prediction. Wrapping the OpenCV operations inside a single session gives a massive improvement in time, but it still adds a huge overhead on the first run; the first prediction takes 2-3 seconds. After that, predictions take about 0.5 seconds, which still leaves the camera feed laggy.

import os
import cv2
import timeit
import numpy as np
import tensorflow as tf

camera = cv2.VideoCapture(0)

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
               in tf.gfile.GFile('retrained_labels.txt')]

def grabVideoFeed():
    grabbed, frame = camera.read()
    return frame if grabbed else None

def initialSetup():
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    start_time = timeit.default_timer()

    # This takes 2-5 seconds to run
    # Unpersists graph from file
    with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    print('Took {} seconds to unpersist the graph'.format(timeit.default_timer() - start_time))

initialSetup()

with tf.Session() as sess:
    start_time = timeit.default_timer()

    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

    print('Took {} seconds to feed data to graph'.format(timeit.default_timer() - start_time))

    while True:
        frame = grabVideoFeed()

        if frame is None:
            raise SystemError('Issue grabbing the frame')

        frame = cv2.resize(frame, (299, 299), interpolation=cv2.INTER_CUBIC)

        cv2.imshow('Main', frame)

        # adhere to TF graph input structure (1 x 299 x 299 x 3, values scaled to [-0.5, 0.5])
        numpy_frame = np.asarray(frame)
        numpy_frame = cv2.normalize(numpy_frame.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
        numpy_final = np.expand_dims(numpy_frame, axis=0)

        start_time = timeit.default_timer()

        # This takes 2-5 seconds as well
        predictions = sess.run(softmax_tensor, {'Mul:0': numpy_final})

        print('Took {} seconds to perform prediction'.format(timeit.default_timer() - start_time))

        start_time = timeit.default_timer()

        # Sort to show labels of first prediction in order of confidence
        top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]

        print('Took {} seconds to sort the predictions'.format(timeit.default_timer() - start_time))

        for node_id in top_k:
            human_string = label_lines[node_id]
            score = predictions[0][node_id]
            print('%s (score = %.5f)' % (human_string, score))

        print('********* Session Ended *********')

        if cv2.waitKey(1) & 0xFF == ord('q'):
            sess.close()
            break

camera.release()
cv2.destroyAllWindows()
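
One way to hide the first-run overhead noted above is to push a dummy frame through the graph right after the session is created, before the camera loop starts. The following is a minimal sketch (not part of the original code; it assumes the same 'Mul:0' input and 'final_result:0' output names used above):

import numpy as np
import tensorflow as tf

# Hypothetical warm-up helper: one throwaway sess.run() absorbs the slow
# first prediction so the camera loop starts at steady-state speed.
def warm_up(sess, input_name='Mul:0', output_name='final_result:0'):
    # Dummy input with the same shape/range as the real frames: 1 x 299 x 299 x 3
    dummy = np.zeros((1, 299, 299, 3), dtype=np.float32)
    softmax_tensor = sess.graph.get_tensor_by_name(output_name)
    sess.run(softmax_tensor, {input_name: dummy})

# Usage: call warm_up(sess) immediately after `with tf.Session() as sess:` and
# before entering the `while True:` camera loop.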

Edit 2

After fiddling around, I stumbled upon graph quantization and graph transforms, and these are the results I got.

Original graph: 87.4MB

Quantized graph: 87.5MB

Transformed graph: 87.1MB

Eight-bit calculation: 22MB, but I ran into this issue when trying to use it.
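
For reference, graph transforms of this kind can also be driven from Python. Below is a minimal sketch, assuming TF 1.x with the tensorflow.tools.graph_transforms wrapper available; the transform list is illustrative and not necessarily the exact set used to produce the numbers above:

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the retrained graph produced by the Inception retraining script
with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Apply an illustrative set of transforms; 'quantize_weights' shrinks the
# stored weights, and adding 'quantize_nodes' would yield the eight-bit graph.
transformed_def = TransformGraph(
    graph_def,
    ['Mul'],            # input node of the retrained Inception graph
    ['final_result'],   # softmax output node
    ['strip_unused_nodes',
     'fold_constants(ignore_errors=true)',
     'fold_batch_norms',
     'quantize_weights'])

with tf.gfile.FastGFile('transformed_graph.pb', 'wb') as f:
    f.write(transformed_def.SerializeToString())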

1 Answer

I have a project of my own that works in a similar way (but with a much simpler model), and running my predictions in real time takes about 0.1 seconds. You did the right thing by reusing the session; I do the same. My guess is that your bottleneck is the size of the model. As far as I know, the Inception model is quite large. There will always be a trade-off between model complexity and run-time prediction speed. The Inception model is accurate, but nobody ever said it was fast.