Dear community.
I’m currently using a Google Coral Dev Board and have started deploying the Raspberry Pi examples from the following Git repository:
To do this, I followed the steps recommended by @khanhlvg in the following link:
Update Mendel OS and pip, create a virtual environment, activate it, clone the “object_detection” example, and install the requirements (a rough recap of the commands follows the list below):
argparse
numpy>=1.20.0 # To ensure compatibility with OpenCV on Raspberry Pi.
opencv-python~=4.5.3.56
tflite-support>=0.4.0
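For reference, the setup amounted roughly to the following commands (recapped from memory, so treat this as a sketch; the repository URL is the one linked above and is shown only as a placeholder here):
sudo apt-get update && sudo apt-get dist-upgrade   # update Mendel OS
python3 -m pip install --upgrade pip
python3 -m venv ~/tflite
source ~/tflite/bin/activate
git clone <object_detection-repo-url>   # the Git repository linked above
cd object_detection
pip install -r requirements.txt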
I also installed Flask to build a small video streaming application in Python, and it worked very well with a USB webcam.
After working through a few Flask bugs and some additional installs, my virtual environment contains the following packages:
(tflite) mendel@elusive-jet:~/DevBoard$ pip3 freeze
absl-py==1.1.0
cffi==1.15.0
click==8.1.3
Flask==2.1.2
flatbuffers==1.12
importlib-metadata==4.11.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.1
numpy==1.21.6
opencv-python==4.5.3.56
pkg_resources==0.0.0
protobuf==3.20.1
pybind11==2.9.2
pycparser==2.21
sounddevice==0.4.4
tflite-support==0.4.1
typing_extensions==4.2.0
Werkzeug==2.1.2
zipp==3.8.0
The next step was to modify the detect.py script, adapting it to work with Flask:
"""Main script to run the object detection routine."""
from flask import Flask
from flask import render_template
from flask import Response
import argparse
import sys
import time
import cv2
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision
import utils
app = Flask(__name__)
def run(model: str, camera_id: int, width: int, height: int, num_threads: int,
enable_edgetpu: bool) -> None:
"""Ejecute inferencias de forma continua en las imágenes adquiridas de la cámara.
Argumentos:
model: Nombre del modelo de detección de objetos TFLite.
camera_id: la identificación de la cámara que se pasará a OpenCV.
width: El ancho del cuadro capturado desde la cámara.
height: La altura del cuadro capturado desde la cámara.
num_threads: el número de subprocesos de la CPU para ejecutar el modelo.
enable_edgetpu: Verdadero/Falso si el modelo es un modelo EdgeTPU.
"""
# Variables para calcular FPS
counter, fps = 0, 0
start_time = time.time()
# Comience a capturar la entrada de video de la cámara
cap = cv2.VideoCapture(camera_id)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
# Parámetros de visualización
row_size = 20 # pixels
left_margin = 24 # pixels
text_color = (0, 0, 255) # red
font_size = 1
font_thickness = 1
fps_avg_frame_count = 10
# Inicializar el modelo de detección de objetos
base_options = core.BaseOptions(
file_name=model, use_coral=enable_edgetpu, num_threads=num_threads)
detection_options = processor.DetectionOptions(
max_results=3, score_threshold=0.3)
options = vision.ObjectDetectorOptions(
base_options=base_options, detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)
# Capture continuamente imágenes de la cámara y ejecute la inferencia
while cap.isOpened():
success, image = cap.read()
if not success:
sys.exit(
'ERROR: Unable to read from webcam. Please verify your webcam settings.'
)
counter += 1
image = cv2.flip(image, 1)
# Convierta la imagen de BGR a RGB según lo requiera el modelo TFLite.
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Cree un objeto TensorImage a partir de la imagen RGB.
input_tensor = vision.TensorImage.create_from_array(rgb_image)
# Ejecute la estimación de detección de objetos utilizando el modelo.
detection_result = detector.detect(input_tensor)
# Dibujar puntos clave y bordes en la imagen de entrada
image = utils.visualize(image, detection_result)
# Calcular los FPS
if counter % fps_avg_frame_count == 0:
end_time = time.time()
fps = fps_avg_frame_count / (end_time - start_time)
start_time = time.time()
# Mostrar los FPS
fps_text = 'FPS = {:.1f}'.format(fps)
text_location = (left_margin, row_size)
image=cv2.putText(image, fps_text, text_location, cv2.FONT_HERSHEY_PLAIN,
font_size, text_color, font_thickness)
# Mostrar resultado en JPG mediante la web de FLASK
(flag, encodedImage) = cv2.imencode(".jpg", image)
if not flag:
continue
yield(b'--image\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
bytearray(encodedImage) + b'\r\n')
"""# Detener el programa si se presiona la tecla ESC.
if cv2.waitKey(1) == 27:
break
cv2.imshow('object_detector', image)
"""
cap.release()
#cv2.destroyAllWindows()
#CODIGO FLASK
@app.route("/")
def index():
return render_template("index.html")
@app.route("/video_feed")
def video_feed():
return Response(run('efficientdet_lite0_edgetpu.tflite',1,640,480,4,True),
mimetype = "multipart/x-mixed-replace; boundary=image")
if __name__ == '__main__':
app.debug = True
app.run(host="0.0.0.0") #ACCESIBLES PARA TODAS LAS DIRECCIONES
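For context, the templates/index.html page served by the index route is minimal: it essentially just embeds the stream with an <img> tag whose src points at /video_feed, so the browser renders the multipart JPEG stream directly (the exact markup isn't important here).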
But after several attempts, I get the following error:
192.168.1.85 - - [20/Jun/2022 20:28:29] "GET /video_feed HTTP/1.1" 200 -
* Detected change in '/home/mendel/DevBoard/detect.py', reloading
* Restarting with stat
* Debugger is active!
* Debugger PIN: 135-396-847
192.168.1.85 - - [20/Jun/2022 20:52:48] "GET / HTTP/1.1" 200 -
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/wsgi.py", line 462, in __next__
return self._next()
File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
for item in iterable:
File "/home/mendel/DevBoard/detect.py", line 68, in run
detector = vision.ObjectDetector.create_from_options(options)
File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/vision/object_detector.py", line 83, in create_from_options
options.base_options.to_pb2(), options.detection_options.to_pb2())
TypeError: create_from_options(): incompatible function arguments. The following argument types are supported:
1. (arg0: tflite::python::task::core::BaseOptions, arg1: tflite::task::processor::DetectionOptions) -> tensorflow_lite_support.python.task.vision.pybinds._pywrap_object_detector.ObjectDetector
Invoked with: <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'>, <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'>
192.168.1.85 - - [20/Jun/2022 20:52:48] "GET /video_feed HTTP/1.1" 200 -
Could you please advise me on what I should try to correct this error?
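In case it helps, the "Invoked with: MagicMock" lines in the traceback suggest that options.base_options.to_pb2() and options.detection_options.to_pb2() are returning mock objects rather than the expected protobufs. Here is a minimal check I plan to use to see whether detector creation also fails outside Flask (a diagnostic sketch only; same model file and options as in my script):
# minimal_detector_check.py -- diagnostic sketch, no Flask involved
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision

base_options = core.BaseOptions(
    file_name='efficientdet_lite0_edgetpu.tflite', use_coral=True, num_threads=4)
detection_options = processor.DetectionOptions(
    max_results=3, score_threshold=0.3)
options = vision.ObjectDetectorOptions(
    base_options=base_options, detection_options=detection_options)

# If this prints a MagicMock type instead of a protobuf message, the problem
# lies in the tflite-support installation/environment, not in the Flask code.
print(type(options.base_options.to_pb2()))

detector = vision.ObjectDetector.create_from_options(options)
print('Detector created:', detector)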
My Coral Dev Board version is:
(tflite) mendel@elusive-jet:~/DevBoard$ cat /etc/os-release
PRETTY_NAME="Mendel GNU/Linux 5 (Eagle)"
NAME="Mendel GNU/Linux"
ID=mendel
ID_LIKE=debian
HOME_URL="https://coral.ai/"
SUPPORT_URL="https://coral.ai/"
BUG_REPORT_URL="https://coral.ai/"
VERSION_CODENAME="eagle"
Thank you very much, and greetings from Peru!