I am receiving data for my neural network over WiFi: I get one image and put it directly into the model. There is no queue or any waiting in between.
I am running into an issue where TensorFlow performs in bursts.
For example, I will receive one image and put it in my model, and TensorFlow will process it in 50 milliseconds. Then the next image arrives, I put it in my model, TensorFlow processes it in 50 milliseconds, and then TensorFlow pauses for 0.5 seconds.
Then I will receive four images; after each one arrives, TensorFlow takes about 50 milliseconds to process it, and once all four are done, TensorFlow pauses for 0.4 seconds.
Then three images again, and afterwards TensorFlow pauses for 0.4 seconds.
And so on…
I tried the latest version of TensorFlow, 2.17 (CPU only, since recent releases don't support GPU on Windows), and also 2.10.1, the last version with GPU support on Windows. (Yes, processing takes place on the GPU; I checked.)
I also shared my code below. I am not doing anything with the output data right now.
If I comment out the “result = model(savedimage)” line, I rapidly start receiving images over the network, so this is not a network issue. It is only that one line of code, where I put data into my model, that performs in bursts and slows down performance.
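For anyone who wants to reproduce the measurement, here is a minimal timing sketch for separating the receive time from the inference time (the “timed” helper is illustrative and not part of my code):

import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print the wall-clock duration of the enclosed block in milliseconds.
    start = time.perf_counter()
    yield
    print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")

# Hypothetical usage around the two suspect spots in the code below:
#     with timed("receive"):
#         ... the recv loop ...
#     with timed("inference"):
#         result = model(savedimage)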
Why is this happening?
Is there some kind of optimization that makes TensorFlow work in bursts? Is there a way to disable it?
Regards,
Lolcocks
Also posted on GitHub issues:
import tensorflow as tf
import numpy as np
from keras.models import load_model, Model
from tensorflow.keras.utils import load_img, img_to_array
from PIL import Image
import os
import socket
import cv2
from io import BytesIO

model = load_model("PartitionedModel/YOLOv3Model.h5", compile=False)

SERVER_HOST = "0.0.0.0"
SERVER_PORT = 5001
BUFFER_SIZE = 8192

while True:
    # A new listening socket is created for every image; the sender opens
    # one connection per image and signals end-of-image by closing it.
    s = socket.socket()
    s.bind((SERVER_HOST, SERVER_PORT))
    s.listen(1)
    client_socket, address = s.accept()

    # Read the whole image into memory until the sender closes the connection.
    image_data = BytesIO()
    while True:
        bytes_read = client_socket.recv(BUFFER_SIZE)
        if not bytes_read:
            break
        image_data.write(bytes_read)

    client_socket.close()
    s.close()

    # Decode the received bytes and run a single inference on this image.
    savedimage = Image.open(image_data)
    result = model(savedimage)
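In case the input format matters, here is a minimal sketch of how the image could be converted to a fixed-size batch before the model call. The 416x416 size is an assumption based on the usual YOLOv3 default, and “preprocess” is a hypothetical helper, not something in my code:

import numpy as np
from tensorflow.keras.utils import img_to_array

def preprocess(pil_image, size=(416, 416)):
    # Resize to a fixed input size (416x416 assumed), scale pixel values
    # to [0, 1], and add the batch dimension so every call sees the same shape.
    resized = pil_image.convert("RGB").resize(size)
    array = img_to_array(resized) / 255.0    # shape (416, 416, 3), float32
    return np.expand_dims(array, axis=0)     # shape (1, 416, 416, 3)

# result = model(preprocess(savedimage))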
Hi @Lolcocks, is there any specific reason for sending a single image at a time for prediction? If possible, could you please try collecting a batch of images and running inference on them together for better performance? Thank you.
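For example, something along these lines (a rough sketch that assumes the images have already been decoded to equally sized arrays; the names are illustrative):

import numpy as np

BATCH_SIZE = 4

def infer_batched(model, decoded_images):
    # decoded_images: a list of equally shaped arrays, e.g. (416, 416, 3).
    # Stack them into one (N, H, W, 3) batch and run a single forward pass
    # instead of N separate model calls.
    batch = np.stack(decoded_images[:BATCH_SIZE])
    return model(batch)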