How to deploy object detection on a website using TensorFlow.js

Hello, I'm new here, and I'm also new to TensorFlow, TensorFlow.js, and anything related to machine learning.

I need to create a locally hosted website that detects objects. I was able to train my model using YOLOv5. From my research, the only way to deploy my model to a website is to convert it to TensorFlow.js format, and the conversion gave me only multiple .bin files and a .json file.

I found a blog post (Build Custom Object Detection Web Application Using TensorFlow.js | by Kosta Malsev | The Startup | Medium) that explains how to deploy a model, but I'm not sure whether that tutorial also applies to my trained model.

After following the steps from that post and running the code, the output doesn't show class names on the bounding boxes.
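For context, my understanding is that the label text has to come from mapping each predicted class index back into the dataset's class-name list. Here is the kind of helper I expected to see (CLASS_NAMES is a placeholder for the names in my data.yaml, and drawLabel assumes a 2D canvas context):

```javascript
// Placeholder: must match the class order in your YOLOv5 data.yaml.
const CLASS_NAMES = ['cat', 'dog'];

// Turn a predicted class index and confidence score into label text.
function formatLabel(classIndex, score) {
  const name = CLASS_NAMES[classIndex] ?? `class ${classIndex}`;
  return `${name} ${(score * 100).toFixed(1)}%`;
}

// Draw the label just above a bounding box on the canvas.
function drawLabel(ctx, label, x, y) {
  ctx.font = '14px sans-serif';
  ctx.fillStyle = 'rgba(0, 0, 0, 0.7)';
  const width = ctx.measureText(label).width;
  ctx.fillRect(x, y - 18, width + 8, 18);
  ctx.fillStyle = '#fff';
  ctx.fillText(label, x + 4, y - 5);
}
```

If CLASS_NAMES is empty or in the wrong order, the boxes render but the names are missing or wrong, which sounds like what I'm seeing.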

I have no experience with TensorFlow or even JavaScript, so hopefully someone can help me deploy my trained model. Thank you.

Welcome to the forum and the TensorFlow.js community of course :slight_smile:

You may find Hugo Zanini’s tutorial very robust for converting a Python YOLO model to TensorFlow.js and getting it working well in the browser at a decent FPS:
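For reference, the YOLOv5 repo's export script can emit the TensorFlow.js format directly; a sketch of the command, where the weights path is a placeholder for your own checkpoint:

```shell
# From a clone of the ultralytics/yolov5 repository:
python export.py --weights runs/train/exp/weights/best.pt --include tfjs

# This writes a *_web_model/ directory containing model.json plus the
# sharded .bin weight files, which you host next to your web page.
```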

If you know some basic JavaScript, you may also find my edX course useful (free to take); the later chapters go through how to use our command-line converter and more to go from Python to JS, along with example code showing how to load and run such models.
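To make the load-and-run part concrete, here is a minimal browser sketch assuming a YOLOv5 graph-model export with a 640x640 input; the model path and videoElement are placeholders:

```javascript
import * as tf from '@tensorflow/tfjs';

async function detect(videoElement) {
  // model.json references the .bin weight shards, which must be served
  // from the same directory; the path is a placeholder.
  const model = await tf.loadGraphModel('/models/yolov5/model.json');

  // YOLOv5 TF.js exports typically expect a normalized 640x640 input.
  const input = tf.tidy(() =>
    tf.browser.fromPixels(videoElement)
      .resizeBilinear([640, 640])
      .div(255)
      .expandDims(0)
  );

  // executeAsync is required because the exported graph contains
  // control-flow ops (e.g. non-max suppression).
  const predictions = await model.executeAsync(input);
  input.dispose();
  return predictions;
}
```

The exact output tensor layout depends on how the model was exported, so inspect `predictions` in the console before wiring up the box-drawing code.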

Hi @Ryle! You can deploy YOLOv5 models exported to TensorFlow.js format using the yolov5js NPM package: GitHub - SkalskiP/yolov5js: Effortless YOLOv5 javascript deployment


Nice find @SkalskiP! Have you tried it out yourself yet? Any FPS stats?

Hi @Jason, I actually built it. :wink: That is a very good point. I’ll add an FPS benchmark to my to-do list! As for what I can say right now: on my MacBook Pro 2020 (Intel), inference takes around 50-60 ms.

Oh awesome! An honor to meet you :slight_smile:

By default TensorFlow.js runs with a WebGL context. I'm curious what GPU you are running there, or are you running via the WASM backend for CPU execution?


I can already see that this conversation is bringing lots of ideas for the next steps in the package. I’m running the default backend, and to my knowledge this MacBook does not have any real GPU.

If I understand correctly, using the default backend is not the best choice if I don't have a GPU. Would it be better to go with the WASM backend on CPU-only machines?

You can try tf.setBackend('wasm') to force WebAssembly and see how that fares.
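A minimal sketch of switching backends, assuming the WASM backend package is installed alongside the core library:

```javascript
import * as tf from '@tensorflow/tfjs';
// Importing this package registers the 'wasm' backend with the core library.
import '@tensorflow/tfjs-backend-wasm';

async function useWasm() {
  await tf.setBackend('wasm');
  await tf.ready();             // wait for backend initialization
  console.log(tf.getBackend()); // 'wasm' if initialization succeeded
}
```

On CPU-only machines this often beats WebGL for models like YOLOv5, but it's worth benchmarking both backends on your actual hardware.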

Please see the other thread on this forum about op parity differences between the backends, though. I think this thread has the link to the spreadsheet: http://discuss.ai.google.dev/t/tensorflow-js-op-support-matrix-if-you-are-having-python-model-conversion-issues-check-this-first/4000
