Hello,
I’m trying to resolve this error and created a codesandbox. This is a model I trained on my own; I thought the problem was the video (HTML element) dimensions, but changing those didn’t solve the issue. Can anyone help?
Hi @Bass_Guitarist
It is not clear to me what your issue is.
You seem to be saying you trained a model. Does the issue arise at the prediction stage?
Can you share the code of your model?
import React, { useState, useEffect, useRef } from "react";
import * as tf from "@tensorflow/tfjs";

function MLownModel() {
  const [status, setStatus] = useState("Awaiting TF.js load");
  const [disableButton, setDisableButton] = useState(true);
  const [invisible, setInvisible] = useState(false);
  const [boxStyle, setBoxStyle] = useState("");
  const [pText, setPText] = useState();
  const [divBoxStyle, setDivBoxStyle] = useState();

  const videoWebCamRef = useRef(null);
  const liveViewRef = useRef(null);
  const demosSectionRef = useRef(null);
  const enableWebCamRef = useRef(null);
  const pRef = useRef(null);
  const divRef = useRef(null);

  var model = undefined;

  useEffect(() => {
    setStatus("Loaded TensorFlow.js - version: " + tf.version.tfjs);
    setBoxStyle();
    webCamSupported();
    loadCoco();
    // enableCam();
  });

  // Check if webcam access is supported.
  function getUserMediaSupported() {
    return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
  }

  // If webcam supported, add event listener to button for when user
  // wants to activate it to call enableCam function which we will
  // define in the next step.
  function webCamSupported() {
    if (getUserMediaSupported()) {
      setDisableButton(false);
    } else {
      console.warn("getUserMedia() is not supported by your browser");
    }
  }

  async function loadCoco() {
    const model_url = "https://rockycsumb.github.io/tfjs/model.json";
    model = await tf.loadLayersModel(model_url);
    setInvisible(false);
  }

  // Enable live webcam
  function enableCam() {
    // only continue when the model has loaded
    if (!model) {
      return;
    }
    // hide the button when clicked
    enableWebCamRef.current.remove();
    // force video, not audio
    const constraints = {
      video: { width: 320, height: 320 },
    };
    // activate webcam stream
    navigator.mediaDevices.getUserMedia(constraints).then(function (stream) {
      videoWebCamRef.current.srcObject = stream;
      videoWebCamRef.current.addEventListener("loadeddata", predictWebcam);
    });
  }

  function predictWebcam() {
    model.summary();
    console.log(tf.browser.fromPixels(videoWebCamRef.current));
    model
      .predict(tf.browser.fromPixels(videoWebCamRef.current).shape)
      .then(function (predictions) {
        console.log(predictions);
        // do some work with predictions
        window.requestAnimationFrame(predictWebcam);
      });
  }

  return (
    <div>
      <h1>TensorFlow.js Hello World</h1>
      <p>{status}</p>
      <h1>
        Multiple object detection using pre trained model in TensorFlow.js
      </h1>
      <p>
        Wait for the model to load before clicking the button to enable the
        webcam - at which point it will become visible to use.
      </p>
      <section
        id="demos"
        ref={demosSectionRef}
        className={invisible ? "invisible" : ""}
      >
        <p>
          Hold some objects up close to your webcam to get a real-time
          classification! When ready click "enable webcam" below and accept
          access to the webcam when the browser asks (check the top left of
          your window)
        </p>
        <div id="liveView" ref={liveViewRef} className="camView">
          <div
            ref={divRef}
            style={{ divBoxStyle }}
            className="highlighter"
          ></div>
          <p ref={pRef} style={{ boxStyle }}>
            {pText}
          </p>
          <button
            id="webcamButton"
            ref={enableWebCamRef}
            onClick={() => enableCam()}
            disabled={disableButton}
          >
            Enable Webcam
          </button>
          <video
            id="webcam"
            ref={videoWebCamRef}
            autoPlay
            muted
            width="320"
            height="320"
          ></video>
        </div>
      </section>
    </div>
  );
}

export default MLownModel;
Thank you. (Disclaimer: I have never used TF.js.)
predict most likely expects a tensor, while you are passing it the shape of your tensor.
Can you replace
model.predict(tf.browser.fromPixels(videoWebCamRef.current).shape)
with
model.predict(tf.browser.fromPixels(videoWebCamRef.current))?
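In context, predictWebcam would then start roughly like this (untested sketch on my part; from the docs, predict on a layers model returns a tensor synchronously rather than a promise, so the surrounding .then() would also need to change):

function predictWebcam() {
  model.summary();
  // pass the tensor itself rather than its .shape
  const input = tf.browser.fromPixels(videoWebCamRef.current);
  const predictions = model.predict(input);
  console.log(predictions);
  window.requestAnimationFrame(predictWebcam);
}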
Hi Tagoma,
I got this error now: expected input_1 to have 4 dimension(s), but got array with shape [320,320,3]
The input layer in the summary shows input_1 (InputLayer) [null,320,320,3], so I’m trying to figure out how to add the null at the beginning of the array to see if that resolves the problem.
You need to expand the dimensions so it has a batch size of 1. ML models typically expect a batch of inputs, so when you have just one thing to run you need to make it a batch of one.
In TensorFlow.js you can do that using tf.expandDims()
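With your snippet, something like this (just a sketch; the 0 adds the new axis at the front, turning [320, 320, 3] into [1, 320, 320, 3] so it matches input_1's [null, 320, 320, 3]):

const img = tf.browser.fromPixels(videoWebCamRef.current); // shape [320, 320, 3]
const batched = tf.expandDims(img, 0);                     // shape [1, 320, 320, 3]
const predictions = model.predict(batched);

You could equally call .expandDims(0) directly on the tensor returned by fromPixels.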
I cover this and a lot more in my course over on YouTube.
Wow Jason, I’ve been taking your course and it’s a highlight to get a reply from you. I think I particularly need to review your vids 3.4 - 3.6.2, but I’ll dive deeper.
No problem! Glad you are enjoying the course. Indeed it can be easy to forget to expand the dims until you get into the mindset of doing that. Caught me out the first time I learnt this stuff too.