I had an early implementation built on BodyPix where I rendered the segmentation pixel data through WebGL to composite the mask with a video texture. I need to update this to the new version. I can upload a test case if needed.
`drawMask` seems to still be using software rendering. I'm not sure why all the effort goes into hardware acceleration and WebAssembly for inference, only to force a software render of the mask and video to a 2D canvas.
How can I use the data returned from `segmentPeople` to render a mask on top of a video texture with WebGL? There is no example. Is the code below the right way to get pixel data to render to a WebGL texture as the mask? I tried using the data from `toBinaryMask`, but it didn't work; the output is an `ImageData`. In BodyPix I would get raw pixel data I could use with WebGL directly.
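For comparison, this is what the old BodyPix API handed back, as far as I remember (field names from memory, so take with a grain of salt):

```js
// BodyPix (old API): segmentPerson resolved to a result whose `data`
// field was a flat per-pixel mask, easy to hand straight to WebGL.
const person = await net.segmentPerson(localVideo);
// person.data   -> Uint8Array, one value per pixel (0 = background, 1 = person)
// person.width  -> mask width in pixels
// person.height -> mask height in pixels
```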
Drawing video to a canvas in software is resource-intensive and less efficient than WebGL.
Could I somehow use the WebGL context directly, e.g. `window.exposedContext`?
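For reference, this is roughly how I imagined getting at the backend's context (untested; `getGPGPUContext` is just my reading of the tfjs-backend-webgl source, so it may be the wrong accessor):

```js
// Assumption: the active tfjs backend is the WebGL backend, which
// appears to expose its context through getGPGPUContext().
const backend = tf.backend();
const backendGl = backend.getGPGPUContext ?
    backend.getGPGPUContext().gl : null;
```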
Here is what I'm trying:

```js
const segmentation = await segmenter.segmentPeople(localVideo, {
  flipHorizontal: false,
  multiSegmentation: false,
  segmentBodyParts: true,
});

// Probe the exposed context (if any) with a 1x1 readback.
const gl2 = window.exposedContext;
if (gl2) {
  gl2.readPixels(
      0, 0, 1, 1, gl2.RGBA, gl2.UNSIGNED_BYTE, new Uint8Array(4));
}

// toBinaryMask resolves to an ImageData: person pixels transparent,
// background pixels opaque black with the colors below.
const data = await bodySegmentation.toBinaryMask(
    segmentation, {r: 0, g: 0, b: 0, a: 0}, {r: 0, g: 0, b: 0, a: 255},
    false, 1);
```
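If a single-channel `gl.ALPHA` upload is still wanted, I assume the alpha channel has to be copied out first, since `ImageData.data` is packed RGBA (a sketch; `maskAlpha` is my own name):

```js
// Copy every 4th byte (alpha) out of the RGBA ImageData so it can be
// uploaded as a 1-byte-per-pixel gl.ALPHA texture.
const rgba = data.data;
const maskAlpha = new Uint8Array(data.width * data.height);
for (let i = 0; i < maskAlpha.length; i++) {
  maskAlpha[i] = rgba[i * 4 + 3];
}
```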
And then something like what I had before:
```js
gl.activeTexture(gl.TEXTURE1);
// `data` is an ImageData, so use the DOM-source overload of
// texImage2D; an RGBA ImageData can't be passed to the
// ArrayBufferView overload as a gl.ALPHA texture.
gl.texImage2D(
    gl.TEXTURE_2D,     // target
    0,                 // level
    gl.RGBA,           // internalformat
    gl.RGBA,           // format, must match internalformat in WebGL1
    gl.UNSIGNED_BYTE,  // type
    data               // ImageData from toBinaryMask (carries its own size)
);

gl.viewport(0, 0, metadata.width, metadata.height);
gl.activeTexture(gl.TEXTURE0);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, localVideo);
gl.uniform1i(frameTexLoc, 0);
gl.uniform1i(maskTexLoc, 1);
gl.uniform1f(texWidthLoc, metadata.width);
gl.uniform1f(texHeightLoc, metadata.height);
gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
```
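For completeness, the fragment shader I pair with this looks roughly like the following (a sketch; the varying name `texCoords` is mine, and the mask convention matches the `toBinaryMask` colors above):

```js
// Fragment shader source (WebGL1 GLSL in a JS string). toBinaryMask
// above makes the person transparent (alpha = 0) and the background
// opaque (alpha = 1), so invert the mask alpha to keep person pixels.
const fragmentShaderSrc = `
  precision mediump float;
  uniform sampler2D frameTex;  // unit 0: the video frame
  uniform sampler2D maskTex;   // unit 1: the binary mask
  varying vec2 texCoords;
  void main() {
    vec4 frame = texture2D(frameTex, texCoords);
    float background = texture2D(maskTex, texCoords).a;
    gl_FragColor = vec4(frame.rgb, 1.0 - background);
  }
`;
```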