I’m (successfully) using pose estimation to draw a skeleton over the person in real time in a browser.
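For context, my detect-and-draw loop is roughly the following (a simplified sketch; I'm using the @tensorflow-models/pose-detection wrapper, and drawSkeleton / drawLabel stand in for my own drawing helpers):

import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

const detector = await poseDetection.createDetector(
  poseDetection.SupportedModels.MoveNet,
  { modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING }
);

async function frame() {
  // One pose per frame; keypoints come back in video pixel coordinates.
  const [pose] = await detector.estimatePoses(video);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (pose) {
    drawSkeleton(ctx, pose); // lines between keypoints (my helper)
    drawLabel(ctx, pose);    // the text that ends up mirrored (my helper)
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);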
I found the easy CSS hack to flip the video horizontally, which I need so the live view reads like a mirror for the subject:
video {
transform: scaleX(-1);
}
But that flip applies to everything drawn on top of the video, so the text I draw over the live image comes out mirrored too. I considered mirroring the frames in pixels instead: copy each frame to a canvas, run pose estimation on THAT, and then flip things back (roughly the sketch below), but that seems like a lot of per-frame overhead. Is there a parameter I'm missing in the latest movenet/singlepose/lightning wrapper that handles mirroring for me?
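Concretely, the workaround I'd like to avoid looks something like this (sketch only; mirror is an offscreen canvas, detector / ctx / draw helpers as in the snippet above):

const mirror = document.createElement('canvas');
mirror.width = video.videoWidth;
mirror.height = video.videoHeight;
const mctx = mirror.getContext('2d');

async function frameViaMirrorCanvas() {
  // Flip each frame in pixels (not CSS) so the model sees the mirrored image...
  mctx.save();
  mctx.scale(-1, 1);
  mctx.drawImage(video, -mirror.width, 0, mirror.width, mirror.height);
  mctx.restore();

  // ...then the keypoints already match the mirrored view, and text drawn on
  // the visible, un-flipped canvas stays readable.
  const [pose] = await detector.estimatePoses(mirror);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (pose) {
    drawSkeleton(ctx, pose);
    drawLabel(ctx, pose);
  }
  requestAnimationFrame(frameViaMirrorCanvas);
}

That's an extra full-frame copy on every frame, which is the overhead I'd rather skip.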
The 2018 PoseNet post's API example had a "flip horizontal" parameter, which was nice.
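If I remember right, that old call looked roughly like this (the 0.5 / 16 values are just the ones the example used, not anything I depend on):

const net = await posenet.load();
// estimateSinglePose(image, imageScaleFactor, flipHorizontal, outputStride)
const pose = await net.estimateSinglePose(video, 0.5, true, 16);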