How can we apply transfer learning to the pre-trained BlazePose GHUM 3D model with a custom set of images? I want pose detection to also cover people using wheelchairs, walkers, and crutches.
Using TensorFlow.js, I'd also like to track the joints of a simulated 3D human character displayed on a webpage. As of now, the BlazePose GHUM models will not work for this, as they were only trained on real humans, not rendered characters.
How can we use transfer learning in this context, so that BlazePose GHUM can be trained to do 3D pose detection for virtual human characters displayed on a web page?
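For reference, this is roughly how I load BlazePose GHUM 3D with the @tensorflow-models/pose-detection package today (a minimal sketch using the TF.js runtime; the config values shown are just the ones I use):

```ts
import '@tensorflow/tfjs-backend-webgl';
import '@tensorflow/tfjs-converter';
import * as poseDetection from '@tensorflow-models/pose-detection';

// Load BlazePose (GHUM) with the TF.js runtime and estimate
// 2D + 3D keypoints for a video or image element.
async function trackJoints(source: HTMLVideoElement | HTMLImageElement) {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.BlazePose,
    { runtime: 'tfjs', modelType: 'full' },
  );
  const poses = await detector.estimatePoses(source);
  if (poses.length > 0) {
    // keypoints: 33 image-space points; keypoints3D: metric-scale points
    // with the origin between the hips (the GHUM 3D output).
    console.log(poses[0].keypoints, poses[0].keypoints3D);
  }
}
```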
Maybe you can combine two models, one for pose detection and one for object detection, to solve your problem? If the bounding box of a specific object (a wheelchair, etc.) overlaps specific key points of a human body identified by the pose model, you can be reasonably sure what the image contains.
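For example, the overlap test could look like this (a sketch; the Keypoint and Box types are assumptions standing in for whatever your two models actually return, and the keypoint names follow the BlazePose naming used by @tensorflow-models/pose-detection):

```ts
// Count how many of the person's keypoints fall inside an
// object's bounding box. All coordinates are in image space.
interface Keypoint { x: number; y: number; name?: string; score?: number; }
interface Box { xMin: number; yMin: number; xMax: number; yMax: number; }

function keypointsInsideBox(
  keypoints: Keypoint[], box: Box, names: string[], minScore = 0.5,
): number {
  return keypoints.filter((kp) =>
    names.includes(kp.name ?? '') &&
    (kp.score ?? 0) >= minScore &&
    kp.x >= box.xMin && kp.x <= box.xMax &&
    kp.y >= box.yMin && kp.y <= box.yMax,
  ).length;
}

// If at least two of the hips/knees land inside a detected
// 'wheelchair' box, treat the person as a wheelchair user.
function likelyWheelchairUser(keypoints: Keypoint[], wheelchairBox: Box): boolean {
  const hits = keypointsInsideBox(keypoints, wheelchairBox,
    ['left_hip', 'right_hip', 'left_knee', 'right_knee']);
  return hits >= 2;
}
```

Requiring two or more lower-body hits guards against a single noisy keypoint triggering the match.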
Unfortunately, you will hardly find code examples that you can use out of the box. What I suggested is that you could take a pretrained object detection model that is suited for transfer learning and fine-tune it to detect specialised equipment like wheelchairs and crutches.
Then you can take an image and get two predictions from the two models: the pose detection model will give you the exact coordinates of a person's key points, and the fine-tuned object detection model will tell you whether the image also contains the equipment and where it is located. From that you can work out the positions of the person and the equipment relative to each other.
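Put together, the browser-side pipeline could look roughly like this (a sketch: EQUIPMENT_MODEL_URL and the [boxes, scores, classes] output signature are placeholders that depend on how you export your fine-tuned detector, while the pose-detection calls are the package's documented API):

```ts
import * as tf from '@tensorflow/tfjs';
import * as poseDetection from '@tensorflow-models/pose-detection';

// Placeholder URL: wherever you host the fine-tuned detector
// after converting it to TF.js format.
const EQUIPMENT_MODEL_URL = '/models/equipment-detector/model.json';

async function analyze(image: HTMLImageElement) {
  // Prediction 1: keypoints from the stock BlazePose GHUM model.
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.BlazePose,
    { runtime: 'tfjs', modelType: 'full' },
  );
  const poses = await detector.estimatePoses(image);

  // Prediction 2: equipment boxes from the fine-tuned detector.
  // The [boxes, scores, classes] output order is an assumption that
  // depends on how the model was exported.
  const equipmentModel = await tf.loadGraphModel(EQUIPMENT_MODEL_URL);
  const input = tf.browser.fromPixels(image).expandDims(0);
  const [boxes, scores, classes] =
    (await equipmentModel.executeAsync(input)) as tf.Tensor[];

  // Compare poses[0].keypoints against the boxes, e.g. with the
  // keypointsInsideBox() check sketched above.
  return {
    poses,
    boxes: await boxes.array(),
    scores: await scores.array(),
    classes: await classes.array(),
  };
}
```

In practice you'd load both models once at startup rather than per call, as done here for brevity.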
Here is a list of object detection models that could be tuned: Find Pre-trained Models | Kaggle
Hope that helps.