So from what I understand, TFJS Node uses the eager-mode TF C++ API behind the scenes for model training, under which only certain ops are parallelizable. If the ops in your model are not among them, you will see the single-core usage shown in your screenshot. If I remember correctly, TF Python also uses eager mode by default (since TF 2.0), so similar behavior can show up there.
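For illustration, here is a minimal training sketch using the standard tfjs-node layers API (the model architecture and random data are just made-up placeholders). Each op inside `fit()` is dispatched one at a time in eager fashion, so whether you see multi-core usage depends entirely on whether the individual kernels parallelize:

```js
const tf = require('@tensorflow/tfjs-node');

// Toy model, purely for demonstration.
const model = tf.sequential();
model.add(tf.layers.dense({units: 64, activation: 'relu', inputShape: [10]}));
model.add(tf.layers.dense({units: 1}));
model.compile({optimizer: 'adam', loss: 'meanSquaredError'});

// Random placeholder data.
const xs = tf.randomNormal([256, 10]);
const ys = tf.randomNormal([256, 1]);

// Ops here run eagerly, one after another, so CPU usage
// during training depends on per-kernel parallelism only.
model.fit(xs, ys, {epochs: 5}).then(() => {
  xs.dispose();
  ys.dispose();
});
```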
It should be noted, however, that when you load a SavedModel in TFJS Node, inference runs in graph mode and can therefore take advantage of inter-op parallelization. So SavedModel inference is still very fast in Node.js for models whose ops support parallel execution. The above only applies to training.
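As a sketch of that inference path (the SavedModel path and input shape here are hypothetical), you can load a SavedModel with `tf.node.loadSavedModel` and call `predict()`, which executes the underlying TF graph and so can use inter-op parallelism:

```js
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // Hypothetical path to an exported TF SavedModel directory.
  // loadSavedModel runs the model as a TF graph rather than eagerly.
  const model = await tf.node.loadSavedModel('./my_saved_model');

  // Hypothetical input shape; match it to your model's signature.
  const input = tf.randomNormal([1, 10]);
  const output = model.predict(input);
  output.print();
}

main();
```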
For more details on graph vs. eager execution, check out this interesting blog post (even though it is about Python, I believe similar rules apply here too):