TensorFlow appears to be designed for ML engineering projects (end-to-end backprop training). Non-backprop learning algorithm research, however, requires a) the ability to specify precisely which parts of a model are being trained at a given time (i.e. localised training), and b) control over the precise training objective (class targets), where the class targets may be a function of the output of another part of the network rather than a predefined training/test set. I have encountered a number of exploratory algorithms that appear impossible to implement with the current framework (without copying weights across independent graphs).
@bairesearch Welcome to the TF Forum!
While TensorFlow is widely used for end-to-end backpropagation training, it offers flexibility for non-backprop learning algorithms as well. Here are ways to address your requirements:
- **Localised Training:**
  - **Manual Gradient Calculation:** For complete control, use `tf.GradientTape`'s `gradient` method with custom computations, passing in only the variables you want to update.
  - **Layer-wise Trainability:** Set `trainable=False` for layers you want to freeze during training (both techniques are combined in the first sketch after this list).
- **Precise Training Objectives:**
  - **Custom Loss Functions:** Define any loss function using TensorFlow operations, including losses whose targets depend on the output of another part of the network. You can also modify the loss calculation inside your training loop based on model outputs or other conditions (see the second sketch after this list).
- **Advanced Techniques for Non-Backprop Algorithms:**
  - **Custom Training Loops:** For full control over each training step, write your own loop using `tf.GradientTape` and the optimizer's `apply_gradients` method (see the third sketch after this list).
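A minimal sketch of localised training, assuming TF 2.x eager execution (the encoder/head split, layer sizes, and random data are purely illustrative): one sub-model is frozen with `trainable=False`, and only the remaining variables are passed to `tape.gradient`, so nothing else is updated.

```python
import tensorflow as tf

# Illustrative model: a frozen "encoder" feeding a trainable "head".
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
], name="encoder")
head = tf.keras.layers.Dense(4, name="head")

encoder.trainable = False                      # layer-wise trainability: freeze the encoder

x = tf.random.normal([64, 8])
y = tf.random.uniform([64], maxval=4, dtype=tf.int32)

with tf.GradientTape() as tape:
    logits = head(encoder(x))
    loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(y, logits, from_logits=True))

# Manual gradient calculation restricted to the variables we actually want to train.
train_vars = head.trainable_variables          # the frozen encoder is excluded
grads = tape.gradient(loss, train_vars)
tf.keras.optimizers.SGD(0.1).apply_gradients(zip(grads, train_vars))
```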
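Next, a sketch of a training objective whose class targets are computed from another part of the network rather than taken from a dataset. The `teacher_branch` / `student_branch` names are placeholders for whatever sub-networks your algorithm defines; `tf.stop_gradient` makes explicit that no error signal flows back into the target-producing branch.

```python
import tensorflow as tf

# Placeholder sub-networks; in practice these would be parts of your own model.
teacher_branch = tf.keras.Sequential(
    [tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(4)], name="teacher_branch")
student_branch = tf.keras.Sequential(
    [tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(4)], name="student_branch")

x = tf.random.normal([64, 8])

with tf.GradientTape() as tape:
    # Class targets are a function of another branch's output, not dataset labels.
    targets = tf.stop_gradient(tf.argmax(teacher_branch(x), axis=-1))
    logits = student_branch(x)
    loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(targets, logits,
                                                        from_logits=True))

grads = tape.gradient(loss, student_branch.trainable_variables)
tf.keras.optimizers.Adam(1e-3).apply_gradients(
    zip(grads, student_branch.trainable_variables))
```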
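Finally, a sketch of the custom training loop itself, under the same assumptions: each step explicitly chooses which variables to differentiate and update, so the part of the model being trained can change from step to step (here it simply alternates between two named layers as a toy schedule). Using one optimizer per block keeps optimizer state separate for each variable subset.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu", name="block_a"),
    tf.keras.layers.Dense(4, name="block_b"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# One optimizer per block so each variable subset keeps its own optimizer state.
optimizers = {name: tf.keras.optimizers.SGD(0.05) for name in ("block_a", "block_b")}

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([256, 8]),
     tf.random.uniform([256], maxval=4, dtype=tf.int32))).batch(32)

for step, (x, y) in enumerate(dataset):
    # Toy schedule: alternate which layer gets trained on each step.
    name = "block_a" if step % 2 == 0 else "block_b"
    train_vars = model.get_layer(name).trainable_variables
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, train_vars)
    optimizers[name].apply_gradients(zip(grads, train_vars))
```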
TensorFlow also lets you experiment with gradient calculation methods other than standard backpropagation, and gradient-free optimization (e.g., evolutionary algorithms, reinforcement learning) can be built from TensorFlow's core operations and functions.
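For example, a simple perturb-and-accept search (roughly a (1+1) evolution strategy) can be written directly against the model's variables with core ops and no `GradientTape` at all; the objective, noise scale, and iteration count below are placeholders.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

x = tf.random.normal([128, 8])
y = tf.random.normal([128, 1])

def objective():
    # Any scalar fitness measure over the current weights.
    return tf.reduce_mean(tf.square(model(x) - y))

sigma = 0.02                                   # perturbation scale (placeholder)
best = objective()
for _ in range(200):
    noise = [tf.random.normal(v.shape) * sigma for v in model.trainable_variables]
    for v, n in zip(model.trainable_variables, noise):
        v.assign_add(n)                        # propose a perturbed candidate
    candidate = objective()
    if candidate < best:
        best = candidate                       # keep the improvement
    else:
        for v, n in zip(model.trainable_variables, noise):
            v.assign_sub(n)                    # revert the perturbation
```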
Let us know if this helps!