TensorFlow warning in Sequential network

Hi, I am running a 2-layer feed-forward network with 10-fold cross-validation, and I got this warning; TensorFlow also does not utilize the GPU. A few days ago there wasn't any warning, the code ran about 10 times faster, and it was utilizing the GPU.

WARNING:tensorflow:AutoGraph could not transform <function validate_parameter_constraints at 0x000002D8707F8E50> and will run it as-is.
Cause: for/else statement not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

And here is my code:

  if __name__ == '__main__':
      
      inputs = np.load('C:/Users/ASUS/Thesis/Data/encoded_matrix_60.npy')
      targets = np.load('C:/Users/ASUS/Thesis/Data/labels_nozerogene.npy')
      
      dim_no = inputs.shape[1]
      class_no = targets.shape[1]
      hidden_nerons=0.2*60
      
      kfold = KFold(n_splits=num_folds, shuffle=True)
      fold_no = 1
      
      for train, test in kfold.split(inputs, targets):
          model = Sequential()
          model.add(Dense(hidden_nerons, input_dim=dim_no, activation='relu'))
          model.add(Dense(class_no, activation='sigmoid'))
          adam=keras.optimizers.Adam(learning_rate=0.001)
          model.compile(loss=bp_mll_loss, optimizer=adam, metrics=[coverage,ranking_loss,average_precision])
      
          tf.config.run_functions_eagerly(True)
          tf.data.experimental.enable_debug_mode()
          history = model.fit(inputs[train], tf.cast(targets[train], tf.float32),shuffle=True,batch_size=10, epochs=100)
          scores = model.evaluate(inputs[test], targets[test], verbose=0)
          print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]}; {model.metrics_names[2]} of {scores[2]}; {model.metrics_names[3]} of {scores[3]}')
          
          all_history[20]=history.history
          loss_per_fold_20.append(scores[0])
          coverage_per_fold_20.append(scores[1])
          ranking_loss_per_fold_20.append(scores[2])
          average_precision_per_fold_20.append(scores[3])
      
      fold_no = fold_no + 1

TensorFlow 2.16 + Python 3.12 – JARaaS Hybrid RAG - 6/16/2024

The warning message you are encountering indicates that TensorFlow’s AutoGraph, which converts Python control flow into TensorFlow graph operations, hit a construct it cannot transform: the for/else statement inside validate_parameter_constraints, a function that appears to belong to scikit-learn rather than to your own code. The warning should not stop your code from running; AutoGraph simply falls back to executing that function as plain Python, which at worst costs some performance.
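
As a small, self-contained illustration of what AutoGraph normally does (unrelated to your data), a Python loop inside a tf.function is rewritten into graph operations:

import tensorflow as tf

@tf.function
def cumulative_sum(n):
    total = tf.constant(0.0)
    # AutoGraph rewrites this Python for-loop into a tf.while_loop in the graph
    for i in tf.range(n):
        total += tf.cast(i, tf.float32)
    return total

print(cumulative_sum(tf.constant(5)))  # tf.Tensor(10.0, shape=(), dtype=float32)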

To address the warning, you can use the suggested decorator (or simply silence AutoGraph) so it stops trying to convert that function; a sketch is shown after the checklist below. The GPU utilization issue is separate: for that, you need to check your TensorFlow and GPU setup. Here’s a step-by-step guide:

  1. Check TensorFlow GPU Installation:
    Ensure that TensorFlow is correctly installed with GPU support. Use the following commands to verify:

    import tensorflow as tf
    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
    
  2. Update TensorFlow and Dependencies:
    Ensure all your TensorFlow-related packages are up to date. Note that the separate tensorflow-gpu package is deprecated; recent tensorflow releases include GPU support in the main package. Also be aware that on native Windows, TensorFlow releases after 2.10 only support GPU through WSL2, so upgrading TensorFlow on Windows can by itself explain the lost GPU acceleration. You can use pip to update:

    pip install --upgrade tensorflow
    
  3. Check GPU Driver and CUDA Installation:
    Confirm that your GPU drivers and CUDA toolkit are correctly installed and compatible with your TensorFlow version. You can follow the TensorFlow GPU guide to ensure everything is configured properly.
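
    As a quick sanity check, the snippet below (a minimal sketch using only standard TensorFlow introspection calls) shows whether the installed build was compiled with CUDA at all and which CUDA/cuDNN versions it was built against:

    import tensorflow as tf

    # False here means a CPU-only build is installed, so no driver or CUDA fix will help
    print("Built with CUDA:", tf.test.is_built_with_cuda())

    # On GPU builds this reports the CUDA/cuDNN versions TensorFlow was compiled against
    build_info = tf.sysconfig.get_build_info()
    print("CUDA:", build_info.get("cuda_version"), "cuDNN:", build_info.get("cudnn_version"))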

  4. Modify Code for Better Performance:
    Consider removing tf.config.run_functions_eagerly(True) and tf.data.experimental.enable_debug_mode() if they are not needed. They force every model function to run eagerly, line by line in Python, instead of as a compiled graph, which by itself can explain a slowdown of the magnitude you describe. They are intended for debugging only.
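
Regarding the AutoGraph warning itself: validate_parameter_constraints appears to belong to scikit-learn rather than to your own code, so you usually cannot add the decorator where the function is defined. Two common options are sketched below; the helper function is hypothetical and only stands in for code you control:

import tensorflow as tf

# Option 1: lower AutoGraph's verbosity so conversion warnings are no longer printed
tf.autograph.set_verbosity(0)

# Option 2: apply the suggested decorator to a function you control, so AutoGraph
# leaves it alone and runs it as plain Python (hypothetical helper, for illustration)
@tf.autograph.experimental.do_not_convert
def my_preprocessing_step(x):
    return x * 2

Either way, the warning is cosmetic; silencing it will not restore GPU usage on its own.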

Here’s an updated version of your code with the debugging options removed and the missing pieces (num_folds and the per-fold result containers) filled in:

import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import KFold

if __name__ == '__main__':
    inputs = np.load('C:/Users/ASUS/Thesis/Data/encoded_matrix_60.npy')
    targets = np.load('C:/Users/ASUS/Thesis/Data/labels_nozerogene.npy')
    
    dim_no = inputs.shape[1]
    class_no = targets.shape[1]
    hidden_neurons = int(0.2 * 60)
    
    num_folds = 10
    kfold = KFold(n_splits=num_folds, shuffle=True)
    fold_no = 1

    # Result containers (referenced later but not defined in the original snippet)
    all_history = {}
    loss_per_fold_20 = []
    coverage_per_fold_20 = []
    ranking_loss_per_fold_20 = []
    average_precision_per_fold_20 = []
    
    for train, test in kfold.split(inputs, targets):
        model = Sequential()
        model.add(Dense(hidden_neurons, input_dim=dim_no, activation='relu'))
        model.add(Dense(class_no, activation='sigmoid'))
        
        adam = tf.keras.optimizers.Adam(learning_rate=0.001)
        # Replace the placeholders below with your actual callables (bp_mll_loss,
        # coverage, ranking_loss, average_precision); plain strings like 'coverage'
        # are not built-in Keras metric identifiers.
        model.compile(loss='binary_crossentropy', optimizer=adam,
                      metrics=['coverage', 'ranking_loss', 'average_precision'])

        history = model.fit(inputs[train], tf.cast(targets[train], tf.float32), 
                            shuffle=True, batch_size=10, epochs=100)
        scores = model.evaluate(inputs[test], targets[test], verbose=0)
        
        print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; '
              f'{model.metrics_names[1]} of {scores[1]}; '
              f'{model.metrics_names[2]} of {scores[2]}; '
              f'{model.metrics_names[3]} of {scores[3]}')

        # Key 20 keeps only the last fold's history; use fold_no as the key instead
        # if you want to keep the training history of every fold
        all_history[20] = history.history
        loss_per_fold_20.append(scores[0])
        coverage_per_fold_20.append(scores[1])
        ranking_loss_per_fold_20.append(scores[2])
        average_precision_per_fold_20.append(scores[3])
        
        fold_no += 1

Please ensure you replace the loss function 'binary_crossentropy' and metrics placeholders ('coverage', 'ranking_loss', 'average_precision') with your actual implementations or compatible ones.
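
For reference, here is a minimal sketch of how custom callables plug into model.compile. The function bodies and the layer sizes below are placeholders for illustration only, not your actual bp_mll_loss or metric implementations:

import tensorflow as tf

# Placeholder custom loss: any callable taking (y_true, y_pred) and returning a tensor works
def bp_mll_loss(y_true, y_pred):
    # stand-in body only; substitute your real BP-MLL loss here
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

# Placeholder custom metric with the same (y_true, y_pred) signature
def coverage(y_true, y_pred):
    # stand-in body only; substitute your real coverage metric here
    return tf.reduce_mean(tf.cast(y_pred > 0.5, tf.float32))

# Toy model just to show the compile call; use your own architecture instead
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation='sigmoid'),
])

model.compile(loss=bp_mll_loss,
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=[coverage])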
