How to get Leaky Relu Backward?

Hello everyone, I’m trying to build a single-layer neural network in the function below. For various reasons I’m trying to avoid using Keras, because I need to run over 400K instances of this network and loading 400K Keras objects is very slow. So I’m doing my own implementation. I have forward propagation finished.

I don’t quite remember how to do backpropagation. I can figure out most of it, but I’m not sure whether it’s different with a leaky_relu activation. Would anyone be able to help me with the leaky_relu backward pass?

Thank you!

import tensorflow as tf
import numpy as np

@tf.function
def predict(x, b, w):
    # Forward pass: affine transform followed by leaky_relu activation.
    z = tf.tensordot(w, x, axes=1) + b
    G = tf.nn.leaky_relu(z)
    return G

weights = tf.random.uniform((9, 9), dtype=tf.float32)
bias = tf.random.uniform([9], dtype=tf.float32)
signal = tf.random.uniform([9], dtype=tf.float32)
model_predicted = []

for i in range(446436):
    model_predicted.append(predict(signal, bias, weights))

Hi @JohnnyWaffles,

Sorry for the delay in response.
We can get the leaky relu backward pass after a forward pass by computing the gradient of the loss with respect to the input (z) of the leaky relu function and applying the chain rule to propagate these gradients back through the activation using its derivative. I’ve added a sample gist of the above implementation for your reference.
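
As a minimal sketch of that idea (not the gist itself), the local derivative of leaky_relu is 1 where z > 0 and alpha where z <= 0, so the backward pass just multiplies the upstream gradient by that. The example below assumes the default alpha of 0.2 used by tf.nn.leaky_relu and a placeholder squared-sum loss, and checks the manual result against tf.GradientTape:

import tensorflow as tf

def leaky_relu_backward(upstream_grad, z, alpha=0.2):
    # Local derivative of leaky_relu: 1 where z > 0, alpha where z <= 0.
    local_grad = tf.where(z > 0, tf.ones_like(z), alpha * tf.ones_like(z))
    # Chain rule: dL/dz = dL/dG * dG/dz.
    return upstream_grad * local_grad

# Quick check against autodiff.
z = tf.random.uniform([9], minval=-1.0, maxval=1.0)
with tf.GradientTape() as tape:
    tape.watch(z)
    g = tf.nn.leaky_relu(z)        # alpha defaults to 0.2
    loss = tf.reduce_sum(g ** 2)   # placeholder loss for the check

dL_dg = 2.0 * g                    # dloss/dG for this particular loss
manual = leaky_relu_backward(dL_dg, z)
auto = tape.gradient(loss, z)
print(tf.reduce_max(tf.abs(manual - auto)))  # should be ~0

In your network you would replace the placeholder loss with your actual loss, take its gradient with respect to G, pass that in as upstream_grad, and then continue the chain rule through the tensordot to get the gradients for w and b.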

Thank You.