The `tf.random.uniform()` function is not inherently slow, but calling it repeatedly inside a loop, as in your example, adds per-call overhead (op dispatch plus random-number generation on every iteration), which can degrade performance. Instead, generate the random numbers once, up front, and reuse them within the loop for data augmentation.
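As a minimal sketch of the difference (the loop count `n` is made up for illustration): one vectorized draw replaces `n` separate calls.

```python
import tensorflow as tf

n = 1000  # hypothetical number of data points

# Per-element draws inside a Python loop: one op dispatch per call.
flips_loop = [tf.random.uniform(()) > 0.5 for _ in range(n)]

# One vectorized draw up front: a single op produces all n samples,
# which you can then index into as you iterate over your data.
flips_batch = tf.random.uniform([n]) > 0.5
```

Both produce `n` boolean coin flips; the second form pays the op-dispatch cost only once.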
The latency difference you observed is likely because with `tf.random.uniform(()) > 0.5`, a fresh random number is generated for each condition check independently. That means two random numbers are drawn for every data point processed, whereas with a hard-coded `True`, the random draw happens only once.
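To make that concrete, here is a hedged sketch (the flip functions and the `32x32` image are just placeholders): the first pattern draws a new sample at every `if`, while the second draws once and reuses it.

```python
import tensorflow as tf

image = tf.zeros([32, 32, 3])  # placeholder image

# Pattern 1: a fresh uniform sample per condition check.
if tf.random.uniform(()) > 0.5:          # draw #1
    image = tf.image.flip_left_right(image)
if tf.random.uniform(()) > 0.5:          # draw #2, independent of #1
    image = tf.image.flip_up_down(image)

# Pattern 2: draw once, reuse the same sample for every check.
coin = tf.random.uniform(())             # single draw
if coin > 0.5:
    image = tf.image.flip_left_right(image)
if coin > 0.5:                           # reuses draw above, no new RNG call
    image = tf.image.flip_up_down(image)
```

Note the two patterns also differ semantically: in Pattern 1 the two flips are decided independently, while in Pattern 2 both branches follow the same coin.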
You can find TensorFlow's image augmentation functions in the official TensorFlow API documentation, under the `tf.image` module.