We know that SLI can combine two identical GPU cards into one logical device and gain performance through parallel computation. But for two GPUs that are not SLI-capable and not identical, can TensorFlow distribute the training workload between them and gain some acceleration from training in parallel?
Hi @zzzhhh, yes. You can perform distributed training on non-SLI and non-identical GPUs using TensorFlow's distribution strategies (`tf.distribute`); SLI is not required, since TensorFlow addresses each GPU as a separate device. Thank you.
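A minimal sketch of how this might look with `tf.distribute.MirroredStrategy`, which replicates the model on each visible GPU and splits every batch across them. The model architecture and the MNIST data here are just illustrative placeholders, not part of the original question:

```python
import tensorflow as tf

# MirroredStrategy performs synchronous data parallelism across all
# visible GPUs; SLI is not needed, each GPU is used independently.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Create and compile the model inside the strategy scope so its
# variables are mirrored on both GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Illustrative dataset; substitute your own input pipeline.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# With two replicas, each GPU processes half of every global batch.
model.fit(x_train, y_train, batch_size=256, epochs=2)
```

One caveat when the GPUs are not identical: because `MirroredStrategy` synchronizes gradients every step, the faster card ends up waiting for the slower one, so the speedup is limited by the weaker GPU rather than being a simple sum of both.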