I had to install tf-nightly[and-cuda], which solved one of my issues (CUDA) on my system running WSL2 and Ubuntu 22.04. Now, in a Jupyter notebook, simply trying to import the MNIST dataset as follows:
from tensorflow.keras.datasets import mnist
I get this error:
ImportError: cannot import name '_initialize_variables' from 'keras.src.backend'
I tried several other hints I found elsewhere, none of which helped:
import os
os.environ["KERAS_BACKEND"] = "tensorflow"  # has to be set before keras is first imported
from keras import initializers
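For diagnosis, this is roughly how I'm listing what is actually installed without importing tensorflow (since the import itself is what seems to fail); the package names below are just the ones I thought might be relevant:
from importlib.metadata import version, PackageNotFoundError

# Print the installed version of each potentially relevant package,
# without importing tensorflow/keras themselves
for pkg in ("tensorflow", "tf-nightly", "keras", "keras-nightly", "tf-keras"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")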
Has anyone else had a similar issue, and how did you resolve it? This is what I currently have installed:
So it appears that the issue exists any time tensorflow appears in the import statement. If I instead use import keras or from keras.datasets import mnist, then most statements work, until I arrive at my original CUDA problem (see below):
For example, instead of:
y_train = tensorflow.keras.utils.to_categorical(y_train, num_categories)
y_valid = tensorflow.keras.utils.to_categorical(y_valid, num_categories)
...
from tensorflow.keras.models import Sequential
...
from tensorflow.keras.layers import Dense
...
use:
y_train = keras.utils.to_categorical(y_train, num_categories)
y_valid = keras.utils.to_categorical(y_valid, num_categories)
...
from keras.models import Sequential
...
from keras.layers import Dense
...
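Putting those substitutions together, the relevant part of my notebook looks roughly like this (the preprocessing and layer sizes are placeholders, not my exact code; num_categories is 10 for MNIST):
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten

num_categories = 10

# Load and normalize MNIST, one-hot encode the labels
(x_train, y_train), (x_valid, y_valid) = mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_valid = x_valid.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, num_categories)
y_valid = keras.utils.to_categorical(y_valid, num_categories)

# Placeholder architecture; the point is only that the keras-style
# imports above all resolve without the ImportError
model = Sequential([
    keras.Input(shape=(28, 28)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(num_categories, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])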
And so on. With those substitutions everything runs, until you get to:
history = model.fit(
x_train, y_train, epochs=5, verbose=1, validation_data=(x_valid, y_valid)
)
which produces:
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1708348871.971183 150951 service.cc:145] XLA service 0x7f1ad4002f00 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1708348871.971259 150951 service.cc:153] StreamExecutor device (0): NVIDIA GeForce GTX 960M, Compute Capability 5.0
2024-02-19 08:21:12.004627: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-02-19 08:21:13.329554: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:458] Loaded runtime CuDNN library: 8.2.4 but source was compiled with: 8.9.6. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2024-02-19 08:21:13.348512: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:458] Loaded runtime CuDNN library: 8.2.4 but source was compiled with: 8.9.6. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2024-02-19 08:21:13.357030: W tensorflow/core/framework/op_kernel.cc:1839] OP_REQUIRES failed at xla_ops.cc:580 : FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.
2024-02-19 08:21:13.357160: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.
[[{{node StatefulPartitionedCall}}]]
...
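As a sanity check, the CUDA/cuDNN versions TensorFlow was built against can also be read from Python and compared with what the system actually loads (only a diagnostic sketch; the fields are populated for CUDA builds, and key names may differ across versions):
import tensorflow as tf

# Compare the CUDA/cuDNN versions this TF build was compiled against
# with whatever libcudnn is actually installed on the system
info = tf.sysconfig.get_build_info()
print("built with CUDA :", info.get("cuda_version"))
print("built with cuDNN:", info.get("cudnn_version"))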
It looks like I may need to start from scratch and try different versions of TensorFlow. This is so frustrating!
I backed out the tf-nightly install and reinstalled the latest official tensorflow. I then upgraded the nvidia-* packages to avoid the issue that “forced” me to try tf-nightly in the first place, so this problem no longer exists for me. I now face a different issue; although it is only a nuisance and the code still completes, I’d like to understand how to resolve it, so I’ll open a separate topic for that. Thank you.
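For anyone who lands here with the same cuDNN mismatch: after reinstalling the official build and upgrading the nvidia-* packages, this is roughly the sanity check I run to confirm the GPU is visible and the DNN library initializes (the tiny Conv2D is only there to force cuDNN to load; it is not part of any real model):
import tensorflow as tf

# The GPU should show up here if the CUDA stack is healthy
print(tf.config.list_physical_devices("GPU"))

# A minimal op that goes through cuDNN; this is where the
# FAILED_PRECONDITION error would surface if versions still mismatch
x = tf.random.normal((1, 28, 28, 1))
y = tf.keras.layers.Conv2D(4, 3)(x)
print(y.shape)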