Ask and answer technical questions here. Say hello to the community!
Hi. I want to take the TensorFlow Developer Certificate exam tomorrow, and it requires TensorFlow 2.5. I have a Mac with an M1 chip, and only tensorflow-macos runs on the machine. Does anyone know whether it is still possible to use tensorflow-macos with the plugin installed, or is plain TensorFlow 2.5 the only option? If only TensorFlow 2.5 can be used, does anyone know a way to run that version on an M1? Thank you.
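In case it helps, here is a minimal sketch of how tensorflow-macos is typically set up on an M1 machine, assuming an arm64 Python environment such as miniforge; the version pins and the Metal plugin line are illustrative and not a confirmed answer on what the exam accepts:

# Typical install commands (run in a terminal):
#   python -m pip install tensorflow-macos==2.5.0
#   python -m pip install tensorflow-metal   # optional GPU acceleration plugin
import tensorflow as tf

print(tf.__version__)                      # should report 2.5.x
print(tf.config.list_physical_devices())   # CPU, plus GPU if the Metal plugin is active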
Hi all, I'm new and need help. Does anyone have an idea where I can find information on getting TensorFlow running on a Chromebook? I want to do object recognition with it. I also ordered a Coral USB Accelerator, but it won't be delivered before March 2022.
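If it helps while the accelerator is on the way: object recognition on low-power hardware is usually run through TensorFlow Lite. A rough sketch of loading a detection model, where the model file name is a placeholder (with the Coral device an Edge TPU delegate would additionally be loaded):

import tensorflow as tf  # the lighter tflite_runtime package can be used instead on a Chromebook

# Load a TensorFlow Lite object-detection model (placeholder file name)
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]["shape"])  # expected input size for the camera frames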
Hey, I wanted to use Visual Studio Code with TensorFlow, and I installed TensorFlow 2. However, when I run my code with Python I keep getting a warning that interferes with the output; it says I need AVX2 and FMA, but those are not available on my Mac. Does anyone know a way to get around this warning and still get my output? I am also using a virtual environment rather than installing directly on my device.
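For reference, that AVX2/FMA message is an informational log rather than an error, and a common way to silence it is to raise TensorFlow's C++ log level before the import. A minimal sketch:

import os

# 0 = all logs, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = errors only
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow as tf  # the AVX2/FMA notice should no longer be printed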
Hello, I am trying to take the output from intermediate layers and attach it to another model. How do I do that? I tried both the functional and subclassing approaches and failed with both: the functional approach gives a Functional object error, and with subclassing the pretrained model is not recognized when I use it as a layer. Any help in this regard would be appreciated.
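A minimal functional-API sketch of the pattern described above, assuming a pretrained base (MobileNetV2 is only an example) and an illustrative intermediate layer name:

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, input_shape=(224, 224, 3))
intermediate = base.get_layer("block_6_expand").output  # layer name is illustrative

# Attach a new head to the intermediate tensor
x = tf.keras.layers.GlobalAveragePooling2D()(intermediate)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

# The new model starts at the base model's input and ends at the new head
model = tf.keras.Model(inputs=base.input, outputs=outputs)
model.summary()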
Hello! Have a good day. I am new here in the TF community. Salaam and best wishes to all community members.
Hello,
Can anyone help me with how to get the latest version updates from the Google Play Store for my WordPress website?
Hi everyone! I am new to the forum. I just want to know: can we design a digital twin for a cyber-physical system with machine learning algorithms in TensorFlow?
Hi Carlos
I have a heterogeneous dataset, for example Amazon reviews, which contains a review text column plus product id, product type, rating, etc. I now want to build a random forest using a deep neural decision forest (DNDF) to solve a classification problem: sentiment analysis for each product.
Do I need to preprocess the review text column by converting it into word embeddings and then append it to the original dataset, or is preprocessing not required for DNDF?
Please reply
Thanks
Riya
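Regarding the question above: the free-text review column typically does need to be converted into a numeric representation before it can feed a neural decision forest. A rough sketch of one common approach, assuming Keras preprocessing layers; the column names, vocabulary size, and feature count are placeholders:

import tensorflow as tf

# Text branch: turn the raw review text into a dense vector
text_in = tf.keras.Input(shape=(1,), dtype=tf.string, name="review_text")
vectorize = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=100)
# vectorize.adapt(train_reviews)  # fit the vocabulary on the training reviews first
x_text = vectorize(text_in)
x_text = tf.keras.layers.Embedding(input_dim=20000, output_dim=64)(x_text)
x_text = tf.keras.layers.GlobalAveragePooling1D()(x_text)

# Structured branch: already-numeric columns such as rating or an encoded product type
struct_in = tf.keras.Input(shape=(4,), name="structured_features")

# The concatenated features can then feed the decision-forest-style head
features = tf.keras.layers.Concatenate()([x_text, struct_in])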
Hi,
I want to use the bodySegmentation model in a React Native (0.72.4) project. I installed expo-gl (13.2.0), expo-gl-cpp (11.4.0), @tensorflow/tfjs (4.11.0), and react (18.2.0), but when I try to install the @tensorflow/tfjs-react-native (0.8.0) library it gives the following error:
Could not resolve dependency:
npm ERR! peer expo-gl@"^7.0.0" from @tensorflow/tfjs-react-native@0.8.0
npm ERR! node_modules/@tensorflow/tfjs-react-native
npm ERR! @tensorflow/tfjs-react-native@"*" from the root project
Please help me to resolve this issue.
I am using a combined transformer and CNN model to classify image data. The model builds and compiles, but during training it fails on the first epoch with an error. Can someone please help identify where I am going wrong?
Error:

KeyError                                  Traceback (most recent call last)
in <cell line: 41>()
     39
     40 modelCombined = create_combined_model(input_shape, 2, 12)
---> 41 history = run_experiment(modelCombined)
     42
     43

12 frames

in run_experiment(model)
     21 )
     22
---> 23 history = model.fit(
     24     x=x_train,
     25     y=y_train,

/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
    111 def error_handler(*args, **kwargs):
    112     if not is_traceback_filtering_enabled():
--> 113         return fn(*args, **kwargs)
    114
    115     filtered_tb = None

/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq)
    323 for step, iterator in epoch_iterator.enumerate_epoch():
    324     callbacks.on_train_batch_begin(step)
--> 325     logs = self.train_function(iterator)
    326     callbacks.on_train_batch_end(
    327         step, self._pythonify_logs(logs)

/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
    151 except Exception as e:
    152     filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153     raise e.with_traceback(filtered_tb) from None
    154 finally:
    155     del filtered_tb

/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py in one_step_on_iterator(iterator)
    116     """Runs a single training step given a Dataset iterator."""
    117     data = next(iterator)
--> 118     outputs = self.distribute_strategy.run(
    119         one_step_on_data, args=(data,)
    120     )

/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py in one_step_on_data(data)
    104 def one_step_on_data(data):
    105     """Runs a single training step on a batch of data."""
--> 106     return self.train_step(data)
    107
    108 if not self.run_eagerly:

/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py in train_step(self, data)
     55 with tf.GradientTape() as tape:
     56     if self._call_has_training_arg:
---> 57         y_pred = self(x, training=True)
     58     else:
     59         y_pred = self(x)

/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
    111 def error_handler(*args, **kwargs):
    112     if not is_traceback_filtering_enabled():
--> 113         return fn(*args, **kwargs)
    114
    115     filtered_tb = None

/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py in __call__(self, *args, **kwargs)
    812     outputs = super().__call__(*args, **kwargs)
    813 else:
--> 814     outputs = super().__call__(*args, **kwargs)
    815 # Change the layout for the layer output if needed.
    816 # This is useful for relayout intermediate tensor in the model

/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
    111 def error_handler(*args, **kwargs):
    112     if not is_traceback_filtering_enabled():
--> 113         return fn(*args, **kwargs)
    114
    115     filtered_tb = None

/usr/local/lib/python3.10/dist-packages/keras/src/ops/operation.py in __call__(self, *args, **kwargs)
     54     return self.quantized_call(*args, **kwargs)
     55 else:
---> 56     return self.call(*args, **kwargs)
     57
     58 def symbolic_call(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/keras/src/models/functional.py in call(self, inputs, training, mask)
    192     if mask is not None:
    193         x._keras_mask = mask
--> 194 outputs = self._run_through_graph(
    195     inputs, operation_fn=lambda op: operation_fn(op, training=training)
    196 )

/usr/local/lib/python3.10/dist-packages/keras/src/ops/function.py in _run_through_graph(self, inputs, operation_fn)
    157 output_tensors = []
    158 for x in self.outputs:
--> 159     output_tensors.append(tensor_dict[id(x)])
    160
    161 return tree.pack_sequence_as(self._outputs_struct, output_tensors)

KeyError: 138703080200352
Thanks,
Shantanu
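For what it's worth, this particular KeyError in functional.py usually appears when one of a functional model's output tensors cannot be traced back to its declared inputs, for example when a helper builds a branch from a freshly created Input layer instead of the tensor passed in. A minimal sketch of the wiring pattern that avoids this; the function name, arguments, and layers here are illustrative and not the original code:

import tensorflow as tf

def build_combined(input_shape, num_classes):
    # One Input tensor, reused by every branch, so all outputs trace back to it
    inputs = tf.keras.Input(shape=input_shape)

    # CNN branch built from `inputs`
    cnn = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    cnn = tf.keras.layers.GlobalAveragePooling2D()(cnn)

    # A transformer branch would be wired from the same `inputs` tensor here,
    # not from a new tf.keras.Input created inside another helper.

    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(cnn)
    return tf.keras.Model(inputs=inputs, outputs=outputs)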
Hey community members, how are you? Hope all is well here. I am a newbie, and my question is: how can I implement transfer learning with TensorFlow to fine-tune a pre-trained model for a specific task? Can anyone answer this?
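A compact sketch of the usual Keras transfer-learning recipe; the base model, input size, class count, and datasets are placeholders:

import tensorflow as tf

# Pretrained base without its classification head; freeze it for the first phase
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)          # keep batch-norm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # 5 classes as a placeholder
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # placeholder datasets

# Fine-tuning phase: unfreeze the base and train with a much lower learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)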