I have a custom dataset implemented with tfds that returns (image, label) pairs:
image is a license-plate image of shape (64, 160, 3)
label has shape (8) and holds ints from 0 to 36, each giving a character's position in the list of allowed license-plate characters (0…9, A…Z)
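For reference, each of the 8 label entries indexes into the allowed-character list, roughly like this (a sketch of the encoding, not my actual builder code; the example plate is made up):

ALLOWED_CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # the 36 allowed characters
# e.g. the plate "AB123456" would be encoded as
# label = [10, 11, 1, 2, 3, 4, 5, 6]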
I am creating the train and test batches with a batch size of 8:
ds_train_batch = ds_train.cache()
ds_train_batch = ds_train_batch.batch(BatchSize)
ds_train_batch = ds_train_batch.prefetch(tf.data.AUTOTUNE)
ds_test_batch = ds_test.cache()
ds_test_batch = ds_test_batch.batch(BatchSize)
ds_test_batch = ds_test_batch.prefetch(tf.data.AUTOTUNE)
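Iterating over one batch shows that the batched dataset yields (image_batch, label_batch) tuples, so with BatchSize = 8 the shapes should be as in this quick sanity-check sketch:

for images, labels in ds_train_batch.take(1):
    print(images.shape)  # (8, 64, 160, 3)
    print(labels.shape)  # (8, 8)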
When I construct the model, it looks like this:
model = Model(
    inputs={"image": inputs},
    outputs={
        "char0": y0,
        "char1": y1,
        "char2": y2,
        "char3": y3,
        "char4": y4,
        "char5": y5,
        "char6": y6,
        "char7": y7,
    },
)
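For context, y0 … y7 are Dense softmax heads over the 36 allowed characters, built roughly like this (a simplified sketch; x stands for the shared backbone, and the head names match the output keys):

from tensorflow.keras.layers import Dense

NUM_CLASSES = 36  # 0-9 plus A-Z
# x is the output of the shared convolutional backbone (omitted here)
y0 = Dense(NUM_CLASSES, activation="softmax", name="char0")(x)
y1 = Dense(NUM_CLASSES, activation="softmax", name="char1")(x)
# ... same pattern for char2 through char6 ...
y7 = Dense(NUM_CLASSES, activation="softmax", name="char7")(x)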
I need to be able to weight each character's loss individually, so my model.compile looks like this:
# compile the model
model.compile(
    optimizer=Adam(),
    loss={
        "char0": "sparse_categorical_crossentropy",
        "char1": "sparse_categorical_crossentropy",
        "char2": "sparse_categorical_crossentropy",
        "char3": "sparse_categorical_crossentropy",
        "char4": "sparse_categorical_crossentropy",
        "char5": "sparse_categorical_crossentropy",
        "char6": "sparse_categorical_crossentropy",
        "char7": "sparse_categorical_crossentropy",
    },
    # set the loss weights
    loss_weights={
        "char0": 2.0,
        "char1": 1.0,
        "char2": 1.0,
        "char3": 1.0,
        "char4": 1.0,
        "char5": 1.0,
        "char6": 2.0,
        "char7": 2.0,
    },
    # select the metrics to evaluate the model
    metrics={
        "char0": ["sparse_categorical_accuracy"],
        "char1": ["sparse_categorical_accuracy"],
        "char2": ["sparse_categorical_accuracy"],
        "char3": ["sparse_categorical_accuracy"],
        "char4": ["sparse_categorical_accuracy"],
        "char5": ["sparse_categorical_accuracy"],
        "char6": ["sparse_categorical_accuracy"],
        "char7": ["sparse_categorical_accuracy"],
    },
)
My training/fit call looks like this:
# fit the model
history = model.fit(
    # training data
    x={"image": ds_train_batch[0]},
    # training target
    y={
        "char0": ds_train_batch[1][:, 0],
        "char1": ds_train_batch[1][:, 1],
        "char2": ds_train_batch[1][:, 2],
        "char3": ds_train_batch[1][:, 3],
        "char4": ds_train_batch[1][:, 4],
        "char5": ds_train_batch[1][:, 5],
        "char6": ds_train_batch[1][:, 6],
        "char7": ds_train_batch[1][:, 7],
    },
    epochs=200,
    batch_size=BatchSize,
    validation_data=(
        {"image": ds_test_batch[0]},
        {
            "char0": ds_test_batch[1][:, 0],
            "char1": ds_test_batch[1][:, 1],
            "char2": ds_test_batch[1][:, 2],
            "char3": ds_test_batch[1][:, 3],
            "char4": ds_test_batch[1][:, 4],
            "char5": ds_test_batch[1][:, 5],
            "char6": ds_test_batch[1][:, 6],
            "char7": ds_test_batch[1][:, 7],
        },
    ),
    callbacks=[checkpoint_save],
    verbose=1,
)
I am trying to feed a dictionary of char0 … char7 targets by slicing the label of my training dataset.
However, I am getting the following error:
TypeError Traceback (most recent call last)
Cell In[16], line 4
1 # fit the model
2 history = model.fit(
3 # training data
----> 4 x={"image": ds_train_batch[0]},
5 # training target
6 y={
7 "char0": ds_train_batch[1][:, 0],
8 "char1": ds_train_batch[1][:, 1],
9 "char2": ds_train_batch[1][:, 2],
10 "char3": ds_train_batch[1][:, 3],
11 "char4": ds_train_batch[1][:, 4],
12 "char5": ds_train_batch[1][:, 5],
13 "char6": ds_train_batch[1][:, 6],
14 "char7": ds_train_batch[1][:, 7],
15 },
16 epochs=200,
17 batch_size=BatchSize,
18 validation_data=(
19 {"image": ds_test_batch[0]},
20 {
21 "char0": ds_test_batch[1][:, 0],
22 "char1": ds_test_batch[1][:, 1],
...
31 callbacks=[checkpoint_save],
32 verbose=1,
33 )
TypeError: 'PrefetchDataset' object is not subscriptable
When I remove the prefetching, I get the following error:
TypeError: 'BatchDataset' object is not subscriptable
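I suspect I need to transform the dataset itself so that each element is already an (inputs, targets) pair of dicts, and then pass the dataset directly to model.fit as x (with no separate y and no batch_size), along the lines of the sketch below, but I have not verified this and split_label is just a name I made up:

def split_label(image, label):
    # turn the (8,) label vector into a dict keyed by the output names
    return {"image": image}, {f"char{i}": label[i] for i in range(8)}

ds_train_batch = (
    ds_train.map(split_label)
    .cache()
    .batch(BatchSize)
    .prefetch(tf.data.AUTOTUNE)
)
ds_test_batch = (
    ds_test.map(split_label)
    .cache()
    .batch(BatchSize)
    .prefetch(tf.data.AUTOTUNE)
)

# pass the datasets directly, without x/y dicts or batch_size
history = model.fit(
    ds_train_batch,
    validation_data=ds_test_batch,
    epochs=200,
    callbacks=[checkpoint_save],
    verbose=1,
)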
How should I approach this to make it work?
Thank you in advance.