Hey there,
I am super new to TensorFlow, so please forgive any missteps.
I have been given two versions of a model that differ only in their input/output shapes, and I am seeing some differences in how TensorFlow handles predictions between the two.
An excerpt of the input layer shapes:
v1:
feature_1 (None, 1) <dtype: 'string'>
feature_2 (None, 1) <dtype: 'string'>
feature_3 (None, 1) <dtype: 'float32'>
feature_4 (None, 1) <dtype: 'string'>
feature_5 (None, 1) <dtype: 'string'>
v2:
feature_1 (None,) <dtype: 'string'>
feature_2 (None,) <dtype: 'string'>
feature_3 (None,) <dtype: 'float32'>
feature_4 (None,) <dtype: 'string'>
feature_5 (None,) <dtype: 'string'>
To make a request to each model using .predict(), both versions accept a dictionary of the form:
{
"<feature_name>": np.array([<value>])
}
v1_p = v1.predict(inputs)  # [0.000062629]
v2_p = v2.predict(inputs)  # [[[0.00062629]]]
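For reference, this is roughly how I build the input dictionary (feature names and values are placeholders matching the shapes above):

```python
import numpy as np

# Placeholder values -- my real features are strings/floats as listed above.
features = {
    "feature_1": np.array(["a_string"]),
    "feature_2": np.array(["a_string"]),
    "feature_3": np.array([91.5]),
    "feature_4": np.array(["a_string"]),
    "feature_5": np.array(["a_string"]),
}

# Each value is a rank-1 array of shape (1,): a batch of one scalar.
for name, value in features.items():
    print(name, value.shape)
```

The same dictionary works for both v1 and v2, which is part of what confuses me.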
However, when using a model server, the format of the request differs:
v1:
{
"instances": [
{
"feature_1": ["a_string"],
"feature_2": ["a_string"],
"feature_3": [91.5],
"feature_4": ["a_string"],
"feature_5": ["a_string"],
}
]
}
v2:
{
"instances": [
{
"feature_1": "a_string",
"feature_2": "a_string",
"feature_3": 91.5,
"feature_4": "a_string",
"feature_5": "a_string",
}
]
}
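For completeness, this is roughly how I construct and compare the two request payloads (trimmed to two features for brevity; the endpoint URL is a placeholder following TensorFlow Serving's usual REST pattern):

```python
import json

# v1: each value wrapped in a list; v2: bare scalars.
instance_v1 = {"feature_1": ["a_string"], "feature_3": [91.5]}
instance_v2 = {"feature_1": "a_string", "feature_3": 91.5}

payload_v1 = json.dumps({"instances": [instance_v1]})
payload_v2 = json.dumps({"instances": [instance_v2]})

# I then POST each payload to the model server, e.g. (placeholder host/model name):
# requests.post("http://<host>:8501/v1/models/<model_name>:predict", data=payload_v1)
print(payload_v1)
print(payload_v2)
```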
I have been trying to work out why the input format changes for the model server, but I am missing something (likely fundamental) needed to understand it.
- Why does the model server require a different input format for each version, while TensorFlow's .predict() accepts the same numpy arrays for both?
- What is the difference between a shape of (None,) and a shape of (None, 1)?
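My current (possibly wrong) mental model of the shape difference, illustrated with plain numpy:

```python
import numpy as np

# A batch of three scalar strings: rank 1, shape (3,) -> would match (None,)
batch_rank1 = np.array(["a", "b", "c"])

# The same batch with an extra trailing axis: rank 2, shape (3, 1) -> would match (None, 1)
batch_rank2 = np.array([["a"], ["b"], ["c"]])

print(batch_rank1.shape)  # (3,)
print(batch_rank2.shape)  # (3, 1)
```

Is that the right way to think about it, and is it what drives the different JSON formats?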