Hi.
I have a test dataset with 285 columns.
As I understand it, that means I need 284 columns as the input X and 1 column as the output y.
There are only 40 possible output values, so the NN has an output shape of 40.
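So each label should become a one-hot vector of length 40. A quick check of that encoding (a minimal sketch, assuming the labels are integers from 0 to 39):

from keras.utils import to_categorical
# labels 0..39 become one-hot vectors of length 40
print(to_categorical([0, 3, 39], num_classes=40).shape)  # -> (3, 40)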
This is my code in Python:
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

def get_x(dataset):
    # feature columns
    X = dataset.iloc[:, 1:285].values
    return X

def get_y(dataset):
    # label column
    y = dataset.iloc[:, 285:286].values
    return y
def make_nn():
    output = 40
    model = Sequential()
    model.add(Dense(284, input_dim=284, activation='softmax'))
    model.add(Dense(284, activation='softmax'))
    model.add(Dense(output, activation='softmax'))
    # stack 250 more pairs of layers
    for i in range(250):
        model.add(Dense(284, activation='softmax'))
        model.add(Dense(40, activation='softmax'))
    model.summary()
    model.compile(loss='categorical_crossentropy', optimizer='adamax', metrics=['accuracy'])
    return model
def train_nn(dataset, model):
    x = get_x(dataset)
    y = get_y(dataset)
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=0)
    output = 40
    # one-hot encode the integer labels into vectors of length 40
    y_train_cat = keras.utils.to_categorical(y_train, output)
    y_test_cat = keras.utils.to_categorical(y_test, output)
    model.fit(X_train, y_train_cat, validation_data=(X_test, y_test_cat), epochs=1000, batch_size=6068)
dataset = get_dataset('test.csv')
model = make_nn()
for i in range(50):
    train_nn(dataset, model)
    # save a checkpoint after each pass
    print("Save model")
    model.save('model/model.test.h5')
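(get_dataset is not shown above; it just loads the CSV, roughly like this, assuming pandas; the real code may differ slightly:)

import pandas as pd

def get_dataset(path):
    # load the whole CSV into a DataFrame
    return pd.read_csv(path)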
But when I run it, I see output like this:
Epoch 1/10
2/2 [==============================] - 31s 8s/step - loss: 3.6861 - accuracy: 0.0721 - val_loss: 3.6820 - val_accuracy: 0.0597
The accuracy has never exceeded 0.14, which, as I understand it, is far too low.
The loss also never dropped below 2.
What am I doing wrong? Any advice, please?
Also, which is better: a big batch_size or a small one?