I have read that deep learning models outperform traditional machine learning models.
I have a time-series classification problem where the output is 0 or 1. I used the following Keras model (1D convolutions) to classify my time series:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
# Two 1D convolutions over the (25 timesteps, 4 features) input
model.add(Conv1D(10, kernel_size=3, input_shape=(25, 4)))
model.add(Conv1D(10, kernel_size=2))
model.add(GlobalMaxPooling1D())
model.add(Dense(10))
# Single sigmoid output for the binary (0/1) label
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
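This is roughly how I train and evaluate it (a sketch with placeholder names, not my exact code; X has shape (n_samples, 25, 4) and y holds the 0/1 labels):

model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)
loss, acc = model.evaluate(X_test, y_test)
print(acc)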
Unfortunately, my deep learning model gives very poor results (an accuracy of about 0.33), and I am worried about why this happens. I then tried a traditional machine learning model (a random forest), which gave an accuracy of about 0.6.
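For reference, the random forest baseline I compared against looks roughly like this (again a sketch with placeholder names; I assume each (25, 4) series is flattened into a 100-dimensional feature vector):

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Flatten each (25, 4) series into a single 100-dimensional vector
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train_flat, y_train)
print(accuracy_score(y_test, rf.predict(X_test_flat)))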
I am puzzled as to why the deep learning model performs so badly. I would appreciate feedback on why this happens and whether there is a way to avoid it.
I am happy to provide more details if needed.