We usually train a model on a balanced dataset. Even when we do not have one, we use methods such as SMOTE to create a balanced dataset for training.
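For instance, here is a minimal sketch of the kind of balancing I mean, using SMOTE from the imbalanced-learn package on a toy dataset (the dataset, class ratio, and parameters are just for illustration):

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced binary dataset: roughly 5% positives (the "rare anomaly" class).
X, y = make_classification(
    n_samples=10_000,
    n_features=20,
    weights=[0.95, 0.05],
    random_state=42,
)
print("Before SMOTE:", Counter(y))

# Oversample the minority class so the training data becomes balanced.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("After SMOTE: ", Counter(y_res))  # both classes now have equal counts
```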
The question is: how reliable is the trained model when it is applied to imbalanced data (e.g., in real-world scenarios, anomalies are usually rare)? Why can't we just train and test the model on an imbalanced dataset?