From what I've read, I understand that undersampling the majority class with Tomek Links should yield the same result as Edited Nearest Neighbours with 1 neighbour. However, when I tried both in the imbalanced-learn library, I got different outputs. Both methods are described in the documentation under 3.2.2 Cleaning under-sampling techniques.
This is what I tried:
from collections import Counter
Counter(y_train)
Out[45]: Counter({0: 91, 1: 26})
Using Tomek Links:
from imblearn.under_sampling import TomekLinks
X_res1, y_res1 = TomekLinks(ratio='all').fit_sample(X_train_std, y_train)
Counter(y_res1)
Out[44]: Counter({0: 88, 1: 23})
Using Edited Nearest Neighbours:
from imblearn.under_sampling import EditedNearestNeighbours
X_res1, y_res1 = EditedNearestNeighbours(n_neighbors=1).fit_sample(X_train_std, y_train)
Counter(y_res1)
Out[50]: Counter({0: 73, 1: 26})
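To make the comparison concrete, here is an illustrative from-scratch sketch of the two rules as I understand their definitions (plain NumPy, not imbalanced-learn's actual implementation; the function names `nearest`, `tomek_links`, `enn_1` and the toy data are my own). A Tomek link requires two samples of opposite class to be *mutual* nearest neighbours, while 1-NN ENN only asks whether a sample's own nearest neighbour disagrees with it, which is a one-sided condition:

```python
import numpy as np

def nearest(X, i):
    """Index of the nearest neighbour of sample i (Euclidean)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf          # exclude the sample itself
    return int(np.argmin(d))

def tomek_links(X, y):
    """Indices belonging to a Tomek link: two samples of opposite
    class that are each other's nearest neighbour."""
    removed = set()
    for i in range(len(X)):
        j = nearest(X, i)
        if y[i] != y[j] and nearest(X, j) == i:
            removed.update((i, j))
    return removed

def enn_1(X, y):
    """ENN with 1 neighbour: drop any sample whose single nearest
    neighbour has a different class (one-sided, not mutual)."""
    return {i for i in range(len(X)) if y[nearest(X, i)] != y[i]}

# Toy data: three collinear points at 0, 1, and 1.9
X = np.array([[0.0], [1.0], [1.9]])
y = np.array([0, 1, 0])

print(tomek_links(X, y))   # {1, 2}: only the mutual pair forms a link
print(enn_1(X, y))         # {0, 1, 2}: sample 0 is flagged as well
```

On this toy data, samples 1 and 2 are mutual nearest neighbours of opposite class, so they form the only Tomek link; sample 0's nearest neighbour is sample 1 (different class), but the relation is not mutual, so only 1-NN ENN flags it. If this sketch matches the real definitions, every sample in a Tomek link also fails the 1-NN ENN test, but not the other way around, so ENN can remove strictly more.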
Is my interpretation right or are there any mistakes in the code?