
I'd like to fine-tune Google's BERT model for entity-level sentiment, similar to Google's Natural Language API (demo). I'm using the sample fine-tuning Colab implementation found here. My current approach is to fine-tune BERT for span detection together with an entity-level sentiment property, by creating tag subclasses labelled with sentiment values. Here's the first sentence:

|    | Sentence #   | Word          | POS   | Tag   |
|---:|:-------------|:--------------|:------|:------|
|  0 | Sentence: 1  | Thousands     | NNS   | O     |
|  1 | nan          | of            | IN    | O     |
|  2 | nan          | demonstrators | NNS   | O     |
|  3 | nan          | have          | VBP   | O     |
|  4 | nan          | marched       | VBN   | O     |
|  5 | nan          | through       | IN    | O     |
|  6 | nan          | London        | NNP   | B-geo |
|  7 | nan          | to            | TO    | O     |
|  8 | nan          | protest       | VB    | O     |
|  9 | nan          | the           | DT    | O     |
| 10 | nan          | war           | NN    | O     |
| 11 | nan          | in            | IN    | O     |
| 12 | nan          | Iraq          | NNP   | B-geo |
| 13 | nan          | and           | CC    | O     |
| 14 | nan          | demand        | VB    | O     |
| 15 | nan          | the           | DT    | O     |
| 16 | nan          | withdrawal    | NN    | O     |
| 17 | nan          | of            | IN    | O     |
| 18 | nan          | British       | JJ    | B-gpe |
| 19 | nan          | troops        | NNS   | O     |
| 20 | nan          | from          | IN    | O     |
| 21 | nan          | that          | DT    | O     |
| 22 | nan          | country       | NN    | O     |
| 23 | nan          | .             | .     | O     |

Here, the B-geo tag for London would carry -0.3, the B-geo tag for Iraq -0.1, and the B-gpe tag for British 0.0, where -0.3, -0.1, and 0.0 are arbitrary sentiment values that I assigned. Would appending these sentiment values to Tag be the right approach?
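One way to realize the tag-subclass idea is to bucket the continuous sentiment scores into a few discrete levels and append the bucket to the BIO tag, so the token classifier sees composite labels like `B-geo|neg`. This is a sketch of that labelling step only, not from the linked Colab; the bucket thresholds (±0.25) and the `|` separator are arbitrary choices:

```python
# Sketch: combine BIO entity tags with bucketed sentiment into composite labels.
# Thresholds and separator are arbitrary assumptions, not from the linked Colab.

def sentiment_bucket(score):
    """Map a continuous sentiment score in [-1, 1] to a coarse bucket."""
    if score <= -0.25:
        return "neg"
    if score >= 0.25:
        return "pos"
    return "neu"

def composite_tag(tag, score):
    """Append a sentiment bucket to entity tags; leave 'O' untouched."""
    if tag == "O":
        return tag
    return f"{tag}|{sentiment_bucket(score)}"

# The entity tokens from the sample sentence, with the scores
# assigned above (London -0.3, Iraq -0.1, British 0.0).
tokens = [("London", "B-geo", -0.3), ("Iraq", "B-geo", -0.1), ("British", "B-gpe", 0.0)]
labels = [composite_tag(tag, score) for _, tag, score in tokens]
print(labels)  # ['B-geo|neg', 'B-geo|neu', 'B-gpe|neu']
```

The resulting composite strings can then be fed through the same label-to-id mapping the NER Colab already uses, since the model only ever sees an enlarged tag set.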

The only related question I could find was this, asked 8+ years ago.

  • And what exactly is your question? What you say you do sounds right to me. Is there any issue? – Jindřich Dec 14 '21 at 08:56
  • @Jindřich my more specific question: is this in line with NLP best practices? And are there alternative approaches? One potential problem is that it becomes difficult to balance the classifications. For example, there are more tags closer to neutral (0) than to the extremes (±1). – mmz Dec 14 '21 at 13:53
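Regarding the imbalance raised in the comment (far more near-neutral tags than extreme ones), a common remedy, not mentioned in the thread, is to weight each class inversely to its frequency and pass those weights to the loss function. A minimal pure-Python sketch of computing such weights, using an assumed toy label distribution:

```python
# Sketch: inverse-frequency class weights to counter a neutral-heavy label
# distribution. In practice these weights would be passed to a weighted
# cross-entropy loss during fine-tuning.
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total/count so rare (extreme) classes count more."""
    counts = Counter(labels)
    total = len(labels)
    return {label: total / count for label, count in counts.items()}

# Toy distribution (assumed): far more near-neutral tags than extreme ones.
labels = ["neu"] * 8 + ["neg"] * 1 + ["pos"] * 1
weights = inverse_frequency_weights(labels)
print(weights)  # {'neu': 1.25, 'neg': 10.0, 'pos': 10.0}
```

With PyTorch, for instance, the weight values (in label-id order) could be supplied via the `weight` argument of `torch.nn.CrossEntropyLoss`.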

0 Answers