You can use Welch t-tests for the pairwise comparisons, and something like Bonferroni correction or Tukey's HSD to adjust for multiple comparisons (alternatively, you could leave the comparisons uncorrected --- the choice depends on whether you're more concerned about Type I or Type II error). If this is an exploratory analysis --- i.e. you don't have any particular hypothesis about which groups should differ --- then you should test all pairwise comparisons (using something like Tukey's HSD) and be sure to get descriptive statistics on the groups (mean and SD, probably also min, max, and anything else that might be relevant) so you have some context for interpreting the pattern of results.
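Since you didn't mention what software you're using, here's a rough sketch of the pairwise Welch approach in Python with scipy and a Bonferroni correction (the group values are made-up placeholders --- substitute your own data):

```python
# All pairwise Welch t-tests with a Bonferroni-adjusted alpha.
from itertools import combinations
from scipy import stats

# Placeholder data -- replace with your four groups.
groups = {
    "G1": [5.1, 4.8, 5.5, 5.0, 4.9, 5.3],
    "G2": [6.0, 6.2, 5.8, 6.1, 6.4, 5.9],
    "G3": [7.1, 7.4, 6.9, 7.2, 7.0, 7.3],
    "G4": [5.2, 5.0, 5.4, 4.9, 5.1, 5.3],
}

pairs = list(combinations(groups, 2))
alpha = 0.05
bonferroni_alpha = alpha / len(pairs)  # 6 pairwise comparisons among 4 groups

results = {}
for a, b in pairs:
    # equal_var=False is what makes this the Welch (unequal-variance) t-test
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    results[(a, b)] = p
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at corrected alpha: {p < bonferroni_alpha}")
```

(Bonferroni is the most conservative choice here; for Tukey's HSD you'd reach for something like `statsmodels`' `pairwise_tukeyhsd` instead.)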
If you want to directly test the hypothesis that G3 differs from the other groups while they do not differ from each other, then planned contrasts, such as Helmert contrasts, are a more elegant solution. In this case, you would test the following three contrasts (or something like them):
G3 vs. mean(G1, G2, G4)
G4 vs. mean(G1, G2)
G2 vs. G1
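The three contrasts above can be obtained with Helmert coding if you order the factor levels as G1, G2, G4, G3 (so G3 enters last and is compared against the mean of the rest). A sketch with statsmodels, using simulated data in place of yours (the shift in G3 and the column names `y`/`group` are my assumptions):

```python
# Planned (Helmert) contrasts via an OLS model in statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: G3 shifted up, per the hypothesis.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["G1", "G2", "G4", "G3"], 20),
    "y": np.concatenate([
        rng.normal(5.0, 1.0, 20),   # G1
        rng.normal(5.1, 1.0, 20),   # G2
        rng.normal(5.0, 1.0, 20),   # G4
        rng.normal(7.0, 1.0, 20),   # G3: the hypothesized "culprit"
    ]),
})

# Order the levels G1, G2, G4, G3 so Helmert coding yields:
#   contrast 1: G2 vs G1
#   contrast 2: G4 vs mean(G1, G2)
#   contrast 3: G3 vs mean(G1, G2, G4)
df["group"] = pd.Categorical(df["group"], categories=["G1", "G2", "G4", "G3"])

model = smf.ols("y ~ C(group, Helmert)", data=df).fit()
print(model.summary())  # the last contrast row tests G3 vs. the rest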
If you were to see a significant difference for the first contrast but not the other two, you might conclude that G3 is the "culprit" (although this kind of interpretation is common, note the danger of interpreting a non-significant difference as "no difference", especially in underpowered designs!).
Note that in the specific example you give, though, I personally don't see evidence that G3 is different from all of the rest. G1 and G2 are significantly different, and G3 does not differ from G1. If that's your hypothesis, it looks like it's not cleanly supported by the data. If you haven't already, I recommend using a boxplot to visualize the differences among the groups, since it appears your expectations don't quite line up with the data.
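For the visualization, a minimal matplotlib sketch (again with placeholder data) would be something like:

```python
# Side-by-side boxplots of the four groups.
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; drop this for interactive use
import matplotlib.pyplot as plt

# Placeholder data -- replace with your groups.
groups = {
    "G1": [5.1, 4.8, 5.5, 5.0, 4.9, 5.3],
    "G2": [6.0, 6.2, 5.8, 6.1, 6.4, 5.9],
    "G3": [7.1, 7.4, 6.9, 7.2, 7.0, 7.3],
    "G4": [5.2, 5.0, 5.4, 4.9, 5.1, 5.3],
}

fig, ax = plt.subplots()
bp = ax.boxplot(list(groups.values()))
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("outcome")  # assumed label; use your variable name
fig.savefig("groups_boxplot.png")
```

Overlaying the raw points on top of the boxes (e.g. with a strip plot) is also worth the extra line or two when the groups are small.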