1

If I see that AI not only makes great pics but also reasons and proves points better than me, should I use it in online discussions as a "logic tool"? How different is it from just googling stuff and then using it in an argument? Is it ethical?

Groovy
  • 6,632
  • 7
  • 24

3 Answers

3

I mean, that depends on what you see as the goal and purpose of the discussion.

Like Scott Rowe already gave you an example: taking the path as the goal, arguing that one aim of philosophy could be an exercise in sharpening one's mind. So if you compare it to googling, it's kind of the difference between typing your argument into Google and pasting a link with your claim in the headline, versus actually putting in the work to figure out whether the source is trustworthy, whether the article even supports your argument or is just clickbait, whether the article itself is coherent and makes sense, and so on.

And he also mentions another motivation, which is "to win" the discussion. If you see these discussions as a competition, you may legitimately get into the question of how ethical that is, as it basically amounts to cheating, and whether you should bear a burden of proof in reacting to something that cost the other person next to nothing to produce. The same applies to politics and human interaction in general: if your comment expresses a stance for or against something, it forces people to react to it, because it intentionally or unintentionally feeds into the narrative of an existing movement, and so on. So while facts are often neutral, the act of expressing them often isn't.

There's also the question of who deserves the credit for an idea; technically, "AI" is often just a front for hiding plagiarism. Does ChatGPT give full credit to all the people on GitHub and other source platforms?

And to add one of my own: maybe the goal is actually the goal. Maybe you ask a question because you actually care about the answer. In that case you can use all kinds of tools and techniques to give you an edge, because your competitor isn't another human being but the lack of understanding itself. So you criticize the "opponent's" claim not because you don't like them, but because you think it is unsuited to solving the problem.

Though in each case that is the dangerous part:

reasons and proves points better than me

The thing is, if you don't understand what you read and post, you're not "utilizing a tool" ... "you become a tool". If the goal is to understand a thing, have you come closer by copying something that looks like an explanation but that you can't grasp? No, you likely haven't. If your goal is to become smarter, does having a problem solver help? No, it just obscures the fact that you can't do it yourself. Only if your goal is to win might it give you an edge, and even then only if the other side taps out, because if they don't, you're not going to be able to elaborate. You might not even grasp their rebuttals, and the entire discussion derails.

Also, as David Gudeman points out, if you don't understand it, it might be because it doesn't make sense. So be careful about copying things you don't understand.

haxor789
  • 9,449
  • 13
  • 37
3

"How different is it from just googling stuff and then using it in an argument? Is it ethical?"

Depends on whether you understand and can verify the truth/validity/relevance of the material you find on Google or are given by an LLM. Neither Google nor LLMs understand the material or know whether it is relevant to your purposes or whether there are important caveats that ought to be mentioned.

Unfortunately we live in the Bullshitocene era, where arguing (especially on online forums) with any information that serves our purposes, without caring whether it is true or valid, is ubiquitous; and if it is not actually socially acceptable, there is at least very little pushback from society. LLMs are likely to make this phenomenon more common, especially when we delegate even fact-checking to LLMs that don't understand context and so on.

In short, the onus is still on you, not the AI, to ensure that your contributions to discussions are correct; but we have to contend with human nature.

1

There are different types of AI, of course. Some are based on formal logic and prove theorems accordingly. Their results seem safe to use in an argument.
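
For illustration, here is roughly what such a machine-checked result looks like: a minimal sketch in Lean 4, where the theorem name my_add_comm is just an illustrative choice and Nat.add_comm is the standard library lemma it wraps.

```lean
-- A machine-checked statement: addition of natural numbers is commutative.
-- If Lean's kernel accepts this proof, the claim holds, regardless of who
-- pastes it into a discussion.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The prover only guarantees the step from premises to conclusion, though; whether the formal statement captures what you actually meant in the argument is still up to you.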

As already pointed out, LLMs are significantly less trustworthy and often make stuff up. At the same time, I don't see how saying "ChatGPT said X" in a discussion is fundamentally different from saying "Person Y said X." In both cases, you could reasonably be called out for not having verified it yourself. In both cases, the people to whom you say it should be skeptical, taking into account who or what said it. ChatGPT produces false statements in different ways than humans do. But person Y also produces false statements in different ways than person Z. In each case, the source should be taken into account.

present
  • 2,668
  • 1
  • 10
  • 23