Discussion on AI policy
I would like to bring forward a proposal for a suitable AI policy going forward. Globetrotter30 (talk) 12:52, 16 May 2026 (UTC)
- @Ikan Kekek @Sbb1413
- Your thoughts? Globetrotter30 (talk) 14:16, 16 May 2026 (UTC)
- I guess the draft guideline can be summed up like this, "You can use AI on Wikivoyage, but any use of AI must be disclosed. Repeated failures to disclose AI use may be treated as disruptive editing, which may lead to blocks from editing." Sbb1413 (he) (talk • contribs) 14:34, 16 May 2026 (UTC)
- I think referring to disruptive editing is good. –LPfi (talk) 15:15, 16 May 2026 (UTC)
Some comments:
- If we are to have a Definitions section, it should contain no guidelines, as that makes the structure unclear. Cf User:LPfi/AI guidelines.
- History, culture, climate etc. are things I absolutely don't want an AI to handle. We want those described as relevant to the traveller and in our style, and there are lots of subtle decisions made when writing such sections. An Understand section that looks complete but misrepresents or misses important points can be as bad as describing a train that doesn't run or a restaurant that isn't there. The latter, at least, are things that the reader knows can change.
- AI allows fast creation of content. We cannot put the burden of verification on other editors.
- The wording on fact checking is quite vague. The contributor needs more robust advice.
- The tasks labelled as "mechanical" can be problematic.
- Translations can involve large volumes of text, where the style may not be right and errors may have been introduced – machine translations are nowhere near perfect.
- Spelling and grammar corrections involve the choice of language variant, and "correcting" that can be highly disruptive. "Language refinement" is even more problematic: we don't want the AI's style forced upon us.
- A human editor being accountable helps only as long as the editor is indeed part of the community. We cannot leave corrections to be the responsibility of a long-gone drive-by editor.
- We have had one user who posted AI generated comments (that I know of). The style was highly disruptive, mostly by using flowery language. Disclosure of the tool helps as little as the "I was drunk" excuse. The only AI uses I see as potentially beneficial in discussions are:
- writing help, such as spelling and grammar aids, perhaps translation, and physical help such as speech-to-text tools;
- as a research tool; and
- summaries posted as part of a comment, such as a table or graph of recent edits, where those are the subject of the thread.
- I don't have any opinion yet on the merits of different forms of disclosure or requirements of disclosure.
–LPfi (talk) 15:51, 16 May 2026 (UTC)
- I'm opposed to AI edits, period. Since it's probably not possible to ban all AI edits, my opinion is that any noticeable or questionable AI edit should be prohibited on the site. Ikan Kekek (talk) 16:17, 16 May 2026 (UTC)
- @Globetrotter30: FYI, draft pages shouldn't be hosted in projectspace, since that can give the illusion that they're actual policies or guidelines. Unless anyone has any objections, I'm going to move this into your userspace tomorrow. //shb (t | c | m) 04:47, 17 May 2026 (UTC)
A complication
Only in the US, and not the Supreme Court there, but a US judge has ruled: art created solely by artificial intelligence cannot be copyrighted.
So what happens if someone uploads AI-created material here or to Commons? The software assigns copyright to the contributor; is that invalid or illegal? Pashley (talk) 19:05, 16 May 2026 (UTC)
- The legal situation is unclear – we don't know what AI-assisted contributions might be copyright protected – but material not under copyright gets uploaded all the time. I get attributed for deleting a comma or adding a phone number, and Commons has templates for non-eligible files, such as {{textlogo}} and {{PD-USGov}}. –LPfi (talk) 20:11, 16 May 2026 (UTC)