4 reasons proofreading tools mark your posts as AI-generated
Tools that check whether an article was created by AI are often unreliable. They frequently flag human-written articles as AI-generated based on grammar, word choice, and style, causing real problems for students and anyone else whose work is judged on their written words.
So if these tools keep flagging your posts as AI-generated, here’s why, and what you can do about it.
Your grammar is too polished
One way AI detection tools flag writing as AI-generated is by looking at how grammatically polished it is and whether it relies mostly on standard, common sentence structures. In theory, AI doesn’t make grammatical errors, whereas even the best human writers slip up occasionally. So if your writing is grammatically flawless but stylistically generic, it can be judged as lacking a personal voice, which can trip AI content detectors.
To illustrate, consider an article written by ChatGPT. When its content was pasted into GPTZero, the tool rated it as having an extremely high probability of being AI-written: 100%.
After introducing a few minor grammatical errors (deleting some commas, adding typos) along with a few small stylistic changes, the GPTZero score drops significantly, to 81%.
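Detection services don’t publish their scoring logic, but to make the idea of “too uniform” writing concrete, here is a minimal toy sketch in Python. It only measures how evenly sentence lengths are distributed; the function name and scoring are made up for illustration and are not how GPTZero actually works.

```python
import re
import statistics

# Toy, purely illustrative heuristic -- NOT any real detector's method.
# It scores how uniform sentence lengths are, on the assumption that
# very even, "generic" sentence structure reads as more AI-like.
def uniformity_score(text: str) -> float:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    spread = statistics.stdev(lengths) / statistics.mean(lengths)
    # Low spread (very uniform sentences) -> score closer to 1.0
    return max(0.0, 1.0 - spread)

uniform = "The tool checks grammar. The tool checks style. The tool checks words."
varied = "Detectors guess. Sometimes they are right, but one typo or an oddly long, rambling sentence can swing the result."
print(uniformity_score(uniform))  # close to 1.0 (uniform, "AI-like")
print(uniformity_score(varied))   # noticeably lower (varied, "human-like")
```

Real detectors weigh many more signals than this, but the example shows why a few typos or one unusually long sentence can move a score so much: surface-level statistics are easy to nudge.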
You use words that AI commonly uses
Many of us have developed an intuition for whether an article was written by AI: the uniformly smooth passages and telltale words AIs favour, such as “delve”, “highlight”, “underscore”, “pose”, “the world of”, “strive”, and countless other words and phrases. One piece of evidence is the sharp increase in the use of the word “delve” in academic research articles in 2023, coinciding with the release of ChatGPT.
Using the same text as the previous example, making a few small changes to the segments most suspected of being AI-generated drops the GPTZero score to 49% – low enough to pass as genuinely human-written. Clearly, AI checkers can be fooled with just a few small tweaks, which is one of many examples highlighting how unreliable they are.
When building large language models, AI companies often outsource data annotation to countries where English is a common second language. As a result, some of the words we associate with AI text, such as “delve”, may reflect the vocabulary of annotators who speak English as a second language.
Add to this the fact that many people who become fluent in English as a second language know grammar rules more explicitly than native speakers, who take a more intuitive approach. Second-language English users therefore face a double risk of false positives from detection tools, because both their grammar and their vocabulary choices match the tools’ assessment criteria.
Using an AI writing assistant can trigger detection
Both of the problems above can occur even without using generative AI tools. But if you write original articles with the help of a writing assistant like Grammarly, your work becomes even more likely to be flagged as AI-written. This is a genuine gray area in academia: these tools are often technically generative AI assistants, and students sometimes use them as a replacement for learning rather than as a helpful supplement.
So when using tools like Grammarly, be careful not to lean on them too heavily, and treat their suggestions as learning opportunities rather than accepting them mindlessly.
You copy output straight from ChatGPT
Finally, and most obviously, if you use ChatGPT and do nothing to modify its output, detection tools will almost certainly mark your content as AI-generated; that is not a false positive. But even if you are genuinely writing completely original, unaided work, perfect grammar and certain vocabulary and phrasing choices can still earn you false positives.