This article was written by a real, flesh-and-blood human, but an increasing amount of the text and video content coming online isn't. It's coming from generative AI tools, which have gotten good at producing realistic-sounding text and natural-looking video. So how do you tell human writing from robotic writing?
The answer is far more complicated than the urban legend about the overuse of em dashes would have you believe. Plenty of human writers use that particular piece of punctuation frequently, perhaps too frequently, as any editor will tell you. It may have more to do with certain phrases, and with the fact that, like any author, large language models repeat themselves.
That's the logic behind AI detection programs. The problem is that those systems are often powered by AI themselves, and they offer few details about how they reach their conclusions. That makes them hard to trust.
A new feature from the AI detection company Copyleaks, called AI Logic, aims to provide more insight not just into whether and how much of a piece of writing may have been AI-generated, but into the evidence behind that determination. The result looks something like a plagiarism report, with individual passages of text highlighted. You can then see whether Copyleaks flagged a passage because it closely matches text on a website known to carry AI-generated content, or because it contains a phrase the company's research has determined shows up far more often in AI-produced text than in human writing.
You don't even have to seek out a generative AI tool these days to produce text with one. Tech companies like Microsoft and Google are adding AI helpers to workplace apps, and it's showing up in dating apps, too. A survey by the Kinsey Institute and Match, which owns Tinder among other dating apps, found that 26% of singles are using AI in dating, whether to punch up their profiles or come up with better opening lines. AI writing is unavoidable, and there are times when you probably want to know whether a person actually wrote what you're reading.
The extra information Copyleaks provides about the text it checks marks a step forward in the search for a way to tell AI writing from human writing, but the most important factor still isn't software. A human needs to look at these results and work out what's accurate and what isn't.
"The idea is really to reach a point where there is no question mark, to provide as much evidence as possible," Copyleaks CEO Alon Yamin told me.
A fine sentiment, but I wanted to see for myself what the AI detector would catch, and whether it could explain why.
How does AI detection work?
Copyleaks started out using AI models to identify specific writing styles as a way of detecting plagiarism. When ChatGPT exploded onto the scene in 2022, the company realized it could use the same approach to detect the style of large language models. Yamin called it "AI versus AI," with a model trained to pick up on specific factors such as sentence length, punctuation use and particular phrases.
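To make that concrete, here is a minimal, hypothetical sketch of what "AI versus AI" feature extraction can look like: pulling simple stylometric signals such as sentence length, punctuation use and stock phrases out of a passage so they can be fed to an ordinary classifier. This is an illustration only, not Copyleaks' actual model; the feature list and the example phrase are assumptions.

```python
# A toy sketch of stylometric feature extraction. None of this reflects
# Copyleaks' real system; the features and the stock phrase are invented
# for illustration.
import re
import statistics


def stylometric_features(text: str) -> dict:
    """Turn a passage into a few of the signals the article mentions."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()] or [text]
    words = text.split()
    return {
        "avg_sentence_len": statistics.mean(len(s.split()) for s in sentences),
        "em_dash_rate": text.count("\u2014") / max(len(words), 1),
        "comma_rate": text.count(",") / max(len(words), 1),
        # Count a cliché that LLMs are often said to overuse (hypothetical).
        "stock_phrase_hits": text.lower().count("in today's fast-paced world"),
    }


if __name__ == "__main__":
    sample = ("In today's fast-paced world, technology is evolving rapidly. "
              "This article was written by a person, though.")
    print(stylometric_features(sample))
```

In a real detector, feature vectors like these, along with far richer signals, would be used to train a supervised model on large sets of labeled human and AI text.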
The problem with using AI to detect AI is that large language models are often "black boxes": they produce output that reads well, and you may know what went into their training, but they don't show their work. Copyleaks' AI Logic feature tries to pull back the curtain so people can understand why the copy they're checking may, in fact, be AI-generated.
"It's really important to get more and more transparency around AI models, even into how they work internally, as much as possible," Yamin said.
Read more: AI Essentials: 29 Ways to Make Gen AI Work for You, According to Our Experts
AI Logic uses two different methods to identify text written by an LLM. One, called AI Source Match, compares the text against a database of AI-generated content that Copyleaks has built, either in-house or by collecting it from sites known to publish AI-made material. It works much like a traditional plagiarism checker. "What we've found is that with AI content, much of the time, if you ask the same question or a similar question repeatedly, you'll get similar versions of the same answer," Yamin said.
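As a rough illustration of the source-matching idea (not Copyleaks' implementation), the sketch below compares a passage's word n-grams against a small, made-up corpus of known AI-generated text, the way a plagiarism checker compares documents; the corpus, n-gram size and scoring here are all assumptions.

```python
# Toy source matching: what fraction of a passage's 5-word n-grams also
# appear in a (hypothetical) corpus of known AI-generated text?

def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def source_match_score(candidate: str, known_ai_corpus: list[str]) -> float:
    """Fraction of the candidate's n-grams found in the known-AI corpus."""
    cand = ngrams(candidate)
    if not cand:
        return 0.0
    corpus_grams = set().union(*(ngrams(doc) for doc in known_ai_corpus))
    return len(cand & corpus_grams) / len(cand)


if __name__ == "__main__":
    known_ai = ["the team overcame adversity to secure a historic victory "
                "in the championship game"]
    query = "overcame adversity to secure a historic victory in the championship"
    print(f"Source match: {source_match_score(query, known_ai):.0%}")
```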
The other component, AI Phrases, detects words, terms and groups of words that Copyleaks' research has determined LLMs are far more likely to use than human authors. In one sample report, Copyleaks flagged the phrase "with advancements in technology" as likely AI-written. The company's analysis of AI-generated content found the phrase appears 125 times per million AI-written documents, a rate far higher than in human-written ones.
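A back-of-the-envelope version of that frequency comparison is sketched below. It treats per-million document rates in an AI corpus and a human corpus as the inputs and reports the ratio; apart from the 125-per-million figure quoted above, every number in it is invented for illustration.

```python
# Hypothetical phrase-frequency comparison, in the spirit of AI Phrases.

def per_million(doc_hits: int, total_docs: int) -> float:
    return doc_hits / total_docs * 1_000_000


def phrase_ratio(ai_hits: int, ai_docs: int,
                 human_hits: int, human_docs: int) -> float:
    """How many times more common the phrase is in AI text than human text."""
    ai_rate = per_million(ai_hits, ai_docs)
    human_rate = per_million(human_hits, human_docs) or 0.5  # smooth zero counts
    return ai_rate / human_rate


if __name__ == "__main__":
    # Assumed counts: 125 of 1M AI documents (figure cited in the article)
    # versus 4 of 1M human documents (made up for this example).
    ratio = phrase_ratio(ai_hits=125, ai_docs=1_000_000,
                         human_hits=4, human_docs=1_000_000)
    print(f"Phrase is roughly {ratio:.0f}x more common in AI text")
```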
The question is, does it work?
Can Copyleaks spot AI content and explain why?
I ran some documents through Copyleaks to find out whether AI Logic could identify what I knew to be AI-created content, and whether it would flag human-written content as AI-generated.
Example: A human-written classic
What better way to test an artificial intelligence tool than with a story about artificial intelligence? I had Copyleaks check a portion of Isaac Asimov's classic 1956 short story The Last Question, in which a fictional artificial intelligence wrestles with a difficult problem. Copyleaks correctly identified it as 100% matching existing text on the internet and 0% AI-generated.
Example: Partially AI-written
Next, I asked ChatGPT to add two paragraphs of extra copy to a story I had written and published some time ago. I then ran the result, my original story with two AI-written paragraphs tacked onto the end, through Copyleaks.
Copyleaks correctly noted that 65.8% of the copy matched existing text (it was, after all, an article already on the internet), but it didn't flag any of it as AI-generated. The two paragraphs ChatGPT had just written? They flew completely under the radar.
Copyleaks decided everything in this article was written by AI, even though only a few paragraphs were.
I tried again, this time asking Google's Gemini to add some copy to my existing story. Copyleaks once again identified 67.2% of the text as matching text online, but this time it reported that 100% of the text was AI-generated. Even the parts I wrote myself were flagged for certain phrases, such as "generative AI model," that show up more often in AI-written text.
Example: Entirely AI-written
To test generative AI's ability to invent things completely untethered from the truth, I asked ChatGPT to write a news story about the Cincinnati Bengals winning the Super Bowl. (In this fantasy universe, Cincinnati beat the San Francisco 49ers by a score of 31-17.) When I ran the fake story through Copyleaks, it correctly identified it as AI-written.
Copyleaks' AI Logic quickly saw through this story about the Cincinnati Bengals winning the Super Bowl.
What it couldn't do, however, was explain why. It reported that neither its AI Source Match nor its AI Phrases turned up any results, offering only a note: "No specific phrases that identify AI were found. However, other criteria suggest the text was generated by AI."
I tried again with a different ChatGPT-generated story about the Bengals, this one a 27-24 win over the 49ers, and Copyleaks offered a more detailed explanation. It calculated that the content was 98.7% AI-generated and flagged a handful of phrases. These included some seemingly innocent terms, along with wording that recurred across multiple sentences, such as a line about the future of the Bengals, which apparently appears 317 times more frequently in the AI-produced content in Copyleaks' database than in human-written documents. (After raising the first attempt with Copyleaks, I ran that story through again and got results similar to this second test.)
Just to make sure it wasn't leaning entirely on the fact that the Bengals have never won a Super Bowl, I asked ChatGPT to write an article about the Los Angeles Dodgers winning the World Series. Copyleaks found that 50.5% of the text matched existing text online, but it also reported that it was 100% AI-generated.
A high-profile example
Copyleaks conducted some testing of its own, using a recent, controversial example of AI use. In May, the news outlet NOTUS reported that the Trump administration's Make America Healthy Again Commission report cited academic studies that don't exist. Researchers who were cited in the MAHA report told media outlets that they hadn't produced the work attributed to them. References to nonexistent sources are a common result of AI hallucination, which is why it's essential to check anything an LLM cites. The Trump administration defended the report, with a spokesperson blaming "minor citation and formatting errors," and the report was later revised.
Copyleaks ran the report through its system, which flagged 20.8% of it as likely AI-generated content. Its AI Phrases database raised red flags on some passages about children's mental health. The phrases found more often in AI-written text included "the effects of social media on them" and "the negative effects of social media on their mental health."
Can AI really detect AI-written text?
In my experience, Copyleaks' transparency about how the tool works is a step forward for the world of AI detection, but it's still far from foolproof. The threat of false positives remains troubling. In my tests, words I had written only hours earlier (and which I know AI played no role in) were sometimes flagged over certain phrases. Still, Copyleaks managed to see through the bogus news stories about a team that has never won a championship.
Yamin said the goal isn't necessarily to be the ultimate source of truth, but to give people who need to figure out whether and how AI was used the tools to make better decisions. A human needs to stay in the loop, but tools like Copyleaks can help build trust.
"At the end of the day, the idea is to help humans in the process of evaluating content," he said. "I think we're in a period where content is everywhere, and it's being produced faster than ever. It's hard to identify the content you can trust."
Here's my takeaway: When using an AI detector, one way to gain more confidence is to look closely at exactly what is being flagged as AI-generated. The occasional suspicious phrase can be, and likely is, innocent. There are only so many ways to arrange words, after all; a compact phrase like "generative AI model" comes as easily to a person as it does to an AI. But many whole paragraphs getting flagged? That's more concerning.
An AI detector, just like the rumor that an em dash is a telltale sign of AI, can be wrong. A tool that is still largely a black box will make mistakes, and those mistakes can be disastrous for someone whose genuine writing gets flagged through no fault of their own.
I asked Yamin how human writers can make sure their work doesn't get caught in the net. "Just do your work," he said. "Make sure you have your human touch in there."


