It's the same issue in education. Can humans or their "AIs" detect AI-generated content? So far, the answer seems to be no.
Long before ChatGPT, when I was teaching, I'd always have students write their tests in class. It was the only way I knew that what they produced was their own. Students with disabilities were given more time in a proctored space. If it was a composition class, I could judge their mastery of grammar, mechanics, etc. Otoh, if it was a literature or philosophy class, I could examine their thoughts, which were more important than syntax. I.e., if I could understand what they were trying to say, that was good. If their grammar got in the way so badly that I couldn't understand them, that was a different matter.
However, for take-home exams, I'd encourage them to use every tool a professional writer might use, including an editor and copy editor. If ChatGPT had existed, I'd have encouraged them to use it as much as Wikipedia or the encyclopedia. But what about students who cheat and just use an AI, or copy from an encyclopedia, or have someone else write the essay for them?
Yeah, that's a problem. They don't want the education that comes from doing the work themselves. My point is that it's not a new problem. Alas, nowadays many students can only print, so writing in class is an issue. Imo, I'll take a fifteen-minute sit-down with a student over an essay any day.
"A man is rich when he has time and freewill. How he chooses to invest both will determine the return on his investment."