I think this is the opposite of how most people tend to use LLMs, and I actually think my way is the "better" way. My issue has never been the act of writing well, or clearly expressing what I mean... it has been the inertia of putting words on a page at all.
(and an LLM had nothing to do with this comment :P)
The thing that worries me most is that it's going to redefine the way we write. We absorb language. To compensate for all this AiSpeak I consume, I need to read more literature.
What’s human writing going to look like in a few years if this trend doesn’t stop? I believe the LLMs will catch up soon, introducing more variance and relying less on words chosen for impact, delivering us from this AiVerse into one where AI writing is almost indistinguishable from human writing. But until then, we must read more.
This depends on what you consider AI writing. If I dictate to the AI, word for word, exactly what it must write, is that AI writing? Does it have something to do with the percentage of the text that was generated? With the vocabulary the AI knows? What if I don't know any words the AI doesn't? Does it have to do with the efficiency of communication?
Nevertheless, I don't think AI writing can ever be human writing, even if it uses the same words a human would and is indistinguishable. This is because humans participate in society as independent conscious actors, and thus their communication has meaning. Text only becomes communication when the writer has intents, when they're willing to participate in society.
I'm curious as to what you mean by this. I assume you don't mean it literally, as that would be trivially falsifiable (for example, the text readout on a digital caliper doesn't have "intents", yet it absolutely communicates meaning), but I can't think of another way that you might have meant it. Could you elaborate?