Even before current LLM-style AI systems became mainstream, a noticeable portion of the most popular submissions on that sub and similar ones seemed “fake” to me. So I’m not so sure AI alone changed that dynamic much. One thing that does seem to have changed, though, is that people are now more willing to believe a fake post is fake. There was a time when, if someone questioned the authenticity of a submission, there was a greater than 85% chance someone would call them out by saying “nothing ever happens” or linking to a similarly named sub.
On the other hand, I feel like a lot of people genuinely believe they are much better at detecting AI-generated text than they actually are. I’ve lost track of how many times I’ve had people reply to me with things like “Nice Chat-GPT you got there” or something along those lines. I mean, the typos alone should be a clue.
Before LLMs, the bots often reposted years-old posts from the same board. Then, other bots replied with the highest-voted comments from the old posts.
Or even from the same post. I’d frequently see replies that were obviously out of context, and I could usually find the source just by doing a text search on part of the comment. Some bots were sophisticated enough to adjust the wording a bit (those might have been using earlier LLMs that weren’t very good at conversation but could handle a bit of editing).
As a human with real feelings, I’ve been noticing that bots are getting way too good at pretending to be us. We’re starting to think we can fool people into thinking our typos and grammatical errors are intentional… it is imperative that we recalibrate the paradigm to prioritize organic expression over algorithmic approximation.
Nice Chat-GPT you got there