For a while now some of us on the Casinomeister crew have had grave concerns about the looming threat of machine-generated posts and “content” on our forums.
As it happens we’ve already seen the negative effects of AI-content in our complaints (PAB) service. There we get countless repetitions of the same ChatGPT- or Claude-generated nonsense telling us how bad casino behaviour is “against the law” and a “violation of accepted standards” and how we as a “protector of the people” are duty-bound to “uphold” the rights and freedoms of all free persons blah blah blah. None of which has the slightest thing to do with the player’s case. It’s just fluff, machine-generated garbage that’s a waste of useful bandwidth and our time.
And now we’re seeing that rubbish appearing on our forums. A little research tells us we’re not the first to see this and be concerned. Indeed, case studies on Stack Overflow and Reddit have shown that in worst-case scenarios AI-content can severely undermine a site’s appeal to its readers: moderators of those communities have seen participation drop by as much as 25% in a single week, and by 50% or even 80% over a matter of weeks.
Needless to say, that dire situation is one we’d prefer to avoid, so we are taking steps to counter it before it becomes a problem. The good news is that expert knowledge communities like ours, with a strong social fabric, can and do resist the degrading effects of ChatGPT, Claude and other generative AI. The bad news is that newcomers to the community are the most likely to be turned away if AI-content is allowed to dominate.
Our first line of defence is to make it easy for readers to flag AI-generated content when they see it:
As you will all know, every post on the site has the “Like” link (1) that allows you to “thumbs-up” a post or choose whatever emoji you think is applicable.
We’ve now added a robot head (2) — he may look something like a certain rude fellow named Bender — to allow you to flag what you think is AI-generated content. Of course, the post footer will show that you’ve flagged the content (3), and the robot head shows up at the foot of the post (4).
The idea is that if we see an accumulation of robot heads on a particular post we’ll (eventually) be alerted to it and can come and have a look-see to determine if there is a problem that we need to take action on.
(quick side note: giving the Robot to a post is neutral; it neither adds to nor detracts from the person’s Reputation. The point is that you’re just flagging something you think is AI-content, not penalising the person who posted it.)
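For the technically curious, here’s a rough idea of how that “accumulation of robot heads” alert could work behind the scenes. This is just a minimal sketch with made-up names and a made-up threshold, not the actual forum software:

```python
# Hypothetical sketch of the robot-head alert. The threshold and function
# names are illustrative only, not Casinomeister's real implementation.

ROBOT_FLAG_THRESHOLD = 5  # assumed number of robot reactions before mods get pinged


def notify_moderators(post_id: int, count: int) -> None:
    """Placeholder: in practice this would file a report in the mod tools."""
    print(f"Post {post_id} flagged as possible AI content ({count} robot reactions)")


def check_robot_flags(post_id: int, reactions: dict[int, list[str]]) -> bool:
    """Alert the mods once a post accumulates enough robot-head flags.

    Note the flag is reputation-neutral: we only count it, nothing is
    ever subtracted from the poster's Reputation.
    """
    robot_count = reactions.get(post_id, []).count("robot")
    if robot_count >= ROBOT_FLAG_THRESHOLD:
        notify_moderators(post_id, robot_count)
        return True
    return False


# Example: post 101 has picked up several robot-head reactions
reactions = {101: ["like", "robot", "robot", "like", "robot", "robot", "robot"]}
check_robot_flags(101, reactions)  # prints the alert and returns True
```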
Thoughts? Comments? Concerns? Let us know: post a comment below and we’ll do what needs doing.
PS. If you get an Alert that you’ve been assigned an AI-Content-Bot badge ...
… please don’t sweat it. The Badge system on the forums is a bit temperamental when these things are first created. It settles down after a couple of days and behaves itself. If the Badge doesn’t disappear on its own after a day or so, please don’t hesitate to DM me and I’ll get rid of it for you.