Hi!
There's spam here (I can't find a button to report it):
Done that : )
I think the user accounts may need to be manually deleted; it seems to be a slower process for those.
For instance this guy was reported a week ago, but the account is still visible (although the spam posts were near-instantly obliterated when reported : )
Thanks for the report shabaz, we're aware of this after misaz reported it here: How to report abuse of RoadTest review form? As dave says, the best approach at present is to report the account that posted it; it then goes through the automated appeals and deletion process, which does take a bit of time. I can accelerate this, but I'm choosing not to on this occasion so that the development team can work on it as an example of something to fix, as it's not the first RoadTest review that has been posted as spam.
Hi Jan Cumps,
The moderation system isn't great at detecting spam and uses an almost draconian, ancient set of rules: for example, 'x' number of links in a message, 'x' number of forbidden words, 'x and y' number of posts within a number of minutes, and whether or not a message contains an @ symbol and therefore looks like an e-mail address.
You can quickly see why this is poor and not fit for purpose.
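To make that concrete, here's a rough sketch in Python of the sort of rule set being described. The thresholds, word list and field names are made up for illustration and bear no relation to Verint's actual implementation:

```python
# Hypothetical sketch of naive, rule-based spam filtering -- NOT Verint's
# actual code, just an illustration of why link counts, banned words and
# posting rate alone miss most modern spam.

import re
from datetime import datetime, timedelta

MAX_LINKS = 2                        # 'x' links per message
BANNED_WORDS = {"viagra", "casino"}  # 'x' forbidden words
MAX_POSTS = 3                        # 'x' posts...
POST_WINDOW = timedelta(minutes=5)   # ...within 'y' minutes
EMAIL_RE = re.compile(r"\S+@\S+\.\S+")  # crude "contains an e-mail" check


def looks_like_spam(message: str, recent_post_times: list[datetime]) -> bool:
    """Apply the rule set; any single rule tripping flags the post."""
    text = message.lower()
    too_many_links = text.count("http") > MAX_LINKS
    has_banned_word = any(word in text for word in BANNED_WORDS)
    posted_too_fast = sum(
        1 for t in recent_post_times
        if datetime.utcnow() - t < POST_WINDOW
    ) > MAX_POSTS
    contains_email = bool(EMAIL_RE.search(message))
    return too_many_links or has_banned_word or posted_too_fast or contains_email
```

A filter like this flags legitimate posts that happen to quote an e-mail address while letting through spam that simply avoids the obvious trigger words.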
We have enabled a level of StopForumSpam detection, but this can only go so far in detecting spam user accounts.
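For reference, a StopForumSpam check is just a simple HTTP lookup per account. A minimal sketch follows; the endpoint and response fields are from their public API as best I recall, so treat the details as an assumption and verify against the StopForumSpam documentation:

```python
# Hypothetical StopForumSpam lookup -- endpoint and field names assumed
# from the public API docs; verify before relying on them.

import requests


def appears_in_stopforumspam(email: str, ip: str) -> bool:
    """Return True if the e-mail or IP appears in the StopForumSpam database."""
    resp = requests.get(
        "https://api.stopforumspam.org/api",
        params={"email": email, "ip": ip, "json": ""},
        timeout=10,
    )
    data = resp.json()
    if not data.get("success"):
        return False  # fail open: don't block users if the service is down
    return any(
        data.get(field, {}).get("appears", 0) > 0
        for field in ("email", "ip")
    )
```

The limitation is exactly as described above: it only knows about accounts already reported elsewhere, so fresh spam accounts sail straight through.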
Unfortunately we have not been given the budget to adopt Akismet's anti-spam protection, which means some spam gets through undetected.
Then there's how accounts and content are handled. RoadTests, RoadTest Reviews, and Design Challenges are 'custom content', and the contracted creators didn't put in place all of the properties required for that content to integrate with Verint's systems, which is why notifications don't work very well on them, why you can't report them to moderation, and why the content doesn't list properly in widgets. This was a huge oversight in the requirements laid down for its implementation and execution, and one that the in-house development team is now working on.
As for how the account abuse system works and how long it takes, this is documented here:
https://community.telligent.com/community/12/a/user-documentation/UD356/how-does-moderation-and-abuse-work
We have to account for whether the user is genuine or a bot, and also for accounts where a user is masquerading as a bot or vice versa.
Content is (or should be) immediately pulled from the site when it's reported as abusive; for whatever reason Verint has decided that user profiles should not be, so they remain until the timer ticks down and the account is purged.
There can be good reasons for not immediately purging spam accounts: in some cases we want to prevent the user from registering with the same details again, and this cool-down works well for that; equally, it gives the user time to appeal if they're genuine and it was a mistake.
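To illustrate that flow (purely a hypothetical model of the behaviour described above, not Verint's code): reported content is hidden straight away, while the account sits in a cool-down window during which the user can appeal before it is purged.

```python
# Hypothetical model of the reported-abuse flow: content hidden immediately,
# account purged only after a cool-down window with no successful appeal.
# The window length is an illustrative value, not the real setting.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

COOL_DOWN = timedelta(days=7)  # illustrative value only


@dataclass
class ReportedAccount:
    username: str
    reported_at: datetime
    appealed: bool = False
    hidden_posts: list[str] = field(default_factory=list)

    def report_post(self, post_id: str) -> None:
        # Content comes down straight away...
        self.hidden_posts.append(post_id)

    def should_purge(self, now: datetime) -> bool:
        # ...but the account itself is only purged once the timer has
        # ticked down and no appeal was made.
        return not self.appealed and now - self.reported_at >= COOL_DOWN
```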
It's a weighing scales of awful.