From what I understand from the article, it’s even rougher than normal content moderation: a lot of these workers were hired to train AIs like ChatGPT away from, well, all the worst that the web could provide, to detoxify it for the end users. They are specifically given the worst stuff that can be dredged up from the depths of the internet, and asked to label it - so that the AI can use that data to identify hate speech, suicidal ideation, racism, CSA, etc.
It’s an awful job, and some are paid less than $2 an hour. Workers have come away with PTSD. Are they being offered counselling, support, compensation? Fuck no - they get punished for speaking up.
Good luck to the African Content Moderators Union, I hope they win the protections and compensation that their members need.