Social media apps may soon pay the price if they fail to comply with the obligations set out in the Online Safety Bill.
The news comes as the bill carries forward changes that would require big tech platforms to shield users from racist and sexist content, with substantial fines outlined for those that fall short. The British government announced the changes on Monday.
Under the new approach, companies such as Twitter and Facebook would have to give users ways to avoid seeing content that is harmful but does not amount to a criminal offense, including material relating to racism, eating disorders, and misogyny.
Ofcom has been named as the regulator overseeing the regime, with the authority to penalize large firms for breaches of the act through fines that scale with a company's revenue. For a sense of what is at stake, Meta reported revenue of $118 billion last year.
The bill also dropped a proposed harmful communications offense after considerable criticism from Conservative members that it risked penalizing speech that merely hurt feelings.
Ministers have scrapped the provisions covering material that is legal but harmful, that is, content which is controversial but does not amount to a crime. In their place, the bill requires apps to enforce their own terms of service on users.
If an app fails to uphold those terms, for example by allowing abusive content it claims to prohibit, Ofcom can respond with hefty fines.
Another major change in the bill requires digital platforms to offer people ways to avoid having harmful content promoted to them, even where that content is legal, for instance through content moderation tools and warning screens.
Content covered here includes abuse targeting characteristics such as race, sex, disability, gender reassignment, and sexual orientation.
If a firm wishes to remove content or ban users, it will need to set out a written justification for doing so, and users will be given the chance to appeal the decision.
The bill is set to return to the UK Parliament by December 5 after a temporary pause.