Well, folks, Google is stepping up its game when it comes to those tricky AI apps. Starting early next year, the company is laying down the law: if developers want their Android apps on the Play Store, the apps had better include a button for reporting or flagging any dodgy AI-generated content. Yep, Google is not kidding around.
See, there's been a bit of a wild west situation with AI apps, with some of them generating stuff they shouldn't. Remember Lensa? Users got it to whip up NSFW pics, and that's just one example. Then there was Remini, which users pushed into altering people's body features, like some bizarre Photoshop experiment. And let's not forget the trouble with Microsoft's and Meta's AI tools, which folks were using to do some pretty weird stuff with famous characters.
But it's not all fun and games. Some bad apples out there have been using AI to create really, really messed up stuff - like child exploitation material. That's seriously dark. Plus, with elections around the corner, there's a worry that AI could be used to cook up fake images, also known as deepfakes, to mess with people's heads.
So, Google is cracking down. The new policy covers a whole range of AI-generated content, from chatbots to apps that create images from text, voice, or image prompts. And it requires these apps to follow Play's existing content rules, too: no offensive content and no shady business.
They're not stopping there, either. Google is also tightening up on apps that ask for access to your photos and videos, making sure they only request what they actually need. And those pesky full-screen notifications? Google says they should only pop up for genuinely high-priority situations, not just to sell you stuff.
What's surprising is that Google is the first major app store to lay down AI-specific rules like this. Usually it's Apple leading the charge, but Apple hasn't published any official AI or chatbot guidelines yet. Looks like Google's taking the reins on this one.
So, heads up, AI app developers. You've got until early 2024 to get with the program and make your apps follow the new rules.
Photo: DIW