LinkedIn, the Microsoft-owned professional network, has taken a big step: the platform now uses AI to screen posts, sorting those that comply with its policies from those that do not. The system works like a filter, checking content and making sure only appropriate posts reach user feeds.
Let's look closer. LinkedIn's AI learns from real posts, not from hand-written rules alone. It doesn't just catch obvious violations; it also picks up on subtler signals in how people write and what they mean. According to LinkedIn's Abhishek Chandak, the new framework "helps ensure that content that doesn’t align to our policies does not make it onto the platform".
Chandak explains that LinkedIn's new "framework uses a set of XGBoost models, to predict the probability of a piece of content being violative or clear. XGBoost provides better recall at set precision for us compared to architectures like TF2-based neural networks as our training data is tabular and there is large variation in violation patterns over time." He adds: "These models are trained on a representative sample of past human labeled data from the content review queue and tested on another out-of-time sample. We leverage random grid search for hyperparameter selection and the final model is chosen based on the highest recall at extremely high precision (R@P). We use this success metric because LinkedIn has a very high bar for trust enforcements quality so it is important to maintain very high precision."
Now, you might think, "What does this mean for me?" The goal is a cleaner feed, one that stays appropriate for a professional network. The AI handles much of the screening, but humans still review the hard cases, so the system works as a partnership between automated models and human moderators.
There's an efficiency angle too. It's not just about stopping bad posts: with AI handling the clear-cut cases, human moderators can focus their time on the harder ones, while every post still gets checked.
"This new content review prioritization framework is able to make auto-decisions on ~10% of all queued content at our established (extremely high) precision standard," LinkedIn claims on its blog.
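The split between auto-decisions and the human review queue could be sketched as a simple routing step. This is a hypothetical scheme with made-up item IDs and thresholds, not LinkedIn's implementation: items the model scores with high confidence in either direction are decided automatically, and everything in between stays queued for moderators.

```python
def route(queued, clear_threshold, violation_threshold):
    """Partition queued content by the model's violation probability.
    Confidently clear or confidently violative items are auto-decided;
    the ambiguous middle band goes to human review."""
    auto_clear, auto_remove, human_review = [], [], []
    for item_id, score in queued:
        if score <= clear_threshold:
            auto_clear.append(item_id)
        elif score >= violation_threshold:
            auto_remove.append(item_id)
        else:
            human_review.append(item_id)
    return auto_clear, auto_remove, human_review

# Toy queue of (item_id, predicted violation probability) pairs.
queued = [("a", 0.02), ("b", 0.55), ("c", 0.99), ("d", 0.10), ("e", 0.97)]
clear, remove, review = route(queued, clear_threshold=0.05,
                              violation_threshold=0.95)
print(clear)   # -> ['a']
print(remove)  # -> ['c', 'e']
print(review)  # -> ['b', 'd']
```

The thresholds would be chosen so that the auto-decided slice (the ~10% LinkedIn cites) still meets the platform's precision standard.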
To be fair, no system is perfect, and AI can make mistakes. But LinkedIn says it is confident in the framework and expects it to improve users' time on the platform, part of a broader ambition to lead in content moderation.
So, next time you're scrolling through LinkedIn, take a moment to notice the change. Is your feed cleaner and more professional? Are you seeing less of what you don't want and more of what you do? That's AI at work, quietly shaping your digital world.
In the bigger picture, what LinkedIn is doing could serve as a model for other platforms. Imagine every online space running this kind of system: efficient, unobtrusive, and good at its job. LinkedIn is trying to make that real.
Photo: DIW - AI-gen