YouTube is working to improve its automatic captions, and it has now rolled out a test that lets viewers suggest corrections to the captions displayed on a video.
The company says the test is designed to improve caption accuracy by allowing users to submit recommendations in the form of suggested edits.
For now, the option is limited to viewers watching on desktop. To suggest an edit, they open the video's transcript and select the pencil icon, type in their correction, and then press the check mark to confirm it.
Viewers can flag any errors they spot in the automatically generated captions and submit their changes by clicking the check in the edit box. A final icon, 'replay segment', replays that section of the video so they can listen again and confirm what is actually being said.
Suggested edits are sent to YouTube, not to the video's creator. They also only appear in the transcript for the person who submitted them, so they are not visible to any broader audience.
YouTube describes the test as an experiment intended to collect data from the suggestions it receives. That data will help the company decide how viewer feedback can be put to use, both to improve automatic captions and to strengthen related systems such as spam and abuse detection.
In essence, it's crowd-sourced editing, similar to features TikTok and Twitter have been testing.
The point is to gather as much feedback as possible from users and feed it into an improvement loop, drawing on the wisdom of the crowd to refine the captioning process.
It's a promising idea, and it may well work: the more eyes on a problem, the more likely it is to be caught. Crowd review makes it easier to identify the common words that trip up the captioning system, so YouTube can catch errors it previously missed, and creators will be made aware more quickly when issues crop up in their captions.
Provided the feature isn't misused, it could surface problems creators would otherwise overlook. The risks of abuse are real, but if YouTube manages them well and the feedback proves useful, we may see the feature expand soon.
For now, the test is rolling out on a small percentage of videos on desktop, so the chances of encountering it are slim.