Google is currently working on providing more sophisticated machine learning (ML) tools to Alphabet’s helper robots, attempting to make them more human-compatible one step at a time.
Alphabet, Google’s parent company, started a project by the name of Everyday Robots, which aimed to create robots that can handle everyday tasks with precision and efficiency. As the project’s website openly states, however, the biggest issues Alphabet faces are the little problems. Telling a robot to move from point A to point B is easy enough; getting it to fetch a cup of tea is less so. Moreover, the robots have difficulty interpreting anything more than the simplest of commands. If a robot is asked to perform a task, it will only respond if the verbal command is clear, concise, and fits the AI’s learned keywords. All in all, it’s a very limited piece of hardware that has potential but has yet to muster up much of it. Google’s work with machine learning algorithms is bound to change this, or at least put Everyday Robots on the trajectory towards major improvement.
Google is currently relying on algorithms that the company itself has built to teach robots to follow through on commands with just a little extra complexity to them. For example, an Everyday Robot can be told to get a glass of water, which it will only do if asked in exactly this manner. What Google is aiming to do with its new ML systems is allow for the comprehension of more verbal nuance. The tech giant’s aim is that an Everyday Robot can get the aforementioned glass if the given command is something along the lines of “fetch a glass of water”, or “I’m thirsty: could you please get me something?”. It sounds impossibly mundane, but if a robot manages to pick up on such variable speech, Google in turn will have genuinely cracked a very technical part of robotics: mimicking human interaction.
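To make the gap concrete, here is a toy Python sketch (entirely illustrative, and in no way Google's actual system): a rigid matcher that responds only to one exact learned phrase, contrasted with a looser matcher that scores word overlap against a handful of known paraphrases. All names and the similarity threshold are made up for the example.

```python
# Illustrative only: contrasts an exact-phrase command matcher with a
# looser word-overlap matcher. Not a real Everyday Robots interface.
import re

LEARNED_COMMAND = "get a glass of water"

def rigid_match(utterance: str) -> bool:
    # Responds only if the utterance is exactly the learned phrase.
    return utterance.lower().strip() == LEARNED_COMMAND

# Hypothetical paraphrases the looser matcher knows about.
PARAPHRASES = [
    "get a glass of water",
    "fetch a glass of water",
    "i'm thirsty could you please get me something",
]

def _words(text: str) -> set:
    # Lowercase and strip punctuation (apostrophes kept) before splitting.
    return set(re.sub(r"[^\w\s']", "", text.lower()).split())

def loose_match(utterance: str, threshold: float = 0.5) -> bool:
    # Jaccard word-overlap against each known paraphrase.
    words = _words(utterance)
    for phrase in PARAPHRASES:
        ref = _words(phrase)
        if words and len(words & ref) / len(words | ref) >= threshold:
            return True
    return False

print(rigid_match("fetch a glass of water"))  # False: not the exact phrase
print(loose_match("fetch a glass of water"))  # True: high word overlap
print(loose_match("I'm thirsty: could you please get me something?"))  # True
```

The rigid matcher fails on any rephrasing, which is the limitation the article describes; the overlap trick is only a stand-in for the far richer language models Google would actually bring to bear.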
Don’t get me wrong, I’m not hailing a robot being able to tie its shoelaces as the rise of Skynet, but the ability to effectively glean context from human speech is difficult to induce in an object that inherently attaches no meaning to the word “context”. Come to think of it, maybe Skynet is on the way. Well, getting run over by an Everyday Robot doesn’t sound like the worst way to go out.