US Military Ramps Up AI Use As Pentagon Identifies Airstrike Targets Through Google's Project Maven

Things are getting very interesting in the world of AI as the US military just confirmed the use of Project Maven for identifying airstrike targets.

The military has seen a sharp increase in its use of AI tools since Hamas attacked Israel last October, according to a report by Bloomberg.

US Central Command's chief technology officer explained in the report that machine-learning algorithms have proven genuinely useful in work of this kind. Still, the news has cast doubt on just how accurate those algorithms really are.

The official was quick to clarify that targets are never selected by AI alone; each one is double-checked and confirmed through human verification before any strike is allowed to move forward.

The scale is striking: AI helped identify targets for close to 85 airstrikes across the Middle East in February alone.

American bombers and fighter jets carried out those strikes against a long list of facilities in Iraq and Syria last month, destroying or seriously damaging weapons, drones, missiles, storage sites, and operations centers used for such attacks.

Meanwhile, the Pentagon also said it has used AI to search for rocket launchers in Yemen as well as targets in the Red Sea.

Many of those were destroyed in a series of airstrikes during the same month.

The targets were narrowed down using machine-learning algorithms developed under Project Maven, an initiative that search engine giant Google is no longer involved in.

Specifically, the project saw American military officials use AI technology to analyze drone footage and flag images for human review.

As one can imagine, the news is causing serious alarm among critics and experts, who warn that poor judgments made by AI systems could lead to devastating consequences and avoidable loss of life.

Google's original involvement in Project Maven, which used its AI technology to analyze drone footage and highlight images for human review, caused a serious uproar among the company's own employees when it first came to light.

Thousands of them signed petitions demanding the company end its collaboration with the Pentagon, and some resigned over the issue altogether. Following the protests, Google chose not to renew the contract, which expired in 2019.

Speaking to Bloomberg, military insiders confirmed that American forces continued experimenting with AI algorithms and satellite imagery even after Google ended its involvement.

The military had been testing the technology through a series of digital exercises since the start of 2023, but only began relying on the algorithms in real operations after Hamas carried out its attack on Israel.

With human verification built into the process, it will be interesting to see how critics respond.

Photo: Digital Information World - AIgen
