The ongoing conflict in Gaza has raised many questions about the war strategy being implemented by the Israel Defense Forces.
But new evidence is emerging about how the country is using AI targeting systems to intensify already brutal operations in the region, which have killed thousands.
The latest system, dubbed Habsora (Hebrew for "The Gospel"), is being used to select specific targets in the current war on Hamas. It reportedly not only flags suspected Hamas targets in the region but also links locations to Hamas operations, and it is said to estimate likely civilian casualties in advance.
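Nothing public confirms how Habsora works internally, but the description above suggests a pipeline that scores candidate targets and attaches a pre-computed casualty estimate. The following is a purely illustrative sketch of that idea in Python; every name, field, and threshold in it is hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateTarget:
    location: str                  # place flagged by intelligence feeds
    link_confidence: float         # modelled confidence of a link to operations (0-1)
    est_civilian_casualties: int   # pre-computed casualty estimate for a strike

def recommend(candidates, min_confidence=0.9, max_casualties=10):
    # Keep only candidates that clear both (arbitrary) thresholds,
    # then rank them by confidence. A real system would weigh far
    # more signals and require human review at every step.
    cleared = [c for c in candidates
               if c.link_confidence >= min_confidence
               and c.est_civilian_casualties <= max_casualties]
    return sorted(cleared, key=lambda c: c.link_confidence, reverse=True)

sites = [CandidateTarget("site A", 0.95, 4),
         CandidateTarget("site B", 0.80, 25)]
for target in recommend(sites):            # only "site A" clears both thresholds
    print(target.location, target.link_confidence)

The unsettling part is visible even in a toy like this: once scoring and filtering are automated, an entire target list can be produced in milliseconds.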
All of this has raised serious concerns about how advanced AI has become and how it is being used to intensify an already devastating war with no end in sight. Moreover, given how capable AI now is, it signals to the world that this may be only the beginning, and that such systems could alter how wars are fought now and in the future.
Militaries routinely deploy remote and autonomous systems as force multipliers, allowing destruction to be carried out while keeping their own soldiers secure. In the same way, AI systems can make soldiers more effective while increasing the lethality of warfare, as humans become less visible on the battlefield and therefore harder for the opposing side to spot and retaliate against.
These systems can also gather intelligence and strike targets from great distances.
The question now is this: as the world tries to come together and find a solution to such an enormous conflict, why is technology emerging that deepens the dehumanization of adversaries? It is unsettling to consider how far AI's risks have been underestimated, and how it is distancing society from the reality of conflicts like this one.
AI now shapes every level of war, from supporting forces and gathering intelligence to directing lethal weapons with little or no human intervention, and that combination is genuinely dangerous.
Such systems can reshape the conduct of war itself and make it far easier to enter into conflict in the first place. They are complex and distributed, and watching them accelerate an already escalating war is frightening, to say the least.
They can also produce serious misunderstandings during a war, given their tendency to misinterpret data and generate misinformation, increasing uncertainty and the risk of dangerous outcomes.
How AI systems behave when interacting with other technologies and with people still needs further research, but the stakes are enormous: we may never know who actually authored a given output, even when the objective seems evident from the start.
The pace at which the conflict in Gaza continues to escalate, and the unprecedented speed of modern warfare generally, are forcing a change in how military deterrence is understood.
Humans were always assumed to be the leading actors and main sources of intelligence in war, but the brutal pace of machine learning is changing that. As militaries accelerate their decision-making through the OODA loop (observe, orient, decide, act), they are abandoning the long hours once spent deliberating over the next, or right, move in a war.
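To see why automation collapses that deliberation time, consider a minimal, purely hypothetical sketch of an automated OODA cycle; every function here is a stub standing in for what would be a model call.

import time

def observe():
    return {"sensor_feed": "raw data"}          # collect sensor inputs

def orient(observation):
    return {"assessment": "model output"}       # a model interprets the data

def decide(assessment):
    return {"action": "recommended strike"}     # a model recommends an action

def act(decision):
    print("executing:", decision["action"])     # the action is carried out

# When every stage is a model call rather than a human deliberation,
# the whole loop runs in machine time.
start = time.perf_counter()
for _ in range(3):
    act(decide(orient(observe())))
print(f"3 full cycles in {time.perf_counter() - start:.4f} s")

The point is not the stubs themselves but the tempo: a loop like this leaves no pause in which a human could reflect before the next cycle begins.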
In compressing the loop this way, militaries are also giving up the opportunity for ethical deliberation, producing a self-perpetuating cycle of destruction. Who knew that AI's many benefits could so quickly be overshadowed by its drawbacks, where removing human involvement can lead to disaster beyond repair.
The clearest example is the one before our eyes: Israel's war, in which Habsora's targeting software continues to designate bombing targets and generate attack recommendations in real time.
So the question on our minds now is this: how do we take back control of AI systems at a time when so many of us depend on technology driven by learning algorithms?
Past experience shows how hard it is to control AI in any given jurisdiction, especially through law. Many experts believe the right response is to establish stronger laws for its governance, but even then, regulating algorithms built on machine learning is no easy task.
Photo: DIW-AIgen