This Sunday, the United States, Britain, and 15 other countries took a significant step, releasing a 20-page (5,000+ word) agreement on artificial intelligence (AI) safety. Its central theme is making AI systems secure by design, from the very start of development.
The agreement says AI should be developed and deployed in a way that keeps users safe. It is not a binding regulation but rather a set of recommendations "for providers of any systems that use artificial intelligence". It covers monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, called the agreement significant. Speaking to media outlets, she said AI development should not be driven solely by flashy new features or low cost; security in the design phase is what matters most.
"The guidelines are broken down into four key areas within the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance", reads the document.
This is part of a broader push by governments worldwide to shape the future of AI as it plays an ever larger role in work and daily life. The signatories include Germany, Italy, Australia, and Singapore, among others.
The agreement focuses on preventing hackers from hijacking or abusing AI systems. It does not, however, address harder questions about how AI should be used or how the data that trains AI models is collected.
As AI grows more capable, public concern is growing with it: critics fear the technology could undermine democratic processes, fuel fraud, and eliminate large numbers of jobs.
Europe is further along on AI regulation than the U.S. Countries such as France and Germany have agreed that AI should be governed by rules on how it is used, and they want AI models to be bound by codes of conduct.
In the U.S., President Biden has pushed for AI legislation, but a divided Congress has been slow to act. In October, the White House issued an executive order aimed at reducing the risks AI poses to consumers and national security.
Taken together, these efforts share a common goal: ensuring AI is safe and benefits everyone. Achieving that, however, is a major challenge that will demand sustained international cooperation.