There are plenty of AI models available right now, but are AI companies really transparent about the technical underpinnings of their large language models (LLMs)? According to a new report from Americans for Responsible Innovation (ARI), an organization that advocates for AI regulation, many AI startups are far less open about the technical details of their models than the big tech companies are. The tech giants are not especially forthcoming either, but they still offer more transparency than the more closed developers. ARI reached this conclusion after analyzing models from Anthropic, xAI, OpenAI, Google, Meta and 21 other companies.
David Robusto, a policy analyst at ARI, said there are several reasons why companies tend not to be open about each AI update. Producing detailed documentation for every update takes considerable time, effort and resources. There is also always a chance that rivals will try to reverse-engineer the work from the details in those documents. Keeping the technical details of their models, or other products, under wraps gives companies a competitive advantage, so they see little reason to disclose everything about their updates.
The report says that third parties and policymakers need technical details to understand how the models work, especially in areas such as defense and healthcare. Because some of the big foundation models are not transparent, decision making becomes difficult. The report argues for regulation and industry-wide standards on AI model transparency, including a set of details that companies must disclose no matter what. Without those details about LLMs, meaningful comparisons between models are impossible, even with industry benchmarks.
According to the report, Llama 3.2 is the most transparent model, with detailed information about training procedures, model architecture and computational requirements. GPT-4o and Gemini 1.5 were also somewhat transparent, while Grok-2 was the least transparent. Technical transparency was the area where models scored worst overall, and user-facing documentation was the best-scoring category, with an average score of 3.19 out of 4.0. On systemic risk evaluations, almost all of the models scored well except Grok-2. All of the models scored low on security, as many of the companies disclosed little about how they protect their systems.
Read next:
• Downloading Cracked Software? Beware of the Hidden Malware Stealing Your Info
• Privacy Concerns Rise as Hackers Threaten to Expose Data from Top Apps Used by Millions