A team of Stanford University researchers has found that major AI foundation models like ChatGPT, Claude, Bard, and Llama 2 are becoming less transparent. This lack of transparency poses challenges for businesses, policymakers, and consumers alike. Companies such as OpenAI, Anthropic, and Google take different stances on openness, with OpenAI arguing that a degree of secrecy is necessary. MIT research from 2020 supports OpenAI's core philosophy.

To address the issue, the researchers devised the Foundation Model Transparency Index (FMTI), which evaluates how transparent companies are about their AI models. The results were less than stellar: even the highest scores ranged from only 47 to 54 on a scale of 0 to 100.

Policymakers around the world have called for genuinely transparent AI development. As foundation models become more deeply integrated into various sectors, transparency becomes paramount, not only for ethical considerations but also for practical applications and trustworthiness.

The takeaway: even the highest FMTI scores remain low, and that gap matters to businesses, policymakers, and consumers alike. These findings underscore the need for meaningful transparency in AI models.

#AI #transparency #foundationmodels #FMTI #StanfordHAI

You can read more about this topic here: Decrypt: AI Model Transparency Is Getting Worse, Researchers Warn
