China sets stricter rules for training generative AI models


China has released draft security regulations for companies providing generative artificial intelligence (AI) services, including restrictions on the data sources used to train AI models.

The National Information Security Standardization Committee, which includes representatives from the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology and law enforcement agencies, released the proposed regulations on Wednesday, Oct. 11.

Generative AI, exemplified by OpenAI’s ChatGPT, learns to perform tasks by analyzing historical data and generates new content, such as text and images, based on that training.

Screenshot of the National Information Security Standardization Committee (NISSC) publication. Source: NISSC

The committee recommends a security evaluation of the content used to train publicly accessible generative AI models. Content exceeding “5% in the form of unlawful and detrimental information” would be blacklisted. This category covers content advocating terrorism or violence, as well as material that subverts the socialist system, harms the country’s reputation or undermines national cohesion and societal stability.

The draft regulations also emphasize that data subject to censorship on the Chinese internet should not serve as training material for these models. This development comes slightly over a month after regulatory authorities granted permission to various Chinese tech companies, including the prominent search engine Baidu, to introduce their generative AI-driven chatbots to the general public.

Since April, the CAC has consistently required companies to submit security evaluations to regulatory bodies before introducing generative AI-powered services to the public. In July, the cyberspace regulator released a set of guidelines governing these services, which industry analysts noted were considerably less burdensome than the measures proposed in the initial April draft.

Related: Biden considers tightening AI chip controls to China via third parties

The newly unveiled draft security requirements would oblige organizations training these AI models to obtain explicit consent from individuals whose personal data, including biometric information, is used for training. The guidelines also include detailed instructions on avoiding intellectual property infringement.

Nations worldwide are wrestling with the establishment of regulatory frameworks for this technology. China regards AI as a domain in which it aspires to compete with the United States and has set its ambitions on becoming a global leader in this field by 2030.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change
