
Generative AI Triggers New Thinking on Governance

In the past few months, ChatGPT has dramatically accelerated the pace of artificial intelligence technology. The chatbot has demonstrated the feasibility of very large parameter models and general-purpose AI. Tech giants such as Google and Amazon have launched their own platforms, and more than ten Chinese companies have also moved into this field.
These systems are collectively referred to as “generative artificial intelligence.” Simply put, this type of AI uses algorithms to generate new content, including images, text, music, and even video and code.
The technology is both exciting and worrying. Data security, personal privacy, information fraud, algorithmic discrimination… regulators around the world have responded quickly and repeatedly to this series of potential risks.
At the end of March, Italy’s data protection authority announced a temporary ban on ChatGPT, and several EU countries followed by weighing specific regulatory measures. In April, the United States Department of Commerce publicly solicited comments on whether new AI models should go through a certification process before release. The Cyberspace Administration of China recently released the Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment), which clearly stipulates entry thresholds, data sources, labeling rules, and more.


Compared with previous AI technologies, generative AI can interact with computers directly through natural language, offering a low barrier to use and high-quality generated content. “This makes it more deceptive and potentially more damaging than earlier artificial intelligence technologies once it is abused or misused. Moreover, it may become a foundational technology that affects many industries,” Wang Xinrui, a partner and lawyer at Shihui Law Firm, told Science and Technology Daily.
Beyond traditional issues such as data security and personal information protection, Wu Shenkuo, Deputy Director of the Research Center of the China Internet Association, emphasized that generative AI also affects content governance and the functioning of society, posing great challenges to the agility, coverage, and transparency of existing governance mechanisms.
According to Wang Xinrui, countries such as Italy, Spain, Germany, Canada, and the United Kingdom are currently investigating ChatGPT and its developer over data and privacy protection concerns. In addition, jurisdictions abroad may include generative AI such as ChatGPT in their lists of high-risk artificial intelligence, in the hope that legislation can guard against its risks.
Wu Shenkuo said that these countries’ supervision of generative AI still proceeds from the perspective of classic data governance and data security. In their view, the core logic of generative AI lies in the collection and processing of massive amounts of data and the rich computational output derived from it, so adopting a data governance approach is a natural fit.
In China, however, starting from data governance alone is not enough. “Our concerns involve content governance, consumer rights and interests, transparency, fairness, intellectual property protection, and other aspects, basically covering the main types of risk currently associated with generative AI, so we adopt a multi-level, comprehensive governance approach of risk prevention and risk intervention,” Wu Shenkuo said.
“On the one hand, we need to strengthen the protection of data privacy and personal information security, and standardize personal information handling and data protection throughout the entire lifecycle of generative AI, from training to use. On the other hand, starting from the potential risks generative AI may bring, we need to further establish and improve the governance framework and accountability mechanisms for all parties in the generative AI industry on the basis of the existing legal framework, and implement the rights and obligations of the relevant entities,” Wang Xinrui said. The next step, Wang added, is to further refine and implement platforms’ compliance mechanisms, strengthen science and technology ethics requirements, ensure algorithm security, establish and improve accountability mechanisms, and promote the lawful and compliant circulation and sharing of high-quality data.
Wu Shenkuo proposed that the clarity and certainty of governance rules be continuously improved, and that efforts be made to create a real-time, agile, and comprehensive regulatory mechanism that maximizes both the guiding role of regulation and the fundamental value of red-line protections.
Generative AI is booming and touches many interests and concerns. In response, Wang Xinrui suggested closely observing the interaction among industry, regulators, and the media, and establishing a multi-party governance mechanism. At the same time, public education and participation should be strengthened to improve public awareness and understanding of generative AI and jointly promote its healthy development.