American AI developers held secret negotiations with China about the dangers of new technologies

US artificial intelligence companies OpenAI, Anthropic and Cohere are holding secret negotiations with Chinese AI safety experts. The talks come amid widespread concern that AI algorithms could be used to spread misinformation and threaten social cohesion. The Financial Times reports this, citing its own informed sources.

Image source: Gerd Altmann

According to the report, meetings were held in Geneva in July and October of last year, bringing together North American experts and scientists specializing in AI policy with representatives of Tsinghua University and a number of other institutions backed by the Chinese government. A knowledgeable source said that at these meetings the parties were able to discuss the risks posed by the new technologies and ways to stimulate investment in AI safety research. The main goal of the meetings is said to have been finding a safe path toward developing more advanced AI technologies.

“We have no way to set international safety standards and harmonize AI development without an agreement among the participants of this group. If they agree, it will be easier to bring in others,” said a knowledgeable source.

The publication notes that these unpublicized talks are a rare sign of Sino-American cooperation amid the two powers' race for supremacy in advanced technologies such as artificial intelligence and quantum computing. The negotiations themselves were organized by the consulting firm the Shaikh Group and took place with the knowledge of the White House as well as the British and Chinese governments.

“We saw an opportunity to bring together key players from the US and China working in the field of artificial intelligence. Our main goal was to highlight the vulnerabilities, risks and opportunities associated with the widespread adoption of AI models in use around the world. Recognizing these facts, in our view, can become the basis for joint scientific work that will ultimately lead to global safety standards for AI models,” said Salman Shaikh, executive director of the Shaikh Group.

The negotiators discussed avenues for technical cooperation between the parties as well as more concrete policy proposals, which fed into the discussions at the UN Security Council meeting on AI in July 2023 and the UK AI Safety Summit in November 2023. According to sources, the success of these meetings has led to a plan for further negotiations, which will explore specific scientific and technological proposals for bringing AI into line with the legal codes, norms and values of each society.