AI Regulation – A Perspective from Israel

Yaron Gamburg
Senior Diplomat and International Affairs Expert

Speaking at Tel Aviv University last June, Sam Altman, CEO of OpenAI, made headlines in the Israeli media with two statements. Addressing the risks of AI development, he emphasized the need to confront the existential threats of AI by creating an international regulatory body to ensure responsible use by all countries, similar to the organizations that control nuclear power. Complimenting the local audience, Altman expressed his confidence that Israel's tech ecosystem will play a "huge role" in the artificial intelligence revolution transforming the world. "There are two things I have observed that are particular about Israel: the first is talent density, and the second is the relentlessness, drive, and ambition of Israeli entrepreneurs," said Altman at the event.

There are good reasons for Altman's words of appreciation. Israel's artificial intelligence sector is growing fast, placing the country among the leaders in the field. According to Stanford University's AI Index Report 2022, Israel ranked fifth in relative AI skill penetration in 2015-2021 and fourth in private investment in AI in 2021, with more than $2.4 billion. In absolute terms, 2,200 companies in the country use artificial intelligence, as the Israel Innovation Authority reported in May. In recent months, there has been a sharp increase in Israeli start-ups entering the field of generative AI.[i] As could be expected, Israeli companies use AI in areas such as cybersecurity, fintech, agritech, and organizational software. One of the critical goals of Israel's AI community is the development of a "National LLM," a language model that will function in Hebrew and Arabic. A significant presence of big technology companies and highly ranked academic institutions provides a solid platform for convening international discussions on the future of AI, such as the recent "Data Sciences" conference, which attracted professionals from all over the globe, including from USC.

But a central message of OpenAI executives to world leaders and global public opinion is the need to handle the future development of artificial intelligence with due caution. Sam Altman and OpenAI chief scientist Ilya Sutskever compare AI to nuclear energy, which may sound like a stern warning. In his congressional testimony in May, Altman suggested a regulatory body to oversee the licensing and use of AI "above a certain threshold." At the international level, he believes a new organization should regulate the development and use of artificial intelligence the way the International Atomic Energy Agency (IAEA) controls nuclear power.

As someone involved in discussions on AI regulation a few years ago, I found it encouraging to see the CEO of OpenAI urging such regulation. Sam Altman is not the first technology leader to express concern about the risks of AI - Elon Musk did so as early as 2014, publicly contemplating the need for national and international regulation - but he is the first leader of the AI sector to actively encourage and shape the conversation about AI regulation. His recent road show across Europe, Asia, and the Middle East constitutes an effort to generate a more informed discussion of AI risks among decision-makers. This effort is a welcome initiative.

However, we should remember that AI regulation is not a new topic for international organizations and national governments. Multiple international bodies, including UNESCO, the Council of Europe, and the OECD, have delved into the issue, hoping to forge a broad consensus among countries. The debates started three years before ChatGPT's launch, yet those respected bodies have still not found common ground or proposed a regulatory mechanism. The differences between approaches to AI regulation precluded any reasonable consensus among the member states. Concerns for human rights, privacy, and democratic values were of much greater importance for Western countries than for China, to take one example. Observing the current state of the UN system, it is unrealistic to expect a global consensus on AI.[ii] The only international organization able to achieve such an agreement among its members is the European Union, which is expected to start implementing the AI Act in early 2024. This example provides an essential lesson on AI regulation - reaching a consensus among like-minded countries is the right way to proceed.

Another vital lesson is the direct responsibility of the government to provide the regulatory framework, along with the equally unquestionable need to hold an open dialogue with industry and civil society. Israel's government, for example, consulted with leading entities in the hi-tech sector and with technology experts from Israel and abroad, and in November 2022 published a draft policy on AI regulation for public comment.

One last lesson from the field of diplomacy: to reach a consensus, we must find a compromise. Like others before him, Sam Altman discovered on his European tour that the European approach is more precautionary and potentially less friendly to innovation than the American position. However, once we realize, as Sam Altman did, that regulation is crucial, we can and should find a middle ground among like-minded countries. The OECD could be the best platform, as it brings together countries from Europe, Asia, and the Americas that share the same values while having different cultural perspectives and traditions. Once achieved, this consensus would become the basis for an agreement on AI regulation open for other countries to join.

What could this consensus look like? In Israel, the emerging approach is one of "soft" regulation. Instead of a comprehensive legislative framework, the various regulators, each in their own field, examine the need to promote concrete regulation while maintaining a uniform government policy. In addition, regulation will be carried out in appropriate cases using advanced regulatory tools such as voluntary standardization and self-regulation. The draft policy paper also suggests using a modular format, regulatory experimentation tools (such as "sandboxes"), and public participation in the deliberation process.[iii]

Of course, we must examine additional ideas, but most importantly, the regulation should stay high on the agenda of all stakeholders. Procrastination could prove too dangerous, given the current pace of AI development.

————————————

[i] See the partial list of Israeli companies in generative AI here: https://www.calcalistech.com/ctechnews/article/s19ck3w4i

[ii] See more on the efforts of international organizations to reach consensus on AI regulation here: "In 'geopolitics of bits and bytes', Europe takes an independent approach on AI," https://www.yarongamburg.com/2021/01/in-geopolitics-of-bits-and-bytes-europe.html

[iii] Ministry of Innovation, Science and Technology, "For the first time in Israel: The principles of the policy for the responsible development of the field of artificial intelligence were published for public comment," https://www.gov.il/en/departments/news/most-news20221117#:~:text=Artificial%20intelligence%20will%20be%20used,in%20the%20field%20of%20innovation

Yaron Gamburg is a senior diplomat and international affairs expert who joined the Israeli foreign service in 1999. His overseas assignments include France, Russia, the United States, and the mission of Israel to international organizations in Paris. From 2005 to 2008, he served as Deputy Consul General in Los Angeles. In his current position, he provides advice on the economic aspects of foreign policy, including economic assistance to Ukraine. As part of his Ph.D. studies, he conducts academic research on Russia.

Originally from Ukraine, he immigrated to Israel at the age of 17 and completed his academic degrees at the Hebrew University of Jerusalem. Gamburg is fluent in English, Russian, Ukrainian, Hebrew, and French.
