AI Safety: The Next Big Trend?
It is worth noting that since its inception, Anthropic's vision has been to be a company focused on AI safety research, aiming to develop controllable and interpretable AI systems. In this area, Anthropic pioneered the concept of "Constitutional AI," which emphasizes achieving value alignment in artificial intelligence systems. Anthropic founder Dario Amodei and a group of colleagues decided to leave OpenAI and start Anthropic precisely because of doubts about OpenAI's approach to technology ethics and safety.
Dario Amodei has previously said that he would rather Claude (Anthropic's generative AI assistant) be boring than dangerous.
Amazon shares this view. Dai Wen, director of the Solution Architecture Department at Amazon Cloud Technology Greater China, said at the recent Amazon Cloud Technology re:Inforce 2023 China event that a generative AI application is like an iceberg at sea: to harness this new technology safely, one must pay attention to the mass of ice hidden beneath the surface. Security is an unavoidable and important issue in building generative AI, and only by protecting data, models, and applications throughout the AI journey can enterprises better accelerate business innovation with AI.
In fact, Amazon Cloud Technology has long advocated an "onion" model of security protection, which divides defenses into multiple layers, each with its own security policies and controls. "Security has always been the top priority at Amazon Cloud Technology. It is our number one concern," Dai Wen said.
Since the rise of large models, the field of AI safety has drawn the attention of many industry experts. In a recent paper published in Nature, the research team of Yoshua Bengio, a Turing Award winner and one of the "three giants" of deep learning, noted that AI systems can be an important assistant in helping scientists discover new knowledge, but they also carry potential safety risks. For example, scientific data can be incomplete or biased and therefore needs to be standardized, and issues such as data accessibility and privacy must also be considered; standardization of models as well as data is necessary.
Max Tegmark, a professor at the Massachusetts Institute of Technology and founder and president of the Future of Life Institute, likewise said that in conversations with AI researchers and tech CEOs, it is clear there is widespread anxiety about the risks posed by AI.
At the 2023 Bund Summit, the Greater China chapter of the Cloud Security Alliance (CSA), an authoritative international industry organization, announced the establishment of an AI Safety Working Group, with more than 30 institutions, including China Telecom, Ant Group, Huawei, Baidu, Volcano Engine, Xidian University, and the National Financial Evaluation Center, as its founding sponsors. The group is committed to jointly addressing the security challenges posed by the rapid development of AI technology.
It appears, then, that AI safety is becoming the next big trend in the field of artificial intelligence. Whether Amazon's decision to jump on board at this moment will spark a new wave of AI momentum remains to be seen.