Free Word Search


Search by Topic

  • Keyword
    Industry
    Purpose
    Expert
    Area

HOME NRI JOURNAL The Looming Security Risks of Generative AI, and the Responses Required of Companies

NRI JOURNAL

Innovation magazine that generates hints for the future

クラウドの潮流――進化するクラウド・サービスと変化する企業の意識

The Looming Security Risks of Generative AI, and the Responses Required of Companies

Masafumi Yamaguchi, General Manager, Consulting Business Management Division and General Manager, Security Consulting Department, North America Regional Headquarters

#AI

#Cyber security

#Information security

Oct. 12, 2023

Ever since the release of ChatGPT on November 30, 2022, the level of attention being given to generative AI services has been growing rapidly, and individuals as well as companies are beginning to consider using these services. Meanwhile, security risks inherent in generative AI have surfaced, and countries around the world are now debating to what extent to utilize and regulate generative AI services. What are the security risks associated with the use of generative AI services? We asked Masafumi Yamaguchi of NRI SecureTechnologies about how to safely utilize generative AI.

Generative AI services, while convenient, also pose security risks

With the worldwide popularity of ChatGPT, Japanese companies have begun to explore the use of generative AI. However, there are many risks associated with the business use of generative AI. The most serious of these are the security risks. Although the potential risks differ depending on the type of generative AI, here I focus on the three main risks caused by malicious attacks and human error.

The first is attacks on AI chatbots: generative AI services such as ChatGPT are equipped with mechanisms to refuse to respond to illegal or unethical input. However, it has been shown that this mechanism can be breached if the content of the input is carefully crafted. In other words, it is quite possible that the AI chatbots of the offered services could be attacked, resulting in the leakage of confidential information.

The second is deep fakes. Using generative AI, realistic images, videos, and sounds can be easily generated. This has been abused to spread false information on social networking sites, or to commit fraud by impersonating someone by disguising biometrics or another type of authentication. Fraud is particularly serious in the cyber domain, and U.S. authorities are strongly concerned about the use of generative AI for cybercrime.

The third is “inadvertent leakage” by employees. At present, not many companies are using generative AI in an organized manner, and security education for employees is not sufficient. Under such circumstances, there have been many cases of employees personally using AI and inputting confidential information. Once the input data is incorporated into the AI’s training, there is a risk that sensitive information may appear in responses to others. In some cases, developers use unaltered code and configuration files extracted by generative AI directly for debugging and function development, which is becoming increasingly dangerous from a cybersecurity perspective.

For these reasons, companies need to consider defensive measures when applying generative AI to their business. Since in-house implementation of security measures still has limitations in terms of know-how, cost, and other factors, it may be safer right now to adopt major services. Furthermore, if your business or remote access uses biometric authentication, you must be aware of the increased risk of it being breached sooner than later.

AI laws and regulations moving forward alongside new security threats

As the use of generative AI advances, there are also concerns about risks such as job loss, security, and copyright infringement. To address these risks, the European Commission created the AI Act, a bill designed to address AI risks that threaten health, safety, and basic human rights, and to enhance AI adoption, investment, and innovation. It adopts a “risk-based approach” that varies the content of regulations according to risk, requiring special checks for the use of AI services that deal with high-risk areas. The Commission aims to fully implement the AI Act for businesses in the second half of 2024, and if passed, it is expected to be a comprehensive AI regulatory law.

In addition, 2022 saw the announcement of a proposed amendment to the Product Liability Directive, which provides for civil liability against the provider company if a consumer is harmed by an AI system, including a generative AI, and also a proposed AI Liability Directive with complementary special provisions. The purpose of the two proposals is to impose on providers of generative AI services the responsibility of providing safe services and to require them explain how their systems are constructed and trained. Essentially, they would give effect to the AI Act mentioned above. To lower the hurdles to litigation against companies that do not comply with the rules, provisions are also included to reduce the burden of proof on consumers and to facilitate requests for disclosure of information to companies.

In conjunction with these laws and regulations, there is a growing trend toward regulation by opposition movements and toward self-regulation on the part of services. In the United States, more than 1,000 intellectuals, including university professors and AI developers, have called for a pause on the development of GPT-4, citing the possibility that it could cause social unrest. Also, after an outbreak of politically and religiously controversial deep-fake images, Midjourney, a leading image generative AI company, announced the suspension of its free trial.

While regulations are moving forward, new threats are emerging. ChatGPT added plug-in functionality in May 2023, which greatly improved convenience, but also opened the possibility of entirely new threats, such as malicious users being able to conduct attacks by directing searches to specific websites.

While attacks and defenses in generative AI services are a “tug-of-war” that is difficult to completely eradicate, AI can also be used as a security measure. AI will take over tasks traditionally performed by security personnel, enabling them to understand in detail in a flowchart what route the attacker took to enter a system and which devices are affected at the time of an incident, and to receive recommendations on the priority of the multiple possible responses. Many cybersecurity vendors are already considering implementing generative AI, and it is likely that we will soon see AI security services in action.

Responsible security risk handling by both user and provider companies

Although generative AI services are generally viewed positively in Japan so far, it cannot be ruled out that similar trends may be seen in other countries in the future. We are finally at the stage where the number of corporate use cases is only just beginning to increase, but we need to keep an eye on future trends. First, the safest bet would be to decide whether to allow in-house use of AI services, and if so, to begin preparations for the presentation of necessary rules and literacy.

Before using a service, the security risks also need to be assessed. How to manage such risks should also be decided, referring to past cases of damage. It is also essential to provide usage rules and security guidelines, as well as information dissemination and literacy education for employees and transaction counterparties. Once the actual implementation is decided, it will also be necessary to confirm where input data will be stored, how it will be identified and deleted, and to create an operations manual. Regular audits of usage rules will also be necessary to ensure ongoing security control over AI usage.

Meanwhile, companies that provide generative AI services and IT and security vendors are required to take safety measures in line with various regulatory bills. In addition to strengthening voice and biometric authentication, a wide range of efforts are required, including gathering information on new attack methods, creating countermeasure plans, and disclosing possible risks when confidential information is inputted. Providing dedicated APIs to companies will also be effective in preventing inputted information from being used for AI training.

Government agencies are required to establish a framework to support such corporate efforts. It is necessary to establish Japan’s own legislation and guidelines with reference to AI-related legislation from international organizations and countries around the world. At the same time, it may be expected that opinions will be exchanged with companies that provide generative AI services and their services will be evaluated, and that research relating to generative AI will be subsidized.

The development of generative AI is so rapid that security aspects tend to lag. However, if security risks are left unchecked, they can lead to major accidents that could shake management. Both companies that use and provide services need to take a sense of responsibility for this issue and promote countermeasures in cooperation with government agencies.

  • Facebook
  • Twitter
  • LinkedIn

What's New