We have launched "AI Red Team," a security diagnosis service, and "AI Blue Team," a monitoring service for generative AI.

The use of generative AI, especially large language models (LLMs*), is increasing in many areas. As expectations for LLMs rise, so do the risks of inherent vulnerabilities, information leaks, and inappropriate content generation. It is crucial for companies utilizing Generative AI to understand these issues and implement appropriate measures.

In December 2023, NRI Secure Technologies launched the "AI Red Team," a security diagnosis service tailored for systems and services that use generative AI. This service diagnoses AI-specific vulnerabilities through automated testing by applications and simulated attacks by experts. One of its features is that it can assess not only the risks of the AI itself but also the risks of the entire system or service that uses AI. In May 2024, we introduced the "AI Blue Team" for continuous security monitoring. Together, these services provide comprehensive security measures.

The NRI Group will continue to promote comprehensive initiatives to realize a convenient, safe, and secure digital society, including study and research, information transfer, solution development, and security construction for better use of AI.

*Large Language Model (LLM) is a type of natural language processing model that achieves advanced language understanding by training on extensive text data.

Related SDGs
Value Creation Sustainability