NEC announced a new technology that detects Large Language Model (LLM) hallucinations in real time, promoting safe use of generative AI by flagging misinformation.
The proliferation of generative AI raises challenges around misinformation. NEC's development of a real-time detection technology for LLM hallucinations is crucial for fostering trust and enabling the responsible adoption of AI across sectors. This is particularly relevant for APAC, where digital transformation is accelerating and misinformation can significantly impact economies and societies.
NEC developed real-time detection for LLM hallucinations.
Technology aims to promote safe and secure generative AI use.
Addresses the growing concern of AI-generated misinformation.
APAC is a key region for AI adoption and digital transformation. NEC's technology can help mitigate risks associated with AI-generated misinformation, supporting secure digital growth and public trust in AI solutions across the region.