Yamato Contact Service, the contact center business of Yamato Holdings, has adopted Oracle Cloud Infrastructure (OCI) Generative AI and Oracle Database 23ai to make its customer support operations more efficient, Oracle Japan announced on July 7. With the rapid growth of e-commerce in recent years, the logistics industry has seen a rise in inquiries about package delivery status alongside increasingly varied service requests, and Yamato Contact Service must respond to them promptly, consistently, and at high quality.
At the same time, the company's FAQ library has grown rapidly, and logistics-specific terminology was degrading search accuracy, making further operational efficiency a challenge. Against this background, the company adopted OCI Generative AI, Oracle Database 23ai, and the Oracle AI Vector Search capability available in Oracle Autonomous Database, aiming to transform its operations, improve service quality, strengthen response capabilities, and streamline work. In the new system, AI interprets the content of a customer inquiry and combines keyword search with semantic search to instantly surface the most relevant answer. This capability powers both the FAQ suggestion feature in the inquiry form and the internal FAQ system, and AI automatically accumulates and shares knowledge, substantially improving the quality of operator responses and work productivity.
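The article does not describe the implementation, but the hybrid approach it mentions is commonly realized by blending a lexical match score with an embedding similarity score. The following is a minimal sketch under that assumption; the toy FAQ entries, hand-written embeddings, and `alpha` weighting are illustrative, not Yamato's or Oracle's actual system.

```python
from math import sqrt

# Hypothetical FAQ index: each entry pairs text with a small pre-computed
# embedding. Real systems would use an embedding model and a vector store.
FAQS = [
    {"q": "How do I check the delivery status of my package?",
     "vec": [0.9, 0.1, 0.0]},
    {"q": "How do I change the delivery date of a parcel?",
     "vec": [0.7, 0.6, 0.1]},
    {"q": "What are the cash-on-delivery payment options?",
     "vec": [0.1, 0.2, 0.9]},
]

def keyword_score(query: str, text: str) -> float:
    """Lexical side: fraction of query words that appear in the FAQ text."""
    q_words = set(query.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words) / len(q_words) if q_words else 0.0

def cosine(a, b) -> float:
    """Semantic side: cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, query_vec, alpha: float = 0.5):
    """Blend both scores; alpha weights the keyword side vs. the semantic side."""
    scored = []
    for faq in FAQS:
        score = (alpha * keyword_score(query, faq["q"])
                 + (1 - alpha) * cosine(query_vec, faq["vec"]))
        scored.append((score, faq["q"]))
    return max(scored)  # best (score, question) pair

best = hybrid_search("where is my package", [0.85, 0.15, 0.05])
print(best[1])
```

The blend lets a semantically close FAQ win even when industry-specific wording shares few literal keywords with the customer's phrasing, which is the search-accuracy problem the article describes.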
Concretely, in email support the accuracy of FAQ suggestions that let customers resolve inquiries themselves has risen to 85%, roughly double the previous level, and AI now automatically handles about 20% of email inquiries related to parcel delivery. Adopting Oracle AI Vector Search makes it possible to combine attribute search and semantic search in a single query, so FAQs can be efficiently narrowed down by service type and customer attributes, significantly improving both application development productivity and search accuracy. In addition, by combining the vector search capability with retrieval-augmented generation (RAG), the large language model (LLM) can produce answers that are more accurate and better grounded in context.
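The pattern described above, filtering by an attribute, ranking the survivors by vector similarity, then grounding the LLM prompt in the retrieved passages, can be sketched as follows. The FAQ store, the `service` attribute, and the prompt template are all hypothetical stand-ins for the production schema, and the LLM call itself is omitted.

```python
from math import sqrt

# Hypothetical FAQ store with a service-type attribute alongside each
# embedding; entries and vectors are illustrative only.
FAQS = [
    {"service": "parcel",
     "text": "Track a parcel with the 12-digit tracking number.",
     "vec": [0.9, 0.1]},
    {"service": "parcel",
     "text": "Redelivery can be requested online.",
     "vec": [0.6, 0.7]},
    {"service": "moving",
     "text": "Moving estimates require an on-site survey.",
     "vec": [0.1, 0.9]},
]

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, service, k=1):
    """Attribute filter first, then rank the survivors by vector similarity."""
    candidates = [f for f in FAQS if f["service"] == service]
    candidates.sort(key=lambda f: cosine(query_vec, f["vec"]), reverse=True)
    return [f["text"] for f in candidates[:k]]

def build_rag_prompt(question, query_vec, service):
    """The RAG step: ground the LLM prompt in the retrieved FAQ passages."""
    context = "\n".join(retrieve(query_vec, service))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

prompt = build_rag_prompt("Where is my package?", [0.95, 0.05], "parcel")
print(prompt)
```

Filtering on the attribute before the similarity ranking is what keeps results scoped to the right service, and passing only the retrieved passages to the model is what lets the LLM answer in context rather than from its general training data.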
SOURCE: Yahoo