Acompany Inc., which promotes privacy-focused digital transformation, will begin offering AutoPrivacy AI CleanRoom, Japan’s first security service that uses hardware-based confidential computing to protect data and AI. Through this new service, generative AI can play an even more active role in the business world and become a partner that supports operational efficiency. Specifically, by enabling companies to safely handle their own know-how and confidential information, rather than restricting generative AI to narrow uses such as “information gathering” and “summarizing and proofreading text,” Acompany will promote its use in a wider range of fields. In this way, Acompany aims to solve the AI security issues companies face when using generative AI, while accelerating operational efficiency and the creation of new value.
Additionally, Acompany is seeking new collaboration partners for the service, targeting businesses that need to expand their use of generative AI for business purposes.
Although the use of generative AI is increasing in the business world, its main applications remain limited to minor efficiency improvements, such as “information gathering” and “summarizing and proofreading text.” One reason is that companies are concerned about security issues when considering the use of generative AI.
In particular, from an AI security perspective, the inability to input highly confidential data, such as customer information or company secrets, into generative AI is a major constraint. In fact, concerns about these security risks are growing: according to a survey by the Information-Technology Promotion Agency (IPA), 60.4% of companies that use, permit, or plan to use AI in their work perceive threats to AI security, and 75.0% recognize the importance of taking countermeasures.
However, the same survey revealed that less than 20% of companies have established AI-security-related rules, such as those for “trade secret management” and “measures against internal fraud.” This suggests that challenges regarding AI security will become even more serious in the future.
SOURCE: PRTimes