Divx Inc., founded in 2021, supports digital transformation at companies and public institutions by developing services that utilize AI. On October 9, 2025, it began offering “DIVX Local LLM,” a product for closed-network and air-gapped environments built on “gpt-oss (120B/20B),” the open-weight large language models released by OpenAI in August 2025.
Large language models (LLMs) are AI models specialized for natural language processing, built from massive amounts of text data using deep learning. They understand natural human language and can handle a variety of tasks, including producing contextually appropriate responses, summarizing text, translation, and sentiment analysis.
Running large language models has traditionally required the cloud or expensive GPU servers to handle their enormous computation, data, and parameter counts. DIVX Local LLM reduces this load through quantization (lowering the bit width used to store weights and perform calculations), making it possible to run the models on high-end PCs and workstations. This allows organizations to set up an AI infrastructure safely and quickly within their own network, without relying on the cloud.
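As a rough illustration of why quantization matters here (this arithmetic is not from the source), the sketch below estimates how much memory the weights of the gpt-oss models would occupy at different bit widths. It counts weight storage only and ignores activations, KV cache, and runtime overhead, so real requirements will be higher.

```python
# Back-of-envelope estimate of weight storage at various bit widths.
# Illustrative only: real deployments also need memory for activations,
# the KV cache, and runtime overhead.
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes at a given bit width."""
    return num_params * bits_per_weight / 8 / 1e9

for name, params in [("gpt-oss-120b", 120e9), ("gpt-oss-20b", 20e9)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB")
```

Dropping from 16-bit to 4-bit weights cuts the footprint by roughly a factor of four, which is what moves a 20B-parameter model into the range of a well-equipped workstation.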
Furthermore, by linking with the AI co-creation development platform “DIVX GAI v2,” which received a major update on October 2, 2025, permission settings and audit logs can be managed centrally from a single administration screen (Admin UI).
While the use of generative AI in business has expanded in recent years, several factors have been bottlenecks to its adoption: confidential corporate information (personal information, drawings, technical data, and so on) that cannot be uploaded to the cloud; network requirements such as closed networks (environments isolated from the outside world, for example via dedicated lines) and air gaps (environments completely cut off from the internet); and the procurement and operating costs of expensive GPU servers.
In particular, public institutions and industries that require a high level of governance, such as finance and manufacturing, face many restrictions on cloud usage, which makes the initial burden of introducing AI significant.
In response to these voices from the field, DIVX launched “DIVX Local LLM,” a platform that uses quantization (reducing the bit width of a model’s weights and calculations) to run large language models on in-house high-end PCs and workstations without relying on a cloud environment. This makes it possible to launch PoCs that satisfy a wide range of requirements, including cost, in a shorter period than conventional implementation approaches.
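The source does not document DIVX Local LLM’s programming interface, but many local LLM runtimes expose an OpenAI-compatible HTTP endpoint, which is a common way to prototype against an in-network model during a PoC. The sketch below assumes such an endpoint at a hypothetical internal address and an assumed model identifier; both are illustrative, not confirmed details of the product.

```python
# Minimal sketch, assuming the on-premises deployment exposes an
# OpenAI-compatible endpoint. The URL, API key, and model name are
# hypothetical; the source does not describe DIVX Local LLM's actual API.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # in-network server, not the public cloud
    api_key="not-needed-for-local",                  # placeholder; local servers often ignore this
)

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize this internal design document: ..."},
    ],
)
print(response.choices[0].message.content)
```

Keeping the endpoint on the internal network is what lets confidential material, such as drawings or personal data, be processed without ever leaving the organization’s own infrastructure.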
Source: PR TIMES