ABEJA Inc., which aims to “create a richer world” through collaboration between people and AI, has been selected for the “Post-5G Information and Communications System Infrastructure Strengthening Research and Development Project / Development of Post-5G Information and Communications Systems / Development of a Competitive Generative AI Platform Model (Grant),” a project led by the New Energy and Industrial Technology Development Organization (NEDO) under “GENIAC (Generative AI Accelerator Challenge).” Under this program, ABEJA is conducting research and development on LLMs and related technologies.
As part of this project, ABEJA has built the “ABEJA QwQ-32B Reasoning Model,” a reasoning model with improved inference capabilities based on the compact 32B LLM it announced in January 2025. Despite its compact 32B size, the model outperforms OpenAI’s “GPT-4o” and “o1-preview” on MT-Bench, a benchmark of general-purpose language performance.
Under the management philosophy “Implementing a richer world,” ABEJA runs a “Digital Platform Business” that develops, deploys, and operates the ABEJA Platform to support the adoption of AI in mission-critical operations. The ABEJA Platform is a robust, stable platform and suite of applications for mission-critical business, enabling the operation of cutting-edge technologies such as generative AI through collaboration between humans and AI. ABEJA has conducted research and development on the platform since its founding in 2012 and, through numerous deployments, has earned the trust of its client companies as it works to “transform the industrial structure with the power of technology.”
ABEJA was selected for GENIAC’s “Development of Competitive Generative AI Platform Models” in October 2024, following its selection in February 2024 for the first phase of the “Post-5G Information and Communication System Infrastructure Strengthening Research and Development Project / Development of Post-5G Information and Communication Systems.”
ABEJA believes the biggest challenge in deploying LLMs in society is the trade-off between accuracy and cost, and has directed its LLM research and development at solving this problem. The compact 32B model exceeded GPT-4 on multiple general-purpose language benchmarks, which ABEJA regards as overcoming this accuracy–cost trade-off.
Reasoning models, which further strengthen the inference capabilities of LLMs, excel at mathematics and coding and chain together multiple inference steps to perform more complex logical reasoning. Representative examples include “OpenAI o1” and “DeepSeek-R1.” In terms of size, “OpenAI o1” is estimated to have several hundred billion parameters, while “DeepSeek-R1” has 671 billion parameters.
ABEJA believes that embedding a compact reasoning model in business processes will broaden the scope of application and improve output reliability and versatility, and has therefore developed the “ABEJA QwQ-32B Reasoning Model” on top of the compact 32B model.
The “ABEJA QwQ-32B Reasoning Model” outperforms OpenAI’s “GPT-4o” and “o1-preview” while remaining remarkably small at just 32 billion parameters. This makes it possible to deploy the model in a variety of edge environments, such as offices and factories. ABEJA believes the model offers groundbreaking practicality in terms of accuracy, cost, and convenience.
SOURCE: PRTimes