Netskope Japan, an industry leader in the latest security and network technologies for the cloud and AI era, has announced NewEdge AI Fast Path. This feature optimizes network paths to critical AI destinations, including AI applications on public clouds, private clouds, and neoclouds, and is already being rolled out to Netskope customers. It enables organizations using AI applications and companies deploying agent-based AI to reduce latency, lower costs, optimize performance, enhance resilience, and provide a secure user experience. In addition, Netskope operates a total of four data centers in Japan, two in Tokyo and two in Osaka, and has established a system to securely process AI traffic for domestic users via the shortest possible path.
Resolving the “security vs. speed” dilemma
Recent surveys indicate that in the age of AI, demands for performance, resilience, and security are all rising, yet only 18% of IT infrastructure and operations leaders are confident that their current systems and budgets can meet them. While companies want to expand their use of AI, over-reliance on traditional security tools and inadequate network infrastructure forces them to compromise on either security or user experience. As a result, security concerns are delaying AI adoption, and some organizations skip security inspection of AI traffic altogether. There is also a risk that users, trying to avoid performance degradation, will bypass critical security controls.
Netskope customers don’t have to compromise on either security or user experience. This is made possible by Netskope NewEdge, the carrier-grade private cloud that underpins the Netskope One platform. Netskope NewEdge delivers security, networking, analytics, and AI services. AI Fast Path, a suite of features included in NewEdge, delivers superior performance and efficiency even for the most demanding AI applications. Specific benefits include:
Improved response speed: reduces inference time from prompt to response, minimizing conversational AI's Time to First Token (TTFT).
Optimized agent-based AI: accelerates complex multi-prompt agent workflows, providing the high-speed processing required for rapid, iterative AI subtasks.
LLM performance optimization: improves the performance of Large Language Models (LLMs) when they access large volumes of distributed data, such as via MCP gateways.
Accelerated RAG: speeds up connections between LLMs and external data sources, supporting Retrieval-Augmented Generation (RAG) with higher-quality real-time output.
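To illustrate the TTFT metric mentioned above: for a streaming LLM response, TTFT is the elapsed time from sending the prompt to receiving the first token, and a shorter network path mainly shrinks that initial delay. The sketch below is purely illustrative (it simulates token streams with `time.sleep`; the delay values are hypothetical, not Netskope measurements):

```python
import time

def stream_tokens(tokens, first_token_delay, inter_token_delay):
    """Simulate a streaming LLM response: latency before the first
    token, then a steady inter-token interval."""
    time.sleep(first_token_delay)
    for tok in tokens:
        yield tok
        time.sleep(inter_token_delay)

def measure_ttft(token_stream):
    """Time to First Token: elapsed time until the first token arrives."""
    start = time.perf_counter()
    first = next(token_stream)
    return first, time.perf_counter() - start

# Hypothetical delays: a longer path vs. an optimized, shorter path.
slow = stream_tokens(["Hello", " world"], first_token_delay=0.30, inter_token_delay=0.01)
fast = stream_tokens(["Hello", " world"], first_token_delay=0.05, inter_token_delay=0.01)

_, ttft_slow = measure_ttft(slow)
_, ttft_fast = measure_ttft(fast)
print(f"TTFT over slow path: {ttft_slow * 1000:.0f} ms")
print(f"TTFT over fast path: {ttft_fast * 1000:.0f} ms")
```

The same effect compounds in agent workflows, where each iterative subtask pays the first-token latency again, so path optimization benefits multi-prompt agents more than single-shot queries.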
SOURCE: PRTimes


