Nvidia's annual developer conference kicks off Monday in San Francisco, where CEO Jensen Huang will showcase new AI chip technology and partnerships. The four-day event comes as the AI chipmaker faces growing competition from rivals and customers developing their own processors.

SAN FRANCISCO, March 13 – Nvidia CEO Jensen Huang will take center stage at a packed Silicon Valley hockey arena Monday to launch his company’s annual developer conference, where he’s expected to unveil new products and partnerships designed to maintain the AI chipmaker’s dominance against rising competition.
The Nvidia GTC conference, spanning four days in the heart of Silicon Valley, has evolved into Huang’s signature platform for demonstrating the company’s latest artificial intelligence innovations across chips, data centers, its CUDA programming software, digital AI assistants, and robotics technology.
This year’s gathering carries heightened importance as investors look for confirmation that Nvidia’s strategy of reinvesting profits back into AI development is delivering results.
“I expect Nvidia to present a full-stack roadmap update from Rubin to Feynman while emphasizing inference, agentic AI, networking, and AI factory infrastructure,” eMarketer analyst Jacob Bourne explained, referencing Nvidia’s current and upcoming chip generations.
While Nvidia’s processors power hundreds of billions in global data center investments by governments and corporations worldwide, the company now confronts challenges from competing chipmakers and even some customers creating their own processing units.
Industry experts speaking with Reuters anticipate continued growth in the overall AI chip sector, though they predict Nvidia’s market dominance may decline as the industry rapidly evolves toward AI agents that move between computer programs to complete human tasks. This represents a departure from training applications, where AI laboratories connect multiple Nvidia chips to process massive datasets for model development.
These digital agents are projected to become so widespread that humans will require additional AI management systems – what experts term an “orchestration” layer – to coordinate between users and their agent networks.
Analysts note this development benefits Nvidia by demonstrating AI’s growing practical value.
However, these operations, known as “inference” in the AI field, can operate on alternative chip types, including processors that major Nvidia clients like OpenAI and Meta are developing independently. Meta recently announced plans to release new AI chips biannually.
“Nvidia is definitely going to see more competition compared to a year ago,” stated Kinngai Chan, managing director at Summit Insights Group. “Nvidia still has over 90% market share in both training and inference markets today.”
“We think Nvidia will begin to see share loss starting in 2027, once in-house ASIC programs gain some scale especially in the inference market,” Chan added, referring to specialized circuits designed for specific functions that offer greater efficiency than standard graphics processors.
Nvidia has been strengthening its competitive position, spending $17 billion in December to acquire Groq, a startup specializing in rapid, cost-effective inference computing. During last month’s earnings call, Huang indicated the company would demonstrate at GTC how Nvidia plans to integrate Groq’s high-speed AI capabilities into their established CUDA platform.
Third Bridge analyst William McGonigle said his organization expects Nvidia to introduce new server systems combining Groq’s processors with Nvidia’s networking solutions to deliver fast, economical products.
Central processing units, or CPUs – the chip category long dominated by Intel and Advanced Micro Devices – present another growing competitive challenge to Nvidia.
Though these processors were overshadowed by Nvidia’s graphics units in recent years, McGonigle said they are “back in focus” and anticipates Nvidia will demonstrate servers built exclusively around its own CPUs, which Huang promoted during a recent earnings presentation.
“With the rise of agentic AI, the bottleneck is now at the agent orchestration level, which is carried out by the CPUs,” McGonigle explained.
Industry watchers also expect Nvidia to detail its $2 billion investments in both Lumentum and Coherent, companies that manufacture lasers for transmitting data between chips using light beams. Implementing these lasers in co-packaged optics could accelerate connections between Nvidia’s processors within large data centers, though current production volumes don’t match Nvidia’s annual chip sales.
“Nvidia will likely frame co-packaged optics as key to connecting massive AI clusters more efficiently, but the challenge is making it affordable enough to deploy at scale,” eMarketer’s Bourne observed.