
Nvidia Profit Jumps 70 Percent on Gaming and Data Center Business

Aug. 21, 2018

Over the last two years, Nvidia has turned into a moneymaking machine. When one business segment slows down, the Santa Clara, California-based maker of graphics chips always seems to have another source of revenue to pick up the slack.

The company last week announced revenues of $3.12 billion in the second quarter, an increase of 40 percent from a year earlier. Profits rose almost 70 percent from last year’s second quarter to $1.16 billion, reflecting the booming business of selling chips for rendering computer graphics and handling machine learning computations in data centers.

The performance was led by the gaming business segment, which jumped from around $1.18 billion to $1.80 billion over the last year as sales of graphics chips for personal computers and other systems continued to grow. On Monday, the company announced a new line of consumer graphics cards based on its new Turing architecture. The chips support real-time ray tracing, which boosts the realism of computer graphics.

The company’s data center business increased 83 percent over the last year to $760 million, showing how indispensable the chips are to companies training algorithms to understand voices or tell stop signs apart from pedestrians. Google, Amazon and other cloud computing firms are increasingly using graphics chips to run computations that cannot be handled as efficiently by server chips from Intel.

“Fueling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing,” said Jensen Huang, Nvidia’s chief executive officer, in a statement. That was underlined last month when it was announced that Oak Ridge National Laboratory’s Summit supercomputer would be powered by more than 27,000 graphics chips sold by Nvidia.

The company still faces competition, including from customers like Google and Baidu, which are increasingly investing in custom silicon. Advanced Micro Devices said that it would release graphics chips based on 7-nanometer technology for machine learning by next year. Intel has also improved its server chips for machine learning to placate customers. Many startups are also muscling into the market.

Intel is also throwing its hat into the traditional graphics market. Last year, it hired AMD chief architect Raja Koduri to lead a new business unit targeting graphics chips, to be released in roughly two years, for personal computers and other applications. In recent months, he has raided his former employer to bolster the unit, starting with AMD graphics marketing head Chris Hook and, more recently, senior director of platform engineering Joseph Facca.

To reinforce its stronghold over the markets for computer graphics and machine learning, Nvidia is putting its money where its mouth is. Operating expenses jumped from $614 million to $818 million over the last year. Research, development and other expenses could cost the company an estimated $870 million in the third quarter. Nvidia currently holds $7.94 billion in cash.

There was disappointing news, too. Nvidia’s revenue guidance for the third quarter was $3.25 billion, less than analyst estimates of $3.34 billion, as sales of chips for cryptocurrency mining fade. Revenue from original equipment manufacturers and intellectual property, a category that includes chips used to process digital currencies, was $116 million, down 54 percent from a year earlier.

“Our revenue outlook had anticipated cryptocurrency-specific products declining to approximately $100 million, while actual crypto-specific product revenue was $18 million,” said Colette Kress, Nvidia’s chief financial officer, in a statement. “Whereas we had previously anticipated cryptocurrency to be meaningful for the year, we are now projecting no contributions going forward.”

Other businesses will fill the void, she said. The company is betting on its new generation of chips to reignite the professional visualization category, which reported revenues of $281 million last quarter, above estimates of $257 million. The new chip design is more than five times faster than Pascal, its previous architecture. Graphics cards based on the new chips will be available in the fourth quarter, Kress said.

Many analysts say the company holds an almost insurmountable lead in chips for training algorithms. But it has less command over the market for chips that apply what those algorithms have learned, a process called inferencing, where Intel is hanging tough. Nvidia’s rival reported that it sold $1 billion of processors for artificial intelligence last year, not including its field-programmable gate arrays (FPGAs).

Nvidia is working to improve how graphics chips understand voices, identify faces in photographs, and handle other inferencing tasks. These computational chores can take place inside electronic devices or in the cloud, suggesting that the market for inferencing chips could outgrow the one for training. Nvidia recently released new software, TensorRT, to improve the process in data centers and in embedded devices, such as cars and drones.
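
For readers curious what putting a trained model through TensorRT looks like in practice, the following is a minimal sketch rather than Nvidia’s official recipe. It assumes TensorRT’s Python API as shipped in recent releases and a hypothetical model file named model.onnx, and it builds a reduced-precision (FP16) inference engine of the sort a data center or embedded device would run.

    import tensorrt as trt  # Nvidia's TensorRT Python bindings

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    def build_fp16_engine(onnx_path="model.onnx"):  # "model.onnx" is a hypothetical file name
        # Create a builder and an explicit-batch network definition.
        builder = trt.Builder(TRT_LOGGER)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

        # Parse the trained model (exported to ONNX) into the network.
        parser = trt.OnnxParser(network, TRT_LOGGER)
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                raise RuntimeError("Failed to parse the ONNX model")

        # Ask TensorRT to optimize the network for low-latency inference,
        # using FP16 precision where the GPU supports it.
        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)
        return builder.build_serialized_network(network, config)

The serialized engine can then be loaded by a TensorRT runtime on the target device, which is where the throughput and latency gains are meant to show up.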

“Inference is going to be a big market for us,” Huang said. “We are actively working with just about every single internet service provider in the world to incorporate inference acceleration into their stack. And the reason for that is because they need high throughput and low latency. Voice recognition is only useful if it responds in a relatively short period of time.”

Nvidia could also be bolstered by the shift toward autonomous cars. To enhance safety, the software inside these vehicles needs to be trained on highways and city streets simulated in the cloud. The driving software can practice for billions of simulated miles without risking pedestrians and other drivers on the road. That could boost sales of server chips based on Nvidia’s Volta architecture.

Revenues from its automotive business ramped up to $161 million in the second quarter on strong sales of infotainment modules and other systems using its autonomous driving chipset, Xavier. But the results were overshadowed by Tesla’s recent disclosure that it would abandon Nvidia’s chips for its own machine learning silicon.

“It’s super hard to build a Xavier and the software stack on top of it,” Huang said on a conference call with financial analysts, noting that its autonomous driving hardware is in production. “And if it doesn’t turn out for whatever reasons, it doesn’t turn out for them, they can give me a call, and I’d be more than happy to help.”
