
Google Vaunts New Gemini Code Assist Tool at Cloud Next 2024

At Google Cloud Next '24, Google showed off its continuing dedication to all things AI in the form of several new developer tools and new AI-focused chips.
Apr 10th, 2024 9:05am
Feature image via Chris J. Preimesberger.

Google’s Cloud Next 2024 event is playing to 30,000 attendees in Las Vegas through April 11, and that means lots of new cloud-focused news on everything from Gemini, Google’s AI-powered chatbot, to GenAI, DevOps, and security.

This event was the second in-person Cloud Next since COVID-19 slapped the world in early 2020. Online versions of the event took place in 2020, 2021 and 2022.

Google Cloud CEO Thomas Kurian and a succession of his lieutenants took the stage to show off their continuing dedication to all things AI in the form of several new developer tools. One in particular, Gemini Code Assist, not only finds the code you’ve been looking for but also offers thoughtful suggestions on alternatives.

Other new AI tools and services making news at the show include Duet AI for Gmail, an expansion of generative AI to Google’s security product line, and other enterprise-focused updates.

‘Redefining Cloud for an AI-Native Era’

“Google is redefining the cloud experience for an AI-native era,” Gartner VP and Chief Analyst Chirag Dekate told The New Stack. “Enterprises are on a journey of sorts to build AI factories of the future that transform all of their processes and empower every one of their employees with GenAI boost.”

This emerging AI-native era is ideally suited for Google in many ways, Dekate said.

“As its core technologies, systems, capabilities and services have originated to address AI everywhere, experiences are now native to many of Google’s products and services (from Google Search to maps and beyond),” Dekate said.

As a result of this, Dekate said, it is no surprise that Google is creating a differentiated capability across its systems and services portfolio including:

  • AI Hypercomputer (a workload-optimized infrastructure that is differentiated from its peers);
  • The industry’s broadest and deepest model catalog (enabling access to first-party, third-party and enterprise-ready open source models all built and optimized on AI Hypercomputer);
  • Gemini for Google Cloud and Gemini for Workspace experiences that are differentiated and aggregate the infrastructure;
  • Gemini model innovations that activate new ecosystem flywheels; and
  • An agents’ ecosystem that brings all of the GenAI innovations to life in an enterprise context, thanks to the broad partner ecosystem integration.

AI Hypercomputer combines Google’s TPUs, GPUs and AI software to provide performance and cost advantages for training and serving models, CEO Kurian said. “Today, leading AI companies and Google Cloud customers like Anthropic, AI21 Labs, Contextual AI, Essential AI and Mistral AI are using our infrastructure,” he said.

Google also announced the general availability of TPU v5p, the latest AI accelerator for training and inference, which has four times the compute power of its previous generation, Kurian said.
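Google didn’t publish code alongside these announcements, but for context, the following is a minimal, hypothetical sketch of verifying an accelerator from a Cloud TPU VM using JAX, one of the frameworks Google supports on TPUs. The sizes and setup details here are illustrative assumptions, not specifics from the keynote.

```python
# Minimal sketch: confirm accelerator devices are visible from a Cloud
# TPU VM and run a small computation. Assumes JAX with TPU support is
# installed (e.g., pip install "jax[tpu]"); details are illustrative.
import jax
import jax.numpy as jnp

# List the devices JAX can see; on a TPU slice these report as
# TpuDevice entries rather than CPU devices.
devices = jax.devices()
print(f"Found {len(devices)} device(s): {devices}")

# Run a simple matrix multiply to confirm the hardware is usable.
x = jnp.ones((4096, 4096))
y = jnp.ones((4096, 4096))
out = jnp.dot(x, y).block_until_ready()
print("Matmul complete, output shape:", out.shape)
```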

New Code Assist Improvements

Google’s unveiling of major improvements to the AI-powered Gemini Code Assist tool earned a lot of attention from developers on Day 1 of the show.

“I think the Gemini Code Assist enhancements in version 1.5 are the most important for developers to hone in on,” Forrester Principal Analyst Devin Dickerson told The New Stack. “Multimodal inputs and support for relatively large inputs in terms of the lines of code that can be analyzed could be helpful for developers and development teams that desire more holistic and application-aware recommendations.”

In the big picture, how important are those advancements for cloud devs?

“I think the expansion in terms of lines of code (30K, according to the announcement during the keynote) pushes code assist in the right direction,” Dickerson said. “As these tools become more aware of larger sets of codebases as well as system-level and application-level concerns, the value proposition will increase, particularly for application modernization use cases at the enterprise.”

Gartner’s Dekate agreed. “Beyond code completion, Gemini Code Assist provides code generation, enabling greater productivity, efficiency and accuracy for developers across enterprises,” he said. “With deep third-party integration and support for broader ecosystems, including GitHub and GitLab, Google Code Assist sets the bar on ecosystem integration in this segment.

“Further, Gemini Code Assist is now powered by Gemini, where the larger context sizes enable unrivaled developer experiences. Google Code Assist should be shortlisted by every enterprise as they seek to leverage GenAI to accelerate developer productivity.”
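Code Assist itself runs inside the IDE, but the Gemini models behind it are also reachable programmatically through Vertex AI. As a rough, hypothetical sketch only (the project ID, region and model name below are placeholder assumptions, not details from the keynote), asking a Gemini model on Vertex AI to review a chunk of code might look like this:

```python
# Illustrative sketch: send a code snippet to a Gemini model on Vertex
# AI and ask for suggestions. Assumes the google-cloud-aiplatform SDK
# is installed and authenticated; names below are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed model ID

code_snippet = '''
def fetch_user(user_id, db):
    return db.query("SELECT * FROM users WHERE id = " + user_id)
'''

# Large context windows mean a request like this could include far more
# surrounding code than the short snippet shown here.
response = model.generate_content(
    "Review this function and suggest a safer alternative:\n" + code_snippet
)
print(response.text)
```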

News in Google Cloud Platform

Google Cloud Next is also the venue used to introduce a bevy of new instance types and accelerators to augment Google Cloud Platform. In addition to new custom Arm-based Axion chips, most of the announcements were about AI accelerators, whether built by Google or AI chipmaker NVIDIA. It seems that enterprises are all exploring the use of AI in many of their systems, and once they get something up and running, they naturally want to accelerate it.

NVIDIA announced its Blackwell platform only a few weeks ago, but Google won’t be making those machines available anytime soon. Support for the high-performance NVIDIA HGX B200 for AI and HPC workloads and GB200 NVL72 for large language model (LLM) training will arrive in early 2025. One interesting factoid from Google on Tuesday: The GB200 servers will be liquid-cooled, which tells you a lot about the power and heat they are expected to produce.

NVIDIA said that its Blackwell chips won’t be publicly available until the last quarter of this year.
