Google launched Kubernetes back in 2014 as an open source project to help manage containers. Over time, as it has become a cloud-native mainstay, the company has continued to support the open source project, while offering its own commercial version called GKE (short for Google Kubernetes Engine). Today, at Google Cloud Next, the company launched a new enterprise version of GKE.
Chen Goldberg, GM & VP for cloud runtimes at Google, says GKE Enterprise builds on all of the work that Google has been doing in container management over the years. In 2019, the company introduced Anthos, a container platform that lets companies move workloads more easily between cloud platforms. Goldberg says that the new GKE Enterprise combines Anthos with GKE to help enterprises running multiple clusters manage those complex workloads.
It comes out of the box with several advanced features designed specifically for managing more complicated Kubernetes environments, including security and governance tools, service mesh management and a dashboard that provides an overview of all the workloads running across a company.
Google is also introducing the concept of managing what it calls “fleets of clusters” — groups of clusters that can operate independently, letting each development team move faster while still following a common set of company cluster management guidelines. “They can apply policies. They can create a standard [configuration] for their development environment, staging environment and production environment. They can monitor cost usage and look at vulnerabilities,” she said. And they can do this across multiple Kubernetes projects from a single management tool.
Google is also allowing customers to define the hierarchy of clusters and create more granular sets of rules when needed. “So if I’m a GKE administrator or platform team, I can create fleets of clusters and manage them together. And within that, I can also create a new concept called teams, and provide permissions to teams for those [clusters],” Goldberg said.
In addition to the management features, the company is also announcing Cloud TPU v5e, a new chip designed to power AI workloads. “What’s unique about TPU v5e is that it can scale to tens of thousands of chips, making it ideal for developing more complex AI models,” Goldberg said. The new chip will be available in preview in GKE.
“So GKE provides scale — we’re the most scalable, managed Kubernetes service. We can support clusters with up to 15,000 nodes with automatic upgrades and workload orchestration, monitoring. A lot of the characteristics of Kubernetes and GKE fit really well with the new innovation around generative AI and using TPUs and GPU,” she said.
Lastly, the company is putting the power of generative AI to work for GKE users by training an LLM on its own documentation. With Duet AI for GKE and Cloud Run, users can simply ask questions and the system returns answers in plain language based on that documentation.
“It will give you examples of scripts and help you to write code faster. And it is important that it is trained on our entire documentation and code examples, increasing the relevance and quality of the results,” she said. But LLMs have known limitations, including hallucinations, where the model makes up answers when it doesn’t have the correct information. Even a constrained data set like this one doesn’t eliminate the problem completely.
GKE Enterprise will be available in preview starting in September. The Cloud TPU v5e will be available in preview starting this week, and Duet AI for GKE and Cloud Run will be available as part of the company’s expanded Duet AI in Google Cloud preview.