Supported Container Clouds
A container cloud relies on a container infrastructure that is configured by an administrator outside of Workload Manager. Currently, Workload Manager supports one container cloud: Kubernetes cloud.
Kubernetes cloud configurations require:
- Kubernetes version support
  - Kubernetes 1.11
- A single Kubernetes cluster with an implicit default region
- One or more cloud accounts
- Cloud settings (API endpoint)
- Instance types (fractional CPU and memory)
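As a sketch of the last requirement, fractional CPU and memory values map onto Kubernetes resource requests, where CPU can be expressed in millicores. The fragment below is a hypothetical illustration, not output generated by Workload Manager:

```yaml
# Hypothetical container resource section illustrating fractional values.
# Kubernetes expresses fractional CPU in millicores ("250m" = 0.25 CPU).
resources:
  requests:
    cpu: "250m"      # a quarter of one CPU core
    memory: "256Mi"  # 256 mebibytes of memory
  limits:
    cpu: "500m"
    memory: "512Mi"
```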
Upstream Support and Capability
Workload Manager supports upstream Kubernetes setups. Upstream refers to any Kubernetes environment that exposes the standard Kubernetes APIs to users, such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), Cisco Container Platform, and so forth. This term does not include platforms that build on Kubernetes but expose only their own APIs.
Workload Manager's API layer handles configuration tasks such as application deployment for Kubernetes pods. At deployment time, Workload Manager dynamically generates the application pod specification, which Kubernetes accepts as YAML or JSON, based on the Workload Manager application profile. While you cannot directly modify this dynamically generated pod specification, you can edit the Workload Manager application profile in JSON format.
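For orientation, a dynamically generated pod specification might resemble the following minimal Deployment manifest. All names and the image below are hypothetical placeholders, not actual Workload Manager output:

```yaml
# Hypothetical example of the kind of pod specification Kubernetes accepts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```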
When creating an application profile, users define the network service. Workload Manager uses these user-configured network settings to automatically deploy load balancers through Kubernetes. See Container Service > Deploying a Container Service > Network Services for details.
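A load balancer deployed through Kubernetes typically takes the form of a Service of type LoadBalancer. The manifest below is a hedged illustration of that mechanism; the names and ports are assumptions, not values produced by Workload Manager:

```yaml
# Hypothetical Service: Kubernetes asks the cloud provider to
# provision an external load balancer for the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: example-web        # placeholder name
spec:
  type: LoadBalancer       # triggers cloud load balancer creation
  selector:
    app: example-web       # routes to pods with this label
  ports:
    - port: 80             # externally exposed port
      targetPort: 8080     # container port receiving traffic
```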
The Firewall Rules in the application profile correspond to NetworkPolicy ingress rules in Kubernetes. See Container Service > Deploying a Container Service > Firewall Rules for details.
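To show the shape of the Kubernetes object involved, the following is a minimal NetworkPolicy with an ingress rule, analogous to a firewall rule allowing TCP port 80 from any source. The names, labels, and CIDR are illustrative assumptions:

```yaml
# Hypothetical NetworkPolicy: allow inbound TCP/80 to matching pods,
# comparable to an "allow" firewall rule in an application profile.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-web  # placeholder name
spec:
  podSelector:
    matchLabels:
      app: example-web     # pods this policy applies to
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0  # any source address
      ports:
        - protocol: TCP
          port: 80
```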