You (admin) can create an application (see How to Model Applications) using multiple tiers, where each tier can use a different Workload Manager-supported service, an externally-provided (third-party) service, or a container service. Users can define scripts for each phase of each service type when you add/edit a service.
What Is a Container Service?
A container service is a service that you can define as a Workload Manager administrator. A container service does not have an associated VM – it is simply a service that runs the image provided by the administrator.
A Pod represents a Kubernetes unit of deployment: a single instance of an application in Kubernetes, which can consist of either a single container or a small number of containers. CloudCenter deploys one container per Kubernetes pod for each container service tier in the application profile.
The following screenshot highlights the option for adding a Container Service.
A Container Service neither has an Agent nor does it have associated Lifecycle Actions.
In a Container Service, the Image field (a new field, which is not found in the other service types) refers to the image that is running in a container. Provide the Image URL or relative path – if you provide the relative path, Kubernetes pulls the image from the registry that is already configured in the cluster. For example, if your Nginx service is called nginx and it resides in the root folder, use nginx as the relative path – the path is relative to docker.io because the image is pulled from Docker Hub.
If you are configuring a specific version of this service, add the version number as well. For example, nginx:1.0. Kubernetes picks up the exact version of this image when it launches the container. The following screenshot highlights the Image field and the Container Ports field.
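The resolution rule described above (a bare name resolves relative to docker.io, while a fully qualified reference is used as-is) can be sketched with a hypothetical helper; the function itself and the example registry host are illustrative, not part of Workload Manager:

```python
# Hypothetical sketch of image-reference resolution: a bare relative path
# resolves against the default registry (docker.io), mirroring Docker Hub
# conventions; a reference with a registry host is used as-is.
def resolve_image(ref, default_registry="docker.io"):
    first, sep, _ = ref.partition("/")
    # The first path component counts as a registry host only when a "/"
    # is present and the component looks like a host (dot, port, or localhost).
    is_registry = bool(sep) and ("." in first or ":" in first or first == "localhost")
    if is_registry:
        return ref
    return f"{default_registry}/{ref}"

print(resolve_image("nginx"))                          # docker.io/nginx
print(resolve_image("nginx:1.0"))                      # docker.io/nginx:1.0
print(resolve_image("registry.example.com/app/web"))   # registry.example.com/app/web
```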
In a Container Service, the Container Ports field (a new field, which is not found in the other service types) refers to the exact port and protocol that the container listens on; these ports must be exposed for external access.
The Container Service can expose more than one port. For example, a Web Server container can expose both Port 80 and Port 443.
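As a sketch, the two ports from the web-server example map to Kubernetes-style container-port entries such as these (the values are illustrative):

```python
# Illustrative only: the ports a web-server container might expose,
# expressed as Kubernetes-style containerPort entries.
container_ports = [
    {"containerPort": 80,  "protocol": "TCP"},   # HTTP
    {"containerPort": 443, "protocol": "TCP"},   # HTTPS
]
print(container_ports)
```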
Lifecycle Hooks are actions that you can define for container-based services in a service definition. These scripts are executed inside the container when the application profile tier that has the Container Service is deployed on Kubernetes.
Any script in an HTTP repository that is supported by the new Container Service type
The Container Service does not support IPAM or VM naming callout scripts.
The following image provides an example of a service-level definition.
Command Line Examples
When entering a command, do not use spaces around the redirection operator. Here are some examples of correct usage:
- touch /tmp/touch.out
- sh -c ls>/tmp/ls.out
- cp /etc/passwd /tmp/passwd
Here are some examples of incorrect usage:
- /bin/sh -c ls > /usr/share/message
- In the above command, after -c there is a space before and after ls, as well as before and after >
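The spacing rule matters because the command string is split on whitespace into separate arguments rather than being parsed by a shell; with extra spaces, the redirection arrives as literal arguments instead of staying inside the -c command string. A minimal Python sketch of that splitting (shlex approximates the whitespace splitting):

```python
import shlex

# Incorrect form: with spaces, the redirection pieces become standalone
# tokens instead of being part of the -c command string.
bad = "/bin/sh -c ls > /usr/share/message"
print(shlex.split(bad))
# ['/bin/sh', '-c', 'ls', '>', '/usr/share/message']

# Correct form: without spaces, the whole redirection stays inside one
# token, so the inner shell can interpret it.
good = "sh -c ls>/tmp/ls.out"
print(shlex.split(good))
# ['sh', '-c', 'ls>/tmp/ls.out']
```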
See the Deploying a Container Service section below for examples of application-level definitions.
Custom Scripts Sources
The following table identifies the Lifecycle Hooks that are specific to a container service (see the image above).
|Lifecycle Hook|The specified script is executed...|
|Container Post Start|After the container is launched/provisioned|
|Container Pre Stop|Before the container is terminated|
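On Kubernetes, these two hooks correspond to the container lifecycle handlers postStart and preStop. A minimal sketch of how they could appear in a container spec (the container name, image tag, and script paths are hypothetical):

```python
# Sketch: the two Lifecycle Hooks expressed as Kubernetes container
# lifecycle handlers. Names, tag, and script paths are hypothetical.
container = {
    "name": "web",
    "image": "nginx:1.0",
    "lifecycle": {
        # Container Post Start: runs after the container is launched/provisioned.
        "postStart": {"exec": {"command": ["/bin/sh", "-c", "/scripts/post-start.sh"]}},
        # Container Pre Stop: runs before the container is terminated.
        "preStop": {"exec": {"command": ["/bin/sh", "-c", "/scripts/pre-stop.sh"]}},
    },
}
print(sorted(container["lifecycle"]))
# ['postStart', 'preStop']
```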
When you select the secretkey type, you can add another layer of abstraction by specifying the Key Name (the group-level coordinate) and the Key (the location) that are used to retrieve the actual value of the secretkey for the deployment parameter that uses this type. The following image shows these parameters.
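The two coordinates behave like a nested lookup: the Key Name selects a group and the Key selects an entry inside it, similar in spirit to a Kubernetes secret reference. A hypothetical sketch (all names and values are illustrative):

```python
# Hypothetical sketch: resolving a secretkey-typed deployment parameter.
# "Key Name" selects the secret group; "Key" selects the entry inside it.
secrets = {
    "db-credentials": {            # Key Name (group-level coordinate)
        "username": "wp_user",
        "password": "s3cr3t",      # Key (location) -> actual value
    }
}

def resolve(key_name, key):
    return secrets[key_name][key]

print(resolve("db-credentials", "password"))
# s3cr3t
```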
Kubernetes Specific Parameters in Deployment Parameters
as the default value for the WORDPRESS_DB_HOST parameter in the Topology Modeler, as shown in the following screenshot.
Add a Container Service
To add a Container Service, follow this process.
1. Access the Workload Manager UI > Admin > Services > Add Service page.
2. Click Container Service to select this service type.
3. Proceed as you would for a Custom Service Definition.
4. Configure the scripts for each Service-Based Lifecycle Hook described in the table above.
Deploying a Container Service
To deploy a Container Service and ensure that it passes information from a container to any dependent tier (any tier above the current tier), follow this procedure.
Define input parameters in a script and save the script in an accessible location as explained in the Custom Scripts Sources section above.
and provide the script location. See the Add a Container Service section above for additional context.
Save the Container Service.
Application Using a Custom Image. In the Properties section of the Topology Modeler for the selected container service, you can configure the container-specific details.
Only the container-specific service property parameters are explained in this section. Generic service properties are explained in Understand Application Tier Properties.
The following table describes the container-specific details.
|Properties|Fields|Description|
|General Settings|Base Image|Select the required admin-configured container service image that is displayed in the dropdown. See the Images section above for additional context.|
| |Mount Path, Default Size|Provide the path to the application that will use this image, along with the default size for this volume.|
|Deployment Parameters|Add a Parameter|If your admin has already configured parameters at the service level, those parameters are inherited from the admin-level configuration. See the Deployment Parameters section above for related service-level details. You can also add parameters specific to the deployment; see Using Parameters for additional context on adding deployment-level parameters.|
|Network Services|Service Port, Port Name|For each container port defined in the Container Service, a default network service of type ClusterIP is created when you add the Container Service to an application profile. See the Container Ports section above for additional details.|
The following screenshot shows these parameters.
Network Services define how the ports exposed by the container are made available to other pods or external entities.
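As a sketch, the default network service created for an exposed container port has the shape of a Kubernetes ClusterIP Service object like this one (the metadata name, selector, and port values are hypothetical):

```python
# Illustrative only: the shape of the default ClusterIP Service created
# for an exposed container port. All names and numbers are hypothetical.
cluster_ip_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-tier-svc"},
    "spec": {
        "type": "ClusterIP",                 # reachable only inside the cluster
        "selector": {"app": "web-tier"},     # matches the pod's labels
        "ports": [
            {"name": "http", "port": 80, "targetPort": 80, "protocol": "TCP"},
        ],
    },
}
print(cluster_ip_service["spec"]["type"])
# ClusterIP
```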
|Firewall Rules|Container Port/Protocol|Firewall Rules define who can access the container.|
By default, when you add a Container Service to an application profile, a default firewall rule is added for each container port in the service, allowing access from any IP on the Internet.
However, if you enabled Inter-Tier Communication (Firewall Rules) in the Basic Parameters section for an application profile, then the topmost tier has a default firewall rule added for each container port in the service to be accessed from any IP on the internet. See Security and Firewall Rules > Inter-Tier Communication (Firewall Rules) for additional context.
For the other dependent tiers, a default firewall rule is added for each container port in the service, allowing access from the dependent tier.
You can choose to keep the preconfigured firewall definition or add/edit any firewall rules as required. All firewall rules are optional in this field.
In the Workload Manager UI, the Column Name changes to Container Port/Protocol if you are deploying a container service.
The following screenshot shows firewall rules.
Minimum Resource Specification
One thousand milliCPUs (1000m) equal 1 CPU, and memory is measured in bytes.
This configuration depends on the container capabilities. See Managing Compute Resources for Containers for additional context.
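As a quick sanity check on these units, assuming the usual Kubernetes conventions (m for milliCPUs, Mi for mebibytes):

```python
# Kubernetes-style resource units: 1000 milliCPUs (1000m) = 1 CPU,
# and memory is measured in bytes (1 Mi = 1024**2 bytes).
def millicpu_to_cpu(millicpu):
    return millicpu / 1000

def mebibytes_to_bytes(mi):
    return mi * 1024 ** 2

print(millicpu_to_cpu(500))       # 0.5
print(mebibytes_to_bytes(256))    # 268435456
```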
Click Save to ensure all changes that you made to the Properties section for this deployment are saved.
Deploy the Application. At this point, Workload Manager adds the configured properties to the dependent tier(s).
Cost and Reports
Billing and Reporting are currently not supported for Container Services.