Model a New Application Profile
Application modeling is the process of capturing all of the images, scripts, and other dependencies required to fully deploy an existing, working application, and building them into a model that you can configure using the guidelines provided in the next section.
Adhere to the following guidelines when modeling a new application profile:
Application Profiles: To understand what an application profile is, see What Is An Application Profile?
Application Discovery Guidelines: Begin your application modeling using a top-down approach focused on the application requirements. In this phase, nothing is configured in Workload Manager.
Consider all of the services that make up the application (for example, Apache, Tomcat, JBoss, MySQL, Cassandra, SQL Server, and so forth).
How are those services dependent on each other?
Where are the artifacts that are required for the services to be successfully deployed?
Are the services better deployed in a modular fashion, on small, discrete VMs or containers? Or is it better to combine some or all of them onto a single VM?
What are the networking requirements for each service to talk to the others?
Are there external services that need to be connected to and communicated with (for example, a new schema on an existing SQL Server cluster, or a new VIP on an existing load balancer)?
The outcome from this step in the design process should be a topology diagram showing all the required VMs and external services, the ports required between each service, and which services are deployed on which VMs. See How to Model Applications for additional context.
Required Service Guidelines: Services should be developed to be usable in as many applications as possible, without needing to re-model the same service each time. This ensures future efficiency. For each service in the application, consider:
Is there already a Workload Manager-provided Service that can be used? Or can an existing service be slightly modified to support this application?
How can this Service be modeled to be generic?
How is that service going to be deployed on the VM? Using scripts or perhaps Chef or Docker?
If this is an external service, can you write a script to connect to that service and carry out whatever work is required? Are there existing libraries that make this easier? Is one language easier than another for the task (for example, Python with the requests library versus Bash with curl)? Did you check the corresponding libraries before writing your own?
What images do you need to support for this service? Is one image sufficient? Can you make this image cross-compatible with minimal effort? For example, use the [ -f /etc/redhat-release ] test to verify whether you're on a CentOS/RedHat-type system.
Which inbound and outbound ports do you need to open for this service to function?
The outcome of this step should be a list of all services that your application requires, with a reference to the existing service that will be used if one is suitable. See OOB Services to view a list of supported services in Workload Manager.
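To illustrate the cross-compatibility question above, a service script can branch on the OS family at runtime. The following is a minimal sketch, assuming a RedHat-family marker file and a Debian-family marker file; the package-manager choice is the only output:

```shell
#!/bin/bash
# Minimal sketch of a cross-compatible image check: detect the OS family
# by its marker file and pick the matching package manager. Note that
# file existence is tested with -f (or -e), not -z, which tests whether
# a string is empty.
if [ -f /etc/redhat-release ]; then
    pkg_mgr="yum"        # CentOS / RHEL family
elif [ -f /etc/debian_version ]; then
    pkg_mgr="apt-get"    # Debian / Ubuntu family
else
    pkg_mgr="unknown"    # fall back so callers can fail gracefully
fi
echo "Detected package manager: $pkg_mgr"
```

A single script written this way can be attached to one logical image per OS family instead of one script per distribution.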
Supported Image Guidelines: The VMs deployed by Workload Manager must be clones of existing images. These are stored on each cloud as AMIs, QCOWs, VM snapshots, VM template names, and so forth. These images are the foundation for building higher-level services. Ideally they are simple and generic, but they often also carry required security tools, monitoring agents, and similar components that will be included in every VM.
Keep in mind that services reference LOGICAL images, not real ones. The logical image references one REAL image per cloud region. This is an important part of how Workload Manager achieves cloud portability. The application and associated services are not dependent on any specific piece of infrastructure, not even the cloned images.
For each Logical Image required for your services, consider:
Is there an existing Logical Image that is suitable? If not, is there one that's close enough to be made to work, or can your service be modified to fit?
Don't make a new instance of a Logical Image type unless absolutely necessary.
Ideally you will have only one Logical Image for each type of OS that you require - and be sure to use as few OS types as possible.
If not, and if you need to create a new Logical Image:
On what cloud regions do you need to run this image?
What OS do you need to use for this image?
Is there a standardized image build script that you can use?
What tools do you want to build into the image, if any?
Don't add application or service-specific configuration as it will be less usable for the next service.
If you have a suitable Logical Image to use, does it have Real Image mappings for each region where you will want to deploy your application?
The outcome from this step should be a list of any Logical and Real Images that need to be created, how to create them, and where to create them for each service. After you have been through this a few times, the ideal outcome is generally an empty list!
Modeling Process Guidelines: Transition to a bottom-up approach when you model an application profile.
Workload Manager application models are composed of services.
If the VM deployment is required for the service, then it can be mapped to a logical VM image. The logical VM images are in turn mapped to real images on a per-cloud basis (for example, AMI, VM-snapshot or template name, QCOW, and so forth depending on the cloud you use). See Map Images for additional context.
An administrator can grant role-based permissions. See Permission Control > Role-Based Permissions for additional context. If the user is not a member of the Application Architects or Workload Manager Admins groups, the user cannot create application profiles, and the Model button in the Applications view is disabled.
When you begin the modeling process in Workload Manager, start with configuring your images, followed by services, and finally, the application model.
Create Real Images and Map to Logical Images
If you need to create any images, create the real image and map to a logical image.
If you don't already have a logical image that you can use, contact your Workload Manager Admin to create the logical image in Workload Manager. See the logical image documentation (concepts and UI) for additional context.
Create the real Image in each cloud region.
You might save some time on this step if you use one of the supported out-of-box images for your cloud, if one is available and suitable to your enterprise requirements.
Use your internal build process to complete this step. In some cases, you may only need to deploy the OS.
Install the Management Agent bundle using the installer. See Worker (Conditional) for additional context.
Apply the real image mapping to the logical image for the appropriate cloud region. See Map Images for additional context.
Test the images by creating blank services and applications – just for the purpose of launching and testing each image.
With your required images in place, you can start setting up services. Services have an associated lifecycle framework that calls different commands at different points in the service lifecycle. See Service Lifecycle Actions for additional context.
Start with a dummy or other service script. This can be in any language, with any content that meets your needs; a handy starting point written in Bash is provided in the following sample script code. This example contains a single script, called service, that handles all VM lifecycle actions, with an argument ($1) used to control which behavior is activated at each step.
The lifecycle actions would look like service install, service configure, service start, service stop, and so forth.
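A minimal Bash sketch matching the description above (a single service script, with the action selected by $1, logging set up on line 2, and helper files sourced on lines 4-8) might look like the following. The helper-file paths, log location, and echoed messages are illustrative assumptions, not fixed values:

```shell
#!/bin/bash
exec > >(tee -a "${SVC_LOG:-/tmp/service.log}") 2>&1  # mirror all output to a log file

# Source agent helper files (when present) to pick up helpful
# environment variables and utility functions; paths are illustrative.
for helper in /usr/local/osmosix/etc/.osmosix.sh /usr/local/osmosix/etc/userenv; do
    if [ -f "$helper" ]; then . "$helper"; fi
done

action="${1:-start}"   # lifecycle action passed by Workload Manager as $1
case "$action" in
    install)   result="installing service packages" ;;     # e.g. yum/apt installs
    configure) result="writing service configuration" ;;
    start)     result="starting the service" ;;
    stop)      result="stopping the service" ;;
    *)         result="unknown action: $action" ;;
esac
echo "$result"
```

Each branch would normally run real commands for that lifecycle step; the echo placeholders keep the skeleton runnable for testing.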
- It doesn't have to be done this way. You can use different scripts or commands or a different language like Python. This is your choice.
- See logging details on Line 2.
- See the sourced files on Lines 4-8. These are important to pick up helpful environment variables and utility functions.
In the Workload Manager UI, under Admin > Services, create a new service with a descriptive name and type – use a dummy script as a starting point.
Create a dummy application to test this service.
Continue building your service by launching a new test app deployment for this service often to test progress as you go along.
Refer to the service tutorial above for additional detail and best practices.
With the individual services built and tested using dummy applications, you can now pull the pieces together into the final working application.
Access the Workload Manager UI and click Applications.
Click the Model link in the top right corner. The Model a New Application Profile page displays with a list of application profiles.
Select a core, pre-packaged, or custom profile from the Workload Manager UI. For example, if you are modeling a Java web application, use the Java Web App profile.
Use the Workload Manager UI's Topology Modeler to define the application architecture and other components for each tier.
Drag and drop the required service from the Services pane to the Graphical pane in the Topology Modeler.
Connect the services with connectors that correspond to the order in which each service must be configured.
These connectors ARE NOT related to network dependencies in any way.
All VMs are created simultaneously (barring cloud-specific constraints).
The lifecycle actions are executed from the bottom up according to the arrangement in the Topology Modeler.
Click the service in the Graphical pane and configure the Properties for this service. See Application Tier Properties for additional details required to configure this pane.
Use debugging settings to troubleshoot issues – if required:
Add a global troubleshooting parameter (cliqrIgnoreAppFailure) to your application and assign a default value of true. Be sure to come back and delete this parameter once the application, services, and images are working (see Troubleshooting Parameters for additional context).
For each service tier, under Node Initialization, set sudo to ALL and make a note to come back and change this later.
These two things will be very important during the setup and debugging phase. REMEMBER TO TAKE THEM OUT LATER.
Add firewall rules as required for each Service Tier. See Security and Firewall Rules for additional context.
Add additional services as required for your deployment. The configuration may differ based on the service. See the following sample services for additional context:
Enter the basic information (description, logo, name, and so forth) according to the requirements for your application. See Topology Modeler > Basic Information tab for additional context.
Enter the general settings values (HTTP/HTTPS/Both/None based on the invoked protocol).
The Protocol field provides a None option (in addition to HTTP, HTTPS, and Both) when modeling N-tier applications. If you select None, the Workload Manager does not add any access link URL in the application deployment detail page.
If it is a non-standard port, check the corresponding box.
For example, the default Tomcat server is started on port 8080. By setting the protocol to this port, you can access the server directly from the Deployment Details page.
Enter the required values to access the application (for example, the port number in the URL: if the non-standard port is 3309, the URL will include port 3309).
Specify the categories and tags for this application, if any. These fields are used as filters to search for this application.
Provide the Access Link (path) for the application's launch page (landing page).
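To make the non-standard port example above concrete, the access link is simply the host address plus the configured port and path; the host value below is a placeholder:

```shell
# Illustrative only: compose the deployment access URL for a service
# running on a non-standard port (3309, per the example above).
host="10.0.0.5"                 # placeholder VM address
port=3309                       # non-standard port from the general settings
url="http://${host}:${port}/"   # the Deployment Details page links here
echo "$url"
```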
Add metadata as relevant for your cloud. This is a useful way to flag the VMs. AWS Example:
AWS displays metadata as tags on the VMs. For example, if you use
Name / %USER_NAME%-%JOB_NAME%
When you log into the AWS console, you can see a list of app deployments and the user who launched the VMs for each deployment.
Define overriding parameters, if any, in the Global Parameters tab (for example, administrator, username, password, and so forth). See the Global Parameters section for additional context.
Once you enter all the definitions, you have multiple choices:
Click Save as App to save it to the Application Tasks page. If you save the modified profile as an Application, the definition is saved as a metadata file that can be exported in JSON format.
Click Save as Template to save it in the Model a New Application page.
Test your application or application profile in each cloud as applicable to your environment.
Export Application Scripts
If you save the modified profile as an Application, the definition is saved as a metadata file that can be exported in JSON format. This section provides some sample application profile formats.
Sample JSON Format – N-Tier
This section provides an N-tier Jenkins application profile JSON Data Transfer Object (DTO) from the UI.
Sample JSON Format – Single Tier
This section provides the single-tier application profile JSON DTO from the UI.