Application modeling is only one part of managing deployment automation with CloudBees CD/RO. The other main part is environment modeling.
With application modeling, you describe the processes that can be run against an application. With environment modeling, you define where those processes can run.
Keeping these two concerns separate avoids a lot of repeated effort: you define the application processes once and then point them at any number of target environments. Properties from the environment are then passed along to the application process at runtime, as previewed below.
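For example, here is a preview of the kind of property reference we'll add later in this lab: a Helm values entry in the application model that resolves against whichever environment the deployment targets.

```yaml
# Sketch: $[/myEnvironment/subdomain] resolves to the "subdomain" property
# of the environment the run targets, so one model serves many environments.
ingress:
  hosts:
    - host: $[/myEnvironment/subdomain].cdro-workshop.cb-demos.io
```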
In the previous lab we took a look at the application modeling process and set up a microservice definition. As part of this, we created our first environment, the one into which we deployed our application.
Let’s take a look at the environment we created in the previous lab in more detail. To view your environments, click on the hamburger menu and then click Environments.
Here are the resources you defined in the environment creation form: the default Kubernetes cluster definition on the right and the Utility Agent named k8s-agent on the left. If more components were involved in this environment, they could easily be added. This is more common with traditional applications, where you may be targeting multiple different servers.
In our case, since we’re using Kubernetes, we don’t need to worry about individual servers.
One of the challenges of maintaining many environments is keeping track of what has actually been going on in those environments.
CD/RO makes this easy with the Runs on this environment view. Here you can see all of the application runs that have taken place in this environment.
If you click on the Runs on this environment button in the secondary menu, you should see the application run that you performed in the previous lab. You can click into it if you want to review those details.
One of the other challenges of maintaining many environments is actually knowing what is installed in those environments.
This is where the Inventory view comes into play. The inventory shows you which applications are installed in the environment, which version is installed, and when each deployment occurred.
If you navigate to this view by clicking the Inventory button on the secondary menu, you should see something like the following.
Now that we’ve got this fully functional QA environment, let’s create another environment that we can deploy our application into.
We’ll call this our Production environment.
To get started, let’s go back to the application view: from the “Environment editor view”, click the Application: Welcome App
link at the bottom of the Hierarchy Menu.
You can also navigate there via the main menu (aka the hamburger menu) by going to Applications under Deployment Automation.
To add a new environment, click the large plus button below the QA environment.
You’ll now see a familiar “Add environment” form. We are essentially going to repeat our steps from the previous lab.
Fill out the environment parameters to define our production environment.
This will look remarkably similar to before. We’ll just replace the values of QA that we used previously with those of Production.
Field | Value | Description |
---|---|---|
Environment name | Production | The name to identify your environment |
Project | Select your project | The project inside which this environment will be stored |
Environment description | Optional | A field to give textual details about this environment |
Utility resource name | k8s-agent | This is the name to identify the utility resource |
Resource | k8s-agent | This is the agent which will communicate with the Kubernetes cluster |
Once the environment is created, you can create cluster references by clicking on the Add cluster reference button.
Field | Value | Description |
---|---|---|
Cluster name | default | A name to identify this cluster |
Cluster description | Optional | A field to give textual details about this cluster |
Configuration provider | Kubernetes (via Helm) | The type of environment you’re defining |
Configuration name | k8s-workshop | A reference to a configuration that lets CD/RO know where to use Helm |
Namespace | my-username-prod | The Kubernetes namespace where your application will be deployed. You should update this to be YOUR_USERNAME-prod. |
Kubeconfig context | (leave blank) | This allows you to target a specific cluster if your configuration points at multiple clusters. For this workshop, you can leave this blank. |
The last step in configuring our microservice application is to map the microservice (hello-app) to your environment (Production). To do that, click on the button and map your application.
Now you’re ready to deploy into the production environment. Like before, you can click the “Deploy” button, but this time select the Production environment.
This should fail to deploy into the production environment.
One question may be occurring to you: didn’t we hardcode the URL for the application?
That’s right! We’ve now targeted two different environments, yet we have the same URL set for both, so the production deployment collides with the host already claimed by QA. That’s not what we want!
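To make the problem concrete, both environments are currently trying to render an ingress like the following (the exact hardcoded host is whatever you entered in the previous lab; the value below is illustrative):

```yaml
# Rendered identically in BOTH the QA and Production namespaces
# (illustrative hardcoded host; yours may differ):
ingress:
  hosts:
    - host: my-username.cdro-workshop.cb-demos.io
```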
This is where we’ll bring in some environment properties.
We’re going to make some modifications that will allow us to deploy to each environment with some key differences including different URLs and different environment variables passed into the deploy.
First, you’ll add a property to the two environments that specifies a subdomain. Then you’ll update the application model to use this subdomain so each deployment can actually be accessed.
We’ll start with the QA environment. You can add a property by clicking on the menu with the three dots on the QA environment box.
Go ahead and fill it out with the properties:
Property | Value |
---|---|
subdomain | my-username-qa |
You can leave all the other values at their defaults and save by clicking OK. (As before, replace my-username with YOUR_USERNAME.)
Now go ahead and do the same for the production environment, except use these properties:
Property | Value |
---|---|
subdomain | my-username-prod |
And with that, we’re ready to update the application model.
Let’s go back to the application model. Go ahead and click on the Details option in the menu for the hello-app model.
We’ll update the values block to interpolate some values rather than using the hardcoded values we started with. To make sure you get all the values right, you can copy the following block.
```yaml
ingress:
  hosts:
    - host: $[/myEnvironment/subdomain].cdro-workshop.cb-demos.io
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: insurance-frontend-tls
      hosts:
        - $[/myEnvironment/subdomain].cdro-workshop.cb-demos.io
name: "$[/myJob/launchedByUser]"
environment: "$[/myEnvironment/name]"
```
What we’re doing here is using the `$[propertyName]` syntax to access the values of particular properties. There are a bunch of properties we can access throughout the platform, but in this case we want to reference a few related to this particular environment and deployment.
Property | Description |
---|---|
$[/myEnvironment/subdomain] | Accesses the subdomain property on the environment targeted by a deployment run. |
$[/myEnvironment/name] | Accesses the name of the environment targeted by a deployment run. |
$[/myJob/launchedByUser] | Grabs the name of the user who kicked off the run. |
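For instance, when you deploy to the QA environment, the block above should resolve to something like this (the launchedByUser value is hypothetical; it will be whichever user started the run):

```yaml
# Hypothetically resolved values for a QA deployment:
ingress:
  hosts:
    - host: my-username-qa.cdro-workshop.cb-demos.io
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: insurance-frontend-tls
      hosts:
        - my-username-qa.cdro-workshop.cb-demos.io
name: "my-username"       # from $[/myJob/launchedByUser] (hypothetical value)
environment: "QA"         # from $[/myEnvironment/name]
```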
With these new settings in place, go ahead and deploy into the QA environment.
This should run fairly quickly. Once it is complete, you can access your application at: my-username-qa.cdro-workshop.cb-demos.io
Now go ahead and do the same for the production environment. You’ll be able to access the production application at: my-username-prod.cdro-workshop.cb-demos.io
After deploying to both environments, you should now see the application running at both URLs.
In the next lab you’ll integrate the application and environment models you’ve built over the past two labs into the release from the first lab.