It seems that everywhere you turn, automation is becoming the standard for operations within the datacenter. Processes that were once tedious and error-prone can now be scripted, repeated, and automated through companies’ own service portals and other home-grown tools.
Consider a scenario where an individual user submits a help desk ticket for infrastructure (e.g., refreshing a database copy or provisioning a server or application) and that request is serviced end-to-end without any human intervention. This kind of automation lets organizations stay as agile in the datacenter as they are in the cloud, without consuming IT cycles or overhead to perform these tasks.
Evaluating technology to fit the needs of your organization requires going beyond testing functionality alone; you also need to consider the tool’s integration points. Some software applications provide a graphical user interface (GUI) or the ability to interface with the application through a command line interface (CLI). Others, like Catalogic Software ECX, also provide application programming interfaces (APIs) that allow users to further customize and extend the capabilities of the application using their own tool sets.
Catalogic has several organizations taking advantage of this application framework to automate tasks and realize the simplicity that comes with “systems running systems.” As an example, one company has integrated automation into its change request system to refresh Oracle database copies on Pure Storage FlashArray systems. An end user submits a change request ticket through the Help Desk portal, identifying several variables they want to use, such as:
These variables are then passed to Catalogic ECX, which runs a new restore job using those parameters and a second workflow to tear down the copy once the expiration date has been reached. Catalogic integrates with Pure Storage copy operations, so no outside storage repository is required, and the user gets the benefit of Pure’s extraordinary performance and scalability.
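To make the idea concrete, here is a minimal sketch of how a ticketing system might assemble the refresh request it hands to ECX. The field names and parameter set here are illustrative assumptions, not the documented ECX API.

```python
import json

# Hypothetical sketch: field names are assumptions, not the real ECX schema.
def build_refresh_request(db_name, target_host, expiration_date):
    """Assemble the payload a ticketing system might pass to ECX to
    kick off an Oracle copy refresh on a Pure Storage FlashArray."""
    return {
        "application": "oracle",
        "database": db_name,
        "destination": target_host,
        # A tear-down workflow removes the copy after this date.
        "expiration": expiration_date,
    }

payload = build_refresh_request("SALESDB", "dev-ora-01", "2024-06-30")
print(json.dumps(payload))
```

The point is simply that everything the end user typed into the ticket becomes structured input to the restore workflow; no human touches the request after submission.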
A large corporation and customer of Catalogic Software has taken automation to the next level in building its new “next-generation” datacenters. This customer uses vRealize Automation (vRA) and vRealize Orchestrator (vRO) in conjunction with ECX to deploy and protect the environment at the click of a button. A simplified version of the workflow may look something like this:
Figure 1 Basic workflow of deploying a new server and defining a backup policy for it
When a server is decommissioned, the workflow invokes destructors in reverse order: it cleans up the storage snapshots and replicas, deletes the protection policies, and removes all the other objects and methods supporting that server, freeing up the space associated with it.
Each of those tasks can be further broken down into subtasks, which have subtasks of their own, and so on. You can see how something as simple as deploying a server becomes fairly complex very quickly when designing automated workflows that bring in all the other needed components.
But before your head starts to spin, designing a workflow schema with vRO is simply a matter of breaking a complex task down into a series of subtasks, each of which can be represented as a reusable widget or function within your orchestration tool.
Different levels of administrative involvement may be needed: storage, application, virtualization, and general system administrators may all provide input into scripting the logic for how these subtasks are created. However, when using ECX, much of the logic is taken care of for you simply by leveraging the REST API framework ECX was built upon, along with ECX’s native database and VMware awareness.
While there are several orchestration tools available today, customers looking for tight integration with VMware often rely on the vRealize tools. vRealize Orchestrator is a workflow visualization tool developed by VMware, with some in the community touting it as one of the most powerful tools available that few people are aware of or use. Because it was developed by VMware, vRO understands the nuances of VM-specific object definitions and can easily instantiate object types like VMs, datastores, and networks through a simple function call, without having to invoke CLI commands through a VMware client.
Customers choose to leverage vRO when working closely with VMware to deploy virtualized guests quickly, efficiently, and with minimal error. That is because vRO allows the individual steps within a workflow to link and pass information between one another. In other words, it acts as a pseudo-programming language construct, complete with variable definitions, conditional logic (if-then-else), and loop mechanisms (while TRUE, perform this operation again).
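The linking idea can be sketched in a few lines. vRO workflows are scripted in JavaScript, so this Python sketch is purely illustrative: each step’s output binding becomes the next step’s input, with a conditional retry loop standing in for vRO’s loop mechanism.

```python
# Illustrative sketch only (vRO itself scripts in JavaScript): one step's
# output binding feeds the next step's input.
def login():
    # First workflow step: produce a session token as an output binding.
    return {"sessionId": "abc123"}

def register_server(bindings):
    # Next step consumes the login step's binding and adds its own.
    bindings["registered"] = bindings.get("sessionId") is not None
    return bindings

bindings = login()
attempts = 0
# A retry loop, analogous to vRO's loop mechanism.
while not register_server(bindings)["registered"] and attempts < 3:
    attempts += 1

print(bindings["registered"])  # True
```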
Now this is great when you want to create the same machine repeatedly, but once a virtualized application server has been provisioned through vRO, it needs to come under the ownership of IT and operations, who must ensure that the data is backed up and that restore requests can be accommodated quickly and easily. This is where ECX comes into the picture.
At Catalogic Software, we develop and market products that are focused on data protection and data management. Deployed as a software appliance, our ECX product specializes in managing copy data throughout various storage array, virtualization and database technologies.
ECX already speaks to a good portion of the technology customers have in their environment and understands the relationships in the technology stack. By simply registering the storage arrays, the hypervisor (or physical servers), and the applications that use that storage, companies can leverage ECX to protect and recover data from the application/OS stack down to the storage level.
Most users access ECX through a GUI that is accessible via a web portal. All the operations submitted by this GUI are based on a REST framework, meaning one can perform the same operation by invoking the REST API directly through a REST client. This gives users an integration point for commands sent from orchestration tools like vRO.
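As a flavor of what “invoking the REST API directly” looks like, here is a hedged sketch that builds (but does not send) a login request. The URL path and header names are assumptions for illustration; consult the published ECX API documentation for the real endpoint.

```python
import base64
import urllib.request

def build_login_request(host, user, password):
    """Build (but do not send) the HTTP request a REST client or vRO
    workflow could use to open an ECX session. Path is an assumption."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}/api/endeavour/session",  # assumed endpoint
        method="POST",
        headers={"Authorization": f"Basic {creds}"},
    )

req = build_login_request("ecx.example.com", "admin", "secret")
print(req.get_method(), req.full_url)
```

Any tool that can issue an HTTP POST, from curl to a vRO scriptable task, can drive the same operation the GUI performs.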
Now that you have some high-level background on orchestration through vRO and ECX, let’s examine how one Fortune 50 company is using the two together to build out its data center. The goal is simple: as little human involvement as possible. Every time a new server gets deployed, the request comes through a web portal, which is likely the only point in the entire process where a human provides input.
Let’s go back to the workflow diagram above and break it down into just the steps in which ECX is involved. You’ll see listed within the workflow:
Once the machine is provisioned, it is immediately registered to a protection policy in ECX, where a snapshot copy of the database and log files (to provide point-in-time restore) is taken on a regular basis, directly on the storage on which it resides, for as long as that database server remains in commission. In this case the storage is NetApp, but ECX also supports Pure Storage FlashArray and multiple arrays from IBM.
ECX makes this a breeze by providing the services to automatically:
Let’s see what this workflow would look like. Like most data protection applications, ECX requires a login in order to establish a valid session for creating and interacting with the objects within the catalog.
So let’s have one of the subtasks for registering our application server look like:
Figure 2 Subtask operations contained within Register Application Server to ECX
The “Login” process provides us with a session ID. Once created, the session ID is the building block that all other ECX tasks invoke. If we follow this process end-to-end, we can see that we are able to interact with the application server through ECX from a single session ID:
Represented in vRO, such a workflow may appear like:
Figure 3 An example schema created in vRO showing the logical steps and operations that are run in each workflow
Within the workflow, whose steps can be clearly labeled to show what is occurring in each one, you can see that after the application server is registered, it is tested and an inventory process is run against it to confirm that ECX can communicate with the server and has added it to a protection policy.
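The “one session ID, many calls” pattern behind these steps can be sketched as a tiny client holding the token from login and attaching it to every follow-on request. The header and content-type names here are assumptions for illustration, not the documented ECX interface.

```python
# Hedged sketch: header names are assumptions, not the documented ECX API.
class EcxSession:
    """Holds the token from login and reuses it on every later call."""

    def __init__(self, session_id):
        self.session_id = session_id

    def headers(self):
        # Each follow-on request (register server, test, run inventory,
        # attach to a policy) carries the same token obtained at login.
        return {
            "X-Endeavour-Sessionid": self.session_id,  # assumed header name
            "Content-Type": "application/json",
        }

session = EcxSession("abc123")
print(session.headers()["X-Endeavour-Sessionid"])
```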
By using vRO, a DevOps developer can create a graphical representation of the workflow, which may itself be pieced together from other workflows, each broken down into a series of subtasks.
Extending this out to a bigger picture, a vRO schema for deploying an application server may look something like this. Each step is highlighted as the workflow executes, showing what it is currently working on. The workflow depicted just above is a component step of the workflow depicted below.
Figure 4 vRO acts very much like a programming language and allows for conditional logic (if-then-else). Variables can be passed between workflows through input and output variables known as bindings in vRO. In this example, once a login occurs and a sessionID has been obtained, this string variable is passed between all the other components of the workflow allowing vRO to interact with ECX.
Let’s analyze the above workflow to determine exactly what it is doing. This workflow titled Register Application Server attempts to determine the name of the application server just deployed and stores that within a global variable which can then be used in ECX.
It captures the session ID through a login operation and stores it in another global variable that can be referenced in the scope of this workflow run. This session ID gets passed to the other queries made to the ECX catalog, such as determining the vCenter server and siteID to which this application will be registered. Once all those variable bindings have been captured, the user has all the necessary arguments to create the object via a single API call, with a request payload detailing the required parameters in JSON format.
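A hedged sketch of that final step might look like the following. The field names (name, vCenter, siteId, comment) are assumptions meant to mirror the bindings described above, not the exact ECX request schema.

```python
import json

# Hypothetical sketch: field names mirror the bindings described in the
# text but are not the documented ECX request schema.
def build_register_payload(server_name, vcenter, site_id):
    """Combine the bindings captured earlier in the workflow into the
    single JSON body for the application-server registration call."""
    return json.dumps({
        "name": server_name,
        "vCenter": vcenter,
        "siteId": site_id,
        # The comment field marks objects created via automation (Figure 5).
        "comment": "Deployed and added to SLA policy via automation",
    })

body = build_register_payload("appsrv-0001", "vc01.example.com", "site-1")
print(body)
```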
The screenshot below shows the ECX GUI view of the application server object that was just created and registered through the automation. The object’s comment field has been annotated to show that this application server object was deployed and added to an SLA policy via automation.
Figure 5 Deployed application servers can be differentiated from other servers using the comment field in the request header when making the REST API call. Hundreds of thousands of application servers can be deployed by ending the server name string with a digit that can be incremented as new application servers get deployed.
While registering an application server in ECX takes only about five clicks, registering tens of thousands of application servers by hand would be downright tedious and error-prone, and that doesn’t even account for the application servers newly created every time a new database gets deployed in the environment. Automation greatly simplifies tasks at scale.
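The naming scheme mentioned in Figure 5 is trivial to automate; assuming a simple zero-padded numeric suffix (an illustrative convention, not one prescribed by ECX), generating sequential server names is one line per batch:

```python
# Sketch of the Figure 5 naming idea: a name prefix plus an incrementing,
# zero-padded numeric suffix (the padding width is an assumption here).
def server_names(prefix, start, count):
    return [f"{prefix}{n:04d}" for n in range(start, start + count)]

print(server_names("appsrv-", 1, 3))  # ['appsrv-0001', 'appsrv-0002', 'appsrv-0003']
```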
Through orchestration tools like vRealize Orchestrator, one can take a simple task and build upon it to create a more complex (and more valuable) one, designed to reduce the human intervention required for repeated tasks.
Having software tools that can easily plug into DevOps requirements should be a top-of-mind consideration after evaluating functionality. By building upon a REST framework and publishing the API requirements, ECX provides the ability to further extend an organization’s existing automation capabilities into the data protection space.
Hopefully by now you are seeing the benefit of evaluating tools and software not just on their capabilities, but also on their ability to play nicely with other platforms and services. Ultimately, your choice of software tools will boil down to whether they perform the functions you require. However, evaluating how a tool can interact and plug into the orchestration and DevOps tools you have today will certainly make a difference in your choice.