vSphere 7.0: DRS Redesigned

vSphere 7.0 is a major release in its own right, with many new features such as Kubernetes (Project Pacific), vCenter Server Profiles, vSphere Lifecycle Manager (vLCM), improved Certificate Management, and a refactored vMotion.

But the one that caught my eye, completely redesigned after a span of 15 years, is DRS (Distributed Resource Scheduler).

How did DRS work before vSphere 7.0?

DRS was released back in 2006 and hadn't changed much since. There were, however, a couple of enhancements in vSphere 6.7 (new initial placement, NVM support, and enhanced resource pool reservations). Up to and including vSphere 6.7, DRS used a cluster-centric model: in simple words, resource utilization was always balanced across the cluster.


It's important to know that, by default, DRS checked the cluster's balance state once every five minutes and took the necessary actions to fix any imbalance by live-migrating VMs onto other hosts using vMotion.

In this way, DRS ensured that each virtual machine in the cluster got the host resources, such as CPU and memory, that it needed.
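
For context, a cluster's DRS settings can also be read programmatically. Here is a minimal vRO JavaScript sketch, assuming the vCenter plug-in and a workflow input named "cluster" of type VC:ClusterComputeResource (the input name is my assumption):

// Read a cluster's DRS configuration (vRO JavaScript, vCenter plug-in)
// "cluster" is assumed to be a workflow input of type VC:ClusterComputeResource
var drsConfig = cluster.configurationEx.drsConfig;
System.log("DRS enabled: " + drsConfig.enabled);
System.log("Automation level: " + drsConfig.defaultVmBehavior); // manual / partiallyAutomated / fullyAutomated
System.log("Migration threshold: " + drsConfig.vmotionRate); // 1 (conservative) to 5 (aggressive)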

What has changed in DRS with vSphere 7.0?

VMware shifted its focus from a cluster-centric to a workload-centric model: whenever a VM runs on an ESXi host, DRS calculates a "VM DRS score" for it. This is a totally new concept!

This score indicates whether the VM is "happy enough" on that particular ESXi host. Let's see what it is!

VM DRS Score

  • The VM DRS score, also called the "VM happiness" score, can be defined as the execution efficiency of a virtual machine.
  • Values closer to 0% (not happy) indicate severe resource contention, while values closer to 100% (happy) indicate mild to no resource contention.
  • The VM DRS score works in buckets: 0-20%, 20-40%, 40-60%, 60-80%, and 80-100% (see the small sketch after this list).
  • A lower bucket doesn't directly mean that the VM is not running properly; it means the VM's execution efficiency is low.
  • DRS tries to maximize the execution efficiency of each virtual machine while ensuring fairness in resource allocation across all virtual machines in the cluster.
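
To make the bucket idea concrete, here is a tiny, purely illustrative helper in vRO-style JavaScript (my own sketch, not part of any VMware API):

// Purely illustrative: map a VM DRS score (0-100) to its bucket label
function drsScoreBucket(score)
{
    if (score < 0 || score > 100) { throw "Score must be between 0 and 100"; }
    if (score <= 20) { return "0-20%"; }
    if (score <= 40) { return "20-40%"; }
    if (score <= 60) { return "40-60%"; }
    if (score <= 80) { return "60-80%"; }
    return "80-100%";
}
System.log(drsScoreBucket(72)); // -> "60-80%"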

How is the VM DRS score calculated?

The VM DRS score is calculated per VM, i.e. for a single workload, against all the hosts within the cluster.

There are several metrics responsible for the VM DRS score (a purely illustrative mental model follows this list):

  • Performance: DRS looks at the VM's CPU ready time, CPU cache behavior, and swapped memory.
  • Capacity of the ESXi host: DRS looks at the headroom an ESXi host has and checks whether the application/workload can burst enough on the host it is running on. This parameter is also called VM burst capacity.
  • Migration cost: the cost of migrating the VM from one ESXi host to another. So you won't be seeing lots of vMotion operations anymore! (Only if DRS is set to fully automated.)
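
VMware has not published the exact formula, so purely as a mental model (my assumption, NOT VMware's algorithm), you can think of the score as a product of per-metric efficiencies, where any single bad metric drags the whole score down:

// Hypothetical mental model only -- NOT VMware's published algorithm.
// Each argument is an efficiency between 0 (worst) and 1 (best).
function illustrativeVmDrsScore(cpuReadyEff, cacheEff, swapEff, burstEff)
{
    return Math.round(cpuReadyEff * cacheEff * swapEff * burstEff * 100);
}
System.log(illustrativeVmDrsScore(0.95, 0.9, 1.0, 0.85) + "%"); // -> 73%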

The most interesting part: the VM DRS score is calculated every single minute, compared to older versions where DRS evaluated the cluster's state only once every five minutes. That gives you a far more granular view.

Cluster DRS Score

There is also a cluster DRS score, which is defined as the average VM DRS score of all the virtual machines in the cluster.
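
Since it is just an average, the math is trivial; a purely illustrative sketch:

// Purely illustrative: cluster DRS score as the average of per-VM scores
function clusterDrsScore(vmScores)
{
    var sum = 0;
    for (var i = 0; i < vmScores.length; i++) { sum += vmScores[i]; }
    return sum / vmScores.length;
}
System.log(clusterDrsScore([72, 85, 90, 65])); // -> 78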

Scalable shares:

A very interesting concept!

Scalable shares can be enabled at the cluster level and/or at the resource-pool level.

What's new is that when you set a resource pool's share level to "High", DRS now makes sure the VMs in that pool really do get resource priority over VMs in lower-share resource pools.

In earlier DRS versions, VMs in a resource pool with shares set to "Normal" could end up with the same per-VM resources as VMs in a "High"-share resource pool: a higher share value did not guarantee a higher resource entitlement. Scalable shares fix this.
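
A quick worked example of the old pitfall, using the default per-pool CPU share values (High = 8000, Normal = 4000):

// Classic (non-scalable) shares: per-VM entitlement inverts when the
// "High" pool holds many more VMs than the "Normal" pool.
var highPoolShares = 8000, highPoolVms = 16;
var normalPoolShares = 4000, normalPoolVms = 4;
System.log("Per-VM share, High pool:   " + highPoolShares / highPoolVms); // 500
System.log("Per-VM share, Normal pool: " + normalPoolShares / normalPoolVms); // 1000
// With scalable shares, a pool's share value scales with what is inside it,
// so the inversion above can no longer happen.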

This setting can be found under Cluster Settings > vSphere DRS > Additional Options > Scalable Shares.

Wrap Up:

We have only touched on the DRS part. We haven't discussed the improved (refactored) vMotion or Assignable Hardware, both of which also play a major part in DRS.

I hope this article was helpful.

Stay Tuned, and follow the Blog!

For more information on vSphere 7.0, please visit –

Set SCSI controllers for VM hard disks: vRO workflow

Hello All,

SQL servers on VMware infrastructure need to be built per the recommended guidelines. One of the major recommendations is to assign specific SCSI controllers to the data disks of a SQL server. The idea is to give each data disk a dedicated SCSI controller so that all I/O does not pass through a single controller.

Recently, I came across a use case where, as part of VM provisioning from vRA, I needed to point my SQL server VM's disks 3/4/5 to different ParaVirtual SCSI controllers.

Hard disk addition to a VM can be handled as part of the blueprint or via an XaaS request from vRA.

Steps for the vRO workflow to configure SCSI controllers:

Step 1: Shut down the VM [used the built-in workflow "Power off the virtual machine and wait"]

Step 2: Create 3 additional SCSI controllers [copied the built-in vRO action "createVirtualScsiControllerConfigSpec" 3 times, updating both controller.key and controller.busNumber to 1, 2, and 3 respectively for SCSI controllers 1, 2, and 3]

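For reference, here is a minimal sketch of what each of those copied actions returns; this is my reconstruction under stated assumptions, not the stock action verbatim:

// Builds an "add" device spec for one ParaVirtual SCSI controller.
// key/busNumber were set to 1, 2 and 3 respectively in the three copies.
var controllerSpec = new VcVirtualDeviceConfigSpec();
controllerSpec.operation = VcVirtualDeviceConfigSpecOperation.add;

var scsiController = new VcParaVirtualSCSIController();
scsiController.key = 1;
scsiController.busNumber = 1; // bus 0 already carries the default controller
scsiController.sharedBus = VcVirtualSCSISharing.noSharing;

controllerSpec.device = scsiController;
return controllerSpec; // consumed later as actionResult / actionResult1 / actionResult2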

Step 3: Reconfigure the VM to add the SCSI controllers above

Step 4: Identify the hard disks by label and point each disk to a new SCSI controller

Both Steps 3 and 4 are handled by the code below:

// Step 3: add the three SCSI controller specs built in Step 2
var configSpec = new VcVirtualMachineConfigSpec();
configSpec.deviceChange = [actionResult, actionResult1, actionResult2];

var task = vm.reconfigVM_Task(configSpec);
System.sleep(5000); // crude wait; polling the task state would be more robust

// Locate the newly added controllers by their device labels
var controller, controller1, controller2;
for each (var device in vm.config.hardware.device)
{
    var label = device.deviceInfo.label;
    if (label == "SCSI controller 1")
    {
        controller = device;
        System.log("Found Controller 1: " + controller.key);
    }
    else if (label == "SCSI controller 2")
    {
        controller1 = device;
        System.log("Found Controller 2: " + controller1.key);
    }
    else if (label == "SCSI controller 3")
    {
        controller2 = device;
        System.log("Found Controller 3: " + controller2.key);
    }
}
// Fail if ANY controller is missing (the original check used && and would
// only have failed when all three were missing)
if (!controller || !controller1 || !controller2)
{
    throw "ERROR: Controller not found";
}

// Step 4: point Hard disks 3/4/5 to their dedicated controllers
var diskConfigSpecs = new Array();
for each (var device in vm.config.hardware.device)
{
    var label = device.deviceInfo.label;
    var targetController = null;
    if (label.indexOf("Hard disk 3") > -1) { targetController = controller; }
    else if (label.indexOf("Hard disk 4") > -1) { targetController = controller1; }
    else if (label.indexOf("Hard disk 5") > -1) { targetController = controller2; }

    if (targetController)
    {
        System.log("Found disk to change controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = targetController.key;
        diskConfigSpec.device.unitNumber = 0; // first slot on its dedicated controller
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    }
}

// Apply the disk-to-controller mapping in a second reconfigure
configSpec = new VcVirtualMachineConfigSpec();
configSpec.deviceChange = diskConfigSpecs;
task = vm.reconfigVM_Task(configSpec);
System.sleep(5000);

Step 5: Power on the VM [used the built-in workflow "Start the virtual machine and wait"]

Final workflow schema: [screenshot of the completed workflow]

Step 6: Integrate with vRA by configuring the workflow to be triggered as part of your existing machine provisioning subscription, or create a new one if you don't have one already. [If you want to know how, comment below and I will write another blog post about it.]

Thanks,

Welcome, vSphere 7 and Tanzu Mission Control


We were all waiting for this day. Today VMware announced a few major products with a single objective: to fuel app modernization. It's no secret what these products are. Yes, you got it right: vSphere 7, Tanzu Mission Control, and VCF 4.0. Here is a brief overview of these new releases.

vSphere 7 runs Kubernetes clusters natively on the existing vSphere platform. VMware admins get a few new constructs to manage, like namespaces, Kubernetes pods, and containers. (Honestly, I don't claim to know these new constructs well yet, so you are on your own if you fall into the same category as me.) Please refer to https://blogs.vmware.com/vsphere/2020/03/vsphere-7.html to learn more about vSphere 7.

Tanzu Mission Control: there was certainly a buzz around Project Pacific and Tanzu Mission Control. With Tanzu Mission Control, you can build, run, and manage Kubernetes clusters running on vSphere, in a public cloud, or even on bare-metal servers. With the help of the Tanzu portfolio, you get consistent Kubernetes operations across any cloud platform of your choice.

I am sure you will find plenty of blog posts about VMware's new product portfolio. However, here are some of my key takeaways from the event.

  • After the announcement at VMworld 2019, I didn't expect VMware to release vSphere 7 this soon; I was expecting the release at VMworld 2020. Anyway, this is great news and a release worth welcoming.
  • I loved how the new vSphere constructs for Kubernetes look inside vSphere. This was certainly a big change, but the way it is introduced to both developers and VMware admins is simply awesome: both communities get a native look and feel. VMware admins won't be surprised when they first see namespaces, pods, and containers spinning up in vSphere; on the other side, developers continue to work with Kubernetes just as they have in the past. Please see this demo to understand more: https://www.vmware.com/products/vsphere.html
  • Any cloud, any device, any app: VMware's bet on leading the hybrid/multi-cloud space is visible. When you look at VCF 4.0 or Tanzu Mission Control, you can feel what VMware has been saying all along for the last few years. After a decade of debate and discussion, it is clear that edge computing is a real phenomenon and that multi-cloud or hybrid cloud is the reality. Such an environment certainly poses a great challenge for security and operations: how do you keep security at its best across all these spheres? How do you keep operations consistent across IT organizations? We still have to wait some time to see the outcomes of VMware's "any cloud, any device, any app" strategy, but overall it looks promising.

That's it from my side on the recent release event. I would also like to share some HOL labs you can go through, and a blog post I found very useful for ramping up on Kubernetes and containers.

  • HOL Labs
    • HOL-2032-91-CNA – VMware Tanzu Mission Control Simulation
    • HOL-2013-01-SDC – vSphere 7 with Kubernetes – Lightning Lab
    • HOL-2044-01-ISM – Modernizing Your Data Center with VMware Cloud Foundation
  • Project Pacific for New Users by @lnmei

Thanks,