VMware Cloud Foundation 4 Series – Part 1

VMware Cloud Foundation 4 represents an important step toward a hybrid cloud platform that supports native Kubernetes workloads and management alongside your traditional VM-based workloads.

VMware Cloud Foundation

Cloud Foundation has an automated deployment process, known as bring-up, which deploys a standardized, workload-ready private cloud in a matter of hours! By any measure, VMware Cloud Foundation 4 is a massive release that delivers a complete full stack of capabilities, more than I can cover in a single blog post. Hence this series, where we will look at the concepts of VCF 4 in depth.

VCF 4 Architecture

Before we get into the architecture, let's look at the software building blocks of VCF 4.0:

VCF 4.0 includes vSphere 7.0, vSAN 7.0, NSX-T 3.0, vRA 8.1, and vRealize Suite Lifecycle Manager 8.1, as well as SDDC Manager to manage your virtual infrastructure domains. You can find detailed information on the Cloud Foundation Bill of Materials (BOM) here. One thing to note is that you cannot upgrade from VCF 3.x to VCF 4.x; VCF 4 has to be deployed as a greenfield deployment only. This limitation is currently being worked on, and we can expect a direct upgrade path in coming releases.

Workload Domain


A workload domain is a purpose-built, logical SDDC instance made up of one or more vSphere clusters, with a dedicated vCenter Server and a dedicated or shared NSX-T instance. It also includes dedicated vSAN ReadyNodes. Provisioning is automated, and a Cloud Foundation instance can support up to 15 workload domains.

A common customer question is how many vCenter Server licenses are required during deployment. The answer: you need only a single vCenter Server license, entered during the initial deployment, and it covers all the vCenter Server instances deployed within VCF.

Management Domain


This is a special-purpose domain that is created automatically during the initial deployment (also called the bring-up process). It requires a minimum of four hosts and vSAN storage; vSAN is the only principal storage option for the management domain.

The management domain hosts all the infrastructure components, such as SDDC Manager, vCenter Server, the NSX-T Managers, and the NSX Edge instances. It can also host third-party management applications such as backup servers, Active Directory domain controllers, and so on.

The management domain in VCF 4 now has a smaller footprint because it contains fewer VMs. This is because the PSCs are now embedded (external PSCs are not supported) and the NSX Managers and Controllers are integrated into a single appliance!

Since the PSCs are embedded, the way SDDC Manager works with them has also changed. As new vCenter Servers are deployed, SDDC Manager configures a replication ring topology across all the embedded PSCs, and SDDC Manager authenticates to the ring rather than to individual PSCs!

Virtual Infrastructure (VI) Workload Domain


A VI workload domain contains one or more vSphere clusters designed to run the customer's applications and has a dedicated vCenter Server, which is deployed in the management domain.

While deploying a VI workload domain, admins have the option of deploying a new NSX-T instance or sharing an NSX-T instance with an existing VI workload domain. Admins can also choose between vSAN, NFS, or FC as the principal storage, unlike the management domain, where vSAN is the only principal storage option.

For the first VI workload domain, the workflow deploys a cluster of three NSX Managers in the management domain and configures a virtual IP (VIP) address for the NSX Manager cluster. Subsequent VI workload domains can share an existing NSX Manager cluster or deploy a new one as stated above.

The management domain and deployed VI workload domains are logical units that carve up the compute, network, and storage resources of the Cloud Foundation system. 

VCF 4 Deployment Types

There are two deployment models, chosen based on the size of the environment.

Consolidated Architecture:

In this model, customer workloads run in the management domain, as simple as that! There is a single shared vCenter Server, and customer workloads are deployed into resource pools. This is recommended for small deployments and uses a minimum of four servers. A consolidated deployment uses vSAN as the principal storage, with no option to select any other storage type.


Standard Architecture

The standard architecture aligns with industry best practices by separating management workloads from customer workloads. It is recommended for medium to large deployments and requires a minimum of 7 servers (8 recommended) to deploy.

The management domain is dedicated to infrastructure workloads, while dedicated VI workload domains host the user workloads. You can run a maximum of 15 workload domains, including the management domain. An important point to note is that the vCenter Servers run in Enhanced Linked Mode.


Key Takeaways

  1. Consolidated deployments use one workload domain and one vCenter Server.
  2. Standard deployments use a separate vCenter Server for each domain.
  3. Multiple clusters are supported in both consolidated and standard architectures.
  4. Stretched vSAN deployments are supported.
  5. Consolidated deployments use vSAN as the principal storage.
  6. It's important to note that VCF 4.1 now supports vVols as principal storage in VCF workload domains, delivering greater flexibility and more storage options.
  7. Each workload domain can consist of multiple clusters and can scale up to the VMware documented maximums!
  8. More information on Consolidated Architecture limitations with Cloud Foundation (70622) – https://kb.vmware.com/s/article/70622

vRealize Automation 8.0 – A lot has changed!

Hello Everyone!

It is always good to catch up on new products and features coming to the industry (although it has been a year since vRA 8.0 was released, it is still new to many!). So, in this blog post, I'll try to give an insight into the new vRealize Automation 8.0 architecture and talk about what exactly has changed.

“vRealize Automation automates the delivery of virtual machines, applications, and personalized IT services across different data centers and hybrid cloud environments.”

vRealize Automation 8.0 is a huge change! What was previously known as CAS (Cloud Automation Services) is now vRealize Automation Cloud, and it has an on-premises counterpart, which is vRealize Automation 8.0. One thing to note is that vRA Cloud and vRA 8.0 share the same code base and offer the same user experience; the main difference is how they are delivered!

Difference between 8.0 and previous versions

More information about the product – https://docs.vmware.com/en/vRealize-Automation/index.html?topic=%252Fcom.vmware.vra.programming.doc%252FGUID-75940FA3-1C17-451C-86FF-638E02B7E3DD.html

vRealize Automation 8.0

vRealize Automation 8.0 brings the vRealize Automation Cloud capabilities to the on-premises form factor. This release modernizes the vRA 8 architecture and capability set to enable enhanced agility, efficiency, and governance in the enterprise.

In Simple Terms, vRA 8.0 is an on-premises solution of vRealize Automation Cloud.

This release of vRealize Automation uses a Kubernetes-based microservices architecture. It takes a modern approach to delivering hybrid cloud management: extending cloud management to public clouds, delivering applications with DevOps, and managing Kubernetes-based workloads.

vRealize Automation Components

It consists of four core components:

1. VMware Cloud Assembly: Cloud Assembly is the service you use to create and deploy machines, applications, and services to multiple clouds such as Google Cloud, native AWS, Azure, and VMC on AWS. Cloud Assembly provides several key features:

  • Multiple cloud accounts.
  • Infrastructure as code: Supports blueprints with versioning.
  • Self-service provisioning.
  • Marketplace: Integrates with the VMware Solution Exchange (solutionexchange.vmware.com), which publishes built-in blueprints that can be accessed through the Marketplace.
  • Extensibility: A built-in feature of vRA; you can use the XaaS capability for custom requests.
  • Kubernetes integration: You can deploy Kubernetes clusters through vRA, or import existing Kubernetes clusters into vRA.

2. VMware Service Broker: Service Broker aggregates content in native formats from multiple clouds and platforms into a single catalog for easy consumption.

3. VMware Code Stream: Code Stream is continuous integration and continuous delivery (CI/CD) software that enables you to deliver software rapidly and reliably, with little overhead.

4. vRealize Orchestrator: Takes care of third-party integrations and custom scripting, and supports lifecycle actions through the Event Broker Service.

Since Cloud Assembly, Code Stream, Orchestrator, and Service Broker live in the same appliance, logins are passed between the applications; users can switch between them seamlessly without logging in each time!

vRealize Automation Architecture

The vRA appliance is based on Photon OS and includes a native Kubernetes installation on the OS to host the containerized services. Now, what does that mean? When the vRealize Automation appliance is deployed, Docker is installed and a Kubernetes cluster is configured at first boot. The Docker images for the services are stored in a private Docker registry on the appliance.

Role of Kubernetes

For those who don't know what Helm is: it is a package manager for Kubernetes. Helm packages, configures, and deploys applications and services onto Kubernetes clusters. Here, Helm takes the images stored in the private Docker registry on the appliance and deploys the Kubernetes services that run as pods.

vRealize Automation has 16 core services, and all of them are deployed and managed as pods, each with its own web server, running on the Kubernetes cluster.


There are two more components that also get installed as part of the vRealize Automation on-premises solution.

  • vRealize Suite Lifecycle Manager (LCM): Provides a single installation and management platform for the products in the vRealize Suite. It delivers complete lifecycle and content management capabilities for vRealize Suite products, helping customers accelerate time to value by automating deployment, upgrades, and configuration, while bringing DevOps principles to the management of vRealize Suite content.

  • VMware Identity Manager (IDM): An Identity-as-a-Service (IDaaS) solution. It provides application provisioning, conditional access controls, and single sign-on (SSO) for SaaS, web, cloud, and native mobile applications.


Namespaces are a way to divide Kubernetes cluster resources between multiple users. All the core vRealize Automation services run as Kubernetes pods within a namespace called “prelude”. You can explore the vRA environment by running the commands below on the vRA appliance over SSH:

  1. To list all the pods running:

kubectl get pods -n prelude

  2. To get the number of containers in a pod:

kubectl describe pod <pod_name> -n prelude

  3. To list all the services running:

kubectl get services -n prelude

  4. To list the deployments running:

kubectl get deployments -n prelude

(A deployment is responsible for keeping a set of pods running.)

Key Takeaways

  • All the core vRA services run as pods, each with its own web server, on the appliance's Kubernetes cluster.
  • vRealize Automation 8 cannot be installed on your own Kubernetes environment. It comes as an appliance with all the bits and pieces needed to run vRA 8, and that is the only way VMware can support it.
  • Besides the four core vRealize Automation 8 components (Cloud Assembly, Service Broker, Code Stream, and Orchestrator), two supporting services, VMware Identity Manager and vRealize Suite Lifecycle Manager, are needed to install and run vRealize Automation 8.
  • If you don't have an LCM and/or IDM instance running, the easy installer will set one up for you. You can also use existing LCM and IDM instances for vRealize Automation 8.
  • There is no Windows IaaS server in vRA 8.0.





I hope this article was helpful. Any suggestions/comments are highly appreciated!

vLeaderConnect EP1: In conversation with Joe Beda


Hello and welcome everyone,

“There is always a first time for everything,” and #vLeaderConnect was exactly that for us at VMUG Pune. I proposed the idea to all the committee members during one of our internal discussions. The sole objective of vLeaderConnect was to get an insight into how technical leaders carry themselves, personally and professionally. It was also an opportunity for us to bring some brilliant minds to the VMUG Pune forum and let the community experience their thoughts and wisdom.

We chose Joe Beda as our first guest, for two reasons. First, Kubernetes is making its way into the VMware ecosystem with VMware's recent product releases. Second, Joe Beda is a co-creator of Kubernetes, and he is at the center of all the magic happening at VMware. We reached out to Joe, and he agreed to the discussion without hesitation.

On the preparation side, all the community leaders scrambled to make it happen. We reached out to community members and gathered their feedback to understand what they wanted to discuss with Joe. It was heartening to see the response we received from community members across the globe. We sent invites to our friends from VMUG Romania, VMUG France, VMUG Japan, VMUG Argentina, and other VMUG communities. I must say they appreciated the effort we put in and turned up on the day of the event irrespective of the time zone.

The event

We expected a turnout of 100+ participants, and we did receive the expected response from the community. It was clear on our part that we would not have a scripted conversation with Joe. We did brief him on the topics we would discuss, but the questions, follow-ups, and discussion were impromptu. Joe was very supportive during the entire conversation. He was candid in his thoughts, spoke his mind, and, most importantly, he spoke as himself without wearing his big credentials on his sleeve. We are really thankful to everyone who joined the event and showed their support. If you couldn't join, the recording of the session is below for you to go through.


Evolution of Kubernetes

Borg was a ten-year-old project written in C++, used internally at Google. The experience with Borg showed that there were other ways to manage and deploy software beyond starting a VM or a server, made possible by the benefits of containerized workloads. Borg essentially gave us a roadmap for how these things could work in the future, and having that roadmap was very instructive for Kubernetes. The next challenge was that, with GCE, Google was very late to the public cloud market, so there was a discussion within Google about how to shake things up, create opportunities to reset the conversation, and move it to a place where Google could compete on a more level playing field with GCP versus AWS. The solution was to offer containerized workloads to Google customers by turning an internal product into an external one.

Language selection while writing Kubernetes (C++ vs. Go)

We wanted to make Kubernetes an open-source project. At that time, the Docker and Go communities were shaping up really well, so asking open-source communities to contribute to the Kubernetes project became much simpler with Go. Also, Go hits a sweet spot: it is low-level enough that you can write system software in it, but high-level enough that it removes a lot of the complexity you get with C or C++.

Tanzu Portfolio

It's a portfolio of products that work well together, not a platform. You can pick the products that work for you, and it isn't VMware-only; we live in a multi-vendor world, and you can still manage container workloads running elsewhere.

Message to vSphere Admin

View this as an opportunity, not a threat. VMware wants the conversation to be vSphere AND cloud (not or). Use these tools for change in your organization.

I am sure there is a lot more to the conversation than what I have shared in these highlights, so I highly recommend watching the complete video to learn more.

Love received from the community

I am just highlighting some of the responses we received from the VMUG community.

I know it was our first ever attempt to host something like this at VMUG Pune. Stay tuned with us and keep supporting us.

Visit vmug.com and be part of a larger tech evangelist group around you.




vSphere 7.0: DRS Re-Designed

vSphere 7.0 is a major release in itself, with lots of new features such as Kubernetes support (Project Pacific), vCenter Server Profiles, vSphere Lifecycle Manager (vLCM), improved certificate management, refactored vMotion, and more.

But the one that caught my eye, and that has been completely redesigned after a span of 15 years, is DRS (Distributed Resource Scheduler).

How did DRS work before vSphere 7.0?

DRS was released back in 2006 and has not changed much since then. There were a couple of enhancements in vSphere 6.7 (new initial placement, NVM support, and enhanced resource pool reservations), but the version of DRS up to vSphere 6.7 used a cluster-centric model. In simple words, resource utilization was always balanced across the cluster.

DRS up to vSphere 6.7 used a cluster-centric model.

It's important to know that DRS monitored the cluster's balance state once every five minutes by default and took the necessary actions to fix any imbalance by live-migrating VMs onto other hosts using vMotion.

In this way, DRS ensured that each virtual machine in the cluster got the host resources, such as CPU and memory, that it needed.

What has changed in DRS in vSphere 7.0?

VMware shifted the focus from a cluster-centric to a workload-centric model. Whenever a VM runs on an ESXi host, DRS now calculates a “VM DRS score” for it. This is a totally new concept!

This score indicates whether the VM is “happy” enough on that particular ESXi host. Let's see what it is!

VM DRS Score

  • The VM DRS score, also called the “VM happiness” score, can be defined as the execution efficiency of a virtual machine.
  • Values closer to 0% (not happy) indicate severe resource contention, while values closer to 100% (happy) indicate mild to no resource contention.
  • The VM DRS score “works” in buckets: 0-20%, 20-40%, 40-60%, 60-80%, and 80-100%.
  • A lower bucket score does not directly mean that the VM is not running properly; it means the execution efficiency is low.
  • DRS tries to maximize the execution efficiency of each virtual machine while ensuring fairness in resource allocation to all virtual machines in the cluster.

How is the VM DRS score calculated?

The VM DRS score is calculated per VM, for that single workload, against all the hosts within the cluster.

There are several metrics responsible for VM DRS Score –

  • Performance: DRS looks at the CPU ready time, CPU cache behavior, and swapped memory of the VM.
  • Capacity of the ESXi host: DRS looks at the headroom an ESXi host has and checks whether the application/workload can burst enough on the ESXi host it is running on. This parameter is also called the VM burst capacity.
  • Migration cost: The cost of migrating a VM from one ESXi host to another. So you won't see lots of vMotion operations happening now (only if your DRS is set to Fully Automated; a quick PowerCLI check for this is sketched below the list).
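If you want to confirm or change a cluster's DRS automation level from PowerCLI, a minimal sketch might look like this (the cluster name is just a placeholder):

# Check the current DRS settings of the cluster
Get-Cluster -Name "Prod-Cluster" | Select-Object Name, DrsEnabled, DrsAutomationLevel

# Set DRS to Fully Automated so migration recommendations are applied automatically
Get-Cluster -Name "Prod-Cluster" | Set-Cluster -DrsAutomationLevel FullyAutomated -Confirm:$false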

The most interesting part is that the VM DRS score is calculated every minute, which gives a far more granular approach.

The VM DRS score is calculated every single minute, compared to older versions where DRS evaluated the cluster's state every 5 minutes.

Cluster DRS Score

There is also a cluster DRS score, which is defined as the average VM DRS score of all the virtual machines in the cluster. For example, if a cluster has three VMs with scores of 80%, 90%, and 100%, its cluster DRS score is 90%.

Scalable shares:

Very Interesting Concept!

Scalable shares are configured at the cluster level and/or the resource pool level.

What's new is that when you set the share level to “High”, it now ensures that VMs in a resource pool with High shares really do get resource prioritization over resource pools with lower shares.

In earlier DRS versions, it could happen that VMs in a resource pool with shares set to “Normal” got the same resources as those in a High-shares resource pool; a higher share value did not guarantee a higher resource entitlement. This issue is fixed with scalable shares.

This setting can be found under Cluster Settings > vSphere DRS > Additional Options > Scalable Shares.

Wrap Up:

We have only touched on the DRS part here. We haven't discussed the improved (refactored) vMotion or Assignable Hardware, which also play a major part in how DRS behaves.

I hope this article was helpful.

Stay Tuned, and follow the Blog!

For more information on vSphere 7.0, please visit –

Set SCSI controllers to a VM HDD: vRO workflow

Hello All,

SQL Server VMs on VMware infrastructure need to be built according to the recommended guidelines. One of the major recommendations is to assign specific SCSI controllers to the data disks of a SQL Server VM. The idea is to give each data disk a dedicated SCSI controller so that all I/O does not pass through a single SCSI controller.

Recently, I came across a use case where, as part of VM provisioning from vRA, I needed to point my SQL Server VM's disks 3/4/5 to different ParaVirtual SCSI controllers.

Adding the hard disks to the VM can be handled as part of the blueprint or through an XaaS request from vRA.

Steps for vRO Workflow to configure SCSI Controller:

Step 1: Shut down the VM [used the built-in workflow “Power off the virtual machine and wait”]

Step 2: Create three additional SCSI controllers [copied the built-in vRO action “createVirtualScsiControllerConfigSpec” three times, updating both controller.key and controller.busNumber to 1, 2, and 3 respectively for SCSI controllers 1, 2, and 3]
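For reference, here is a minimal sketch of what one of those copied actions could return (this is illustrative, not the exact built-in action; it assumes the standard vRO vCenter plug-in types, and the temporary key and bus number are adjusted per copy):

// Hypothetical sketch of one copied action (this one for SCSI controller 1).
// It returns a VcVirtualDeviceConfigSpec that adds a ParaVirtual SCSI controller.
var controllerSpec = new VcVirtualDeviceConfigSpec();
controllerSpec.operation = VcVirtualDeviceConfigSpecOperation.add;

var pvscsi = new VcParaVirtualSCSIController();
pvscsi.key = -101;                                  // temporary negative device key
pvscsi.busNumber = 1;                               // 1, 2 or 3 depending on the copy
pvscsi.sharedBus = VcVirtualSCSISharing.noSharing;  // no bus sharing

controllerSpec.device = pvscsi;
return controllerSpec;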


Step 3: Reconfigure the VM to add the SCSI controllers created above

Step 4: Identify the hard disks by their labels and point each disk to a new SCSI controller

Both Steps 3 and 4 are handled by the code below:

// Step 3: add the three new SCSI controllers.
// actionResult, actionResult1 and actionResult2 are the VcVirtualDeviceConfigSpec
// outputs of the three copied actions from Step 2.
var configSpec = new VcVirtualMachineConfigSpec();
var deviceConfigSpec = new Array();

deviceConfigSpec[0] = actionResult;
deviceConfigSpec[1] = actionResult1;
deviceConfigSpec[2] = actionResult2;

configSpec.deviceChange = deviceConfigSpec;

task = vm.reconfigVM_Task(configSpec);
// Wait for this task to complete (for example with the built-in "Wait for task"
// element) so that the new controllers are visible in vm.config below.

// Step 4: locate the new controllers by their labels.
var controller, controller1, controller2;
for each (var device in vm.config.hardware.device) {
    var label = device.deviceInfo.label;
    if (label == "SCSI controller 1") {
        controller = device;
        System.log("Found Controller 1 : " + controller.key);
    } else if (label == "SCSI controller 2") {
        controller1 = device;
        System.log("Found Controller 2 : " + controller1.key);
    } else if (label == "SCSI controller 3") {
        controller2 = device;
        System.log("Found Controller 3 : " + controller2.key);
    }
}
if (!controller || !controller1 || !controller2) {
    throw "ERROR: Controller not found";
}

// Point Hard disk 3/4/5 to the new controllers, one disk per controller, unit number 0.
var diskConfigSpecs = new Array();
for each (var device in vm.config.hardware.device) {
    var label = device.deviceInfo.label;
    if (label.indexOf("Hard disk 3") > -1) {
        System.log("Found disk to change controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = controller.key;
        diskConfigSpec.device.unitNumber = 0;
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    } else if (label.indexOf("Hard disk 4") > -1) {
        System.log("Found disk to change controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = controller1.key;
        diskConfigSpec.device.unitNumber = 0;
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    } else if (label.indexOf("Hard disk 5") > -1) {
        System.log("Found disk to change controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = controller2.key;
        diskConfigSpec.device.unitNumber = 0;
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    }
}

// Apply the disk changes in a second reconfigure task.
var configSpec = new VcVirtualMachineConfigSpec();
configSpec.deviceChange = diskConfigSpecs;
task = vm.reconfigVM_Task(configSpec);

Step 5: Power on the VM [used the built-in workflow “Start the virtual machine and wait”]

The final workflow schema will look like this:


Step 6: Integrate with vRA by configuring the workflow to be triggered as part of your existing machine provisioning subscription, or create a new subscription if you don't have one already. [If you want to know how, comment below and I will write another blog post about it.]


Welcome, vSphere 7 and Tanzu Mission Control


We were all waiting for this day. Today VMware announced a few major products with a single objective: to fuel app modernization. It is no secret what these products are. Yes, you got it right: vSphere 7, Tanzu Mission Control, and VCF 4.0. Please find a brief overview of these new releases below.

vSphere 7 runs Kubernetes clusters natively on the existing vSphere platform. VMware admins get a few new constructs to manage, such as namespaces, Kubernetes pods, and containers. (Honestly, I don't claim to know these new constructs well yet, so you are on your own if you fall into the same category as me.) Please refer to https://blogs.vmware.com/vsphere/2020/03/vsphere-7.html to learn more about vSphere 7.

Tanzu Mission Control: There was certainly a buzz around Project Pacific and Tanzu Mission Control. With Tanzu Mission Control, you can build, run, and manage Kubernetes clusters running on vSphere, public cloud, or even bare-metal servers. With the help of the Tanzu portfolio, you get consistent Kubernetes operations across any cloud platform of your choice.

I am sure you will find plenty of blog posts around the new product portfolio by VMware. However, I am highlighting some of the key takeaways from the event.

  • After the announcement during VMworld 2019, I didn't expect VMware to release vSphere 7 so soon; I was expecting the release at VMworld 2020. Anyway, this is great news and a welcome one.
  • I loved how the new Kubernetes constructs appear inside vSphere. This is a big change for vSphere, but the way it is introduced to both developers and VMware admins is simply awesome. Both communities get a native look and feel for the new feature: VMware admins won't be surprised when they first see namespaces, pods, and containers spinning up in vSphere, and developers will continue to work with Kubernetes just as they have in the past. Please see this demo to understand more: https://www.vmware.com/products/vsphere.html
  • Any cloud, any device, any app: VMware's bet on being a leader in the hybrid/multi-cloud space is visible. When you look at VCF 4.0 or Tanzu Mission Control, you can see what VMware has been saying all along for the last few years. After a decade of debates and discussions, it is clear that edge computing is a real phenomenon and that multi-cloud or hybrid cloud is the reality. This kind of environment certainly poses a great challenge for security and operations. How do you keep the security game at its best across all these spheres? How do you keep operations consistent across IT organizations? We still have to wait some time to see the outcomes of VMware's “any cloud, any device, any app” strategy, but overall it looks promising.

That's it from my side on the recent release event. I would also like to share some HOL labs that you can go through, and a blog post I found very useful for ramping up on Kubernetes and containers.

  • HOL Labs
    • HOL-2032-91-CNA – VMware Tanzu Mission Control Simulation
    • HOL-2013-01-SDC – vSphere 7 with Kubernetes – Lightning Lab
    • HOL-2044-01-ISM – Modernizing Your Data Center with VMware Cloud Foundation
  • Project Pacific for New Users by @lnmei


PSProvider and VMware datastore – PowerCLI

Hello everyone,

I am writing this short blog after a long time. While explaining the ins and outs of PowerShell to some friends in person, I discussed PSProviders. Knowledge about PSProviders is mostly informational; as script writers, we don't really bother about how PowerShell works with the different resources (variables, functions, file systems, registry keys, etc.) used in a PowerShell session or script.

However, as a VMware admin I do use a PSProvider in the background a lot in order to move datastore items from:

  1. datastore to datastore
  2. Datastore to local drive (Windows Drive or Shared Drive) or vice versa

In this post, we will learn about the Copy-DatastoreItem cmdlet and PSProviders.

What is PSProvider?

In simple terms, a PSProvider is a special repository (a data store surfaced inside PowerShell) that lets PowerShell work with that data during a session. The data is presented through corresponding PowerShell drives, known as PSDrives.

For example, run Get-PSProvider and look at its output.


By default, you get the Registry, Alias, Environment, FileSystem, Function, and Variable providers, and you can see the drives associated with each PSProvider. This means that if you create a variable, it is stored under Variable:, and if you create a function, it is stored under Function:.

For example, you can change into the respective drive and see a variable you have just created.
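A minimal sketch (the variable name here is just an example):

PS C:\> $greeting = "Hello from VMUG"
PS C:\> Set-Location Variable:
PS Variable:\> Get-Item greeting

Name                           Value
----                           -----
greeting                       Hello from VMUG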


In conclusion, whatever variables, functions, and so on that I create in PowerShell get stored in their respective drives.

VimDatastore PSProvider

VimDatastore is one of the PSProviders you get after connecting to a VMware vCenter Server via PowerCLI. Try it: run Connect-VIServer <vCenter-IP> and then run Get-PSProvider, and you will see additional PSProviders available to you. These providers expose the VMware inventory and datastore resources to PowerCLI/PowerShell.

So, after connecting to vCenter via PowerCLI, you can see additional PSDrives available to you, provided by two additional PSProviders. I can run cd vmstore: and actually list the datastores in the inventory (similar to how we list directories and files in a path), or list the host inventory.
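A minimal sketch of what this looks like (the vCenter address is a placeholder):

# Connect to vCenter; PowerCLI registers the VimDatastore and VimInventory providers
Connect-VIServer -Server vcenter01.lab.local

# List the providers and their drives; vmstore:/vmstores: and vi:/vis: should now appear
Get-PSProvider
Get-PSDrive

# Browse the datastore inventory just like a file system
Set-Location vmstore:
Get-ChildItem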

Once you are connected, you can use the commands below to create a new PSDrive with the VimDatastore PSProvider.
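Something along these lines should work (the datastore name and drive name are examples):

# Grab the datastore object and map it as a new PSDrive called DS: using the VimDatastore provider
$ds = Get-Datastore -Name "Datastore01"
New-PSDrive -Name DS -PSProvider VimDatastore -Root '\' -Location $ds

# Switch into the new drive and browse it
Set-Location DS:
Get-ChildItem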


Now the DS: drive is available to you, and you can navigate it the same way you navigate any other drive.

Use the command below to move data from your local drive to the VMware datastore using PowerCLI. Please note that I am already in DS:; if you are in any other drive, provide the full path using the VimDatastore drive.
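A sketch of both directions (the paths and file names are examples):

# Upload an ISO from the local drive to a folder on the datastore
Copy-DatastoreItem -Item C:\ISO\ubuntu-20.04.iso -Destination DS:\ISO\

# Download a file from the datastore back to the local drive
Copy-DatastoreItem -Item DS:\ISO\ubuntu-20.04.iso -Destination C:\Temp\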


Note: This method is quite helpful when you need to move things around on a datastore, and you can automate the move operation. It is also an alternative to the certificate error that you may receive while moving data from the web client. For example, the operation failed when I tried to upload the same ISO using the web client.


Use the PowerCLI VimDatastore PSProvider and the Copy-DatastoreItem cmdlet to work around this.













Install-Module -Name VMware.PowerCLI, behind the proxies!

Are you trying to install PowerCLI on a corporate server? If yes, then you might have faced errors similar to this:



Based on my experience, this issue happens mainly because your PowerShell session is not able to talk to the PowerShell Gallery through the NuGet package provider, usually because of the corporate proxy.


Sometimes you don't even have the required package providers. In that case, ensure FIPS-compliant encryption is disabled.

For detailed steps, please refer to the list below.

  1. Ensure you are running PowerShell version 5 or above. Run $PSVersionTable to check the version.
  2. Ensure you have the required package providers.
    • Open PowerShell as an administrator and run Get-PackageProvider.
    • If the default providers (including PowerShellGet) are listed, you are good; move on to the next step.
    • If you do not see any package providers, there is a possibility that FIPS is enabled on your system.
      • Disable FIPS:
        • Open gpedit.msc.
        • Navigate to Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options.
        • In the details pane, double-click “System cryptography: Use FIPS-compliant algorithms for encryption, hashing, and signing” and disable it.

Important: If you do not have the default package providers (more specifically PowerShellGet), then you will not be able to use commands such as Install-Module, Find-Module, Update-Module, and Save-Module.

3. Check the PSRepository

  • Ensure that the PowerShell Gallery is registered as a PSRepository.
  • Run this command:
    • Get-PSRepository
    • If you see a warning instead of PSGallery in the output, it means that no PSRepository is registered.
    • Register the PSRepository:
      • Run Register-PSRepository -Default to register the default PSGallery repository.
      • If you receive an error here, your corporate proxy server is not allowing your system to communicate with the repository.
    • Bypass the restriction by routing PowerShell's connections via the proxy server.
      • You will need the proxy server details (proxy server name and port number).
      • Create a PowerShell profile using the following steps, if one does not exist already:
        • New-Item -ItemType File -Path $Profile
        • Test-Path $profile
        • notepad $profile
      • Paste the lines of code shown below into your profile, change the proxy server address and port number to match your environment, then save and close the file.
      • This will allow communication with the PowerShell Gallery after you restart your PowerShell session.
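The lines below are one common way to do this (the proxy address and port are placeholders; adjust them for your environment):

# Route PowerShell's web requests through the corporate proxy
[System.Net.WebRequest]::DefaultWebProxy = New-Object System.Net.WebProxy('http://proxyserver.corp.local:8080')

# Use your logged-on Windows credentials to authenticate to the proxy
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

# The PowerShell Gallery requires TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12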

  • Run Get-PSRepository again, and you should now have PSGallery available and registered as a PSRepository.


4. Now you have the package provider and the PSRepository in place.

5. Run Install-Module -Name VMware.PowerCLI -Force

6. This requires NuGet, and since you have allowed PowerShell Gallery communication via the proxy, it will first install NuGet and then install VMware.PowerCLI.
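Once the install finishes, a quick sanity check might look like this (the vCenter address is a placeholder):

# Confirm the module is available and check its version
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version

# Load it and connect to a vCenter Server
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter01.corp.local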


Corporate systems often sit behind a proxy and sometimes have FIPS compliance enabled. These two security controls stop communication with the PowerShell Gallery. Disable FIPS if it is enabled and not required, and then allow communication to the PowerShell Gallery via the proxy server as explained above.


-Jatin Purohit