The vSphere DSC – Just another perspective

In the last couple of weeks, I have had rounds of meetings with our customers to discuss ways to automate ESXi build and configuration. The most common piece I found in each environment was vSphere Auto Deploy. Today, most of our customers deploy ESXi hosts using Auto Deploy and handle post-configuration tasks via host profiles. The majority of the questions and concerns I received were related to host profiles. My understanding is that customers tend to find host profiles difficult to understand, which is not the case in reality.

Host profiles are excellent. It’s just that you need to fine-tune them initially. You rarely run into issues once you have cracked host profiles. The key here is to set up a reference host and extract the host profile from it.

Having said that, let me bring you another perspective on doing the post-configuration tasks. Today many of you love to do Infrastructure as Code and believe in a configuration management ecosystem. When you look around at all the configuration management tools, you will find that vSphere Desired State Configuration (DSC) is very close to being a complete solution for vSphere configuration management.

vSphere DSC is an open-source module that provides PowerShell DSC resources for VMware. PowerShell DSC resources follow a declarative code style, like most configuration management tools, and allow you to document infrastructure configurations. The module has 71 resources that cover most of the configuration aspects of a vSphere infrastructure.

We shouldn’t look at vSphere DSC in isolation, but rather at complementing it with vSphere Auto Deploy. Think about it: PXE-boot ESXi hosts from vSphere Auto Deploy and let vSphere DSC do all the post-configuration for you. Isn’t that cool!

When you extract a host profile, you get all the configurations of an ESXi host, and at times you need to trim down the configurations to ensure that you have control over them.

vSphere DSC is just the opposite of this approach. You can start with an empty configuration file and keep adding resource definitions to it as and when required. A vSphere DSC configuration gives a complete picture of the configurations you want to ensure, and allows you to quickly replicate the same in other environments.

Just take a look at the snippet below and a demo of my lab configuration, which does a range of things on vCenter and an ESXi host.
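Since the original snippet was shared as an image, here is a minimal sketch of what a vSphere DSC configuration looks like. The host names and NTP servers below are placeholders, and this is a cut-down illustration rather than my full lab configuration:

```powershell
Configuration VSphereLabConfig {
    param(
        [Parameter(Mandatory)]
        [PSCredential] $Credential   # vCenter credentials
    )

    Import-DscResource -ModuleName VMware.vSphereDSC

    Node 'localhost' {
        # Ensure the ESXi host syncs time from known NTP servers
        VMHostNtpSettings 'NtpSettings' {
            Server           = 'vcenter.lab.local'   # vCenter to connect to
            Credential       = $Credential
            Name             = 'esxi01.lab.local'    # target ESXi host
            NtpServer        = @('0.pool.ntp.org', '1.pool.ntp.org')
            NtpServicePolicy = 'automatic'
        }
    }
}

# Compile to a MOF and apply it (requires the VMware.vSphereDSC module
# and appropriate configuration data for credential handling):
# VSphereLabConfig -Credential (Get-Credential) -OutputPath .\dsc
# Start-DscConfiguration -Path .\dsc -Wait -Verbose
```

You keep adding resource blocks like this one to the same configuration as your needs grow, which is exactly the opposite of trimming down an extracted host profile.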

Concluding this, I would say that vSphere DSC just opens up another way of automating infrastructure builds and configuration. The project has come a long way and has made significant improvements in terms of resource coverage.

Stay tuned with the vSphere DSC project and soon you will get new updates from the VMware PowerCLI team.


vLeaderConnect EP1: In conversation with Joe Beda


Hello and welcome everyone,

“There is always a first time for everything,” and #vLeaderConnect was exactly that for us at VMUG Pune. I proposed this idea to all the committee members during one of our internal discussions. The sole objective of vLeaderConnect was to get an insight into how technical leaders carry themselves, personally and professionally. It was also an opportunity for us to bring some brilliant minds onto the VMUG Pune forum and let the community experience their thoughts and wisdom.

We chose Joe Beda as our first guest. There were two reasons for it: first, Kubernetes is making its way into the VMware ecosystem with VMware’s recent product releases. Second, Joe Beda is the co-creator of Kubernetes, and he is at the center of all the magic happening at VMware. We reached out to Joe, and he agreed to talk with us right away.

On the preparation side, all the community leaders scrambled to explore how to make it happen. We reached out to community members and gathered their feedback to understand what they wanted to discuss with Joe. It was heartening to see the response we received from community members across the globe. We sent invites to our friends from VMUG Romania, VMUG France, VMUG Japan, VMUG Argentina, and other VMUG communities. I must say they appreciated the effort we put in and turned up on the day of the event irrespective of their time zones.

The event

We expected a turnout of 100+ participants, and we did receive the expected response from the community members. We were clear from our side that we would not have a scripted conversation with Joe. We briefed him on the topics we would be discussing, but the questions, follow-ups, and discussions were impromptu. Joe was very supportive during the entire conversation. He was candid in his thoughts, spoke his mind, and, most importantly, did not wear his big credentials on his sleeve. We are really thankful to everyone who joined the event and showed their support. If you couldn’t join the event, the recording of the session is below for you to go through.


Evolution of Kubernetes

Borg was a ten-year-old project written in C++ and used internally at Google. The experience with Borg showed that there were other ways to manage and deploy software beyond starting a VM or a server, made possible by the benefits of containerized workloads. Borg essentially gave us a roadmap for how these things could work in the future, and having that roadmap was very instructive for Kubernetes. The next challenge was that, with GCE, Google was very late to the public cloud market. So there was a discussion within Google about how to shake things up, create opportunities for Google to reset things, and move the conversation to a place where Google could compete on a more level playing field, GCP versus AWS. The solution was to offer containerized workloads to Google customers by turning an internal product into an external one.

Language selection while writing Kubernetes (C++ vs. Go)

We wanted to make Kubernetes an open-source project. At that time, the Docker and Go communities were shaping up really well, so asking open-source communities to contribute to the Kubernetes project became much simpler with Go. Also, Go hits a sweet spot: it is low-level enough that you can write system software in it, but high-level enough to remove a lot of the complexity you get with C or C++.

Tanzu Portfolio

It’s a portfolio of products that work well together, not a platform. You can pick the products that work for you, and it isn’t VMware-only. We live in a multi-vendor world; you can still manage container workloads running elsewhere.

Message to vSphere Admin

View this as an opportunity, not a threat. VMware wants the conversation to be vSphere AND cloud (not OR). Use these tools to drive change in your organization.

I am sure there is a lot more than what I have shared in the highlights section; I highly recommend watching the complete video to learn more.

Love received from the community

I am just highlighting some of the responses we received from the VMUG community.

I know this was our first-ever attempt at hosting something like this at VMUG Pune. Stay tuned with us and keep supporting us.

Visit VMUG and be part of a larger tech evangelist group around you.




Decrypt PSCredential object password and its applications

Hello Everyone,

I feel it’s no longer a secret that you can decrypt a PSCredential object and read the password in plain text. “Wait… I do not know what a PSCredential object is” — this is what you must be thinking. You stumble upon the PSCredential object as soon as you do basic PowerShell for system administration.

Get-Credential is the cmdlet that prompts you for a username and password. Once you enter them, what you get back is basically a PSCredential object.
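If you want to follow along without the interactive prompt, you can also construct a PSCredential object directly (the username and password here are just example values):

```powershell
# Prompt interactively for a username and password:
# $cred = Get-Credential

# Or build a PSCredential object non-interactively (example values)
$password = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ('labuser', $password)

# The result is a PSCredential object either way
$cred.GetType().FullName   # System.Management.Automation.PSCredential
```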


Now, Let’s take a look at the PSCredential Object.

I have stored the credentials in a variable, $cred, which is now a PSCredential object. When you run Get-Member on it, you learn more about the PSCredential object. Look at the screenshot below to understand more.


When I output $cred in the last command, it does show a username and password, but if you look closely at Password, you will see it is stored as a SecureString. This is good, because you do not want PowerShell to store the password in plain text.

However, there is sometimes a need to reuse the same credential to authenticate with other processes in your PowerShell script that require a plain-text password as input. There is also a limitation of the PSCredential object: it only works with cmdlets that know what a PSCredential object is. In fact, not all .NET classes understand the PSCredential object, so if you are calling a .NET class rather than a PowerShell cmdlet, you can’t reuse the PSCredential object directly. To use it, you need to decrypt the password from the PSCredential object and pass it to the respective class. Another example is invoking REST APIs: not all REST APIs understand PSCredential, which means you need to pass the username and password as plain text.

Check the example script below. Here I need to invoke a REST POST method that requires a username and password to authenticate. I have two parameters, Pwd (password) and Name (username). This specific API does not understand PSCredential, so I need to pass the credential’s password in plain text.
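Since the original screenshot is unavailable, here is a hedged reconstruction of what such a script looks like. The URI and body fields are hypothetical; this is not the actual API I was calling:

```powershell
# Hypothetical example: a REST API that only accepts a plain-text password
$body = @{
    Name = 'labuser'        # username parameter
    Pwd  = 'P@ssw0rd!'      # password hard-coded in plain text - insecure!
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri 'https://api.example.local/login' `
    -Body $body -ContentType 'application/json'
```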

Now, if I keep the password in the script like this, it is obviously not secure, because whoever has access to the script will be able to read the credential, which you clearly don’t want.

So what is the Solution?  Let’s try something.

Can I access the password directly from the PSCredential object?

No, you can’t, as it’s stored as a SecureString. Look at this example.


  • $cred.Password will not return the password in plain text
  • $cred.Password | ConvertFrom-SecureString will give you cipher data rather than the password in plain text
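You can verify both points yourself with a throwaway credential (example values again):

```powershell
# Build a sample credential (example values)
$secure = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ('labuser', $secure)

# Returns a SecureString object, not the plain-text password
$cred.Password

# Returns cipher data, still not the plain-text password
$cred.Password | ConvertFrom-SecureString
```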

So what’s the solution? Well, the solution is in the PSCredential object itself. Run $cred | Get-Member.


The PSCredential object has a method called GetNetworkCredential(). You can use this method to decrypt the password in the PSCredential object.

When I invoke this method and run Get-Member on the result, it shows the properties of the returned object, and you will find a property called Password. Run $cred.GetNetworkCredential().Password and it returns the password in plain text. Please refer to the screenshot below.
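In code form, with the same example credential as before:

```powershell
# Build a sample credential (example values)
$secure = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ('labuser', $secure)

# GetNetworkCredential() exposes the decrypted password as plain text
$cred.GetNetworkCredential().Password   # -> P@ssw0rd!
```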


So now I have modified the same script as below.


Yes, PSCredential stores the password as a SecureString, but it has a built-in method, GetNetworkCredential(), to decrypt it.

Is it safe to use?

I feel the answer is no. Once script execution stops or the runtime environment closes, the variables are disposed of and you no longer have access to them. However, there are ways you can exploit this feature with some tweaks to your PowerShell script. For example, I wrote the password to a text file. So yes, a PowerShell developer can write this one line of code to a txt file and exploit a feature that was intended to help you out.
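The tweak I mean really is a single line (the file name is illustrative):

```powershell
# Assume $cred was captured earlier in the script (example values here)
$secure = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential ('labuser', $secure)

# One line is enough to leak the decrypted password to disk
$cred.GetNetworkCredential().Password | Out-File -FilePath .\leaked.txt
```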


I am not sure what the right way to use credentials in a PowerShell script is. If you know a method that is definitely secure, do let me know in the comments here.




Welcome, vSphere 7 and Tanzu Mission Control


We were all waiting for this day. Today VMware announced a few major products with a single objective: to fuel app modernization. It’s no secret what these products are. Yes, you got it right: vSphere 7, Tanzu Mission Control, and VCF 4.0. Please find a brief overview of these new releases below.

vSphere 7 runs Kubernetes clusters natively on the existing vSphere platform. VMware admins get a few new constructs to manage, like namespaces, Kubernetes pods, and containers. (Honestly, I don’t claim to know these new constructs yet. So you are on your own in case you also fall into the same category as me.) Please refer to the official resources to know more about vSphere 7.

Tanzu Mission Control: there was certainly a buzz around Project Pacific and Tanzu Mission Control. With Tanzu Mission Control, you can build, run, and manage Kubernetes clusters running on vSphere, public cloud, or even bare-metal servers. With the help of the Tanzu portfolio, you get consistent Kubernetes operations across any cloud platform of your choice.

I am sure you will find plenty of blog posts around the new product portfolio from VMware. However, here are some of my key takeaways from the event.

  • After the announcement at VMworld 2019, I didn’t expect VMware to release vSphere 7 so soon; I was expecting this release during VMworld 2020. Anyway, this is great news and a welcome release.
  • I loved how the new vSphere constructs for Kubernetes look in vSphere. This was certainly a big change for vSphere, but the way it is introduced to both developers and VMware admins is simply awesome. Both communities get a native look and feel for the new feature. VMware admins won’t be surprised when they first see namespaces, pods, and containers spinning up in vSphere. On the other side, developers will continue to work with Kubernetes as they have in the past. Please see the demo to understand more.
  • Any cloud, any device, any app: VMware’s bet on being a leader in the hybrid/multi-cloud space was visible. When you look at VCF 4.0 or Tanzu Mission Control, you can feel what VMware has been saying all along for the last few years. After decade-long debates and discussions, it is clear that edge computing is a real phenomenon and that multi-cloud or hybrid cloud is the reality. This kind of environment certainly poses a great challenge for security and operations. How do you keep the security game at its best across all the spheres? How do you keep operations consistent across IT organizations? We still have to wait some time to see the outcomes of VMware’s any cloud, any device, any app strategy. Overall, it looks promising.

That’s it from my side on the recent release event. I would also like to share some HOL labs you can go through, and a blog post I found very useful for ramping up on Kubernetes and containers.

  • HOL Labs
    • HOL-2032-91-CNA – VMware Tanzu Mission Control Simulation
    • HOL-2013-01-SDC – vSphere 7 with Kubernetes – Lightning Lab
    • HOL-2044-01-ISM – Modernizing Your Data Center with VMware Cloud Foundation
  • Project Pacific for New Users by @lnmei


PSTK1: Getting Started with ‘NetApp PowerShell Toolkit’

Welcome back. As promised earlier, I am back with the new blog series, so let’s get started.

Note: from here on, I will use the abbreviation PSTK for the ‘NetApp PowerShell Toolkit’, as it is referred to that way in the NetApp documentation as well.

What is the NetApp PowerShell Toolkit?

The NetApp PowerShell Toolkit (PSTK) is packaged with two PowerShell modules: DataOntap and SANTricity. The DataOntap module helps manage NetApp storage devices run by the ONTAP management system, such as FAS, AFF, and NetApp Cloud. The SANTricity module is used to manage E-Series storage arrays and EF-Series flash arrays. In this blog series, I will focus only on the DataOntap PowerShell module.

I am highlighting some of the specifications of PSTK here:

  • Platform: Windows only; requires PowerShell 3.0 or above and .NET 4.5 or above
  • Available on PSGallery? No, not yet. This means you cannot download it with the Install-Module cmdlet of PowerShell
  • PowerShell Core: No, it does not support PowerShell Core yet, so you can’t use it on the Linux platform
  • Number of cmdlets: 2,300 or more in the DataOntap module and ~300 in the SANTricity module

Documentation and Download link

Why should I learn PSTK?

If you are a storage admin or engineer, you will discover that working in PowerShell gives you greater flexibility and automation capabilities compared to any other shell environment. If you have already worked with PowerShell, great: you can simply start using the PSTK module. If you haven’t worked with PowerShell, then know this: PowerShell is one of the simplest scripting platforms available to us. Invest some time and you will get it. 🙂

  • PowerShell is primarily a tool for administrators like us
  • PSTK is just a PowerShell module, so if you are already working with any other PowerShell module, you need almost zero additional skills to start working with PSTK, or any other module for that matter
  • The same script can help you to orchestrate things related to the different technology stack. For example, the same script can create a LUN with the help of the DataOntap PowerShell module and further creates a datastore in VMware with the help of PowerCLI (PowerShell Module for VMware vSphere)
  • Everything in PowerShell is an object
  • PowerShell’s command discoverability makes it easy to transition from typing commands interactively to creating and running scripts

How to Install?

Download the .msi installer file and click install. Ensure you are running PowerShell 3.0 or above.


If you are running PowerShell 4.0 or above, the module is imported automatically the moment you execute any command that is part of it. However, use the cmdlet below if you want to import the module into the PowerShell session explicitly.
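The original screenshot showed the import command; for reference, it is simply:

```powershell
# Explicitly import the DataOntap module into the current session
Import-Module DataOntap

# Verify that the module is loaded
Get-Module DataOntap
```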


Get-Command cmdlet

The cmdlet below lists all the commands available in the DataOntap module.
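This is plain Get-Command usage; the noun filter at the end is just one example of narrowing the list down (NcVol is the volume noun in the DataOntap module):

```powershell
# List every command shipped in the DataOntap module
Get-Command -Module DataOntap

# Count them, or filter for a specific noun, e.g. volume cmdlets
(Get-Command -Module DataOntap).Count
Get-Command -Module DataOntap -Noun NcVol*
```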

If you are entirely new to PowerShell, I highly recommend referring to the PowerShell documentation to start your learning.




Coming Soon: Blog Series, NetApp PowerShell Toolkit


Hello Everyone,

I used to consider myself a VMware engineer rather than anything else. Even though my core expertise is in the VMware compute domain, I understood well in advance that if I want to be a good VMware engineer, I must also work on the storage and network pieces of the infrastructure.

In 2019, I spent a good amount of time understanding and working on the storage side of the world, given the role I have at hand.

We use a data replication product that takes advantage of core NetApp functionality like linked clones, DD copy, and FlexVol to replicate data from the source to DR. In my current role, I am tasked with building, running, configuring, and testing the product, and with creating the operating procedures for Ops to follow. So obviously I break, rebuild, and re-configure my lab infrastructure multiple times. That is where PowerShell comes into the picture.

Why am I blogging about the NetApp PowerShell Toolkit?

I was using the NetApp PowerShell Toolkit for my own purposes and never thought of extending this knowledge to a larger audience. One day, I wanted some reports from another lab environment (obviously, I didn’t have the required access), so I requested help from our storage engineers. When I got the reports, they were all a few screenshots or simply a text export. When you deal with a large amount of data, you would love to get it in CSV or a similar format so you can process it easily. If you have experience with PowerShell, you have probably figured out by now what I am talking about. Yes, it’s Get-Something | Export-Csv. That is how easy it is with PowerShell. I felt like showing my friends a few PowerShell tips and tricks, and they simply loved it.
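As a tiny illustration of that pattern — the cmdlet here is the DataOntap volume cmdlet and assumes an existing controller connection, and the selected property names are an assumption on my part:

```powershell
# The general pattern: take structured objects from any Get-* cmdlet
# and write them out as CSV for easy processing
Get-NcVol | Select-Object Name, State, TotalSize |
    Export-Csv -Path .\volume-report.csv -NoTypeInformation
```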

For any system admin, be it compute, network, or storage, the challenges are the same. Everyone deals with data, and everyone needs to automate simple day-to-day tasks, if not large-scale automation. That is where PowerShell lets you win the battle. The same NetApp PowerShell Toolkit helped me write an orchestration that lets us migrate protected workloads from one storage controller to another.

This one year of experience with NetApp storage tells me there may be many more storage admins who are not aware of the NetApp PowerShell Toolkit. I intend this blog series to bring PowerShell capabilities and their advantages to storage engineers in their day-to-day work.

In the coming days, I will be writing about the NetApp PowerShell Toolkit and its usage. I will share some tips and tricks for NetApp storage orchestration via the PowerShell Toolkit.

I hope you will like this blog series. If you are currently working on NetApp storage, please do comment below: what would you like to read about? What are your current challenges as storage admins? If you are using the NetApp PowerShell Toolkit, how is your experience? Until then, stay tuned; I will come back with the first post in this series.

Please do subscribe and follow my blog if you haven’t done so far.







PSProvider and VMware datastore – PowerCLI

Hello everyone,

I am writing this short blog after a long time. While explaining the ins and outs of PowerShell to some of my friends in person, I discussed PSProviders. Most knowledge about PSProviders is informational only, and as script writers we don’t really bother about how PowerShell handles the different resources (variables, functions, file system, registry keys, etc.) used in a PowerShell session or a script.

However, as a VMware admin, I do use a PSProvider in the background a lot in order to move datastore items from:

  1. datastore to datastore
  2. datastore to a local drive (Windows drive or shared drive), or vice versa

In this post, we will learn about the Copy-DatastoreItem cmdlet and PSProviders.

What is PSProvider?

In simple terms, PSProviders are special repositories (data stored within PowerShell) that process data as PowerShell receives it during execution. This data is presented through their respective PowerShell drives, known as PSDrives.

For example, see the command output below from Get-PSProvider.
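If the screenshot doesn’t load for you, running the cmdlet in a fresh Windows PowerShell session gives output along these lines (drive letters will vary by machine):

```powershell
Get-PSProvider

# Name           Capabilities                  Drives
# ----           ------------                  ------
# Registry       ShouldProcess, Transactions   {HKLM, HKCU}
# Alias          ShouldProcess                 {Alias}
# Environment    ShouldProcess                 {Env}
# FileSystem     Filter, ShouldProcess, ...    {C}
# Function       ShouldProcess                 {Function}
# Variable       ShouldProcess                 {Variable}
```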


By default, you get the PSProviders above: Registry, Alias, Environment, FileSystem, Function, and Variable. You can also see the drives associated with each PSProvider. This means that if you create a variable, it is stored in Variable:, and if you create a function, it is stored in Function:.

Check the image below, where I go into the respective drive and can see the variable I created.
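You can reproduce the same thing yourself (the variable name and value here are just examples):

```powershell
# Create a variable, then find it inside the Variable: drive
$greeting = 'Hello PSProvider'

Get-ChildItem Variable: | Where-Object Name -eq 'greeting'

# Or fetch it directly through the provider path
(Get-Item Variable:\greeting).Value   # -> Hello PSProvider
```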


In conclusion, whatever variables, functions, etc. I create in PowerShell get stored in their respective drives.

The VimDatastore PSProvider

VimDatastore is one of the PSProviders you get after connecting to VMware vCenter via PowerCLI. Try this: run Connect-VIServer vcenter-ip and then run Get-PSProvider, and you will see that additional PSProviders are available to you. These providers expose the VMware inventory and datastore resources to PowerCLI and PowerShell.

So, after connecting to vCenter via PowerCLI, you can see that additional PSDrives are available to you, provided by two additional PSProviders. I can run cd vmstore: and actually list the available datastores in the datastore inventory (similar to how we list directories and files in a path), or list the host inventory.

Once you are connected, you can follow the commands below to create a new PSDrive with the ‘VimDatastore’ PSProvider.
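Since the screenshot is missing, here is a sketch of those commands. The vCenter host name and datastore name are placeholders for your own environment:

```powershell
# Connect to vCenter first (hostname is a placeholder)
Connect-VIServer -Server vcenter.lab.local

# Map a datastore to a new PSDrive called DS using the VimDatastore provider
$datastore = Get-Datastore -Name 'Datastore01'
New-PSDrive -Name DS -PSProvider VimDatastore -Root '\' -Location $datastore

# Navigate it like any other drive
Set-Location DS:
Get-ChildItem
```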


Now the DS: drive is available to you, and you can navigate it the same way you do any other drive.

Use the command below to move data from your local drive to the VMware datastore using PowerCLI. Please note that I am already in DS:; if you are in any other drive, give the proper path using the VimDatastore drive.
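With the missing screenshot in mind, the command looks like this; the file names and folder paths are examples only:

```powershell
# Upload an ISO from the local disk to the datastore drive
Copy-DatastoreItem -Item 'C:\ISO\ubuntu.iso' -Destination 'DS:\ISO\'

# And the other direction: pull a VM log from the datastore to the local disk
Copy-DatastoreItem -Item 'DS:\MyVM\vmware.log' -Destination 'C:\Temp\'
```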


Note: this method is quite helpful when you are trying to move things around a datastore, since you can automate the move operation. It is also a workaround for the certificate error you may receive while moving data through the Web Client. For example, the operation failed when I tried to upload the same ISO using the Web Client.


Use the PowerCLI VimDatastore PSProvider and the Copy-DatastoreItem cmdlet to work around this.