The vSphere DSC – Just another perspective

Over the last couple of weeks, I have had rounds of meetings with our customers to discuss ways to automate ESXi build and configuration. The common piece I found in every environment was vSphere Auto Deploy: today, most of our customers deploy ESXi hosts using Auto Deploy and handle post-configuration tasks via host profiles. The majority of questions and concerns I received related to host profiles. My understanding is that customers tend to find host profiles difficult to understand, which is not the case in reality.

Host profiles are excellent; you just need to fine-tune them initially. You will rarely hit an issue once you have cracked host profiles. The key is to set up a reference host and extract the host profile from it.

Having said that, let me bring you another perspective on doing the post-configuration tasks. Many of you today practice Infrastructure as Code and believe in a configuration management ecosystem. When you look across the configuration management tools, you will find that vSphere Desired State Configuration (DSC) is very close to being a complete solution for vSphere configuration management.

vSphere DSC is an open-source module that provides PowerShell DSC resources for VMware. PowerShell DSC resources follow a declarative code style, like most configuration management tools, and allow you to document infrastructure configuration. The module has 71 resources that cover most configuration aspects of a vSphere infrastructure.

We shouldn’t look at vSphere DSC in isolation; rather, it complements vSphere Auto Deploy. Think about it: PXE boot your ESXi hosts with vSphere Auto Deploy and let vSphere DSC do all the post-configuration for you. Isn’t that cool?

When you extract a host profile, you get all the configurations of an ESXi host, and at times you need to trim the configuration down to ensure that you keep control over what is applied.

vSphere DSC takes just the opposite approach. You can start with an empty configuration file and keep adding resource definitions as and when required. A vSphere DSC configuration gives a complete picture of the state you want to ensure, and it allows you to quickly replicate the same configuration in other environments.

Take a look at the snippet below, a small demo of the kind of lab configuration that does a range of things on vCenter and an ESXi host.
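Here is a minimal sketch in that spirit, based on the VMHostNtpSettings example from the project wiki; the server, credential, and host names are placeholders:

Configuration VMHostNtpSettings_Config {
    Param(
        [string] $Server,            # vCenter/ESXi to connect to (placeholder)
        [PSCredential] $Credential,  # connection credential (placeholder)
        [string] $VMHostName         # ESXi host to configure (placeholder)
    )

    Import-DscResource -ModuleName VMware.vSphereDSC

    Node localhost {
        # Ensure the host uses known NTP servers and starts the NTP service automatically
        VMHostNtpSettings vmHostNtpSettings {
            Server = $Server
            Credential = $Credential
            Name = $VMHostName
            NtpServer = @('0.pool.ntp.org', '1.pool.ntp.org')
            NtpServicePolicy = 'automatic'
        }
    }
}

You would compile and apply it with the usual DSC flow (invoke VMHostNtpSettings_Config to produce a MOF, then Start-DscConfiguration), and grow the Node block with more resources as needed.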

To conclude, I would say that vSphere DSC opens up another way of automating infrastructure builds and configuration. The project has come a long way and has made significant improvements in terms of resource coverage.

Stay tuned to the vSphere DSC project, and you will soon see new updates from the VMware PowerCLI team.

Learn More about vSphere DSC: https://github.com/vmware/dscr-for-vmware/wiki



vRealize Automation 8.0 – A lot has changed!

Hello Everyone!

It is always good to catch up on the new products and features coming to the industry (although it has been a year since vRA 8.0 was released, it is still new to many!). So, in this blog post, I’ll try to give an insight into the new vRealize Automation 8.0 architecture and talk about what exactly has changed.

“vRealize Automation automates the delivery of virtual machines, applications, and personalized IT services across different data centers and hybrid cloud environments.”

vRealize Automation 8.0 is a huge change! What was previously known as CAS (Cloud Automation Services) is now vRealize Automation Cloud, and it has its own on-premises version, which is nothing but vRealize Automation 8.0. One thing to note here is that vRA Cloud and vRA 8.0 share the same code base and offer the same user experience; the main difference is how they are delivered!

Difference between 8.0 and previous versions

More information about the product – https://docs.vmware.com/en/vRealize-Automation/index.html?topic=%252Fcom.vmware.vra.programming.doc%252FGUID-75940FA3-1C17-451C-86FF-638E02B7E3DD.html

vRealize Automation 8.0

vRealize Automation 8.0 brings the vRealize Automation Cloud capabilities to the on-premises form factor. This release modernizes the vRA 8 architecture and capability set to enable enhanced agility, efficiency, and governance in the enterprise.

In simple terms, vRA 8.0 is the on-premises edition of vRealize Automation Cloud.

This release of vRealize Automation uses a Kubernetes-based microservices architecture. It takes a modern approach to delivering hybrid cloud management: extending cloud management to public clouds, delivering applications with DevOps, and managing Kubernetes-based workloads.

vRealize Automation Components

It contains four core components –

1. VMware Cloud Assembly: Cloud Assembly is a cloud-based service that you use to create and deploy machines, applications, and services to multiple clouds such as Google Cloud, native AWS, Azure, and VMC on AWS. Cloud Assembly provides several key features:

  • Multiple cloud accounts.
  • Infrastructure as code: supports blueprints with versioning.
  • Self-service provisioning.
  • Marketplace: integrates with the VMware Solution Exchange (solutionexchange.vmware.com), whose published blueprints can be accessed through the Marketplace.
  • Extensibility: a built-in feature of vRA; you can use the XaaS feature for custom use cases.
  • Kubernetes integration: you can deploy a Kubernetes cluster through vRA, or import an existing Kubernetes cluster into vRA.

2. VMware Service Broker: Service Broker aggregates content in native formats from multiple clouds and platforms into a common catalog for easy consumption on VMware Cloud.

3. VMware Code Stream: Code Stream is continuous integration and continuous delivery (CI/CD) software that enables you to deliver software rapidly and reliably, with little overhead.

4. Orchestrator: Orchestrator takes care of third-party integrations and custom scripting, and it supports lifecycle actions through the Event Broker Service.

Since Cloud Assembly, Code Stream, Orchestrator, and Service Broker live in the same appliance, logins are passed between the applications, and users can switch between them seamlessly without logging in each time!

vRealize Automation Architecture

The vRA appliance is powered by a Photon OS base. It includes native Kubernetes installed on the OS to host containerized services. Now, what does that mean? When the vRealize Automation appliance is deployed, Docker is installed and a Kubernetes cluster is configured at first boot. Docker images are then stored in a private Docker registry on the appliance.

Role of Kubernetes

For those who don’t know what Helm is: it is a package manager for Kubernetes. Helm packages, configures, and deploys applications and services onto Kubernetes clusters. Here, Helm takes the images stored in the private Docker registry on the appliance and deploys the Kubernetes services that run as pods.

vRealize Automation has 16 core services, and all of them are deployed and managed as pods, each with its own web server, running on the Kubernetes cluster.


There are two more components which also get installed as a part of vRealize Automation On-premises solution.

  • vRealize Suite Lifecycle Manager (LCM): provides a single installation and management platform for the products in the vRealize Suite, delivering complete lifecycle and content management capabilities. It helps customers accelerate time to value by automating deployment, upgrades, and configuration, while bringing DevOps principles to the management of vRealize Suite content.

  • VMware Identity Manager (IDM): an Identity-as-a-Service (IDaaS) solution that provides application provisioning, conditional access controls, and single sign-on (SSO) for SaaS, web, cloud, and native mobile applications.

Namespaces

Namespaces are a way to divide Kubernetes cluster resources between multiple users. All the core vRealize Automation services run as Kubernetes pods within the namespace called “prelude”. You can explore the vRA environment by running some of the commands below on the vRA appliance over SSH:

  1. To list all the running pods:

kubectl get pods -n prelude

  2. To see the containers inside a pod:

kubectl describe pod <pod_name> -n prelude

  3. To list all the running services:

kubectl get services -n prelude

  4. To list the running deployments (a deployment is responsible for keeping a set of pods running):

kubectl get deployments -n prelude

Key Takeaways

  • All the core vRA services run as pods, each with its own web server, on the appliance’s Kubernetes cluster.
  • vRealize Automation 8 cannot be installed on your own Kubernetes environment. It comes in the form of an appliance with all the bits and pieces needed to run vRA 8, and this is the only way VMware can support it.
  • Besides the four core vRealize Automation 8 components (Cloud Assembly, Service Broker, Code Stream, and Orchestrator), two supporting services, VMware Identity Manager and vRealize Suite Lifecycle Manager, are needed to install and run vRealize Automation 8.
  • If you don’t have an LCM and/or IDM instance running, the easy installer will set one up for you, but you can also use existing LCM and IDM instances for vRealize Automation 8.
  • There is no Windows IaaS server in vRA 8.0.

References

https://blogs.vmware.com/management/2020/03/vrealize-automation-8-architecture.html

https://docs.vmware.com/en/vRealize-Automation/8.0/rn/vRealize-Automation-80-release-notes.html

https://docs.vmware.com/en/vRealize-Automation/index.html?topic=%252Fcom.vmware.vra.programming.doc%252FGUID-75940FA3-1C17-451C-86FF-638E02B7E3DD.html

I hope this article was helpful. Any suggestions/comments are highly appreciated!

Decrypt PSCredential object password and its applications

Hello Everyone,

I feel it’s no longer a secret that you can decrypt a PSCredential object and read its password in plain text. “Wait… I do not know what a PSCredential object is”: that is what you must be thinking. You will stumble upon the PSCredential object if you do even basic PowerShell for system administration.

Get-Credential is the cmdlet that prompts you for a username and password. Once you enter them, what you get back is a PSCredential object.

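A minimal example:

$cred = Get-Credential    # prompts for a username and password, returns a PSCredential object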

Now, let’s take a look at the PSCredential object.

I have stored the credentials in a variable, $cred, which is now a PSCredential object. When you run Get-Member on it, you will come to know more about this PSCredential object. Look at the sketch below to understand.

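A sketch of what that looks like:

$cred | Get-Member    # lists UserName (String), Password (SecureString) and the object's methods
$cred                 # prints the username in plain text; Password shows as System.Security.SecureString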

When I output $cred in the last command, it does show a username and password, but if you look at Password you will notice it is stored as a secure string. This is good, because you do not want PowerShell to store the password in plain text.

However, there is sometimes a need to reuse the same credential to authenticate with some other process in your PowerShell script that requires a plain-text password as input. There is also a limitation of the PSCredential object: it only works with cmdlets that know what a PSCredential object is, and not all .NET classes understand it. So if you are calling into a .NET class rather than a PowerShell cmdlet, you can’t reuse the PSCredential object directly; you need to decrypt the password from the PSCredential object and pass it to the respective class. Another example is invoking REST APIs: not all REST APIs understand PSCredential, which means you need to pass the username and password as plain text.

Check the example script below. Here I need to invoke a REST method with POST, which requires a username and password to authenticate. I have two parameters, Pwd (password) and Name (username). This specific API does not understand PSCredential, so I need to pass the password in plain text.
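Here is a hedged sketch of such a script; the endpoint URL is a placeholder, and Name/Pwd are the parameter names mentioned above:

$body = @{
    Name = 'svc-account'      # username, hard-coded
    Pwd  = 'SuperSecret123'   # password in plain text: this is the problem
} | ConvertTo-Json

Invoke-RestMethod -Uri 'https://api.example.com/login' -Method Post -Body $body -ContentType 'application/json'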

Now, if I keep the script like this, it is obviously not secure, because whoever has access to the script will be able to read the credentials, which you clearly don’t want.

So what is the solution? Let’s try something.

Can I access the password directly from the PSCredential object?

No, you can’t, as it is stored as a secure string. Look at this example.

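A sketch of those attempts:

$cred.Password                              # returns a System.Security.SecureString, not plain text
$cred.Password | ConvertFrom-SecureString   # returns encrypted cipher text, still not the password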

  • $cred.Password will not return the password as plain text.
  • $cred.Password | ConvertFrom-SecureString will give you cipher data rather than the password in plain text.

So what’s the solution? Well, the solution is in the PSCredential object itself. Run $cred | Get-Member.


The PSCredential object has a method called GetNetworkCredential(). You can use this method to decrypt the password stored in the PSCredential object.

When I invoke this method and run Get-Member on the result, it shows the properties of the returned object, and you will find a property called Password. Run $cred.GetNetworkCredential().Password, and it returns the password in plain text. See the sketch below.

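A sketch of those steps:

$cred.GetNetworkCredential() | Get-Member    # the returned NetworkCredential object has a Password property (plain String)
$cred.GetNetworkCredential().Password        # returns the password in plain text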

So now I have modified the same script as below.
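A hedged sketch of the modified script; as before, the URL is a placeholder:

$cred = Get-Credential

$body = @{
    Name = $cred.UserName
    Pwd  = $cred.GetNetworkCredential().Password   # decrypted only at the moment of the call
} | ConvertTo-Json

Invoke-RestMethod -Uri 'https://api.example.com/login' -Method Post -Body $body -ContentType 'application/json'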

Conclusion: 

Yes, PSCredential stores the password as a secure string, but it has a built-in method, GetNetworkCredential(), to decrypt it.

Is it safe to use?

I feel the answer is no. Once script execution stops or the runtime environment closes, the variables are disposed of and you no longer have access to them. However, there are ways you can exploit this feature with some tweaks to your PowerShell script; for example, I wrote the password to a text file. So yes, a PowerShell developer can add that one line of code to write it to a .txt file and exploit a feature that was intended to help you.

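A hedged example of that abuse, one extra line that persists the decrypted password (do not do this in real scripts):

$cred.GetNetworkCredential().Password | Out-File -FilePath .\password.txt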

I am not sure what the right way to use credentials in a PowerShell script is. If you know a method that is definitely secure, do let me know in the comments here.


Thanks,

Set SCSI controllers to a VM HDD: vRO workflow

Hello All,

SQL Servers on VMware infrastructure need to be built per the recommended guidelines. One of the major recommendations is to assign specific SCSI controllers to the data disks of a SQL Server. The idea is to give each data disk a dedicated SCSI controller so that all the I/O does not pass through a single SCSI controller.

Recently, I came across a use case where, as part of VM provisioning from vRA, I needed to point disks 3, 4, and 5 of my SQL Server VM to different ParaVirtual SCSI controllers.

Hard disk addition to a VM can be handled as part of the blueprint or as an XaaS request from vRA.

Steps in the vRO workflow to configure the SCSI controllers:

Step 1: Shut down the VM [used the built-in workflow “Power off the virtual machine and wait”]

Step 2: Create three additional SCSI controllers [copied the built-in vRO action “createVirtualScsiControllerConfigSpec” three times, updating both controller.key and controller.busNumber to 1, 2, and 3 respectively for SCSI controllers 1, 2, and 3]

[Screenshot: the three copied createVirtualScsiControllerConfigSpec actions]

Step 3: Reconfigure the VM to add the above SCSI controllers

Step 4: Identify the hard disks by their labels and point each disk to its new SCSI controller

Both steps 3 and 4 are handled by the code below:

// Step 3: add the three new SCSI controllers to the VM.
// actionResult, actionResult1 and actionResult2 come from the three copied
// createVirtualScsiControllerConfigSpec actions (controllers 1, 2 and 3).
var configSpec = new VcVirtualMachineConfigSpec();
configSpec.deviceChange = [actionResult, actionResult1, actionResult2];

var task = vm.reconfigVM_Task(configSpec);
System.sleep(5000);

// Locate the newly added controllers by their device labels.
var controller, controller1, controller2;
for each (var device in vm.config.hardware.device) {
    var label = device.deviceInfo.label;
    if (label == "SCSI controller 1") {
        controller = device;
        System.log("Found Controller 1: " + controller.key);
    } else if (label == "SCSI controller 2") {
        controller1 = device;
        System.log("Found Controller 2: " + controller1.key);
    } else if (label == "SCSI controller 3") {
        controller2 = device;
        System.log("Found Controller 3: " + controller2.key);
    }
}
// Fail fast if any of the three controllers is missing.
if (!controller || !controller1 || !controller2) {
    throw "ERROR: Controller not found";
}

// Step 4: re-point Hard disk 3/4/5 to SCSI controllers 1/2/3 respectively.
var controllerByDisk = {
    "Hard disk 3": controller,
    "Hard disk 4": controller1,
    "Hard disk 5": controller2
};

var diskConfigSpecs = new Array();
for each (var device in vm.config.hardware.device) {
    var label = device.deviceInfo.label;
    if (controllerByDisk[label]) {
        System.log("Found disk to change controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = controllerByDisk[label].key;
        diskConfigSpec.device.unitNumber = 0; // first device on its dedicated controller
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    }
}

var diskSpec = new VcVirtualMachineConfigSpec();
diskSpec.deviceChange = diskConfigSpecs;
task = vm.reconfigVM_Task(diskSpec);
System.sleep(5000);

Step 5: Power on the VM [used the built-in workflow “Start the virtual machine and wait”]

The final workflow schema will look like this:

[Screenshot: final workflow schema]

Step 6: Integrate with vRA by configuring the workflow to be triggered as part of your existing machine provisioning subscription, or create a new one if you don’t have one already. [If you want to know how, comment below and I will write another blog about it.]

Thanks,

PSTK1: Getting Started with the ‘NetApp PowerShell Toolkit’

Welcome back! As promised earlier, I am back with the new blog series, so let’s get started.

Note: from here on I will use the abbreviation PSTK for the ‘NetApp PowerShell Toolkit’, as NetApp’s documentation does the same.

What is the NetApp PowerShell Toolkit?

The NetApp PowerShell Toolkit (PSTK) is packaged with two PowerShell modules: DataONTAP and SANtricity. The DataONTAP module helps manage NetApp storage devices running the ONTAP management system, such as FAS, AFF, and NetApp cloud storage. The SANtricity module is used to manage E-Series storage arrays and EF-Series flash arrays. In this blog series, I will focus only on the DataONTAP PowerShell module.

Here are some of the key specifications of PSTK:

Platform: Windows only; requires PowerShell 3.0 or above and .NET 4.5 or above.
Is it available on the PSGallery? No, not yet. This means you cannot download it with PowerShell’s Install-Module cmdlet.
PowerShell Core: not supported yet, so you can’t use it on the Linux platform.
Number of cmdlets: 2,300 or more in the DataONTAP module and ~300 in the SANtricity module.

Documentation and Download link

Why should I learn PSTK?

If you are a storage admin or engineer, you will discover that working in PowerShell gives you greater flexibility and automation capability compared to any other shell environment. If you have already worked with PowerShell, that’s great: you can simply start using the PSTK module. If you haven’t worked with PowerShell, then know this: PowerShell is the simplest scripting platform available to us. Invest some time and you will get it. 🙂

  • PowerShell is primarily a tool for administrators like us.
  • PSTK is just a PowerShell module, so if you already work with any other PowerShell module, you need almost zero additional skills to start working with PSTK (or any other module, for that matter).
  • The same script can orchestrate things across different technology stacks. For example, one script can create a LUN with the DataONTAP PowerShell module and then create a datastore in VMware with PowerCLI (the PowerShell module for VMware vSphere).
  • Everything in PowerShell is an object.
  • PowerShell’s command discoverability makes it easy to transition from typing commands interactively to creating and running scripts.

How to Install?

Download the .msi installer file and run it. Ensure you are running PowerShell 3.0 or above.

Import-Module

If you are running PowerShell 3.0 or above, the module is imported automatically the moment you execute any command that belongs to it. However, use the cmdlet below if you want to import the module into the PowerShell session explicitly.

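A sketch, assuming the module name DataONTAP:

Import-Module DataONTAP
Get-Module DataONTAP      # confirm the module loaded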

Get-Command cmdlet

The cmdlet below lists all the commands available in the DataONTAP module.
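For example (a sketch, again assuming the module name DataONTAP):

Get-Command -Module DataONTAP                    # list every cmdlet in the module
Get-Command -Module DataONTAP | Measure-Object   # count them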

If you are entirely new to PowerShell, I would highly recommend referring to the PowerShell documentation to start your learning.


Thanks,

Coming Soon: Blog Series, NetApp PowerShell Toolkit


Hello Everyone,

I used to consider myself a VMware engineer more than anything else. Even though my core expertise comes from the VMware compute domain, I understood well in advance that if I want to be a good VMware engineer, then I must also work on the storage and network pieces of the infrastructure.

In 2019, I spent a good amount of time understanding and working on the storage side of the world, given the role I currently have.

We use a data replication product that takes advantage of core NetApp functionality such as linked clones, DD copy, and FlexVol to replicate data from the source site to DR. In my current role, I am tasked with building, configuring, and testing the product, and with creating the operating procedures for Ops to follow. So, naturally, I break, rebuild, and re-configure my lab infrastructure multiple times. That is where PowerShell comes into the picture.

Why am I blogging about the NetApp PowerShell Toolkit?

I was using the NetApp PowerShell Toolkit for my own purposes and never thought of extending this knowledge to a larger audience. One day, I wanted some reports from another lab environment (obviously, I didn’t have the required access), so I requested help from our storage engineers. When I got the reports, they were a few screenshots or simply a text export. When you deal with a large amount of data, you want it in CSV or a similar format so you can process it easily. If you have used PowerShell before, you have probably figured out by now what I am talking about: yes, it’s Get-Something | Export-Csv. That is how easy it is in PowerShell. I felt like sharing a few PowerShell tips and tricks with my friends, and they simply loved it.
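The kind of one-liner I mean, as a sketch using a DataONTAP cmdlet (Get-NcVol lists clustered ONTAP volumes; it assumes you are already connected via Connect-NcController, and the output path is a placeholder):

Get-NcVol | Export-Csv -Path .\volumes.csv -NoTypeInformation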

For any system admin, be it compute, network, or storage, the challenges are the same. Everyone deals with data, and everyone needs to automate simple day-to-day tasks, if not large-scale automation, and that is where PowerShell lets you win the battle. The same NetApp PowerShell Toolkit helped me write an orchestration that lets us migrate protected workloads from one storage controller to another.

This one year of experience with NetApp storage tells me there may be many more storage admins who are not aware of the NetApp PowerShell Toolkit. I intend this blog series to bring PowerShell’s capabilities and advantages to storage engineers in their day-to-day work.

In the coming days, I will be writing about the NetApp PowerShell Toolkit and its usage, sharing tips and tricks for NetApp storage orchestration via the toolkit.

I hope you will like this blog series. If you are currently working on NetApp storage, please comment below: What would you like to read about? What are your current challenges as storage admins? If you are using the NetApp PowerShell Toolkit, how is your experience? Until then, stay tuned; I will come back with the first post in this series.

Please do subscribe and follow my blog if you haven’t done so far.


Thanks,

PSProvider and VMware datastore – PowerCLI

Hello everyone,

I am writing this short blog after a long time. While explaining the ins and outs of PowerShell to some of my friends in person, I discussed PSProviders. Most knowledge about PSProviders is informational only; as script writers, we don’t really bother with how PowerShell handles the different resources (variables, functions, the file system, registry keys, and so on) used in a PowerShell session or script.

However, as a VMware admin, I do use a PSProvider in the background a lot in order to move datastore items from:

  1. one datastore to another datastore
  2. a datastore to a local drive (a Windows drive or a shared drive), or vice versa

In this post, we will learn about the Copy-DatastoreItem cmdlet and PSProviders.

What is PSProvider?

In simple terms, a PSProvider is an adapter that lets PowerShell expose a data store (the registry, environment variables, the file system, functions, variables, and so on) so you can work with its contents during a PowerShell session. That data is presented through PowerShell drives, known as PSDrives.

For example, see the command and output sketched below for Get-PSProvider.

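A sketch of the command and its typical output (drives vary by machine):

Get-PSProvider

# Name          Capabilities                        Drives
# ----          ------------                        ------
# Registry      ShouldProcess, Transactions         {HKLM, HKCU}
# Alias         ShouldProcess                       {Alias}
# Environment   ShouldProcess                       {Env}
# FileSystem    Filter, ShouldProcess, Credentials  {C, D}
# Function      ShouldProcess                       {Function}
# Variable      ShouldProcess                       {Variable}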

By default, you get the providers above: Registry, Alias, Environment, FileSystem, Function, and Variable. You can also see the drives associated with each PSProvider. This means that if you create a variable, it is stored in Variable:, and if you create a function, it is stored in Function:.

Check the sketch below, where I go into the respective drive and see a variable I created.

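A sketch (the variable name is a placeholder):

$demo = 'hello'
cd Variable:
Get-ChildItem | Where-Object { $_.Name -eq 'demo' }   # the variable appears as an item in the Variable: drive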

In conclusion, whatever variables, functions, and so on I create in PowerShell get stored in their respective drives.

The VimDatastore PSProvider

VimDatastore and VimInventory are PSProviders that you get after connecting to VMware vCenter via PowerCLI. Try this: run Connect-VIServer <vCenter-ip> and then Get-PSProvider, and you will see the additional PSProviders available to you. These providers expose the VMware inventory and datastore resources to PowerCLI/PowerShell.
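A sketch of what that looks like (the vCenter address is a placeholder):

Connect-VIServer -Server vcenter.lab.local
Get-PSProvider | Where-Object { $_.Name -like 'Vim*' }

# VimDatastore -> drives: vmstore, vmstores
# VimInventory -> drives: vi, vis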

So, after connecting to vCenter via PowerCLI, you can see the additional PSDrives available to you, provided by the two additional PSProviders. I can cd vmstore: and actually list the datastores in the inventory (similar to how we list directories and files in a path), or list the host inventory through the vi: drive.

Once connected, you can use the commands below to create a new PSDrive with the VimDatastore PSProvider.

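A sketch, assuming a datastore named 'Datastore1':

New-PSDrive -Name DS -PSProvider VimDatastore -Root '\' -Location (Get-Datastore -Name 'Datastore1')
cd DS:
Get-ChildItem     # browse the datastore contents like a normal drive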

Now the DS: drive is available to you, and you can navigate it the same way you do any other drive.

Use the command below to move data from your local drive to the VMware datastore using PowerCLI. Note that I am already in DS:; if you are in any other drive, give the proper path using the VimDatastore drive.

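A sketch of the copy (the ISO path and target folder are placeholders):

Copy-DatastoreItem -Item 'C:\ISO\ubuntu.iso' -Destination 'DS:\ISO\'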

Note: this method is quite helpful when you are moving things around on a datastore, since you can automate the move operation. It is also a workaround for the certificate error you may receive while moving data through the Web Client. For example, the operation failed when I tried to upload the same ISO using the Web Client.


Use the PowerCLI VimDatastore PSProvider and the Copy-DatastoreItem cmdlet to work around this.

Thanks

Jatin