The vSphere DSC – Just another perspective

In the last couple of weeks, I have had several rounds of meetings with our customers to discuss ways to automate ESXi build and configuration. The most common piece I found in each environment was vSphere Auto Deploy. Today, most of our customers deploy ESXi hosts using Auto Deploy and handle post-configuration tasks via host profiles. The majority of questions or concerns I received were related to host profiles. My understanding is that customers tend to find host profiles difficult to understand, which is not the case in reality.

Host profiles are excellent; you just need to fine-tune them initially. You rarely run into issues once you have cracked host profiles. The key is to set up a reference host and extract the host profile from it.

Having said that, let me bring you another perspective on the post-configuration tasks. Many of you love Infrastructure as Code and believe in a configuration management ecosystem. When you look across the configuration management tools, you will find that vSphere Desired State Configuration (DSC) is very close to being a complete solution for vSphere configuration management.

vSphere DSC is an open-source module that provides PowerShell DSC resources for VMware. PowerShell DSC resources follow a declarative code style, like most configuration management tools, and allow you to document infrastructure configurations. The module has 71 resources that cover most configuration aspects of a vSphere infrastructure.

We shouldn’t look at vSphere DSC in isolation but rather complement it with vSphere Auto Deploy. Think about this: PXE boot ESXi hosts from vSphere Auto Deploy and let vSphere DSC do all the post-configuration for you. Isn’t that cool?

When you extract the host profile, you get all the configurations of an ESXi host, and at times you need to trim down the configurations to ensure that you have control over it. 

vSphere DSC is just the opposite of this approach. You can start with an empty configuration file and keep adding resource definitions as and when required. A vSphere DSC configuration gives a complete picture of the configurations you want to ensure and allows you to quickly replicate the same in other environments.

Just take a look at the snippet below and a demo of my lab configuration, which does a range of things on the vCenter Server and ESXi hosts.
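As a flavor of what such a configuration can look like, here is a minimal sketch that enforces NTP settings on a host. It assumes the VMware.vSphereDSC module is installed; the vCenter address, ESXi host name, and NTP server below are placeholders, not my actual lab values:

```powershell
# Minimal vSphere DSC configuration sketch (VMware.vSphereDSC module).
# Server names and NTP server are placeholders for illustration only.
Configuration VMHostBaseline {
    param(
        [Parameter(Mandatory)]
        [PSCredential]
        $Credential
    )

    Import-DscResource -ModuleName VMware.vSphereDSC

    Node 'localhost' {
        VMHostNtpSettings 'NtpSettings' {
            Server           = 'vcenter.lab.local'   # placeholder vCenter
            Credential       = $Credential
            Name             = 'esxi01.lab.local'    # placeholder ESXi host
            NtpServer        = @('pool.ntp.org')
            NtpServicePolicy = 'automatic'
        }
    }
}
```

You compile this into a MOF and apply it with Start-DscConfiguration; adding more resources to the same Node block grows the desired state incrementally, which is exactly the "start empty, add as needed" workflow described above.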

To conclude, I would say that vSphere DSC opens up another way of automating infrastructure builds and configuration. The project has come a long way and has made significant improvements in terms of resource coverage.

Stay tuned to the vSphere DSC project; you will soon get new updates from the VMware PowerCLI team.

Learn More about vSphere DSC: https://github.com/vmware/dscr-for-vmware/wiki



VMware Cloud Foundation 4 Series – Part 1

VMware Cloud Foundation 4 represents an important step in offering a hybrid cloud platform that supports native Kubernetes workloads and management alongside your traditional VM-based workloads.

VMware Cloud Foundation

Cloud Foundation has an automated deployment process, known as bring-up, which deploys a standardized, workload-ready private cloud in a matter of hours! By any measure, VMware Cloud Foundation 4 is a massive release that delivers a complete full stack of required capabilities, more than I can cover in a single blog. Hence, I am presenting a series in which we will look at in-depth concepts of VCF 4.

VCF 4 Architecture

Before we get to the architecture, check out the software building blocks of VCF 4.0:

The new version, VCF 4.0, includes vSphere 7.0, vSAN 7.0, NSX-T 3.0, vRA 8.1, and vRealize Suite Lifecycle Manager 8.1, as well as SDDC Manager to manage your virtual infrastructure domains. You can find detailed information on the Cloud Foundation Bill of Materials (BOM) here. One thing to note is that you cannot upgrade from VCF 3.x to VCF 4.x. VCF 4 has to be deployed as a greenfield deployment only; however, this functionality is currently being worked on, and we can expect a direct upgrade path in coming releases.

Workload Domain


A workload domain is a purpose-built, logical SDDC instance of one or more vSphere clusters with a dedicated vCenter Server and a dedicated or shared NSX-T instance. It also includes dedicated vSAN ready nodes. Provisioning is automated, and VCF can support up to 15 workload domains.

Most customers ask how many vCenter Server licenses are required during deployment. To answer this question: you only need a single vCenter Server license, which is entered during the initial deployment, and it covers all the vCenter instances deployed within VCF.

Management Domain


It is a special-purpose domain that is automatically deployed during the initial deployment (also called the bring-up process). It requires a minimum of four hosts and vSAN storage; vSAN is the only principal storage option for the management domain.

The management domain is designed to host all infrastructure components, such as SDDC Manager, vCenter Server, NSX-T instances, and NSX Edge instances. It also supports third-party management applications such as backup servers, Active Directory domain controllers, etc.

The management domain in VCF 4 has a smaller footprint, as it contains a smaller number of VMs. This is because PSCs are now embedded (external PSCs are not supported) and the NSX Managers and Controllers are integrated into one appliance!

Since the PSCs are embedded, the behavior of SDDC Manager has also changed. As new vCenter Servers are deployed, SDDC Manager configures a replication ring topology for all the embedded PSCs, and SDDC Manager authenticates to the ring rather than to individual PSCs!

Virtual Infrastructure (VI) Workload Domain


A VI workload domain contains one or more vSphere infrastructures designed to run the customer’s applications and has a dedicated vCenter Server deployed in the management domain.

While deploying a VI workload domain, admins have the option of deploying a new NSX instance or sharing an NSX instance with an existing VI workload domain. Admins can also choose between vSAN, NFS, or FC as the principal storage, unlike the management domain, where vSAN is the only principal storage option.

For the first VI workload domain, the workflow deploys a cluster of three NSX Managers in the management domain and configures a virtual IP (VIP) address for the NSX Manager cluster. Subsequent VI workload domains can share an existing NSX Manager cluster or deploy a new one as stated above.

The management domain and deployed VI workload domains are logical units that carve up the compute, network, and storage resources of the Cloud Foundation system. 

VCF 4 Deployment Types

There are two deployment models used based on the size of the environment.

Consolidated Architecture

This is where customer workloads run in the management domain, as simple as that! This model has a shared vCenter Server, with customer workloads deployed into resource pools. It is recommended for small deployments and uses a minimum of four servers. A consolidated deployment uses vSAN as the principal storage, with no option to select any other type of storage.


Standard Architecture

The standard architecture aligns with industry best practices by separating management workloads from user workloads. It is recommended for medium to large deployments and requires a minimum of seven (recommended eight) servers to deploy.

The management domain is dedicated to infrastructure workloads, while dedicated VI domains host user workloads. You can run a maximum of 15 workload domains, including the management domain. The important point to note here is that the vCenter Servers run in Enhanced Linked Mode.


Key Takeaways

  1. Consolidated deployments utilize one workload domain and one vCenter Server.
  2. Standard deployments use a separate vCenter Server for each domain.
  3. Multiple clusters are supported in both consolidated and standard architectures.
  4. Stretched vSAN deployments are supported.
  5. Consolidated deployments use vSAN as the principal storage.
  6. It’s important to note that VCF 4.1 now supports vVols as principal storage in VCF workload domains, delivering greater flexibility and storage options.
  7. Each workload domain can consist of multiple clusters and can scale up to the VMware documented maximums!
  8. More information on Consolidated Architecture limitations with Cloud Foundation (KB 70622): https://kb.vmware.com/s/article/70622

Unable to use Azure Private Endpoints with an on-prem DNS server!!

I came across a use case where I wanted to connect to an Azure database service, such as SQL, using a private endpoint, with the connectivity initiated from an on-prem VM pointing to my on-prem local DNS server.

* You should already have connectivity from on-prem to the Azure network via ExpressRoute or VPN.

Problem Statement

Suppose you have a local on-prem DNS server and you try to connect to an Azure service using a private endpoint; the connection will fail. If your on-prem DNS forwards queries to public DNS servers, you will get the public IP of your Azure resource and won’t be able to connect to the service over its private IP, because your on-prem DNS can’t resolve the endpoint DNS name to its associated private IP address, defeating the whole purpose of using private endpoints.

Solution

You need to set up your infrastructure to make this happen. Below are the steps:

  1. Create a DNS forwarder VM in Azure and configure it to forward all queries to the Azure default DNS server.
  2. Create a Private DNS Zone for the endpoint domain name, linked to the same VNet as your Azure DNS forwarder server, and create an A record with the private endpoint information (FQDN record name and private IP address).
    • The private DNS zone is the resource the Azure DNS server consults to resolve the DB FQDN to its endpoint private IP address.
  3. Set up conditional forwarding on your on-prem DNS server to forward queries for the specific domain to the forwarder server created in step 1.
    • The conditional forwarder should be created for the public DNS zone, e.g. database.windows.net instead of privatelink.database.windows.net.
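If your on-prem DNS runs on Windows Server, the conditional forwarder piece can be sketched with the DnsServer PowerShell module; the forwarder IP below is a placeholder for your Azure forwarder VM's private address:

```powershell
# Forward queries for database.windows.net to the Azure DNS forwarder VM.
# 10.0.0.4 is a placeholder; use your forwarder VM's private IP.
Add-DnsServerConditionalForwarderZone -Name 'database.windows.net' -MasterServers 10.0.0.4
```

Note the zone name here is the public zone (database.windows.net), matching the guidance above, so that all lookups for that domain take the private resolution path.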

** An important point to know is that Azure doesn’t allow access to its default DNS server (168.63.129.16) from any server outside Azure. This is the only reason we need to create a forwarder server in Azure.

On-premises forwarding to Azure DNS
Architecture for using On-Prem DNS to resolve Azure Private Endpoint

vRealize Automation 8.0 – A lot has changed!

Hello Everyone!

It is always good to keep up with new products and features coming to the industry (although it’s been a year since vRA 8.0 was released, it’s still new to many!). So, in this blog post, I’ll try to give an insight into the new vRealize Automation 8.0 architecture and talk about what exactly has changed.

“vRealize Automation automates the delivery of virtual machines, applications, and personalized IT services across different data centers and hybrid cloud environments.”

vRealize Automation 8.0 is a huge change! What was previously known as CAS (Cloud Automation Services) is now vRealize Automation Cloud, and it has its own on-premises version, which is nothing but vRealize Automation 8.0. One thing to note here is that vRA Cloud and vRA 8.0 share the same code base and offer the same user experience; the main difference lies in how they are delivered!

Difference between 8.0 and previous versions

More information about the product – https://docs.vmware.com/en/vRealize-Automation/index.html?topic=%252Fcom.vmware.vra.programming.doc%252FGUID-75940FA3-1C17-451C-86FF-638E02B7E3DD.html

vRealize Automation 8.0

vRealize Automation 8.0 brings the vRealize Automation Cloud capabilities to the on-premises form factor. This release modernizes the vRA 8 architecture and capability set to enable enhanced agility, efficiency, and governance in the enterprise.

In simple terms, vRA 8.0 is the on-premises version of vRealize Automation Cloud.

This release of vRealize Automation uses a Kubernetes based micro-services architecture. The new release takes a modern approach to delivering hybrid cloud management, extending cloud management to public clouds, delivering applications with DevOps and managing Kubernetes based workloads.

vRealize Automation Components

It contains four core components –

1. VMware Cloud Assembly: Cloud Assembly is the service that you use to create and deploy machines, applications, and services to multiple clouds, such as Google Cloud, native AWS, Azure, and VMC on AWS. Cloud Assembly provides multiple key features:

  • Multiple cloud accounts.
  • Infrastructure as code: supports blueprints with versioning.
  • Self-service provisioning.
  • Marketplace: integrates with the VMware Solution Exchange website (solutionexchange.vmware.com), whose published built-in blueprints can be accessed through the Marketplace.
  • Extensibility: a built-in feature of vRA. You can use the XaaS feature for custom queries.
  • Kubernetes integration: you can deploy a Kubernetes cluster through vRA, or import an existing Kubernetes cluster into vRA.

2. VMware Service Broker: Service Broker aggregates content in native formats from multiple clouds and platforms into a common/single catalog for easy consumption on VMware Cloud.

3. VMware Code Stream: Code Stream is continuous integration and continuous delivery (CI/CD) software that enables you to deliver software rapidly and reliably, with little overhead.

4. Orchestrator: takes care of third-party integrations and custom scripting, and supports lifecycle actions through the Event Broker Service.

Since Cloud Assembly, Code Stream, Orchestrator, and Service Broker exist in the same appliance, logins are passed between the applications. Users can swap between them seamlessly without logging in each time!

vRealize Automation Architecture

The vRA appliance is powered by a Photon OS base. It includes native Kubernetes installed on the OS to host containerized services. Now, what does that mean? When the vRealize Automation appliance is deployed, Docker is installed and a Kubernetes cluster is configured at first boot. The Docker images are stored in a private Docker registry on the appliance.

Role of Kubernetes

For those who don’t know what Helm is: it is a package manager for Kubernetes. Helm packages, configures, and deploys applications and services onto Kubernetes clusters. Helm takes the images stored in the private Docker registry on the appliance and deploys them as Kubernetes services running as pods.

vRealize Automation has 16 core services, all deployed and managed as pods, each with its own web server, running on the Kubernetes cluster.


There are two more components which also get installed as a part of vRealize Automation On-premises solution.

  • VMware Lifecycle Manager (LCM): It provides a single installation and management platform for the various products in the vRealize Suite. It delivers complete lifecycle and content management capabilities for vRealize Suite products, helping customers accelerate time to value by automating deployment, upgrades, and configuration, while bringing DevOps principles to the management of vRealize Suite content.

  • VMware Identity Manager (IDM): It is an Identity as a Service (IDaaS) solution. It provides application provisioning, conditional access controls, and single sign-on (SSO) for SaaS, web, cloud, and native mobile applications.

Namespaces

Namespaces are a way to divide Kubernetes cluster resources between multiple users. All the core vRealize Automation services run as Kubernetes pods within a namespace called “prelude”. You can explore the vRA environment over SSH on the vRA appliance using some of the commands below:

  1. To list all the pods running:

kubectl get pods -n prelude

  2. To get the number of containers in a pod:

kubectl describe pod <pod_name> -n prelude

  3. To list all the services running:

kubectl get services -n prelude

  4. To list the deployments running:

kubectl get deployments -n prelude

(A deployment is responsible for keeping a set of pods running.)

Key Takeaways

  • All the core vRA services run as pods, each with its own web server, on the Kubernetes cluster.
  • vRealize Automation 8 cannot be installed on your own Kubernetes environment. It comes in the form of an appliance with all the bits and pieces needed to run vRA 8, and this is the only way VMware can support it.
  • Besides the four core vRealize Automation 8 components (Cloud Assembly, Service Broker, Code Stream, and Orchestrator), two supporting services, VMware Identity Manager and vRealize Suite Lifecycle Manager, are needed to install and run vRealize Automation 8.
  • If you don’t have an LCM and/or IDM instance running, the easy installer will set one up for you. But you can also use existing LCM and IDM instances with vRealize Automation 8.
  • There is no Windows IaaS server in vRA 8.0.

References

https://blogs.vmware.com/management/2020/03/vrealize-automation-8-architecture.html

https://docs.vmware.com/en/vRealize-Automation/8.0/rn/vRealize-Automation-80-release-notes.html

https://docs.vmware.com/en/vRealize-Automation/index.html?topic=%252Fcom.vmware.vra.programming.doc%252FGUID-75940FA3-1C17-451C-86FF-638E02B7E3DD.html

I hope this article was helpful. Any suggestions/comments are highly appreciated!

vLeaderConnect EP1: In conversation with Joe Beda


Hello and welcome everyone,

“There is always a first time for everything,” and #vLeaderConnect was that first for us at VMUG Pune. I proposed the idea to all the committee members during one of our internal discussions. The sole objective of vLeaderConnect was to get an insight into how technical leaders carry themselves, personally and professionally. It was also an opportunity for us to bring some brilliant minds to the VMUG Pune forum and let the community experience their thoughts and wisdom.

We chose Joe Beda as our first guest. There were two reasons for it: first, Kubernetes is making its way into the VMware ecosystem with VMware’s recent product releases; second, Joe Beda is the co-creator of Kubernetes, and he is at the center of all the magic happening at VMware. We reached out to Joe, and without much deliberation, he agreed to talk with us.

On the preparation side, all the community leaders scrambled and explored how to make it happen. We reached out to community members and got their feedback to understand what they wanted to discuss with Joe. It was heartening to see the response we received from community members across the globe. We sent invites to our friends from VMUG Romania, VMUG France, VMUG Japan, VMUG Argentina, and other VMUG communities. I must say they were appreciative of the efforts we put in and turned up on the day of the event, irrespective of the time zone.

The event

We expected a turnout of 100+ participants, and we did receive the expected response from the community members. It was clear on our part that we would not have a scripted conversation with Joe. We briefed him on the topics we would be discussing, but the questions, follow-ups, and discussions were impromptu. Joe was very supportive during the entire conversation. He was candid in his thoughts, spoke his mind, and, most importantly, spoke as himself without wearing his big credentials on his sleeve. We are really thankful to those who joined the event and showed their support. If you couldn’t join the event, the recording below is for you to go through.


Highlights: 

Evolution of Kubernetes

Borg was a ten-year-old project written in C++, used internally at Google. The experience with Borg showed that there were other ways to manage and deploy software beyond starting a VM or a server, made possible by the benefits of containerized workloads. Borg essentially gave a roadmap for how these things could work in the future, and having that roadmap was very instructive for Kubernetes. The next challenge: with GCE, Google was very late to the public cloud market, so there was a discussion within Google about how to shake things up, to create opportunities for Google to reset things and move the conversation to a place where Google could compete on a more level playing field, GCP versus AWS. The solution was to offer containerized workloads to Google customers by turning an internal product into an external one.

Language selection while writing Kubernetes (C++ vs. Go)

We wanted to make Kubernetes an open-source project. At that time, the Docker and Go communities were shaping up really well, so asking open-source communities to contribute to the Kubernetes project became much simpler with Go. Also, Go sits in a sweet spot: low-level enough that you can write system software in it, but high-level enough that you can remove a lot of the complexity you get with C or C++.

Tanzu Portfolio

It’s a portfolio of products that work well together, not a platform. You can pick the products that work for you, and it isn’t VMware-only. We live in a multi-vendor world; you can still manage container workloads running elsewhere.

Message to vSphere Admin

View this as an opportunity, not a threat. VMware wants the conversation to be vSphere AND cloud (not or). Use these tools for change in your organization.

I am sure there is a lot more than what I have shared in the highlights section; I highly recommend you watch the complete video to learn more.


Love received from the community

I am just highlighting some of the responses we received from the VMUG community.


I know it was our first ever attempt to host something like this at VMUG Pune. Stay tuned with us and keep supporting us.


Visit vmug.com and be part of a larger tech evangelist group around you.

Thanks,


SAM 101 – Build and Deploy your Lambda Function Using AWS SAM

Hello!

I came across a use case where I had to deploy a CloudFormation template that creates a Lambda resource under my AWS account.

To provide the Lambda function code to the CFN template, I have two options:

  1. Use an inline Lambda function inside the CFN template.
  2. Use the Serverless Application Model (SAM) by creating Lambda function artifacts under S3 and putting the CodeUri in the CFN template.

An inline function is a straightforward approach but has a code size limit of 4 KB.
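For context, an inline function embeds the code directly in the template via the ZipFile property, which is where the 4 KB limit applies. A minimal sketch (the role reference below assumes an IAM role defined elsewhere in the template):

```yaml
Resources:
  InlineFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.8
      Role: !GetAtt LambdaExecutionRole.Arn  # assumes a role defined elsewhere
      Code:
        ZipFile: |
          def handler(event, context):
              # inline code is limited to 4 KB in total
              return 'hello'
```

Anything larger, or anything with dependencies, needs the S3-artifact approach that SAM automates.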

I will explain in this blog how to use SAM as an extension of AWS CloudFormation.

Note: a serverless application is more than just a Lambda function; it can include additional resources such as APIs, databases, and event source mappings.

SAM Deployment

Note: make sure you have the SAM CLI installed on your machine. I use Visual Studio Code with the AWS CLI.

  • Download a sample application

# sam init

You can see a sample app folder structure created under the name sam_app in your current folder.


  • Add your application code and update CloudFormation Template
    • Lambda function: added a folder under sam_app named myLambda, containing my Lambda function (ssm_Lambda.py) and a requirements.txt file.
    • CloudFormation template: replaced the existing template.yaml with my CFN template, which creates a Lambda resource using the function defined under the myLambda folder (you can see CodeUri: myLambda/).
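For reference, a minimal SAM template of this shape might look like the following; the handler name and runtime are assumptions, since the actual function code isn't shown:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MySsmFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: myLambda/                    # local folder with the function code
      Handler: ssm_Lambda.lambda_handler    # hypothetical handler name
      Runtime: python3.8
```

The Transform line is what lets CloudFormation expand the shorthand AWS::Serverless::Function into a full Lambda resource during deployment.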


  • Build your application

# sam build

The ‘sam build’ command iterates through the functions in your application, looks for a manifest file (such as requirements.txt) that contains the dependencies, and automatically creates deployment artifacts.


A new folder with all the artifacts gets created under .aws-sam/build.


  • Package application

# sam package --s3-bucket abhishek-bucket-lambda --output-template-file template-with-artifacts.yaml --no-verify-ssl

This packages an AWS SAM application: it creates a ZIP file of your code and dependencies and uploads it to Amazon S3. It then returns a copy of your AWS SAM template, replacing references to local artifacts with the Amazon S3 locations where the command uploaded them. (The screenshots show the ZIP file uploaded by the above command and the SAM template template-with-artifacts.yaml.)



  • Deploy Stack with SAM CLI

# sam deploy --stack-name "Sample-CFN-Stack" --s3-bucket abhishek-bucket-lambda --capabilities CAPABILITY_NAMED_IAM --template-file template-with-artifacts.yaml --region "eu-west-1" --no-verify-ssl

Or, you can also deploy your stack with CloudFormation CLI

# aws cloudformation deploy --template-file C:\Users\abhishek\sam-app\template-with-artifacts.yaml --stack-name "Sample-CFN-Stack"

The CloudFormation stack is now deployed, and it has created the Lambda resource too.

Cheers !!

Decrypt a PSCredential object password and its applications

Hello Everyone,

I feel it is no longer a secret that you can decrypt a PSCredential object and read the password in plain text. Wait… “I do not know what a PSCredential object is” — that is what you must be thinking. You will have stumbled upon the PSCredential object if you do basic PowerShell for system administration.

Get-Credential is the cmdlet that prompts you for a username and password. Once you enter them, you basically have a PSCredential object.


Now, Let’s take a look at the PSCredential Object.

I have stored credentials in a variable, $cred, which is now a PSCredential object. When you run Get-Member on it, you will learn more about the PSCredential object. Look at the screenshot below to understand more.


When I output $cred in the last command, it shows a username and password, but if you look closely at Password, you will see that it is stored as a SecureString. This is good, because you do not want PowerShell to store the password in plain text.

However, there is sometimes a need to reuse the same credential to authenticate with other processes in your PowerShell script that require a plain-text password as input. There is also a limitation of the PSCredential object: it only works with cmdlets that know what a PSCredential object is. In fact, not all .NET classes understand a PSCredential object, so if you are calling into a .NET class rather than a PowerShell cmdlet, you may not be able to reuse the PSCredential object directly. In that case, you need to decrypt the password from the PSCredential object and pass it to the respective class. Another example is invoking REST APIs; not all REST APIs understand PSCredential, which means you need to pass the username and password as plain text.

Check the example script below. Here I need to invoke a REST method (POST) that requires a username and password to authenticate. I have two parameters, Pwd (password) and Name (username). This specific API does not understand PSCredential, so I need to pass the credential password in plain text.
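A rough sketch of what such a script looks like; the endpoint URI and parameter names here are placeholders, since the actual API is not shown:

```powershell
# Hypothetical REST endpoint requiring plain-text credentials in the body.
$body = @{
    Name = 'svc-account'            # username parameter expected by the API
    Pwd  = 'MyPlainTextPassword'    # hard-coded plain-text password: insecure!
}
Invoke-RestMethod -Method Post -Uri 'https://api.example.com/login' -Body $body
```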

Now, if I have this script, it is obviously not secure, because whoever has access to the script will be able to see the credential, which you clearly don’t want.

So what is the solution? Let’s try something.

Can I access the password directly from the PSCredential object?

No, you can’t, as it’s stored as a SecureString. Look at this example.


  • $cred.Password will not return the password as plain text.
  • $cred.Password | ConvertFrom-SecureString will give you cipher data rather than the password in plain text.

So what’s the solution? Well, the solution is in the PSCredential object itself. Run $cred | Get-Member.


The PSCredential object has a method called GetNetworkCredential(). You can use this method to decrypt the password in the PSCredential object.

When I invoke this method and run Get-Member on the result, it shows the properties of the returned object, and you will find a property called Password. Use the command $cred.GetNetworkCredential().Password and it will return the password in plain text. Please refer to the screenshot below.


So now I have modified the same script as below.
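Roughly, the change is to derive the plain-text password from the PSCredential object at call time instead of hard-coding it; again, the endpoint URI and parameter names are placeholders:

```powershell
# Prompt once; keep the password in the PSCredential object, not in the script.
$cred = Get-Credential
$body = @{
    Name = $cred.UserName
    Pwd  = $cred.GetNetworkCredential().Password  # decrypted only at call time
}
Invoke-RestMethod -Method Post -Uri 'https://api.example.com/login' -Body $body
```

The script file no longer contains the secret; it only exists in memory for the duration of the call.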

Conclusion: 

Yes, PSCredential stores the password as a SecureString, but it has a built-in method, GetNetworkCredential(), to decrypt it.

Is it safe to use?

I feel no. Once script execution stops or the runtime environment closes, variables get disposed of and you no longer have access to them. However, there are ways you can exploit this feature with some tweaks to your PowerShell script. For example, I wrote the password to a text file. So yes, a PowerShell developer could write this line of code to a text file and exploit a feature that was intended to help you.


I am not sure what the right way to use credentials in a PowerShell script is. If you know a method that is definitely more secure, do let me know in the comments here.


Thanks,