Unable to use Azure Private Endpoints with an on-prem DNS server!

I came across a use case where I wanted to connect to an Azure database service (such as Azure SQL) over a private endpoint, with the connection initiated from an on-prem VM that points to my local on-prem DNS server.

* You should already have connectivity between your Azure network and on-prem via ExpressRoute or VPN.

Problem Statement

You have a local on-prem DNS server, and when you try to connect to an Azure service through its private endpoint, the connection fails. If your on-prem DNS forwards queries to public DNS servers, you get the public IP of your Azure resource and cannot reach the service over its private IP: the on-prem DNS server cannot resolve the endpoint's DNS name to its associated private IP address, which defeats the whole purpose of using private endpoints.

Solution

You need to set up your infrastructure to make this happen. Below are the steps:

  1. Create a DNS forwarder VM in Azure and configure it to forward all queries to the Azure default DNS server.
  2. Create a Private DNS zone for the endpoint domain name, linked to the same VNet as your Azure DNS forwarder VM, and create an A record with the private endpoint information (FQDN record name and private IP address).
    • The Private DNS zone is the resource that the Azure DNS server consults to resolve the database FQDN to its endpoint's private IP address.
  3. Set up conditional forwarding on your on-prem DNS server to forward queries for the relevant domain to the forwarder VM created in step 1.
    • The conditional forwarder should be created for the public DNS zone, e.g. database.windows.net, not privatelink.database.windows.net.

** An important point to know: Azure doesn't allow access to its default DNS server (168.63.129.16) from any server outside Azure. This is the only reason we need to create a forwarder VM in Azure.
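Once the forwarding chain is in place, you can sanity-check it from the on-prem VM. A minimal Python sketch (the endpoint hostname in the comment is illustrative, not a real resource) that resolves a name and reports whether the answer is a private address:

```python
import socket
import ipaddress

def resolve_and_classify(hostname):
    """Resolve a hostname and report whether the returned IP is private.

    After the conditional forwarder is configured, the database FQDN
    should resolve to the private endpoint IP (a private address);
    a public IP here means queries are still escaping to public DNS.
    """
    ip = socket.gethostbyname(hostname)
    return ip, ipaddress.ip_address(ip).is_private

# Example (replace with your own endpoint FQDN):
# print(resolve_and_classify("mydb.database.windows.net"))
```

If the second value comes back False for your endpoint FQDN, the on-prem conditional forwarder is not being hit.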

[Diagram: on-premises forwarding to Azure DNS; architecture for using on-prem DNS to resolve an Azure private endpoint]

SAM 101 – Build and Deploy your Lambda Function Using AWS SAM

Hello!

I came across a use case where I had to deploy a CloudFormation template that creates a Lambda resource under my AWS account.

To provide the Lambda function code to the CFN template, I have two options:

  1. Use an inline Lambda function inside the CFN template.
  2. Use the Serverless Application Model (SAM) by uploading the Lambda function artifacts to S3 and referencing them via CodeUri in the CFN template.

An inline function is the straightforward approach, but the code is limited to 4 KB.
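For reference, option 1 looks roughly like this in a CFN template. This is a hypothetical sketch using the ZipFile property (which is where the 4 KB limit applies); the role reference and names are placeholders:

```yaml
Resources:
  InlineFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn   # placeholder: an execution role defined elsewhere
      Code:
        ZipFile: |
          def handler(event, context):
              return {"status": "ok"}
```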

I will explain in this blog how to use SAM as an extension of AWS CloudFormation.

Note: A serverless application is more than just a Lambda function; it can include additional resources such as APIs, databases, and event source mappings.

SAM Deployment

Note: Make sure you have the SAM CLI installed on your machine. I use Visual Studio Code for the AWS CLI.

  • Download a sample application

# sam init

You can see a sample app folder structure, named sam_app, created under your current folder.


  • Add your application code and update CloudFormation Template
    • Lambda Function – added a folder named myLambda under sam_app, containing my Lambda function (ssm_Lambda.py) and a requirements.txt file.
    • CloudFormation Template – replaced the existing template.yaml with my CFN template, which creates a Lambda resource using the function defined under the myLambda folder (note CodeUri: myLambda/).
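As a rough sketch, the relevant part of the replaced template.yaml would look something like this (the resource name, runtime, handler, and timeout are assumptions; adjust them to your function):

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  SsmLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: myLambda/                    # folder holding ssm_Lambda.py + requirements.txt
      Handler: ssm_Lambda.lambda_handler    # assumed handler name
      Runtime: python3.9
      Timeout: 60
```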


  • Build your application

# sam build

The 'sam build' command iterates through the functions in your application, looks for a manifest file (such as requirements.txt) that lists the dependencies, and automatically creates deployment artifacts.


A new folder named build, containing all the artifacts, is created under .aws-sam.


  • Package application

# sam package --s3-bucket abhishek-bucket-lambda --output-template-file template-with-artifacts.yaml --no-verify-ssl

This packages an AWS SAM application: it creates a ZIP file of your code and dependencies and uploads it to Amazon S3. It then returns a copy of your AWS SAM template, replacing references to local artifacts with the Amazon S3 locations where the command uploaded them. (The screenshots show the ZIP file uploaded by the above command and the SAM template template-with-artifacts.yaml.)



  • Deploy Stack with SAM CLI

# sam deploy --stack-name "Sample-CFN-Stack" --s3-bucket abhishek-bucket-lambda --capabilities CAPABILITY_NAMED_IAM --template-file template-with-artifacts.yaml --region "eu-west-1" --no-verify-ssl

Or, you can deploy your stack with the CloudFormation CLI:

# aws cloudformation deploy --template-file C:\Users\abhishek\sam-app\template-with-artifacts.yaml --stack-name "Sample-CFN-Stack"

The CloudFormation stack is now deployed, and it has created the Lambda resource.

Cheers !!

Set SCSI controllers to a VM HDD: vRO workflow

Hello All,

SQL Servers on VMware infrastructure need to be built per the recommended guidelines. One of the major recommendations is to assign specific SCSI controllers to the data HDDs of a SQL Server. The idea is to give each data disk a dedicated SCSI controller so that all the data does not pass through a single SCSI controller.

Recently, I came across a use case where, as part of VM provisioning from vRA, I needed to point my SQL Server VM's disks 3/4/5 to different ParaVirtual SCSI controllers.

Hard disk addition to a VM can be handled as part of the blueprint or an XaaS request from vRA.

Steps for vRO Workflow to configure SCSI Controller:

Step 1: Shut down the VM [used the built-in workflow "Power off the virtual machine and wait"]

Step 2: Create 3 additional SCSI controllers [copied the built-in vRO action "createVirtualScsiControllerConfigSpec" 3 times, updating both controller.key and controller.busNumber to 1, 2, and 3 respectively for SCSI controllers 1, 2, and 3]


Step 3: Reconfigure the VM to add the above SCSI controllers

Step 4: Identify the hard disks by label and point each disk to its new SCSI controller

Both Steps 3 and 4 are handled by the code below:

var configSpec = new VcVirtualMachineConfigSpec();
var deviceConfigSpec = new Array();

// Config specs produced by the three copies of createVirtualScsiControllerConfigSpec (Step 2)
deviceConfigSpec[0] = actionResult;
deviceConfigSpec[1] = actionResult1;
deviceConfigSpec[2] = actionResult2;

configSpec.deviceChange = deviceConfigSpec;

// Step 3: reconfigure the VM to add the three SCSI controllers
task = vm.reconfigVM_Task(configSpec);

System.sleep(5000);

// Step 4: locate the newly added controllers by label
var controller, controller1, controller2;
for each (var device in vm.config.hardware.device)
{
    var label = device.deviceInfo.label;
    if (label == "SCSI controller 1")
    {
        controller = device;
        System.log("Found Controller 1 : " + controller.key);
    }
    else if (label == "SCSI controller 2")
    {
        controller1 = device;
        System.log("Found Controller 2 : " + controller1.key);
    }
    else if (label == "SCSI controller 3")
    {
        controller2 = device;
        System.log("Found Controller 3 : " + controller2.key);
    }
}
// Fail if any of the three controllers is missing
if (!controller || !controller1 || !controller2)
{
    throw "ERROR: Controller not found";
}

// Point Hard disk 3/4/5 at SCSI controllers 1/2/3 respectively
var diskConfigSpecs = new Array();
for each (var device in vm.config.hardware.device)
{
    var label = device.deviceInfo.label;
    var targetController = null;
    if (label.indexOf("Hard disk 3") > -1)
    {
        targetController = controller;
    }
    else if (label.indexOf("Hard disk 4") > -1)
    {
        targetController = controller1;
    }
    else if (label.indexOf("Hard disk 5") > -1)
    {
        targetController = controller2;
    }
    if (targetController != null)
    {
        System.log("Found disk to Change Controller: " + label);
        var diskConfigSpec = new VcVirtualDeviceConfigSpec();
        diskConfigSpec.device = device;
        diskConfigSpec.device.controllerKey = targetController.key;
        diskConfigSpec.device.unitNumber = 0; // first (and only) disk on its dedicated controller
        diskConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        diskConfigSpecs.push(diskConfigSpec);
    }
}

configSpec = new VcVirtualMachineConfigSpec();
configSpec.deviceChange = diskConfigSpecs;
task = vm.reconfigVM_Task(configSpec);
System.sleep(5000);
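The disk-to-controller mapping at the heart of Step 4 can also be expressed as a small standalone helper. A minimal Python sketch; the labels and pairings mirror this particular example (disks 3/4/5 to controllers 1/2/3), not a general rule:

```python
# Map vSphere disk labels to the SCSI controller that should own them.
# Hard disks 3/4/5 go to controllers 1/2/3, matching the workflow above.
DISK_TO_CONTROLLER = {
    "Hard disk 3": "SCSI controller 1",
    "Hard disk 4": "SCSI controller 2",
    "Hard disk 5": "SCSI controller 3",
}

def controller_for_disk(label):
    """Return the target controller label for a disk, or None if the
    disk keeps its current controller (e.g. the OS disk)."""
    return DISK_TO_CONTROLLER.get(label)

# controller_for_disk("Hard disk 4")  -> "SCSI controller 2"
# controller_for_disk("Hard disk 1")  -> None
```

Keeping the mapping in one table makes it easy to extend the workflow to more disks without duplicating the per-disk branches.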

Step 5: Power on the VM [used the built-in workflow "Start the virtual machine and wait"]

The final workflow schema will look like this:

[Screenshot: final workflow schema]

Step 6: Integrate with vRA by configuring the workflow to trigger as part of your existing machine provisioning subscription, or create a new one if you don't have one already. [If you want to know how, comment below and I will write another blog about it.]

Thanks,