Category Archives: Microsoft Azure

All things Microsoft Azure

Establishing a GCP VPN Tunnel to Azure Virtual WAN; Active/Active BGP Configuration

This is a quick reflection of the steps I took to establish two IPSec tunnels between GCP's VPC and Azure's Virtual WAN VPN Gateway, propagating routes dynamically via BGP and ensuring high availability. The design is fairly straightforward since both GCP and Azure offer the ability to establish multiple connections to remote peers. When everything is said and done, you'll end up with a diagram that conceptually looks something like this:

Note: It is recommended to complete the steps in this document in the outlined order to keep the number of steps to a minimum. In this case, provision Azure Virtual WAN first, then configure GCP, then create the Azure Virtual WAN Site Links/Connections to GCP.

Note 2: If you have followed my previous guide on establishing an AWS VPN tunnel to Azure Virtual WAN, both connections can co-exist, and you can skip the Create Azure Virtual WAN and Virtual WAN Hub section below.

Create Azure Virtual WAN and Virtual WAN Hub

On the Azure side, first we need to create a Virtual WAN resource and a Virtual WAN Hub, which will contain our VPN Gateway. If you have already created these, you can skip to the next section.

First, click the "Hamburger" icon and select Create a resource

Search for Virtual WAN and select it from the list in the marketplace.

Select Create

Specify the resource group and region you wish to deploy the Virtual WAN resource to. Specify a name for your Virtual WAN resource and click Review + Create

Click Create to start provisioning the Virtual WAN resource.

Once the resource is created, click Go to resource to navigate to your Virtual WAN resource.

On the Virtual WAN resource, select New Hub from the top menu.

Specify the name of the Hub and an address space that can be used for all the networking components Virtual WAN will deploy into the Virtual Hub. Click Next : Site to Site >

On the Site to Site tab, toggle Yes that you want to provision a VPN Gateway, and specify the scale units you need. Click the Review + create button when done.

Click the Create button to start provisioning the Hub and VPN Gateway. Please note this can take up to 30 minutes to complete.
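If you prefer to script this portion, here is a minimal Azure PowerShell sketch of the same steps, assuming the Az.Network module is installed; the resource group, names, region, and hub address space are placeholders you should adjust for your environment.

# Create the Virtual WAN resource (placeholder names/region)
$wan = New-AzVirtualWan -ResourceGroupName "vwan-rg" -Name "my-vwan" -Location "eastus"

# Create the Virtual WAN Hub with an address space for the components Virtual WAN deploys into it
$hub = New-AzVirtualHub -ResourceGroupName "vwan-rg" -Name "my-vwan-hub" `
    -VirtualWan $wan -AddressPrefix "10.50.0.0/24" -Location "eastus"

# Provision the Site-to-Site VPN Gateway in the hub (this can take ~30 minutes)
New-AzVpnGateway -ResourceGroupName "vwan-rg" -Name "my-vwan-hub-vpngw" `
    -VirtualHubName "my-vwan-hub" -VpnGatewayScaleUnit 1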

Once the Virtual WAN Hub has been created, click the Menu icon and select All services (note: if you click the Go to resource button after the Virtual WAN Hub resource is created, it'll take you to the properties of the Hub, which isn't where we want to be).

Search for Virtual WAN and select Virtual WANs.

Select your Virtual WAN resource.

Click on Hubs under Connectivity and select your Virtual WAN Hub.

Select VPN (Site to Site) under Connectivity and then click on the View/Configure link.

Set the Custom BGP IP addresses for each instance. Use the values below:

  • VPN Gateway Instance 0: 169.254.21.2
  • VPN Gateway Instance 1: 169.254.22.2

Click Edit once completed.

Configure GCP

Prerequisites

This guide assumes you have a VPC already (in my case, mine is called GCP-VPC with an address space of 10.60.0.0/16) and corresponding set of subnets for your servers.

Note: A GCP VPC is the equivalent of a VNet in Azure. One thing that is different between GCP and Azure is that in GCP you do not need to specify a subnet for your Gateways (i.e. “GatewaySubnet”).

Within the GCP Console, select Hybrid Connectivity -> VPN

Click Create VPN Connection

Select High-availability (HA) VPN and select Continue

Enter a name, select your VPC, and specify a region. Click Create & Continue.

Write down your Interface public IPs (we'll use these later) and check On-prem or Non Google Cloud for Peer VPN Gateway. Click Create & continue.

Select two interfaces and enter your Instance 0 and Instance 1 Public IP addresses from your Virtual WAN Hub's VPN Gateway. Click Create.

Click the dropdown for Cloud Router and select Create a new router

Enter a name and description for your router. For the ASN, enter a unique ASN to use (I used 64700 to differentiate from Azure as well as from the ASN I used in the AWS example, which was 64512). You can specify any supported ASN here; however, I would recommend against using 65515 specifically, as it is reserved by Azure's VPN Gateways.

Note: The Google ASN must be 16550, an integer between 64512 and 65534, or an integer between 4200000000 and 4294967294.

Click the pencil icon to modify the first VPN tunnel.

Select the instance 0 VPN Gateway interface, enter a name, set the IKE version to IKEv2, enter a pre-shared key, and click Done.

Repeat the same steps for the second VPN tunnel: select the instance 1 VPN Gateway interface, enter a name, set the IKE version to IKEv2, enter a pre-shared key, click Done, and then click Create & continue.

Click the Configure button for the first BGP session.

Enter a name for the first BGP peer connecting to instance 0 gateway on Virtual WAN.

Specify Peer ASN of 65515 (this is Azure VWAN's BGP ASN), specify 169.254.21.1 for Cloud Router BGP IP and 169.254.21.2 as BGP peer IP (Azure VWAN's BGP Peer IP). Click the Save and continue button.

Click the Configure button and enter a name for the second BGP peer, which connects to the instance 1 gateway on Virtual WAN.

Specify Peer ASN of 65515 (this is Azure VWAN's BGP ASN), specify 169.254.22.1 for Cloud Router BGP IP and 169.254.22.2 as BGP peer IP (Azure VWAN's BGP Peer IP).

Click the Save BGP configuration button.

Configure Azure Virtual WAN VPN Site

On the Virtual WAN hub, select VPN (Site to site) and click + Create new VPN site

Specify a name for the VPN connection, enter GCP for vendor, and click Next : Links >

Specify the following values to define each VPN tunnel that should be created to connect to GCP's VPN interfaces.

Note: I entered 1000 for the link speed as a placeholder, but that doesn't mean the connection will be throttled down to 1Gbps.

  • First Link:
    • Link Name: gcp-east1-vpc-vpn-int0
    • Link Speed: 1000
    • Link provider name: GCP
    • Link IP address: <GCP VPN Interface 0 Public IP>
    • Link ASN: 64700
  • Second Link:
    • Link Name: gcp-east1-vpc-vpn-int1
    • Link Speed: 1000
    • Link provider name: GCP
    • Link IP address: <GCP VPN Interface 1 Public IP>
    • Link ASN: 64700

Click Create
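For reference, the same VPN site and its links can be sketched with Azure PowerShell. The interface public IPs below are placeholders for the GCP HA VPN gateway interfaces you noted earlier, the Virtual WAN/resource group names assume the earlier sketch, and the BGP peering addresses correspond to the Cloud Router BGP IPs configured on the GCP side.

# Look up the existing Virtual WAN (placeholder names)
$wan = Get-AzVirtualWan -ResourceGroupName "vwan-rg" -Name "my-vwan"

# Define one VPN site link per GCP HA VPN gateway interface (placeholder public IPs)
$link0 = New-AzVpnSiteLink -Name "gcp-east1-vpc-vpn-int0" -IpAddress "203.0.113.10" `
    -LinkProviderName "GCP" -LinkSpeedInMbps 1000 -BgpAsn 64700 -BgpPeeringAddress "169.254.21.1"
$link1 = New-AzVpnSiteLink -Name "gcp-east1-vpc-vpn-int1" -IpAddress "203.0.113.11" `
    -LinkProviderName "GCP" -LinkSpeedInMbps 1000 -BgpAsn 64700 -BgpPeeringAddress "169.254.22.1"

# Create the VPN site that groups both links
New-AzVpnSite -ResourceGroupName "vwan-rg" -Name "gcp-east1-vpc" -Location "eastus" `
    -VirtualWan $wan -DeviceVendor "GCP" -VpnSiteLink @($link0, $link1)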

Configure Virtual WAN VPN Connection

Once the Virtual WAN Hub has been created, click the Menu icon and select All services.

Search for Virtual WAN and select Virtual WANs.

Select your Virtual WAN resource.

Click on Hubs under Connectivity and select your Virtual WAN Hub.

Select VPN (Site to Site) under Connectivity and then click on the X to remove the Hub association filter.

Check the box for your VPN site and click Connect VPN sites

Specify the following information:

  • Pre-shared key (PSK): <use the same one you specified in GCP>
  • Protocol: IKEv2
  • IPsec: Custom
    • SA Lifetime in seconds: 36000
    • Phase 1 (IKE):
      • Encryption: GCMAES256
      • Integrity/PRF: SHA384
      • DH Group: ECP256
    • Phase 2 (IPSec):
      • Encryption: GCMAES256
      • IPSec Integrity: GCMAES256
      • PFS Group: ECP384
  • Propagate Default Route: Disable
  • Use policy based traffic selector: Disable

Click Connect.
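If you ever script this step, the same custom proposal can be expressed with New-AzIpsecPolicy in Azure PowerShell. This is a sketch only: the SA data size value is simply a commonly used default (the portal doesn't ask for it), and the resulting object can then be supplied to the Virtual WAN VPN connection cmdlets, which accept a custom IPsec policy.

# Custom IKE (phase 1) / IPsec (phase 2) proposal matching the portal values above
$ipsecPolicy = New-AzIpsecPolicy -SALifeTimeSeconds 36000 -SADataSizeKilobytes 102400000 `
    -IkeEncryption GCMAES256 -IkeIntegrity SHA384 -DhGroup ECP256 `
    -IpsecEncryption GCMAES256 -IpsecIntegrity GCMAES256 -PfsGroup ECP384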

Note: Refer to GCP's Supported IKE ciphers documentation for the full list of ciphers accepted on the GCP side.

Verify Connectivity

From the Azure Side, we will review three different areas to validate connectivity and propagation of routes via BGP.

Note: I connected a virtual network to the Virtual WAN Hub to show further configuration. In this case, you'll see an additional IP address space of 10.51.0.0/16, which defines my connected VNet.

On the Azure Side, you should see the VPN Site’s Connectivity status change to Connected on the VPN (Site to site) blade of your Virtual WAN hub.

On the Routing blade, the Effective Routes will show you the learned VPC address space from GCP (10.60.0.0/16)

On a virtual machine in a connected VNet to the Virtual WAN Hub, you can pull the Effective Routes. Here I see the 10.60.0.0/16 route learned from both Instance 0 and Instance 1 gateways from the Virtual WAN Hub.
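If you prefer PowerShell over the portal for this check, a quick sketch follows; the NIC name and resource group are placeholders for the VM in your connected VNet.

# List effective routes on the VM's NIC - the GCP prefix (10.60.0.0/16) should appear with a Virtual WAN next hop
Get-AzEffectiveRouteTable -NetworkInterfaceName "my-vm-nic" -ResourceGroupName "spoke-rg" |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress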

From the GCP side, we can see the VPN tunnel status as well as the BGP session status now showing Established (green) in the Hybrid Connectivity -> VPN -> Cloud VPN Tunnels section.

If we switch over to Hybrid Connectivity -> Cloud Routers and select View in the Logs column, we can review the BGP session details for the Cloud Router.

Further, if you create a VM (instance) in GCP, you can view its Firewall and Route details to confirm you see the learned routes from the gateway (in our case, 10.51.0.0/16 and 10.50.0.0/24 learned from both BGP peers):

Huzzah! Traffic! 🙂

Establishing an AWS VPN Tunnel to Azure Virtual WAN; Active/Active BGP Configuration

This is a quick reflection of the steps I took to establish two IPSec tunnels between AWS' Virtual Private Gateway and Azure's Virtual WAN VPN Gateway, propagating routes dynamically via BGP and ensuring high availability. The design itself is a bit interesting since AWS and Azure differ on how connections are established to remote peers. When everything is said and done, you'll end up with a diagram that conceptually looks something like this:

Note: It is recommended to start with the Virtual WAN side first, since you cannot modify the IP address of a Customer Gateway in AWS.

Create Azure Virtual WAN and Virtual WAN Hub

On the Azure side, first we need to create a Virtual WAN resource and a Virtual WAN Hub, which will contain our VPN Gateway. If you have already created these, you can skip to the next section.

First, click the "Hamburger" icon and select Create a resource

Search for Virtual WAN and select it from the list in the marketplace.

Select Create

Specify the resource group and region you wish to deploy the Virtual WAN resource to. Specify a name for your Virtual WAN resource and click Review + Create

Click Create to start provisioning the Virtual WAN resource.

Once the resource is created, click Go to resource to navigate to your Virtual WAN resource.

On the Virtual WAN resource, select New Hub from the top menu.

Specify the name of the Hub and an address space that can be used for all the networking components Virtual WAN will deploy into the Virtual Hub. Click Next : Site to Site >

On the Site to Site tab, toggle Yes that you want to provision a VPN Gateway, and specify the scale units you need. Click the Review + create button when done.

Click the Create button to start provisioning the Hub and VPN Gateway. Please note this can take up to 30 minutes to complete.

Configure custom BGP IP addresses for Virtual WAN VPN Gateway Instances

Once provisioning is completed, navigate back to the Virtual WAN resource. You can do this by clicking the "Hamburger" icon and searching for Virtual WAN

Select your Virtual WAN resource.

You should now see your Virtual WAN Hub resource you provisioned. Select the Virtual WAN Hub.

On the Virtual WAN Hub, click on the View/Configure link.

On the View/Configure Gateway Configuration blade, specify 169.254.21.2 as the Custom BGP IP address for Instance 0 and 169.254.22.2 as the Custom BGP IP address for Instance 1. Note the Public IP addresses used for Instance 0 and Instance 1, and then click Edit and Confirm to apply the changes.

Create Virtual WAN VPN Site

On the Virtual WAN Hub, click Create new VPN Site

Specify a name for your VPN Site to define the connection to AWS. Click Next : Links >

On the Links tab, add two entries with the following values (to tell VWAN how to connect to each of the AWS Site-to-Site connections). Note: this is very similar to AWS' Customer Gateway section.

Link 1:

  • Link Name: AWS_Tunnel1
  • Link Speed: 1000
  • Link Provider Name: AWS
  • Link IP address: 1.1.1.1 (this is a placeholder value until we configure the AWS side)
  • Link BGP address: 169.254.21.1
  • Link ASN: 64512

Link 2:

  • Link Name: AWS_Tunnel2
  • Link Speed: 1000
  • Link Provider Name: AWS
  • Link IP address: 1.1.1.2 (this is a placeholder value until we configure the AWS side)
  • Link BGP address: 169.254.22.1
  • Link ASN: 64512

Click Next: Review + Create >

Click Create

Click Go to resource once the links have finished being created.

Configure Phase 1/2 Proposals

Select your Virtual WAN hub on the Virtual WAN Overview blade.

Check the box for the new VPN Site Name and click the Connect VPN sites button

Specify the following configuration:

  • Pre-shared key (PSK): YourSecretKeyWithNumb3rs
    • Must be an 8-64 character string containing alphanumeric characters, underscores (_), and dots (.). It cannot start with 0.
  • Protocol: IKEv2
  • IPSec: Custom
  • IKE Phase 1:
    • Encryption: GCMAES256
      • GCM algorithm is more efficient and can improve throughput on the Azure Gateways
    • Integrity/PRF: SHA256
    • DH Group: DHGroup14
  • IKE Phase 2 (IPsec):
    • IPSec Encryption: AES256
      • AWS does not support the GCM algorithm for IPsec at the time of writing; if it becomes available, you may want to opt for it
    • IPSec Integrity: SHA256
    • PFS Group: PFS14

Click Connect

Configure AWS

Prerequisites

This guide assumes you have a VPC already (in my case, mine is called AWS-OHIO-VPC), a corresponding set of subnets for your servers, and a route table associated to your VPC.

Note: An AWS VPC is the equivalent of a VNet in Azure. One thing that is different between AWS and Azure is that in AWS you do not need to specify a subnet for your Gateways (i.e. "GatewaySubnet").

Create the Customer Gateways

Customer Gateways in AWS are the equivalent of a local network gateway that you'd associate to a connection for a traditional VPN Gateway in Azure. It is also the equivalent of a defined Site Link for Azure's Virtual WAN VPN configuration.

In this section, you will need to create two Customer Gateways, one per VPN Gateway instance. For each, specify the corresponding instance Public IP address obtained from the Configure custom BGP IP addresses section. When creating the Customer Gateways, ensure Dynamic routing is enabled and the BGP ASN is specified as 65515.

Repeat the configuration for the second Customer Gateway using the Instance 1 gateway Public IP address.

Create a Virtual Private Gateway

Next we need to create an AWS Virtual Private Gateway. This is the equivalent of Azure's VPN Gateway.

Create VPN Connections

We need to create two VPN Connections, each VPN Connection linked to its corresponding Customer Gateway and VPC.

For the first VPN Connection, use 169.254.21.0/30 as the Inside IPv4 CIDR for Tunnel 1 (this defines the BGP peer addresses: AWS takes 169.254.21.1 and Azure's Instance 0 uses 169.254.21.2, matching the custom BGP IP we set earlier) and 169.254.21.4/30 for the second tunnel. The second tunnel's CIDR is effectively a placeholder that will never be used, since we cannot point it at Azure's secondary VPN Gateway instance, but a value must still be supplied. If we instead specified the BGP peer address that belongs to the secondary Virtual WAN instance, AWS would return an error that the address space overlaps with the second VPN Connection we create. Add the pre-shared key value you specified in Azure at this time as well.

When creating the second VPN connection, ensure 169.254.22.0/30 is specified for Inside IPv4 CIDR for Tunnel 1 and 169.254.22.4/30 is specified for Inside IPv4 CIDR for Tunnel 2 (which is again a placeholder value that won't be used).

Configure Route Table to Propagate Routes

To allow the routes learned via BGP to propagate to the VPC, you need to enable route propagation on your Route Table.

Navigate to Route Tables and select your Route Table and click the Route Propagation tab and select Edit route propagation

Check the Propagate box and click Save

Update Azure

Update Azure Site Link IP addresses

In the Create Virtual WAN VPN Site section, you specified 1.1.1.1 and 1.1.1.2 as placeholder values for the public IP addresses of the AWS VPN tunnel endpoints. We now need to update these with the proper values.

Navigate to your Virtual WAN instance and select your Virtual WAN hub.

Select VPN (Site to site) and click on the Site name you created.

Click on the three dots (ellipsis) for AWS_Tunnel1 and click Edit Link.

Specify the proper IP address for Tunnel 1 on AWS Site-to-Site connection 1. Click Confirm.

Click on the three dots (ellipsis) for AWS_Tunnel2 and click Edit Link.

Specify the proper IP address for Tunnel 1 on AWS Site-to-Site connection 2. Click Confirm.
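If you'd rather update both placeholder IPs in one pass from PowerShell, a sketch along these lines should work. The tunnel outside IPs, resource group, and site name are placeholders, and the remaining link properties simply repeat what we set earlier in the guide.

# Rebuild both site links with the real AWS tunnel outside IP addresses (placeholder IPs shown)
$link1 = New-AzVpnSiteLink -Name "AWS_Tunnel1" -IpAddress "203.0.113.20" `
    -LinkProviderName "AWS" -LinkSpeedInMbps 1000 -BgpAsn 64512 -BgpPeeringAddress "169.254.21.1"
$link2 = New-AzVpnSiteLink -Name "AWS_Tunnel2" -IpAddress "203.0.113.21" `
    -LinkProviderName "AWS" -LinkSpeedInMbps 1000 -BgpAsn 64512 -BgpPeeringAddress "169.254.22.1"

# Push the updated links to the existing VPN site
Update-AzVpnSite -ResourceGroupName "vwan-rg" -Name "AWS-Ohio" -VpnSiteLink @($link1, $link2)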

Verify connectivity

On the Azure Side, you should see the VPN Site's Connectivity status change to Connected

You can also select a Virtual Machine whose virtual network is attached to the VWAN Hub and validate that you see the learned routes from the VWAN Hub (and AWS) propagated into the VNet.

Tip: You will see the same route twice because both VPN Gateway instances' BGP peers are actively connected to AWS. In the event you lose a peer, you would only see one route to one gateway listed.

On the AWS side, for each Site-to-Site VPN connection you can validate that Tunnel 1's status is UP and Tunnel 2's status is DOWN (remember, Tunnel 2 will always be listed as down because a fictitious BGP peer is specified).

Here you can see the secondary Site-to-Site connection with the same status: UP for Tunnel 1, DOWN for Tunnel 2

How to generate base64 encoded SSL certificates via PowerShell for Azure

Background

Many Azure services allow you to bring your own SSL certificate to the cloud. While Azure provides an easy way to create and deploy resources through ARM templates, specifying which SSL certificate to use is a little less trivial, since you can't simply hand the template an exported PEM or PFX file. Instead, Azure may expect the certificate in a base64 encoded format so that it can be passed into the template as a string (or list of characters).

Goal of this tutorial

This tutorial will walk through the commands needed to generate a self-signed certificate that is base64 encoded via PowerShell (Option 1) or base64 encode an existing PFX (Option 2), so that the certificate can be passed as a parameter into ARM templates in Azure.

Option 1: Generate and encode a self-signed certificate

# Generate a self-signed certificate
$selfSignedCert = New-SelfSignedCertificate -DnsName "*.azurewebsites.net" -NotAfter (Get-Date).AddYears(2)

# Export the self-signed certificate from the certificate store into PFX format
$pwd = ConvertTo-SecureString -String "1234" -Force -AsPlainText
Export-PfxCertificate -Cert $selfSignedCert.PSPath -FilePath "selfSignedCertificate.pfx" -Password $pwd

# Convert the certificate to base64 encoding
$pfxBytes = Get-Content "selfSignedCertificate.pfx" -Encoding Byte
[System.Convert]::ToBase64String($pfxBytes) | Out-File "selfSignedCertificate.txt"
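Note that Get-Content -Encoding Byte only exists in Windows PowerShell 5.1; in PowerShell 7+ that encoding value was removed, so read the raw bytes via .NET (or use -AsByteStream) instead:

# PowerShell 7+ alternative: read the PFX bytes directly via .NET
$pfxBytes = [System.IO.File]::ReadAllBytes((Resolve-Path "selfSignedCertificate.pfx").Path)
[System.Convert]::ToBase64String($pfxBytes) | Out-File "selfSignedCertificate.txt"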

Option 2: Encode from a pre-existing pfx file

# Convert the existing certificate to base64 encoding
$pfxBytes = Get-Content "selfSignedCertificate.pfx" -Encoding Byte
[System.Convert]::ToBase64String($pfxBytes) | Out-File "selfSignedCertificate.txt"

Result

At this point, if you open selfSignedCertificate.txt, you should see a long string comprised of letters, numbers, and a few symbols, which is the base64 version of your certificate. See the example below (the ellipses denote where I removed a large portion of the text; you won't see those in your file).

MIIKcQIBAzCCCi0GCSqGSIb3DQEHAaCCCh4EggoaMIIKFjCCBg8GCSqGSIb3DQEHAaCCBgAEggX8MIIF+DCCBfQGCyqGSIb3DQEMCgECoIIE/jCCBPowHAYKKoZIhvcNAQwBAzAOBAij81GovXchnAICB9AEggTYvVQbLThNVlLYiivGlD0uSASG3g6OaY9xF+c0BfZ1ZCHGKKQ3705CDkIy4.......jx9lSOAForjR+e1nNaBFfMGy+ONccoS0lnWvFIgggZG8RCZx2jQGMnPQdm4hPdmL3j2pUPMDswHzAHBgUrDgMCGgQUJpp3pnPr5/NXgyhYzi+rGzVkCJMEFBsqGkHSsFZaBXQ/bvR5DnhzgaekAgIH0A==

This text can now be used as-is within your templates (although, in general, never hard-code these values into your templates; they should be passed in as parameters).
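As a quick example of passing the encoded value in at deployment time, here is a sketch; the template file and its certificateData parameter are hypothetical names for illustration.

# Read the base64 string and hand it to the template as a parameter (never hard-code it in the template itself)
$certData = Get-Content "selfSignedCertificate.txt" -Raw
New-AzResourceGroupDeployment -ResourceGroupName "my-rg" -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ certificateData = $certData.Trim() }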

Cheat sheet on Azure Subnetting

Here's a quick cheat sheet on recommended subnet sizing for Azure. Items in bold are subnet names reserved by the platform for their corresponding service.

GatewaySubnet - /27 - https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsub

Point-to-Site (P2S) addressing (VPN or VWAN) - Requires a non-vnet address space – depends on how many P2S clients - https://docs.microsoft.com/en-us/azure/vpn-gateway/point-to-site-about#gwsku

AzureBastionSubnet - /26 (as of Nov, 2021; previously was /27) - https://docs.microsoft.com/en-us/azure/bastion/bastion-create-host-portal#createhost

Azure Virtual WAN Hub - /24 - https://docs.microsoft.com/en-us/azure/virtual-wan/virtual-wan-site-to-site-portal#hub

AzureFirewallSubnet - /26 - https://docs.microsoft.com/en-us/azure/firewall/tutorial-firewall-deploy-portal#create-a-vnet

AzureFirewallManagementSubnet - /26 - Azure Firewall forced tunneling | Microsoft Docs

RouteServerSubnet - /27 - Quickstart: Create and configure Route Server using Azure PowerShell | Microsoft Docs

Application Gateway - min /27 per deployment - https://docs.microsoft.com/en-us/azure/application-gateway/configuration-overview#size-of-the-subnet

Azure AD Domain Services (AADDS) - min /28 - Network planning and connections for Azure AD Domain Services | Microsoft Docs

Azure SQL Managed Instance (SQL MI) - min /27 - https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-determine-size-vnet-subnet

App Services (Web Apps, Functions, API Apps) - min /27 - https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet

App Service Environment - /24 - https://docs.microsoft.com/en-us/azure/app-service/environment/network-info

Logic Apps integration service - /27 - https://docs.microsoft.com/en-us/azure/logic-apps/connect-virtual-network-vnet-isolated-environment#set-up-network-ports

API Management – min /29 - https://docs.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet#--subnet-size-requirement

Azure Kubernetes Service (AKS) - depends on node count -  https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#plan-ip-addressing-for-your-cluster

Azure Container Instances (ACI) - /29 - https://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet

Azure Databricks - Requires 2 subnets (Public/Private) – min of two /26 - https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html#virtual-network-requirements

Azure NetApp Files - /28 - https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-delegate-subnet

Azure Dedicated HSM - /28 - https://docs.microsoft.com/en-us/azure/dedicated-hsm/networking#subnets

Azure VMware Solutions - /22 - https://docs.microsoft.com/en-us/azure/azure-vmware/tutorial-network-checklist#routing-and-subnet-considerations

Azure Spring Cloud - /28 - Deploy Azure Spring Cloud in a virtual network | Microsoft Docs

Notes

Microsoft has added a list of services that can be injected into Virtual Networks as well here: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-for-azure-services#services-that-can-be-deployed-into-a-virtual-network
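As a quick example of carving out one of the reserved subnets above with PowerShell (the VNet name, resource group, and address range are placeholders):

# Add a GatewaySubnet (reserved name) sized /27 to an existing VNet
$vnet = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork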

[Tutorial] Using Azure Hybrid Connection Manager to reach resources on-premises without VPN Connections

One of the hidden gems of Azure is HCM (Hybrid Connection Manager), which gives Azure's App Services (Web Apps, API Apps, Functions) the ability to connect to resources hosted in other Azure environments, other clouds, or on-premises. In many cases, VPN or ExpressRoute connectivity is overkill, or simply not a possibility, for establishing connectivity to the requested service. The great thing about Hybrid Connections is that all traffic is egress TCP 443 to Azure over TLS 1.2, which satisfies the requirements of many secured environments and doesn't require any inbound ports to be opened into the environment.

There are two ways to leverage Hybrid Connections for App Services in Azure:

  1. Via WCF Hybrid Relays
  2. Via Hybrid Connections

For the purposes of this article, we are going to cover how to connect to a web service "on-premises" via the HCM agent. While we are using a Web App as an example, keep in mind that this concept can be applied to all App Services such as Web Apps, API Apps, Logic Apps, and Azure Functions. In addition, this article makes a call to a web service on-premises; however, keep in mind that HCM can connect to any TCP service such as MSSQL, MySQL, Oracle, web services, custom TCP services, mainframes, etc.

Tutorial

To begin, we will first deploy a Web App from the Azure Portal to give us access to the Hybrid Connection Manager blade. Note: You can leverage any App Service to create the hybrid connection manager instance, but you must be on a paid tier (Free tier will not work).

  1. Login to the Azure Portal (portal.azure.com)
  2. Select All services -> App Services -> click + Add
  3. Fill out the required information, ensuring you are on a plan greater than Free. Select Review + create and Create

Once deployed, navigate to your Web App, select Networking, and click on Configure your hybrid connection endpoints

On the Hybrid connections screen, click on Download connection manager.

Note: This is the agent you will need to install in the environment that contains the service you are trying to access. The agent itself can be deployed on any machine as long as the machine can access the service you are trying to reach.

Installation of the agent is very straightforward. Complete the steps below.

    1. Select HybridConnectionManager.msi
    2. Read the EULA, select I accept the terms in the License Agreement, and click Install
    3. Click Finish

Once installed, navigate back to the Azure Portal (portal.azure.com), click All services -> App Services -> Select your webapp, click Networking, select Configure your hybrid connection endpoints, and click Add hybrid connection.

Click Create new hybrid connection and enter the following:

  • Hybrid connection Name
    • MyService
  • Endpoint Host
    • IPAddress or DNSNameOfTheService
  • Endpoint Port
    • PortNumberofYourService
  • Service Bus namespace
    • Create new
  • Location
    • Pick the Azure region you want to use
  • Name
    • Enter a unique name for the Service Bus resource that will be created. This name must be globally unique across all of Azure and must consist only of lowercase letters, numbers, and hyphens.

Click OK once you have filled out the information above. Once Azure has created the connection, navigate back to the machine you installed the agent on. On the machine, click Start, HybridConnectionManager, and select Hybrid Connection Manager UI.

Once the agent has launched, select Add a new Hybrid Connection.

This will prompt you to enter your Azure credentials. Enter your credentials in the prompt.

Note: if the machine is locked down and cannot run JavaScript, you can close the sign-in window and select Enter Manually on the previous step. Back in the Azure Portal, you can select your connection and copy the "Gateway Connection String" to connect this agent to Azure.

Once you have authenticated click the Subscription dropdown to select your Azure Subscription, select the connection you created via the portal, and click Save.

Once saved, you should see the connection we created via the Azure Portal with the Azure Status of "Connected". If you don't see "Connected", double-check that you don't have a proxy blocking outbound TCP 443 requests to the Service Bus instance we created earlier (azurehcmdemo.servicebus.windows.net).
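A quick way to test that from the machine running the agent is something like the following (the namespace name here is the example one above; substitute your own):

# Verify outbound TCP 443 to the Service Bus relay namespace is not blocked
Test-NetConnection -ComputerName "azurehcmdemo.servicebus.windows.net" -Port 443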

Note: To help with resiliency, you can deploy multiple agents on different machines to ensure resiliency/availability/scalability. When you select the same connection endpoint, HCM will automatically begin to load balance traffic between the agents.

Once you see the agent connected on-premises, you can validate from the Azure Portal that the agent is connected as well. Via All services -> App Services -> your app service -> Networking -> Configure your hybrid connection endpoints, you should see "Connected" in the Status column on your Hybrid connections blade.

At this point, within your application, you should be able to reference the contents of the on-premises machine via the same connection string you may have used before. Below I've added an example showing an on-premises IIS server that displays the text "Moo" when you browse to its web page. In my Web App in Azure, I created a quick PHP script that requests the on-premises server: HCM on the App Service places the request on the Service Bus relay, the HCM agent on-premises pulls down the request and forwards it to the IIS server, the response is placed back on the relay, and the Azure Web App displays the result "Moo".

Hope this helps! If you have any questions or comments feel free to reach out below.

Helpful Links/Sources

Azure Friday Video showing an example of this: https://www.youtube.com/watch?v=y_zAJZC_8Yk

Azure documentation on Hybrid Connections: https://docs.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections

How to enable logging/debug HCM: https://blogs.msdn.microsoft.com/waws/2017/06/26/troubleshooting-hybrid-connections-with-logging/

Deploying FortiGate Virtual Appliances (FortiGate-VM) on Azure

Here is a recap of some of my reflections on deploying Fortinet's FortiGate appliance on Azure. This is more of a reflection of the steps I took than a guide, but you can use the information below as you see fit. At a high level, you will need to deploy the device on Azure and then configure the internal "guts" of the device to allow it to route traffic properly on your Virtual Network (VNet) in Azure. While Fortinet does have some documentation on deploying their appliance, I found it very confusing, so I hope this helps walk through deployment. At the time of writing, v6.2 was the latest version; however, I recommend using version 6.0 or greater, as it provides support for auto-scaling, which is what we will be looking at in this guide.

First, just want to provide a quick overview of the different options you can take and a rough overview of each architecture:

Deploy the Appliance in Azure

As part of this tutorial, we will look at FortiGate's Autoscaling deployment as this will allow us to dynamically scale up or down depending on load. In addition, this deployment will provide us high availability, so in the event we lose a VM, network traffic will automatically failover to another appliance.

Architecture

A high level overview of what resources are deployed

Deployment

  1. Login to the Azure Portal
    1. https://portal.azure.com
  2. Create two new Resource Groups
    1. Navigate to All services -> Resource Groups
    2. Click Add 
    3. Create two new resource groups with the following names (they can be different if you wish, but you will need at least 2)
      1. Fortigate-Handler-RG
      2. Fortigate-VMSS-RG
  3. Create a Service Principal (steps 3 and 4 can also be scripted; see the PowerShell sketch after this list)
    1. Navigate to All services -> Azure Active Directory
    2. Select App registrations
    3. Click New Registration
      1. Name: Fortigate-NVA
      2. Supported account types: Accounts in this organizational directory only
      3. Redirect URI: leave blank
    4. Click Register
    5. Write down the Application (client) ID, Directory (tenant) ID, and Object ID.
    6. Click on Certificates & secrets
    7. Click on the New client secret button and set the description to Fortigate-NVA, set the password expiry to your preference and click Add
    8. Write down the value of your client secret
      1. Note: once you navigate away from the blade you won't be able to retrieve it again
  4. Delegate the Service Principal
    1. Navigate to All services -> Subscriptions -> select your subscription -> and select Access control (IAM)
    2. Click Add, Add role assignment, and use the following configuration
      1. Role: Owner
      2. Assign access to: Azure AD user, group, or service principal
      3. Select: Search for Fortigate-NVA and select it
    3. Click Save

      1. Note: I didn't have a chance to test, but I think these permissions could likely be delegated down at the resource group level vs subscription.  If someone could confirm, please leave a comment below.
  5. Deploy the Fortigate Handler (CosmosDB and Function App)
    1. Once you click the button above to deploy the template, use the following configuration
      1. Function App Name
        1. This is the name of the Azure Function resource that gets created.  This must be globally unique across all customers within Azure.
      2. Cosmos DB Name
        1. Name of the Cosmos DB that will be created. This field must be between 3 and 31 characters and can contain only lowercase letters, numbers and -.  This value should be globally unique across all customers within Azure.
      3. Storage Account Type: Standard_LRS
      4. Tenant ID
        1. Use the Directory (tenant) ID from the Service Principal we created earlier.
      5. Subscription ID
        1. Enter the subscription ID to the Azure Subscription you wish to deploy to.  You can find your subscription ID by navigating to All services -> Subscriptions and selecting your subscription.
      6. Rest App ID
        1. Use the Application (client) ID from the Service Principal we created earlier.
      7. Rest App Secret: iW8gS...........................pMX
        1. Use the value you wrote down when generating the Client Secret when creating the Service Principal.
      8. Heart Beat Loss Count: 3
        1. Number of consecutively lost heartbeats. When the heartbeat loss count has been reached, the VM is deemed unhealthy and failover activities commence.
      9. Scaling Group Resource Group Name: Fortigate-VMSS-RG
        1. This is the second Resource Group you created at the beginning of this guide. This Resource Group will contain the VM Scale Set and its corresponding resources.
      10. Script Timeout: 230
        1. This is the timeout for the Function App script to run.  By default this is 230 seconds.
      11. Election Wait Time: 90
        1. This is the maximum time (in seconds) to wait for a master election between the FortiGates to complete.
      12. PSK Secret: mysupersecretpassphrase
        1. This is a random string of characters used by the FortiGates in the scale set to synchronize configuration items.
      13. Package Res URL: https://github.com/fortinet/fortigate-autoscale/releases/download/1.0.3/fortigate-autoscale-azure-funcapp.zip
        1. Grab the latest version of the package for the Azure Function App from GitHub.  You can find the latest compiled versions here: https://github.com/fortinet/fortigate-autoscale/releases
  6. Deploy the VM Scale Set
    1. Once you click the button above to deploy the template, use the following configuration
      1. Instance Type: Standard_F2
      2. FOS Version: 6.2.1
      3. VNet New Or Existing: new
        1. Select whether you wish to use an existing or new Virtual Network
      4. VNet Name: AzureHubVNet
        1. The name of the VNet to be used or created.
      5. Subnet Address Prefix: 10.0.0.0/16
        1. The address space of the VNet to be used or created.
      6. Subnet1Name: Untrust
        1. The name of the subnet that will be public facing to the internet.
      7. Subnet1Prefix: 10.0.1.0/24
        1. The address space of the subnet to be created for the public facing zone.
      8. Subnet2Name: Trust
        1. The name of the subnet that will contain the private NICs of the FortiGates.
      9. Subnet2Prefix: 10.0.2.0/24
        1. The address space of the subnet to be created for the private facing zone.
      10. Subnet2Load Balancer IP: 10.0.2.10
        1. The IP address of the load balancer in the private zone.
      11. Subnet3Name: Private
        1. The name of the subnet that will contain the private machines that are behind the FortiGate appliance.
      12. Subnet3Prefix: 10.0.3.0/24
        1. The address space of the subnet that will contain the private machines behind the FortiGate. Note: this is more of a placeholder in FortiGate's template; you can create additional subnets later on or use a different subnet for your private resources.
      13. Public IP New or Existing: new
        1. The Public IP address to be associated as the VIP of the Azure Load Balancer for incoming traffic.
      14. Scaling Group Name Prefix: fgtasg
        1. The prefix each VMSS Name is given when deploying the FortiGate autoscale template. The value of this parameter should be the same as for deploy_funcapp.json. The prefix cannot contain special characters \/""[]:|<>+=;,?*@& or begin with '_' or end with '.' or '-'.
      15. Initial Capacity: 2
        1. How many FortiGates should be deployed initially. The default value is 1; however, I recommend at least 2 for high availability.
      16. Min Capacity: 2
        1. The smallest number of FortiGates that should be running. The default value is 1; however, I recommend at least 2 for high availability.
      17. Max Capacity: 3
        1. The maximum number of FortiGates that should be deployed.
      18. Scale Out Threshold: 80
        1. Percentage of CPU utilization at which scale-out should occur.
      19. Scale In Threshold: 20
        1. Percentage of CPU utilization at which scale-in should occur.
      20. Admin Username: azureadmin
        1. FortiGate administrator username on all VMs.
      21. Admin Password: azurepassword
        1. FortiGate administrator password on all VMs. This field must be between 11 and 26 characters and must include at least one uppercase letter, one lowercase letter, one digit, and one special character such as (! @ # $ %).
      22. Endpoint URL: https://yourfunctionappurl.azurewebsites.net
        1. This can be found by navigating to All services -> Function App -> YourFunctionApp -> URL on the overview blade.
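As referenced in steps 3 and 4 above, the service principal creation and role delegation can also be scripted. This is a rough Az PowerShell sketch only; the exact parameters and output properties of the Azure AD cmdlets vary between Az module versions, so treat it as a starting point rather than a drop-in replacement for the portal steps.

# Register the app and create its service principal
$app = New-AzADApplication -DisplayName "Fortigate-NVA"
$sp  = New-AzADServicePrincipal -ApplicationId $app.AppId

# Create a client secret for the app (record the secret value now; it is not retrievable later)
$secret = New-AzADAppCredential -ApplicationId $app.AppId -EndDate (Get-Date).AddYears(1)

# Delegate the service principal at the subscription scope (could likely be scoped to the resource groups instead)
$subId = (Get-AzContext).Subscription.Id
New-AzRoleAssignment -ApplicationId $app.AppId -RoleDefinitionName "Owner" -Scope "/subscriptions/$subId"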

At this point, your FortiGate deployment should be completed. When a FortiGate appliance comes up, it will reach out to the Azure Function to pull down its base configuration. Any changes to the primary FortiGate will be synchronized to any additional FortiGates deployed as well.

For those using a hub/spoke network, you will want to associate a UDR with each of your subnets to force traffic back to the internal load balancer's VIP. You can do this by creating a new Route Table, adding a Route, setting the next hop type to Virtual Appliance, and setting the IP address to the value you specified for "Subnet2Load Balancer IP".
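Here's a sketch of that route table in PowerShell, assuming the template defaults above (internal load balancer VIP 10.0.2.10, VNet AzureHubVNet, and the Private subnet); the route table name, resource group, and region are placeholders.

# Route table that sends all traffic to the FortiGate internal load balancer VIP
$rt = New-AzRouteTable -Name "to-fortigate-rt" -ResourceGroupName "Fortigate-VMSS-RG" -Location "eastus"
Add-AzRouteConfig -RouteTable $rt -Name "default-via-fgt" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.10" | Set-AzRouteTable

# Associate the route table with the protected subnet
$vnet = Get-AzVirtualNetwork -Name "AzureHubVNet" -ResourceGroupName "Fortigate-VMSS-RG"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Private" -AddressPrefix "10.0.3.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork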

You can connect to the primary FortiGate for management via web console on Port 8443 (https://IP.AD.DR.ESS:8443) or via SSH on Port 22.

References

https://docs.fortinet.com/vm/azure/fortigate/6.2/azure-cookbook/6.2.0/128029/about-fortigate-vm-for-azure

Deploying Cisco Virtual Appliances (NGFWv) on Azure

Here is a recap of some of my reflections on deploying Cisco NGFWv (Next Generation Firewall Virtual) on Azure. This is more of a reflection of the steps I took than a guide, but you can use the information below as you see fit. At a high level, you will need to deploy the device on Azure and then configure the internal "guts" of the Cisco device to allow it to route traffic properly on your Virtual Network (VNet) in Azure. While Cisco does have decent documentation on deploying a single appliance, the primary purpose of this document is to look at HA/scale-out deployments.

First, just want to provide a quick overview of some of Cisco's offerings today for Azure:

  • Cisco CSR
  • Cisco Meraki
    • In Cisco’s words:
      • Virtual MX is a virtual instance of a Meraki security & SD-WAN appliance, dedicated specifically to providing the simple configuration benefits of site-to-site Auto VPN for customers running or migrating IT services to an Amazon Web Services or Microsoft Azure Virtual Private Cloud (VPC).
    • Source: https://meraki.cisco.com/products/appliances/vmx100
  • Cisco ASAv
    • In Cisco's words:
      • The ASAv is a virtualized network security solution that provides policy enforcement and threat inspection across heterogeneous, multisite environments.
      • ASA firewall and VPN capabilities help safeguard traffic and multitenant architectures. Available in most hypervisor environments, the Cisco ASAv can be deployed exactly where it is needed to protect users and workloads on-premises or in the cloud.
    • Source: https://github.com/cisco/asav
  • Cisco Firepower NGFW (Threat Defense Virtual)
    • In Cisco's words:
      • The Cisco Firepower® NGFW (next-generation firewall) is the industry’s first fully integrated, threat-focused next-gen firewall with unified management. It uniquely provides advanced threat protection before, during, and after attacks.
      • The Firepower Threat Defense Virtual (FTDv) is the virtualized component of the Cisco NGFW solution. Organizations employing SDN can rapidly provision and orchestrate flexible network protection with Firepower NGFWv. As well, organizations using NFV can further lower costs utilizing Firepower FTDv.
    • Source: https://github.com/cisco/firepower-ngfw

Deploy the Appliance in Azure

In deploying the Cisco appliances, you'll notice you can deploy from the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-threat-defense-appliance?tab=Overview. Personally, I'm not a big fan of deploying the appliance this way, as I don't have as much control over naming conventions, don't have the ability to deploy more than one appliance for scale, cannot specify my availability set, etc. While Cisco does offer an ARM template, it doesn't allow flexibility for more than two devices, nor does it configure anything from a load balancer perspective. In this case, I've written a custom ARM template that leverages managed disks, availability sets, a consistent naming nomenclature, proper VM sizing, and, most importantly, lets you define how many virtual instances you'd like to deploy for scaling.

Note: this article doesn't cover deployment of Cisco's Firepower Management Center, which is what is used to centrally manage each of the scale-out instances in a "single pane of glass".

With the above said, this article will cover what Cisco calls their "scalable design" model. Here is an example of what this visually looks like (taken from one of their slide decks listed in the notes section at the bottom of this article):

Scalable design model as per Cisco's Reference Architecture

Below is a link to the ARM template I use.

Cisco-NGFWv-HA.json

Deployment of this template can be done by navigating to the Azure Portal (portal.azure.com), select Create a resource, type Template Deployment in the Azure Marketplace, click Create,  select Build your own template in the editor, and paste the code into the editor.
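If you'd rather skip the portal's template editor, the same template can be deployed from PowerShell; the resource group name and local file path below are placeholders, and any required template parameters will be prompted for.

# Deploy the custom NGFWv template into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName "cisco-ngfw-rg" -TemplateFile ".\Cisco-NGFWv-HA.json"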

Alternatively, you can click this button here:

Here are some notes on what the parameters mean in the template:

VMsize: Per Cisco, the recommended VM sizes are D3v2, D4v2, or D5v2. Interestingly, they don't call out the use of Premium storage anywhere, which I would highly recommend using if this were a single-instance machine (to get at least some sort of SLA from Azure).

CiscoSku: Here is where you can select bring-your-own-license or pay-as-you-go. Plans should be outlined in the following link, but oddly enough the BYOL image is only available via PowerShell and their plans don't show it: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-threat-defense-appliance?tab=PlansAndPrice

CiscoVersion: The version of the Cisco appliance to deploy.

CiscoCount: This defines how many virtual instances you want deployed and placed behind load balancers.

VNetName: The name of your virtual network you have created.

VNetRG: The name of the resource group your virtual network is in. This may be the same as the Resource Group you are placing the Cisco devices in, but it needs to be configurable to avoid errors when referencing a VNet in a different resource group.

envPrefix: All of the resources that get created (load balancer, virtual machines, public IPs, NICs, etc.) will use this naming nomenclature.

manPrivateIPPrefix, diagPrivateIPPrefix, trustPrivateIPPrefix, untrustPrivateIPPrefix: Corresponding subnet address range.  These should be the first 3 octets of the range followed by a period.  For example, 10.5.6. would be a valid value.

manPrivateIPFirst, diagPrivateIPFirst, trustPrivateIPFirst, untrustPrivateIPFirst: The first usable IP address on the subnet specified.  For example, if my subnet is 10.4.255.0/24, I would need to specify 4 as my first usable address.

Username: this is the name of the privileged account used to SSH and log in to the appliance's web portal.

NewStorageAccountName: this is the name of the storage account that will store boot diagnostics for the Cisco appliances. This will give you the ability to see what the serial console shows. This value should be alphanumeric and 3-24 characters.

Password: Password for the privileged account used to SSH and log in to the device.

Configure the Appliance

Complete these steps for both devices.

  1. SSH to the device via the public or private IP address of its management interface
    1. Please note, SSH may not come up for another 10+ minutes after deployment has finished, even though the VMs show as running. There are several post-provisioning tasks within the Cisco appliance that take a while to complete before SSH becomes available.
  2. Login using the following credentials
    1. Note: Even though we specified credentials within our template, Cisco has a default set of admin credentials "baked" into the image, and these should be used during first login (which immediately prompts you to change the password). Please log in using the default admin credentials.
    2. Username: admin
    3. Password: Admin123
      1. The password is case sensitive; use a capital A in Admin123.
  3. Change your password once prompted
  4. Enter y to configure IPv4
  5. Enter n to not configure IPv6
    1. As of 6/1/2019, Azure only has preview support for IPv6, so this article won't cover any IPv6 specific items
  6. Enter dhcp to configure IPv4 with DHCP
    1. All addresses in Azure should use DHCP; static addresses are set within Azure itself, which essentially gives the appliance a DHCP reservation
    2. Important Note: Once you configure this option, you'll get an awkward "If your networking information has changed, you will need to reconnect" message and things will appear to be stuck. Be patient; a script runs in the background and it will eventually prompt you with the next question.
  7. Leave your SSH connection open for the next step

Configure NGFWv to use FirePower Management Center

Once you have gone through the initial configuration on both devices, you will need to register the sensor with a Firepower Management Center instance. To do this, run the configure manager command on both appliances. Please note I've listed the command below with the parameters it accepts; you will need to use the applicable values for your environment.

configure manager add {hostname | IPv4_address | IPv6_address | DONTRESOLVE} reg_key [nat_id]

Per Cisco's documentation:

  • The registration key is a user-defined one-time use key that must not exceed 37 characters. Valid characters include alphanumeric characters (A–Z, a–z, 0–9) and the hyphen (-). You will need to remember this registration key when you add the device to the Firepower Management Center.
  • If the Firepower Management Center is not directly addressable, use DONTRESOLVE.
  • The NAT ID is an optional user-defined alphanumeric string that follows the same conventions as the registration key described above. It is required if the hostname is set to DONTRESOLVE. You will need to remember this NAT ID when you add the device to the Firepower Management Center.

Add the appliances into FirePower Management Center

Repeat the following steps for each of the appliances you deployed

  1. Login to FirePower Management Center
  2. Select the Devices tab, click Device Management, and then click the Add button
  3. Enter the following
    1. Host: ManagementIP
    2. Device Name: FriendlyDeviceNameOrHostName
    3. Registration Key: KeyYouUsedWhenRunningConfigureManagerCommandAbove
    4. Access Control Policy
      1. Specify a Name, select Network Discovery
    5. Smart Licensing
      1. Check the features you are licensed for
        1. Malware
        2. Threat
        3. URL Filtering
    6. If you used NAT, configure NAT and specify the NAT ID
    7. Click Register

Initialize the interfaces on your appliances

Repeat the following steps for each of the appliances you deployed

  1. Select the Devices tab, click Device Management, and select the edit button (Pencil Icon) for your appliance
  2. Click the edit button (Pencil Icon) for GigabitEthernet 0/0
    1. Name: Untrust
    2. Check the Enabled checkbox
    3. Security Zone: Create a new zone called Untrusted
    4. Click the IPv4 tab
      1. IP Type: Use Static
      2. IP Address: IPAddressOfYourAppliance/SubnetSize
    5. Click OK
  3. Click the edit button (Pencil Icon) for GigabitEthernet 0/1
    1. Name: Trust
    2. Check the Enabled checkbox
    3. Security Zone: Create a new zone called Trusted
    4. Click the IPv4 tab
      1. IP Type: Use Static
      2. IP Address: IPAddressOfYourAppliance/SubnetSize
    5. Click OK
  4. Click the Save button

Once you have completed the steps above, click Deploy, select each of your appliances, and click Deploy to push the configuration to the device

Configure static routes on your device

In this section, we will create several routes to handle traffic to/from your trusted subnets, traffic destined for the internet, traffic destined for the management interface (we'll need this to handle the health probes from the Azure load balancer later on), and a specific route to define the Azure health probes themselves.

Repeat the following steps for each of the appliances you deployed.

  1. Select the Devices tab, click Device Management, and select the edit button (Pencil Icon) for your appliance
  2. Select the Routing tab and click Static Route
  3. Click the Add Route button
    1. Type: IPv4
    2. Interface: Trust
    3. Create new network objects
      1. Add network objects that represent each of the subnets you have in Azure that the device will need to return traffic to
        1. For example, you'd repeat these steps for each private subnet
          1. Name: DBServers
          2. Network: 10.3.5.0/24
          3. Click Save
      2. Add network object for the appliance's management interface
        1. Name: YourAppliance-mgmt
        2. Network: IPAddressOfManagementInterface
          1. Use the private IP of your management interface
        3. Click Save
      3. Add network object for Azure Health Probes
        1. Name: Azure-LB-Probe
        2. Network: 168.63.129.16
        3. Click Save
    4. Add the defined network objects above to Selected Network box
    5. Gateway: Use the IP address of the default gateway of your subnet the Trust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com), select All Services -> Virtual Networks -> Your Virtual Network -> Subnets, and use the first IP address of the subnet the trusted interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet were 10.5.15.128/25, I would use 10.5.15.129 as my IP address
    6. Metric: 3
    7. Click OK
  4. Click the Add Route button
    1. Type: IPv4
    2. Interface: Untrust
    3. Add the any-ipv4 object to Selected Network box
      1. This will allow us to force all internet bound traffic through our Untrust interface
    4. Add the Azure-LB-Probe object to the Selected Network box
      1. This will allow health probes from the external azure load balancer probes to flow properly
    5. Add the YourAppliance-mgmt object to the Selected Network box
    6. Gateway: Use the IP address of the default gateway of your subnet the Untrust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com), select All Services -> Virtual Networks -> Your Virtual Network -> Subnets, and use the first IP address of the subnet the untrusted interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet were 10.5.15.128/25, I would use 10.5.15.129 as my IP address
    7. Metric: 2
    8. Click OK
  5. Click the Save button

Once you have completed the steps above, click Deploy, select each of your appliances, and click Deploy to push the configuration to the device

Configure NAT Policies

First create a NAT rule that will SNAT any traffic from our trusted zone to the Untrust interface. This is needed so Azure understands to return traffic through the external interface of your device for inspection.

  1. Select the Devices tab, click NAT, and select the Threat Defense NAT Policy link (or New Policy button)
  2. Select your first appliance, click the Add to Policy button, and click Save
  3. Click the Add Rule button
    1. NAT Rule: Auto NAT Rule
    2. Type: Dynamic
    3. Interface Objects Tab
      1. Select the Trusted Interface Object and click the Add to Source button
      2. Select the Untrusted Interface object and click the Add to Destination button
    4. Translation Tab
      1. Click the green button to add a new network object under Original Packet
        1. Name: any-ipv4
        2. Network: 0.0.0.0/0
        3. Click Save
      2. Original Source: any-ipv4
      3. Translated Source: Destination Interface IP
    5. Click OK

Next, we need to create a new NAT statement to handle traffic for our load balancer probes. We will need to configure two statements since we will receive health probes from the same IP address (168.63.129.16) to both NICs. On the same appliance, continue the following steps.

  1. Click the Add Rule button
    1. NAT Rule: Manual NAT Rule
    2. Type: Static
    3. Interface Objects Tab
      1. Select the Trusted Interface Object and click the Add to Source button
      2. Select the Untrusted Interface object and click the Add to Destination button
    4. Translation Tab
      1. Original Source: Azure-LB-Probe
      2. Original Destination: Source Interface IP
      3. Original Destination Port: SSH
      4. Translated Source: Destination Interface IP
      5. Translated Destination: YourAppliance-mgmt
      6. Translated Destination Port: SSH
    5. Click OK
  2. Click the Add Rule button
    1. NAT Rule: Manual NAT Rule
    2. Type: Static
    3. Interface Objects Tab
      1. Select the Untrusted Interface Object and click the Add to Source button
      2. Select the Trusted Interface object and click the Add to Destination button
    4. Translation Tab
      1. Original Source: Azure-LB-Probe
      2. Original Destination: Source Interface IP
      3. Original Destination Port: SSH
      4. Translated Source: Destination Interface IP
      5. Translated Destination: YourAppliance-mgmt
      6. Translated Destination Port: SSH
    5. Click OK

Optional Step: If you are using the appliances to front applications to the internet, you will also need to configure a NAT rule for ingress traffic. This is an optional step, but it shows how to direct traffic to, say, a web server (which the ALB is configured to listen for per the template). If you do complete this step, make sure you also add an access policy (Policies -> Access Control -> select your policy -> click Add Rule).

  1. Click the Add Rule button
    1. NAT Rule: Manual NAT Rule
    2. Type: Static
    3. Interface Objects Tab
      1. Select the Untrusted Interface Object and click the Add to Source button
      2. Select the Trusted Interface object and click the Add to Destination button
    4. Translation Tab
      1. Original Source: any-ipv4
      2. Original Destination: Source Interface IP
      3. Original Destination Port: HTTP
      4. Translated Source: Destination Interface IP
      5. Translated Destination: webserver
        1. Click the green add button to create a new network object to define the private IP address of your web server.
      6. Translated Destination Port: HTTP
    5. Click OK

Click Save once you have finished adding the rules.

At this point, you will need to repeat the same steps above for the second appliance. The reason we cannot apply the same policy to both devices is that when you configure the rule for the Azure health probes, you need to specify the correct Translated Destination (i.e. Appliance1 should use the network object that resolves to appliance 1, and Appliance2 should use the network object that resolves to appliance 2).

Once you have completed the steps above, click Deploy, select each of your appliances, and click Deploy to push the configuration to the devices.

Finalize the environment

Now that the environment is configured, there are a few items you will want to check back on.

  1. Add Route Tables to each subnet to force traffic to the Cisco appliances
    1. You will need to leverage route tables with custom routes to force traffic to the Cisco appliance (a quick PowerShell sketch follows this list). I'd highly recommend giving this a read to familiarize yourself with how Route Tables work in Azure: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
  2. Ensure there is a Network Security Group (NSG) on the Untrust subnet
    1. As per Azure Load Balancer's documentation, you will need an NSG associated to the NICs or subnet to allow traffic in from the internet. https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview#securebydefault
  3. Remove the public IP from your management interface
    1. Considering at this point you've configured the device and have private connectivity via VPN or ExpressRoute, I'd remove the public IP from your management interface to prevent the public internet from accessing this interface
  4. Adjust NSG rules
    1. Similar to above, I'd scope down who/what network segments can pass traffic to the device. Go back and modify the NSG on the management interfaces to only allow traffic from specific source addresses.
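
Here is a minimal PowerShell sketch of the route table piece, using hypothetical names and addresses (rg-cisco, MyVNet, a Servers subnet, and a next hop of 10.5.1.4). With the scale-out design, the next hop would typically be the internal load balancer's frontend IP on the trusted side rather than a single appliance NIC.

# Create a route table with a default route pointing at the appliance (or internal LB frontend); values are placeholders
$rt = New-AzRouteTable -Name "rt-servers" -ResourceGroupName "rg-cisco" -Location "eastus"
Add-AzRouteConfig -RouteTable $rt -Name "default-via-firewall" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.5.1.4" | Set-AzRouteTable

# Associate the route table with the subnet whose traffic should be forced through the appliances
$vnet = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "rg-cisco"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Servers" -AddressPrefix "10.5.2.0/24" -RouteTable $rt | Out-Null
$vnet | Set-AzVirtualNetwork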

Azure APIs and ARM Templating

One thing that has always been a mystery when leveraging Azure Resource Manager (ARM) templates is the "apiVersion" that needs to be specified for each resource that should be created in your template.

At a high level, every item processed by ARM will reach out to its corresponding resource provider. Each of these providers has its own API for handling requests: https://docs.microsoft.com/en-us/rest/api/resources/

The tricky thing is that many of the template examples provided on GitHub in the quickstart gallery don't align or may not be using the latest API. In this case, the question is... how do I find the latest API version supported by each resource provider?

Within PowerShell, there's a single command that can return the API versions supported by each Resource Provider.

(Get-AzResourceProvider -ProviderNamespace "Microsoft.Storage").ResourceTypes

You'll notice the following result, which provides exactly what we were looking for.

PS C:\Users\jstrom> (Get-AzResourceProvider -ProviderNamespace "Microsoft.Storage").ResourceTypes


ResourceTypeName : storageAccounts
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : operations
Locations        : {}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : locations/asyncoperations
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/listAccountSas
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/listServiceSas
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/blobServices
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/tableServices
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/queueServices
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/fileServices
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : locations
Locations        : {}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : locations/usages
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : locations/deleteVirtualNetworkOrSubnets
Locations        : {East US, East US 2, West US, West Europe...}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : usages
Locations        : {}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : checkNameAvailability
Locations        : {}
ApiVersions      : {2019-04-01, 2018-11-01, 2018-07-01, 2018-03-01-preview...}

ResourceTypeName : storageAccounts/services
Locations        : {East US, West US, East US 2 (Stage), West Europe...}
ApiVersions      : {2014-04-01}

ResourceTypeName : storageAccounts/services/metricDefinitions
Locations        : {East US, West US, East US 2 (Stage), West Europe...}
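
If you only care about a single resource type, you can filter that output down to just the newest apiVersion. A quick sketch using the same Microsoft.Storage provider:

# Grab the resource types exposed by the Storage resource provider
$types = (Get-AzResourceProvider -ProviderNamespace "Microsoft.Storage").ResourceTypes

# Filter to the storageAccounts resource type and take the most recent API version
# (a string sort works here because the versions are date-stamped)
($types | Where-Object { $_.ResourceTypeName -eq "storageAccounts" }).ApiVersions |
    Sort-Object -Descending |
    Select-Object -First 1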

Happy ARM templating!

Deploying Palo Alto VM-Series on Azure

Here is a recap of some of the reflections I have from deploying Palo Alto's VM-Series Virtual Appliance on Azure. This is more of a reflection of the steps I took rather than a guide, but you can use the information below as you see fit.  At a high level, you will need to deploy the device on Azure and then configure the internal "guts" of the Palo Alto to allow it to route traffic properly on your Virtual Network (VNet) in Azure.  The steps outlined should work for both the 8.0 and 8.1 versions of the Palo Alto VM-Series appliance.

Please note, this tutorial also assumes you are looking to deploy a scale-out architecture.  This can help ensure a single instance doesn't get overwhelmed with the amount of bandwidth you are trying to push through it.  If you are looking for a single instance, you can still follow along.

Deploy the Appliance in Azure

In deploying the Virtual Palo Altos, the documentation recommends creating them via the Azure Marketplace (which can be found here: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw?tab=Overview).  Personally, I'm not a big fan of deploying the appliance this way as I don't have as much control over naming conventions, don't have the ability to deploy more than one appliance for scale, cannot specify my availability set, cannot leverage managed disks, etc.  In addition, I noticed a really strange error: if you specify a password greater than 31 characters, the Palo Alto devices flat out won't deploy on Azure.  In this case, I've written a custom ARM template that leverages managed disks, availability sets, consistent naming nomenclature, proper VM sizing, and most importantly, lets you define how many virtual instances you'd like to deploy for scaling.

Note: this article doesn't cover the concept of using Panorama, which would centrally manage each of the scale-out instances in a "single pane of glass".  Below, we will cover setting up a node manually to get it working.  It is possible to create a baseline configuration file that joins Panorama post-deployment to bootstrap the nodes when the ARM template is deployed.  The bootstrap file is not something I've incorporated into this template, but the template could easily be modified to do so.

With the above said, this article will cover what Palo Alto considers their Shared design model. Here is an example of what this visually looks like (taken from Palo Alto's Reference Architecture document listed in the notes section at the bottom of this article):

Shared design model as per Palo Alto's Reference Architecture

Microsoft also has a reference architecture document that talks through the deployment of virtual appliances, which can be found here: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha

Below is a link to the ARM template I use.

PaloAlto-HA.json

Deployment of this template can be done by navigating to the Azure Portal (portal.azure.com), selecting Create a resource, typing Template Deployment into the Azure Marketplace search box, clicking Create, selecting Build your own template in the editor, and pasting the code into the editor.

Here are some notes on what the parameters mean in the template:

VMsize: Per Palo Alto, the recommended VM sizes are DS3, DS4, or DS5.  Documentation on this can be found here.

PASku: Here is where you can select to use bring-your-own-license or pay-as-you-go.  Plans are outlined here: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw?tab=PlansAndPrice

PAVersion: The version of PanOS to deploy.

PACount: This defines how many virtual instances you want deployed and placed behind load balancers.

VNetName: The name of your virtual network you have created.

VNetRG: The name of the resource group your virtual network is in.  This may be the same as the Resource Group you are placing the Palos in, but this is a needed configurable option to prevent errors referencing a VNet in a different resource group.

envPrefix: All of the resources that get created (load balancer, virtual machines, public IPs, NICs, etc.) will use this naming nomenclature.

manPrivateIPPrefix, trustPrivateIPPrefix, untrustPrivateIPPrefix: Corresponding subnet address range.  These should be the first 3 octets of the range followed by a period.  For example, 10.5.6. would be a valid value.

manPrivateIPFirst, trustPrivateIPFirst, untrustPrivateIPFirst: The first usable IP address on the subnet specified.  For example, if my subnet is 10.4.255.0/24, I would need to specify 4 as my first usable address.

Username: this is the name of the privileged account that should be used to SSH to the device and log in to the PanOS web portal.

Password: Password for the privileged account used to SSH and log in to the PanOS web portal.  Must be 31 characters or less due to a PanOS limitation.
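
If you'd rather kick off the deployment from PowerShell instead of the portal, here's a minimal sketch. The parameter names are taken from the list above; double-check the exact names, casing, and allowed values against the template itself, and treat the resource group, VNet, SKU, and address values as placeholders.

# Hypothetical values; confirm parameter names and allowed values against PaloAlto-HA.json
$params = @{
    VMsize                 = "Standard_DS3_v2"
    PASku                  = "byol"            # or a pay-as-you-go plan from the marketplace listing
    PAVersion              = "8.1.0"
    PACount                = 2
    VNetName               = "MyVNet"
    VNetRG                 = "rg-network"
    envPrefix              = "pa-prod"
    manPrivateIPPrefix     = "10.5.4."
    manPrivateIPFirst      = 4
    trustPrivateIPPrefix   = "10.5.5."
    trustPrivateIPFirst    = 4
    untrustPrivateIPPrefix = "10.5.6."
    untrustPrivateIPFirst  = 4
    Username               = "paadmin"
    # 31 characters or less; if the template defines Password as a plain string, pass a string instead
    Password               = (Read-Host -AsSecureString -Prompt "Appliance password")
}

New-AzResourceGroupDeployment -ResourceGroupName "rg-palo" `
    -TemplateFile ".\PaloAlto-HA.json" `
    -TemplateParameterObject $params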


Configure the Appliance

Once the virtual appliance has been deployed, we need to configure the Palo Alto device itself to enable connectivity on our Trust/Untrust interfaces.

Activate the licenses on the VM-Series firewall.

Follow these steps if using the BYOL version

  1. Create a Support Account.
  2. Register the VM-Series Firewall (with auth code).
  3. On the firewall web interface, select the Device tab -> Licenses and select Activate feature using authentication code.
  4. Enter the capacity auth-code that you registered on the support portal. The firewall will connect to the update server (updates.paloaltonetworks.com), download the license, and reboot automatically.  If this doesn't work, please continue below to configuring the interfaces of the device.
  5. Log back in to the web interface after reboot and confirm the following on the Dashboard:
    1. A valid serial number displays in Serial#. If the term Unknown displays, it means the device is not licensed. To view traffic logs on the firewall, you must install a valid capacity license.
    2. The VM Mode displays as Microsoft Azure.

Follow these steps if using the PAYG (Pay as you go) version

  1. Create a Support Account.
  2. Register the Usage-Based Model of the VM-Series Firewall in AWS and Azure (no auth code).

Configure the Untrust/Trust interfaces

Configure the Untrust interface

  1. Select Network -> Interfaces -> Ethernet -> select the link for ethernet1/1 and configure as follows:
    1. Interface Type: Layer3 (default).
    2. On the Config tab, assign the interface to the Untrust-VR router.
    3. On the Config tab, expand the Security Zone drop-down and select New Zone. Define a new zone called Untrust, and then click OK.
  2. On the IPv4 tab, select DHCP Client if you plan to assign only one IP address on the interface. If you plan to assign more than one IP address select Static and manually enter the primary and secondary IP addresses assigned to the interface on the Azure portal.  The private IP address of the interface can be found by navigating to Virtual Machines -> YOURPALOMACHINE -> Networking and using the Private IP address specified on each tab.
    1. Note: Do not use the Public IP address of the Virtual Machine.  Azure automatically DNATs traffic to your private address, so you will need to use the Private IP address for your Untrust interface.
  3. Clear the Automatically create default route to default gateway provided by server check box.
    1. Note: Disabling this option ensures that traffic handled by this interface does not flow directly to the default gateway in the VNet.
  4. Click OK

Note: For the untrust interface, within your Azure environment ensure you have an NSG associated with the untrust subnet or the individual firewall interfaces, as the template doesn't deploy this for you (I could add this in, but if you already had an NSG I don't want to overwrite it). As per Azure Load Balancer's documentation, you will need an NSG associated with the NICs or subnet to allow traffic in from the internet.
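
As a point of reference, here is a rough PowerShell sketch of creating an NSG with one inbound rule and associating it with the untrust subnet. All names, ports, and address ranges below are placeholders; adjust the rule to whatever your ALB actually listens on.

# Create an NSG and allow inbound HTTP from the internet (placeholder rule)
$nsg = New-AzNetworkSecurityGroup -Name "untrust-nsg" -ResourceGroupName "rg-palo" -Location "eastus"
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-HTTP-Inbound" -Priority 100 `
    -Direction Inbound -Access Allow -Protocol Tcp -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "80" | Set-AzNetworkSecurityGroup

# Associate the NSG with the untrust subnet
$vnet = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "rg-network"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Untrust" -AddressPrefix "10.5.6.0/24" -NetworkSecurityGroup $nsg | Out-Null
$vnet | Set-AzVirtualNetwork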

Configure the Trust Interface

  1. Select Network -> Interfaces -> Ethernet -> select the link for ethernet1/2 and configure as follows:
    1. Interface Type: Layer3 (default).
    2. On the Config tab, assign the interface to the Trust-VR router.
    3. On the Config tab, expand the Security Zone drop-down and select New Zone. Define a new zone called Trust, and then click OK.
  2. On the IPv4 tab, select DHCP Client if you plan to assign only one IP address on the interface. If you plan to assign more than one IP address select Static and manually enter the primary and secondary IP addresses assigned to the interface on the Azure portal. The private IP address of the interface can be found by navigating to Virtual Machines -> YOURPALOMACHINE -> Networking and using the Private IP address specified on each tab.
    1. Clear the Automatically create default route to default gateway provided by server check box.
      1. Note: Disabling this option ensures that traffic handled by this interface does not flow directly to the default gateway in the VNet.
  3. Click OK

Click Commit in the top right.  Verify that the link state for the interfaces is up (the interfaces should turn green in the Palo Alto user interface).

Define Static Routes

The Palo Alto will need to understand how to route traffic to the internet and how to route traffic to your subnets.  As you will see in this section, we will need two separate virtual routers to help handle the processing of health probes submitted from each of the Azure Load Balancers.

Create a new Virtual Router and  Static Route to the internet

  1. Select Network -> Virtual Router
  2. Click Add at the bottom
  3. Set the Name to Untrust-VR
  4. Select Static Routes -> IPv4 -> Add
  5. Create a Static Route to egress internet traffic
    1. Name: Internet
    2. Destination: 0.0.0.0/0
    3. Interface: ethernet 1/1
    4. Next Hop: IP Address
    5. IP Address: Use the IP address of the default gateway of your subnet the Untrust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of the subnet the untrust interface is on (see the PowerShell sketch after this list for a quick way to look this up).  For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address.  If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
  6. Create a Static Route to move traffic from the internet to your trusted VR
    1. Name: Internal Routes
    2. Destination: your vnet address space
    3. Interface: None
    4. Next Hop: Next VR
      1. Trust-VR
  7. Click OK
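
If you'd rather not eyeball the math, here is a quick PowerShell sketch (with hypothetical VNet/subnet names) that looks up a subnet's prefix and derives the first usable address, which Azure reserves as the default gateway:

# Look up the address prefix of the untrust subnet (hypothetical names)
$vnet   = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "rg-network"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Untrust"
$prefix = ($subnet.AddressPrefix | Select-Object -First 1)    # e.g. "10.5.15.0/24"

# The first host address in the range is the Azure default gateway
# (e.g. 10.5.15.1 for 10.5.15.0/24, or 10.5.15.129 for 10.5.15.128/25)
$bytes = ([System.Net.IPAddress]::Parse(($prefix -split '/')[0])).GetAddressBytes()
$bytes[3]++
[System.Net.IPAddress]::new($bytes).ToString()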

Create a new Virtual Router and Static Route to your Azure Subnets

  1. Select Network -> Virtual Router
  2. Click Add at the bottom
  3. Set the Name to Trust-VR
  4. Select Static Routes -> IPv4 -> Add
  5. Create a Static Route to send traffic to Azure from your Trusted interface
    1. Name: AzureVNet
    2. Destination: your vnet address space
    3. Interface: ethernet 1/2
    4. Next Hop: IP Address
    5. IP Address: Use the IP address of the default gateway of your subnet the Trust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of your subnet the trust interface is on.  For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address.  If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
  6. Create a Static Route to move internet traffic received on Trust to your Untrust Virtual Router
    1. Name: Internet
    2. Destination: 0.0.0.0/0
    3. Interface: None
    4. Next Hop: Next VR
      1. Untrust-VR
  7. Click OK

Click Commit in the top right.

Configure Health Probes for Azure Load Balancers

If deploying the Scale-Out scenario, you will need to allow TCP probes from 168.63.129.16, the source IP address Azure Load Balancer health probes come from.  We also need static routes so the probe responses make it back to the load balancer.  For the purpose of this article, we will configure SSH on the Trust and Untrust interfaces strictly for the Azure Load Balancer to contact to validate that the Palo Alto instances are healthy.

Configure Palo Alto SSH Service for the interfaces

First we need to create an Interface Management Profile

  1. Select Network -> Network Profiles -> Interface Mgmt
  2. Click Add in the bottom left
  3. Use the following configuration
    1. Name: SSH-MP
    2. Administrative Management Services: SSH
    3. Permitted IP Addresses: 168.63.129.16/32
  4. Click OK

Next, we need to assign the profile to the Trust interface

  1. Select Network -> Interfaces ->select the link for ethernet1/2
  2. Select the Advanced tab
  3. Set the Management Profile to SSH-MP
  4. Click OK

Next, we need to assign the profile to the Untrust interface

  1. Select Network -> Interfaces ->select the link for ethernet1/1
  2. Select the Advanced tab
  3. Set the Management Profile to SSH-MP
  4. Click OK

Create a Static Route for the Azure Load Balancer Health Probes on the Untrust Interface

Next, we need an explicit route so responses to the health probes flow out of the Untrust interface.

  1. Select Network -> Virtual Router -> Untrust-VR
  2. Select Static Routes -> IPv4 -> Add
  3. Use the following configuration
    1. Name: AzureLBHealthProbe
    2. Destination: 168.63.129.16/32
    3. Interface: ethernet 1/1
    4. Next Hop: IP Address
    5. IP Address: Use the IP address of the default gateway of the subnet the Untrust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of the subnet the untrust interface is on.  For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address.  If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
  4. Click OK

Create a Static Route for the Azure Load Balancer Health Probes on the Trust Interface

Next, we need an explicit route so responses to the health probes flow out of the Trust interface; without it, our 0.0.0.0/0 route would hand this traffic off to the Untrust-VR.

  1. Select Network -> Virtual Router -> Trust-VR
  2. Select Static Routes -> IPv4 -> Add
  3. Use the following configuration
    1. Name: AzureLBHealthProbe
    2. Destination: 168.63.129.16/32
    3. Interface: ethernet 1/2
    4. Next Hop: IP Address
    5. IP Address: Use the IP address of the default gateway of your subnet the Trust interface is deployed on
      1. Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of your subnet the trust interface is on.  For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address.  If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
  4. Click OK

Click Commit in the top right.

Create a NAT rule for internal traffic destined to the internet

You will need to NAT all egress traffic destined to the internet via the address of the Untrust interface, so return traffic from the Internet comes back through the Untrust interface of the device.

  1. Navigate to Policies -> NAT
  2. Click Add
  3. On the General tab use the following configuration
    1. Name: UntrustToInternet
    2. Description: Rule to NAT all trusted traffic destined to the Internet to the Untrust interface
  4. On the Original Packet tab use the following configuration
    1. Source Zone: Click Add and select Trust
    2. Destination Zone: Untrust
    3. Destination Interface: ethernet 1/1
    4. Service: Check Any
    5. Source Address: Click Add, use the Internal Address space of your Trust zones
    6. Destination address: Check Any
  5. On the Translated Packet tab use the following configuration
    1. Translation Type: Dynamic IP and Port
    2. Address Type: Interface Address
    3. Interface: ethernet 1/1
    4. IP Address: None
    5. Destination Address Translation -> Translation Type: None
  6. Click OK

Click Commit in the top right.

Update your Palo Alto appliance

By default, Palo Alto deploys 8.0.0 for the 8.0.X series and 8.1.0 for the 8.1.X series.  In this case, Palo Alto will strongly recommend you upgrade the appliance to the latest version of that series before helping you with support cases.

To do this, go to Device -> Software, click Check Now in the bottom left, and download and install the latest build from the list of available releases.

Please note: the update process will require a reboot of the device and can take 20 minutes or so.

Summary

At this point you should have a working scaled-out Palo Alto deployment.  If all went well, I would recommend removing the public IP from the management interface or at least scoping it down to the single public IP address you are coming from.  You can find your public IP address by navigating here: https://jackstromberg.com/whats-my-ip-address/
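
If you want to script that cleanup, detaching the public IP from the management NIC looks roughly like this (NIC and resource group names are placeholders):

# Remove the public IP association from the management NIC (the public IP resource itself is not deleted)
$nic = Get-AzNetworkInterface -Name "pa-prod-vm0-mgmt-nic" -ResourceGroupName "rg-palo"
$nic.IpConfigurations[0].PublicIpAddress = $null
$nic | Set-AzNetworkInterface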

References

Official documentation from Palo Alto on deploying the VM-Series on Azure (took me forever to find this and doesn't cover setting up the static routes or updating the appliance): https://docs.paloaltonetworks.com/vm-series/8-1/vm-series-deployment/set-up-the-vm-series-firewall-on-azure/deploy-the-vm-series-firewall-on-azure-solution-template.html

Official documentation from Palo Alto on Azure VM Sizing: https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClD7CAK

Documentation on architecture for the VM-Series on Azure (click the little download button towards the top of the page to grab a copy of the PDF):  https://www.paloaltonetworks.com/resources/guides/azure-architecture-guide

Palo Alto Networks Visio & OmniGraffle Stencils: https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000CmAJCA0

Neat video created by Palo Alto outlining the architecture of a scale-out VM-Series deployment: https://www.paloaltonetworks.com/resources/videos/vm-series-in-azure

Upcoming VMSS version of Palo Alto deployment: PaloAltoNetworks/azure-autoscaling: Azure autoscaling solution using VMSS (github.com)

Using Terraform with Azure VM Extensions

TLDR: There are two sections of this article; feel free to scroll down to the titles for the applicable section.

Using VM Extensions with Terraform to Domain Join Virtual Machines

VM Extensions are a fantastic way to yield post-deployment configurations via template as code in Azure.  One of Azure's most common VM extensions is JsonADDomainExtension, which will join your Azure VM to an Active Directory domain after the machine has successfully been provisioned.  For the purposes of this article, we will assume you have a VM called testvm in the East US region.

Typically, VM extensions can be configured via the following block of ARM template code (a fully working example building the virtual machine and running the extension can be found here).

{
    "apiVersion": "2015-06-15",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "testvm/joindomain",
    "location": "EastUS",
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "JsonADDomainExtension",
        "typeHandlerVersion": "1.3.2",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "Name": "JACKSTROMBERG.COM",
            "OUPath": "OU=Users,OU=CustomOU,DC=jackstromberg,DC=com",
            "User": "JACKSTROMBERG.COM\\jack",
            "Restart": "true",
            "Options": "3"
        },
        "protectedSettings": {
            "Password": "SecretPassword!"
        }
    }
}
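
As a side note, if you just want to test the same extension against an existing VM before wiring it into a template, Az PowerShell has a helper cmdlet for it. A rough sketch (resource group name and credentials are placeholders):

# Prompt for an account allowed to join machines to the domain (DOMAIN\user format)
$cred = Get-Credential -Message "Domain join account"

# Applies the JsonADDomainExtension to an existing VM; JoinOption 3 = join the domain and create the computer account
Set-AzVMADDomainExtension -ResourceGroupName "rg-lab" -VMName "testvm" `
    -DomainName "JACKSTROMBERG.COM" `
    -OUPath "OU=Users,OU=CustomOU,DC=jackstromberg,DC=com" `
    -Credential $cred `
    -JoinOption 3 `
    -Restart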

When looking at Terraform, the syntax is a bit different and there isn't much documentation on how to handle the settings and, most importantly, the password/secret used when joining the machine to the domain.  In this case, here is a working translation of the ARM template to Terraform.

resource "azurerm_virtual_machine_extension" "MYADJOINEDVMADDE" {
  name                 = "MYADJOINEDVMADDE"
  virtual_machine_id   = azurerm_virtual_machine.testvm.id
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3.2"

  # What the settings mean: https://docs.microsoft.com/en-us/windows/desktop/api/lmjoin/nf-lmjoin-netjoindomain

  settings = <<SETTINGS
    {
        "Name": "JACKSTROMBERG.COM",
        "OUPath": "OU=Users,OU=CustomOU,DC=jackstromberg,DC=com",
        "User": "JACKSTROMBERG.COM\\jack",
        "Restart": "true",
        "Options": "3"
    }
SETTINGS
  protected_settings = <<PROTECTED_SETTINGS
    {
      "Password": "SecretPassword!"
    }
  PROTECTED_SETTINGS
  depends_on = [azurerm_virtual_machine.testvm]
}

The key pieces here are the SETTINGS and PROTECTED_SETTINGS blocks that allow you to pass the traditional JSON attributes as you would in the ARM template.  Luckily, Terraform does a somewhat decent job of documenting this in its public docs, so if you have any additional questions on any of the attributes you can find them all here: https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html

The last block of code I have specified at the very end is a depends_on statement.  This simply ensures that this resource is not created until the Virtual Machine itself has successfully been provisioned and can be very beneficial if you have other scripts that may need to run prior to domain join.

Using VM Extensions with Terraform to customize a machine post deployment

Continuing along the lines of customizing a virtual machine post deployment, Azure has a handy dandy extension called CustomScriptExtension.  What this extension does is allow you to arbitrarily download and execute files (typically PowerShell) after a virtual machine has been deployed.  Unlike the domain join example above, Azure has extensive documentation on this extension and provides support for both Windows and Linux (click the links for Windows or Linux to see the Azure docs on this).

Following suit with the above Domain Join example, within the ARM world we can leverage the following template to execute code post deployment:

{
    "apiVersion": "2018-06-01",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "testvm",
    "location": "EastUS",
    "properties": {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.1.3",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "script location"
            ]
        },
        "protectedSettings": {
            "commandToExecute": "myExecutionCommand",
            "storageAccountName": "mystorageaccountname",
            "storageAccountKey": "myStorageAccountKey"
        }
    }
}

When we look at the translation over to Terraform, for the most part the structure is the exact same.  Similar to our Active Directory Domain Join script above, the tricky piece is knowing to use the PROTECTED_SETTINGS to encapsulate our block of code that in this case authenticates to the Azure Storage Account to pull down our post-deployment script.  Now per the Azure documentation, those variables are optional; if the scripts you have don't contain sensitive information, you are more than welcome to simply specify the fileUri and specify the commandToExecute via the regular SETTINGS block.

resource "azurerm_virtual_machine_extension" "MYADJOINEDVMCSE" {
  name                 = "MYADJOINEDVMCSE"
  virtual_machine_id   = azurerm_virtual_machine.testvm.id
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.1.3"

  # CustomVMExtension Documentation: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows

  settings = <<SETTINGS
    {
        "fileUris": ["https://mystorageaccountname.blob.core.windows.net/postdeploystuff/post-deploy.ps1"]
    }
SETTINGS
  protected_settings = <<PROTECTED_SETTINGS
    {
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File post-deploy.ps1",
      "storageAccountName": "mystorageaccountname",
      "storageAccountKey": "myStorageAccountKey"
    }
  PROTECTED_SETTINGS
  depends_on = [azurerm_virtual_machine_extension.MYADJOINEDVMADDE]
}

At this point you should be able to leverage both extensions to join a machine to the domain and then customize virtually any aspect of the machine thereafter.

The only thing I'll leave you with is that it's typically recommended not to leave clear-text passwords scattered through your templates.  In any case, I highly recommend looking at leveraging Azure Key Vault or an alternative solution that can ensure proper security in handling those secrets.
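
For example, the domain-join password could be seeded into Key Vault once and then read back by whatever identity runs your deployments (Terraform can consume it via its azurerm_key_vault_secret data source). A rough PowerShell sketch with placeholder vault and secret names:

# Store the secret in Key Vault instead of hard-coding it in templates
$secret = Read-Host -AsSecureString -Prompt "Domain join password"
Set-AzKeyVaultSecret -VaultName "kv-deployments" -Name "domain-join-password" -SecretValue $secret

# Read it back later (returns a SecureString in current Az.KeyVault versions)
(Get-AzKeyVaultSecret -VaultName "kv-deployments" -Name "domain-join-password").SecretValue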

Notes

Aside from Terraform, one question I've received is what happens if the extension runs against a machine that is already domain joined?
A: The VM extension will still install against the Azure Virtual Machine, but will immediately return back the following response: "Join completed for Domain 'yourdomain.com'"

Specifically, the following is returned back to Azure: [{"version":"1","timestampUTC":"2019-03-27T16:30:57.9274393Z","status":{"name":"ADDomainExtension","operation":"Join Domain/Workgroup","status":"success","code":0,"formattedMessage":{"lang":"en-US","message":"Join completed for Domain 'yourdomain.com'"},"substatus":null}}]

What does Options mean for domain join?

A: Copied from here: The options are a set of bit flags that define the join options. Default value of 3 is a combination of NETSETUP_JOIN_DOMAIN (0x00000001) & NETSETUP_ACCT_CREATE (0x00000002) i.e. will join the domain and create the account on the domain. For more information see https://msdn.microsoft.com/en-us/library/aa392154(v=vs.85).aspx