DPM 2016 is primarily geared towards using mail servers that require authentication (rightfully so; that's a security best practice). However, many IT organizations have local mail relay servers with anonymous authentication that are used for several IT services in the organization. Unfortunately, DPM 2016 gets a bit wonky when using unauthenticated mail servers and will likely give you a generic error that says:
And if you ignore the error and head over to the notifications tab to configure a notification, you will be presented with another generic error:
And if you are trying to configure scheduled emails you may receive an error about reporting services:
One thing to do before getting too far ahead, though, is validate that you can send an email from the DPM server. This can easily be done via PowerShell by executing the following command:
Send-MailMessage -SmtpServer localhost -To recipient@example.com -From dpm@example.com -Subject "Test Email from DPM Server" -Body "Howdy! This is a test from the DPM Server. If you see this, mail relay is working!"
When executing the PowerShell command, it won't return anything, but you should see a message arrive in your mailbox. If you do, you've at least ruled out network/mail issues.
Once you've ruled out connectivity and the mail server itself, complete the following steps to configure DPM:
Configure E-mail for SQL Server Reporting Services
Create a Local User Account
Remove any artifacts left in the registry
Update the SMTP settings in DPM.
Configuration
Configure SQL Server Reporting Services
Open Reporting Services Configuration Manager
Sign into your DPM instance
Select E-mail Settings and leverage the following configuration
Create a Local User Account
Open Computer Management, expand Local Users and Groups, select Users, and create a new local user on the machine
Create the user (I used anonemail as the account name, but anything can be specified)
Remove all group membership
This account doesn't need to be a part of any group, including the Users group
This account should not be part of the Administrators group (I've seen other blog posts mention you must use an administrator account; that is 100% not necessary and can be considered a security risk)
Ensure the account is enabled
A disabled account will not work
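If you prefer PowerShell over Computer Management, the account creation above can be sketched like this. This is a sketch using the LocalAccounts cmdlets that ship with Windows PowerShell 5.1 on Server 2016; anonemail is just the example account name used above.

```powershell
# Prompt for a password and create the local account used for anonymous relay
$password = Read-Host -AsSecureString -Prompt "Password for anonemail"
New-LocalUser -Name "anonemail" -Password $password -PasswordNeverExpires -AccountNeverExpires

# New-LocalUser does not add the account to any group, but verify Users is clean anyway
Remove-LocalGroupMember -Group "Users" -Member "anonemail" -ErrorAction SilentlyContinue
```

Note that, unlike `net user /add`, New-LocalUser does not place the account in the Users group, which matches the "remove all group membership" guidance above.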
Clean up the registry
Open Registry Editor (regedit)
Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Notification
Delete the following values (if they exist):
SmtpUserName
SmtpPassword
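If you'd rather not click through regedit, both values can be removed with PowerShell, assuming the key path above:

```powershell
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Notification"
# Remove the stale SMTP credential values; SilentlyContinue keeps this safe to re-run if they don't exist
Remove-ItemProperty -Path $key -Name SmtpUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $key -Name SmtpPassword -ErrorAction SilentlyContinue
```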
Reboot the DPM Server
Technically, you could restart two services: the SQL Server Reporting Services instance for DPM and the DPM service, but a reboot never hurts 😉
Configure DPM to use SMTP relay
Close out of the DPM console and reopen it
Select Reporting, wait for the screen to finish loading, and then select Action -> Options
When installing DPM 2016, you may get a really generic error during the "Prerequisites check" portion of installation. Looking online, there are a ton of individuals who have this issue, but no one correlates the log files to what is specifically needed to solve each problem (yep, "each" problem; Error 4387 is a generic catch-all for several issues during the prerequisites check).
Before I get into the article, the too long; didn't read (TL;DR) version is: make sure you are using both SQL Server 2016 (no service pack) and SSMS 16.5 or earlier to successfully install DPM 2016.
To get a bit more technical and find out what's going on, open up the DPM Installation logs after you receive the error. The installation log files can be found by browsing to %ProgramFiles%\Microsoft System Center 2016\DPM\DPMLogs. Documentation on where log files are stored by DPM can be found here: https://docs.microsoft.com/en-us/system-center/dpm/set-up-dpm-logging?view=sc-dpm-2016
Here's a copy of my DpmSetup.log file. Looking through it, there isn't a clear-cut answer, just this generic line at the bottom ([3/1/2019 6:22:54 AM] *** Error : CurrentDomain_UnhandledException).
[3/1/2019 6:21:16 AM] Information : Microsoft System Center 2016 Data Protection Manager setup started.
[3/1/2019 6:21:16 AM] Data : Mode of setup = User interface
[3/1/2019 6:21:16 AM] Data : OSVersion = Microsoft Windows NT 10.0.14393.0
[3/1/2019 6:21:16 AM] Information : Check if the media is removable
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM
[3/1/2019 6:21:16 AM] Data : Drive Name = C:\
[3/1/2019 6:21:16 AM] Data : Drive Type = 3
[3/1/2019 6:21:16 AM] Information : Check attributes of the directory
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM
[3/1/2019 6:21:16 AM] Data : File Attributes = Directory
[3/1/2019 6:21:16 AM] Information : Check if the media is removable
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft Data Protection Manager
[3/1/2019 6:21:16 AM] Data : Drive Name = C:\
[3/1/2019 6:21:16 AM] Data : Drive Type = 3
[3/1/2019 6:21:16 AM] Information : Check attributes of the directory
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft Data Protection Manager
[3/1/2019 6:21:16 AM] * Exception : Ignoring the following exception intentionally => System.IO.FileNotFoundException: Could not find file 'C:\Program Files\Microsoft Data Protection Manager'.
File name: 'C:\Program Files\Microsoft Data Protection Manager'
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.GetAttributes(String path)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Wizard.InstallLocationValidation.CheckForDirectoryAttributes(String path)
[3/1/2019 6:21:16 AM] Information : Check if the media is removable
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB
[3/1/2019 6:21:16 AM] Data : Drive Name = C:\
[3/1/2019 6:21:16 AM] Data : Drive Type = 3
[3/1/2019 6:21:16 AM] Information : Check attributes of the directory
[3/1/2019 6:21:16 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB
[3/1/2019 6:21:16 AM] * Exception : Ignoring the following exception intentionally => System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB'.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.GetAttributes(String path)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Wizard.InstallLocationValidation.CheckForDirectoryAttributes(String path)
[3/1/2019 6:21:17 AM] Information : The setup wizard is initialized.
[3/1/2019 6:21:17 AM] Information : Starting the setup wizard.
[3/1/2019 6:21:17 AM] Information : <<< Dialog >>> Welcome Page : Entering
[3/1/2019 6:22:33 AM] Information : <<< Dialog >>> Welcome Page : Leaving
[3/1/2019 6:22:33 AM] Information : <<< Dialog >>> Inspect Page : Entering
[3/1/2019 6:22:41 AM] Information : Query WMI provider for path of configuration file for SQL Server 2008 Reporting Services.
[3/1/2019 6:22:41 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ReportServer\RS_DPM\V13\admin for query: SELECT * FROM MSReportServer_ConfigurationSetting WHERE InstanceName='DPM'
[3/1/2019 6:22:42 AM] Data : Path of configuration file for SQL Server 2008 Reporting Services = C:\Program Files\Microsoft SQL Server\MSRS13.DPM\Reporting Services\ReportServer\RSReportServer.config
[3/1/2019 6:22:42 AM] * Exception : => System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
File name: 'Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.MiscHelper.IsSqlClustered(String sqlMachineName, String sqlInstanceName)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.MiscHelper.IsMachineClustered(String sqlMachineName, String sqlInstanceName)
WRN: Assembly binding logging is turned OFF.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Note: There is some performance penalty associated with assembly bind failure logging.
To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
[3/1/2019 6:22:42 AM] * Exception : => System.Management.ManagementException: Invalid namespace
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementScope.InitializeGuts(Object o)
at System.Management.ManagementScope.Initialize()
at System.Management.ManagementObjectSearcher.Initialize()
at System.Management.ManagementObjectSearcher.Get()
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.WmiHelper.IsMachineClustered(String machineName, String instanceName)
[3/1/2019 6:22:42 AM] Information : OS >= win 8 , enable Dedupe role
[3/1/2019 6:22:53 AM] Information : output : True
..
error :
[3/1/2019 6:22:53 AM] Data : Path of inspection output xml = C:\Program Files\Microsoft System Center 2016\DPM\DPMLogs\InspectReport.xml
[3/1/2019 6:22:53 AM] Information : Instantiating inspect component.
[3/1/2019 6:22:53 AM] Data : Path of output xml = C:\Program Files\Microsoft System Center 2016\DPM\DPMLogs\InspectReport.xml
[3/1/2019 6:22:53 AM] Information : Deserializing the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM8AFA.tmp\DPM2012\Setup\checks.xml
[3/1/2019 6:22:53 AM] Information : Loading the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM8AFA.tmp\DPM2012\Setup\checks.xml
[3/1/2019 6:22:54 AM] Information : Deserialising the scenario XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM8AFA.tmp\DPM2012\Setup\scenarios.xml
[3/1/2019 6:22:54 AM] Information : Loading the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM8AFA.tmp\DPM2012\Setup\scenarios.xml
[3/1/2019 6:22:54 AM] Information : Getting scenarios for the product: DPM
[3/1/2019 6:22:54 AM] Information : Getting scenarios for DPM
[3/1/2019 6:22:54 AM] Information : Getting scenario for Mode:Install, DbLocation:Remote, SKU:Retail and CCMode:NotApplicable
[3/1/2019 6:22:54 AM] *** Error : Initialize the SQLSetUpHelper Object
[3/1/2019 6:22:54 AM] Information : [SQLSetupHelper.GetWMIReportingNamespace]. Reporting Namespace found. Reporting Namespace : V13
[3/1/2019 6:22:54 AM] Information : [SQLSetupHelper.GetWMISqlServerNamespace]. SQL Namespace found. SQL Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13
[3/1/2019 6:22:54 AM] Information : Query WMI provider for SQL Server 2008.
[3/1/2019 6:22:54 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13 for query: Select * from SqlServiceAdvancedProperty where ServiceName='MSSQL$DPM' and PropertyName='Version'
[3/1/2019 6:22:54 AM] Information : SQL Server 2008 R2 SP2 instance DPM is present on this system.
[3/1/2019 6:22:54 AM] Information : Query WMI provider for SQL Server 2008.
[3/1/2019 6:22:54 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13 for query: Select * from SqlServiceAdvancedProperty where ServiceName='MSSQL$DPM' and PropertyName='Version'
[3/1/2019 6:22:54 AM] Information : [SQLSetupHelper.GetSQLDepedency]. Reporting Namespace and SQL namespace for installed SQL server which will be used as DPM DB. Reporting Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ReportServer\RS_DPM\V13\admin SQL Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13
[3/1/2019 6:22:54 AM] Information : Check if SQL Server 2012 Service Pack 1 Tools is installed.
[3/1/2019 6:22:54 AM] Information : [SQLSetupHelper.GetSqlSetupRegKeyPath]. Registry Key path that contains SQL tools location: Software\Microsoft\Microsoft SQL Server\140\Tools\Setup\
[3/1/2019 6:22:54 AM] Information : Inspect.CheckSqlServerTools : MsiQueryProductState returned : INSTALLSTATE_DEFAULT
[3/1/2019 6:22:54 AM] *** Error : CurrentDomain_UnhandledException
Digging some more, I found that DPM seems to also place logs within the %temp% folder. Within this folder, I found that a tmpXXX.xml file was being created each time I ran through the installer and triggered an error. Upon opening the file, I see the following:
Looking through the above stack trace, I see hints that this is related to SQL Server, and in this case I'm receiving a null value for what looks like a version number. After reading other posts online, the consensus was to downgrade to SQL Server 2016 RTM.
After downgrading to SQL Server 2016 RTM, I noticed I still received Error ID: 4387. This time I didn't see any files within the %temp% directory, but I did find the following in the DpmSetup.log file (within the DPMLogs directory):
[3/8/2019 5:13:09 AM] Information : Microsoft System Center 2016 Data Protection Manager setup started.
[3/8/2019 5:13:09 AM] Data : Mode of setup = User interface
[3/8/2019 5:13:09 AM] Data : OSVersion = Microsoft Windows NT 10.0.14393.0
[3/8/2019 5:13:09 AM] Information : Check if the media is removable
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM
[3/8/2019 5:13:09 AM] Data : Drive Name = C:\
[3/8/2019 5:13:09 AM] Data : Drive Type = 3
[3/8/2019 5:13:09 AM] Information : Check attributes of the directory
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM
[3/8/2019 5:13:09 AM] Data : File Attributes = Directory
[3/8/2019 5:13:09 AM] Information : Check if the media is removable
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft Data Protection Manager
[3/8/2019 5:13:09 AM] Data : Drive Name = C:\
[3/8/2019 5:13:09 AM] Data : Drive Type = 3
[3/8/2019 5:13:09 AM] Information : Check attributes of the directory
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft Data Protection Manager
[3/8/2019 5:13:09 AM] * Exception : Ignoring the following exception intentionally => System.IO.FileNotFoundException: Could not find file 'C:\Program Files\Microsoft Data Protection Manager'.
File name: 'C:\Program Files\Microsoft Data Protection Manager'
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.GetAttributes(String path)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Wizard.InstallLocationValidation.CheckForDirectoryAttributes(String path)
[3/8/2019 5:13:09 AM] Information : Check if the media is removable
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB
[3/8/2019 5:13:09 AM] Data : Drive Name = C:\
[3/8/2019 5:13:09 AM] Data : Drive Type = 3
[3/8/2019 5:13:09 AM] Information : Check attributes of the directory
[3/8/2019 5:13:09 AM] Data : Folder Path = C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB
[3/8/2019 5:13:09 AM] * Exception : Ignoring the following exception intentionally => System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Program Files\Microsoft System Center 2016\DPM\DPM\DPMDB'.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.GetAttributes(String path)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Wizard.InstallLocationValidation.CheckForDirectoryAttributes(String path)
[3/8/2019 5:13:10 AM] Information : The setup wizard is initialized.
[3/8/2019 5:13:10 AM] Information : Starting the setup wizard.
[3/8/2019 5:13:10 AM] Information : <<< Dialog >>> Welcome Page : Entering
[3/8/2019 5:13:55 AM] Information : <<< Dialog >>> Welcome Page : Leaving
[3/8/2019 5:13:55 AM] Information : <<< Dialog >>> Inspect Page : Entering
[3/8/2019 5:14:06 AM] Information : Query WMI provider for path of configuration file for SQL Server 2008 Reporting Services.
[3/8/2019 5:14:06 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ReportServer\RS_DPM\V13\admin for query: SELECT * FROM MSReportServer_ConfigurationSetting WHERE InstanceName='DPM'
[3/8/2019 5:14:06 AM] Data : Path of configuration file for SQL Server 2008 Reporting Services = C:\Program Files\Microsoft SQL Server\MSRS13.DPM\Reporting Services\ReportServer\RSReportServer.config
[3/8/2019 5:14:06 AM] * Exception : => System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
File name: 'Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.MiscHelper.IsSqlClustered(String sqlMachineName, String sqlInstanceName)
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.MiscHelper.IsMachineClustered(String sqlMachineName, String sqlInstanceName)
WRN: Assembly binding logging is turned OFF.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Note: There is some performance penalty associated with assembly bind failure logging.
To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
[3/8/2019 5:14:06 AM] * Exception : => System.Management.ManagementException: Invalid namespace
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementScope.InitializeGuts(Object o)
at System.Management.ManagementScope.Initialize()
at System.Management.ManagementObjectSearcher.Initialize()
at System.Management.ManagementObjectSearcher.Get()
at Microsoft.Internal.EnterpriseStorage.Dls.Setup.Helpers.WmiHelper.IsMachineClustered(String machineName, String instanceName)
[3/8/2019 5:14:06 AM] Information : OS >= win 8 , enable Dedupe role
[3/8/2019 5:14:07 AM] Information : output : True
..
error :
[3/8/2019 5:14:08 AM] Data : Path of inspection output xml = C:\Program Files\Microsoft System Center 2016\DPM\DPMLogs\InspectReport.xml
[3/8/2019 5:14:08 AM] Information : Instantiating inspect component.
[3/8/2019 5:14:08 AM] Data : Path of output xml = C:\Program Files\Microsoft System Center 2016\DPM\DPMLogs\InspectReport.xml
[3/8/2019 5:14:08 AM] Information : Deserializing the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM6EC.tmp\DPM2012\Setup\checks.xml
[3/8/2019 5:14:08 AM] Information : Loading the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM6EC.tmp\DPM2012\Setup\checks.xml
[3/8/2019 5:14:08 AM] Information : Deserialising the scenario XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM6EC.tmp\DPM2012\Setup\scenarios.xml
[3/8/2019 5:14:08 AM] Information : Loading the check XML from path : C:\Users\labuser.CONTOSO\AppData\Local\Temp\DPM6EC.tmp\DPM2012\Setup\scenarios.xml
[3/8/2019 5:14:08 AM] Information : Getting scenarios for the product: DPM
[3/8/2019 5:14:08 AM] Information : Getting scenarios for DPM
[3/8/2019 5:14:08 AM] Information : Getting scenario for Mode:Install, DbLocation:Remote, SKU:Retail and CCMode:NotApplicable
[3/8/2019 5:14:08 AM] *** Error : Initialize the SQLSetUpHelper Object
[3/8/2019 5:14:08 AM] Information : [SQLSetupHelper.GetWMIReportingNamespace]. Reporting Namespace found. Reporting Namespace : V13
[3/8/2019 5:14:08 AM] Information : [SQLSetupHelper.GetWMISqlServerNamespace]. SQL Namespace found. SQL Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13
[3/8/2019 5:14:08 AM] Information : Query WMI provider for SQL Server 2008.
[3/8/2019 5:14:08 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13 for query: Select * from SqlServiceAdvancedProperty where ServiceName='MSSQL$DPM' and PropertyName='Version'
[3/8/2019 5:14:08 AM] Information : SQL Server 2008 R2 SP2 instance DPM is present on this system.
[3/8/2019 5:14:08 AM] Information : Query WMI provider for SQL Server 2008.
[3/8/2019 5:14:08 AM] Information : Querying WMI Namespace: \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13 for query: Select * from SqlServiceAdvancedProperty where ServiceName='MSSQL$DPM' and PropertyName='Version'
[3/8/2019 5:14:08 AM] Information : [SQLSetupHelper.GetSQLDepedency]. Reporting Namespace and SQL namespace for installed SQL server which will be used as DPM DB. Reporting Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ReportServer\RS_DPM\V13\admin SQL Namespace : \\DPM-SERVER\root\Microsoft\SqlServer\ComputerManagement13
[3/8/2019 5:14:08 AM] Information : Check if SQL Server 2012 Service Pack 1 Tools is installed.
[3/8/2019 5:14:08 AM] Information : [SQLSetupHelper.GetSqlSetupRegKeyPath]. Registry Key path that contains SQL tools location: Software\Microsoft\Microsoft SQL Server\140\Tools\Setup\
[3/8/2019 5:14:08 AM] Information : Inspect.CheckSqlServerTools : MsiQueryProductState returned : INSTALLSTATE_DEFAULT
[3/8/2019 5:14:08 AM] *** Error : CurrentDomain_UnhandledException
Looking at the above log, the last line hints we are looking for SQL Server Tools (in this case, what looks like some crazy old references to dependencies on SQL Server 2012). Unfortunately, installation of SQL Server 2016 will recommend that you grab SQL Server Management Studio 17.X; however, DPM 2016 will only install with SQL Server Management Studio 16.5.X. You will need to uninstall the 17.X version of SSMS and install the 16.5.X build from the link below: https://docs.microsoft.com/en-us/sql/ssms/sql-server-management-studio-changelog-ssms?view=sql-server-2017#download-ssms-1653
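To double-check which SQL build the prerequisite check will detect, you can run the same WMI query that appears in the setup log yourself. The namespace and service name below match my lab's DPM-named instance; adjust MSSQL$DPM if your instance is named differently:

```powershell
# Same query the DPM installer logs under "Query WMI provider for SQL Server"
Get-WmiObject -Namespace 'root\Microsoft\SqlServer\ComputerManagement13' `
    -Query "Select * from SqlServiceAdvancedProperty where ServiceName='MSSQL`$DPM' and PropertyName='Version'" |
    Select-Object ServiceName, PropertyStrValue
```

SQL Server 2016 RTM builds start at 13.0.1601, so a higher build number suggests a service pack or cumulative update has been applied and you'll hit this error.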
Alas! After installing SSMS 16.5.X and running through the DPM installation again, no more Error 4387! Once DPM is installed, you can safely upgrade your SQL Server instance to 2017 if needed.
Hope this helps someone else! DPM can be picky and unforgiving in nature, but if you abide by exactly what the documentation calls out, to a T, and don't venture outside of those parameters, you should be golden 🙂
Closing notes: if the above items didn't solve your problem, please post your logs and let's troubleshoot to document the solutions needed for all error logs. Thank you!
Here is a recap of some of the reflections I have on deploying Palo Alto's VM-Series virtual appliance on Azure. This is more a reflection of the steps I took than a guide, but you can use the information below as you see fit. At a high level, you will need to deploy the device on Azure and then configure the internal "guts" of the Palo Alto to allow it to route traffic properly on your Virtual Network (VNet) in Azure. The steps outlined should work for both the 8.0 and 8.1 versions of the Palo Alto VM-Series appliance.
Please note, this tutorial also assumes you are looking to deploy a scale-out architecture. This can help ensure a single instance doesn't get overwhelmed with the amount of bandwidth you are trying to push through it. If you are looking for a single instance, you can still follow along.
Deploy the Appliance in Azure
In deploying the virtual Palo Altos, the documentation recommends creating them via the Azure Marketplace (which can be found here: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw?tab=Overview). Personally, I'm not a big fan of deploying the appliance this way, as I don't have as much control over naming conventions, can't deploy more than one appliance for scale, cannot specify my availability set, cannot leverage managed disks, etc. In addition, I noticed a really strange issue: if you specify a password greater than 31 characters, the Palo Alto devices flat out won't deploy on Azure. So I've written a custom ARM template that leverages managed disks, availability sets, a consistent naming nomenclature, and proper VM sizing, and, most importantly, lets you define how many virtual instances you'd like to deploy for scaling.
Note: this article doesn't cover the concept of using Panorama, which would centrally manage each of the scale-out instances in a "single pane of glass". Below, we will cover setting up a node manually to get it working. It is possible to create a baseline configuration file that joins Panorama post-deployment to bootstrap the nodes upon deployment of the ARM template. The bootstrap file is not something I've incorporated into this template, but the template could easily be modified to do so.
With the above said, this article will cover what Palo Alto considers their Shared design model. Here is an example of what this visually looks like (taken from Palo Alto's Reference Architecture document listed in the notes section at the bottom of this article):
Deployment of this template can be done by navigating to the Azure Portal (portal.azure.com), selecting Create a resource, typing Template Deployment in the Azure Marketplace search, clicking Create, selecting Build your own template in the editor, and pasting the code into the editor.
Alternatively, you can click this button here:
Here are some notes on what the parameters mean in the template:
VMsize: Per Palo Alto, the recommended VM sizes are DS3, DS4, or DS5. Documentation on this can be found here.
PACount: This defines how many virtual instances you want deployed and placed behind load balancers.
VNetName: The name of your virtual network you have created.
VNetRG: The name of the resource group your virtual network is in. This may be the same as the Resource Group you are placing the Palos in, but this is a needed configurable option to prevent errors referencing a VNet in a different resource group.
envPrefix: All of the resources that get created (load balancer, virtual machines, public IPs, NICs, etc.) will use this naming nomenclature.
manPrivateIPPrefix, trustPrivateIPPrefix, untrustPrivateIPPrefix: Corresponding subnet address range. These should be the first 3 octets of the range followed by a period. For example, 10.5.6. would be a valid value.
manPrivateIPFirst, trustPrivateIPFirst, untrustPrivateIPFirst: The last octet of the first usable IP address on the subnet specified. For example, if my subnet is 10.4.255.0/24, I would need to specify 4 as my first usable address (Azure reserves the first three usable addresses of every subnet).
Username: The name of the privileged account that will be used to SSH in and log in to the PAN-OS web portal.
Password: The password for the privileged account used to SSH in and log in to the PAN-OS web portal. Must be 31 characters or less due to a PAN-OS limitation.
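As an example, a parameters file for the template might look like the following. All values here are made-up sample values, and the parameter names assume the template described above:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "VMsize": { "value": "Standard_DS3_v2" },
    "PACount": { "value": 2 },
    "VNetName": { "value": "Contoso-VNet" },
    "VNetRG": { "value": "Contoso-Network-RG" },
    "envPrefix": { "value": "PA-Prod" },
    "manPrivateIPPrefix": { "value": "10.5.4." },
    "trustPrivateIPPrefix": { "value": "10.5.5." },
    "untrustPrivateIPPrefix": { "value": "10.5.6." },
    "manPrivateIPFirst": { "value": 4 },
    "trustPrivateIPFirst": { "value": 4 },
    "untrustPrivateIPFirst": { "value": 4 },
    "Username": { "value": "panadmin" },
    "Password": { "value": "ReplaceMe-31CharsOrLess" }
  }
}
```

For production, consider passing the password as a Key Vault reference rather than a plain value in the parameters file.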
Configure the Appliance
Once the virtual appliance has been deployed, we need to configure the Palo Alto device itself to enable connectivity on our Trust/Untrust interfaces.
On the firewall web interface, select Device tab -> Licenses and select Activate feature using authentication code.
Enter the capacity auth-code that you registered on the support portal. The firewall will connect to the update server (updates.paloaltonetworks.com), download the license, and reboot automatically. If this doesn't work, continue below to configure the interfaces of the device.
Log back in to the web interface after reboot and confirm the following on the Dashboard:
A valid serial number displays in Serial#. If the term Unknown displays, it means the device is not licensed. To view traffic logs on the firewall, you must install a valid capacity license.
The VM Mode displays as Microsoft Azure.
Follow these steps if using the PAYG (Pay as you go) version
Select Network -> Interfaces -> Ethernet -> select the link for ethernet1/1 and configure as follows:
Interface Type: Layer3 (default).
On the Config tab, assign the interface to the Untrust-VR router.
On the Config tab, expand the Security Zone drop-down and select New Zone. Define a new zone called Untrust, and then click OK.
On the IPv4 tab, select DHCP Client if you plan to assign only one IP address to the interface. If you plan to assign more than one IP address, select Static and manually enter the primary and secondary IP addresses assigned to the interface in the Azure portal. The private IP address of the interface can be found by navigating to Virtual Machines -> YOURPALOMACHINE -> Networking and using the Private IP address specified on each tab.
Note: Do not use the Public IP address to the Virtual Machine. Azure automatically DNATs traffic to your private address so you will need to use the Private IP Address for your UnTrust interface.
Clear the Automatically create default route to default gateway provided by server check box.
Note: Disabling this option ensures that traffic handled by this interface does not flow directly to the default gateway in the VNet.
Click OK
Note: For the untrust interface, ensure within your Azure environment that you have an NSG associated with the untrust subnet or the individual firewall interfaces, as the template doesn't deploy this for you (I could add this in, but if you already had an NSG I wouldn't want to overwrite it). Per the Azure Load Balancer documentation, you need an NSG associated with the NICs or the subnet to allow traffic in from the internet.
Configure the Trust Interface
Select Network -> Interfaces -> Ethernet -> select the link for ethernet1/2 and configure as follows:
Interface Type: Layer3 (default).
On the Config tab, assign the interface to the Trust-VR router.
On the Config tab, expand the Security Zone drop-down and select New Zone. Define a new zone called Trust, and then click OK.
On the IPv4 tab, select DHCP Client if you plan to assign only one IP address to the interface. If you plan to assign more than one IP address, select Static and manually enter the primary and secondary IP addresses assigned to the interface in the Azure portal. The private IP address of the interface can be found by navigating to Virtual Machines -> YOURPALOMACHINE -> Networking and using the Private IP address specified on each tab.
Clear the Automatically create default route to default gateway provided by server check box.
Note: Disabling this option ensures that traffic handled by this interface does not flow directly to the default gateway in the VNet.
Click OK
Click Commit in the top right. Verify that the link state for the interfaces is up (the interfaces should turn green in the Palo Alto user interface).
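For reference, the interface setup above can also be expressed from the PAN-OS CLI in configure mode. This is a sketch assuming DHCP addressing on both interfaces; the lines starting with # are annotations, not CLI input, and the exact paths should be double-checked against your PAN-OS version:

```
# Untrust interface: layer3, DHCP, no auto default route, zone and VR assignment
set network interface ethernet ethernet1/1 layer3 dhcp-client enable yes
set network interface ethernet ethernet1/1 layer3 dhcp-client create-default-route no
set zone Untrust network layer3 ethernet1/1
set network virtual-router Untrust-VR interface ethernet1/1

# Trust interface: same pattern against the Trust zone and Trust-VR
set network interface ethernet ethernet1/2 layer3 dhcp-client enable yes
set network interface ethernet ethernet1/2 layer3 dhcp-client create-default-route no
set zone Trust network layer3 ethernet1/2
set network virtual-router Trust-VR interface ethernet1/2

commit
```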
Define Static Routes
The Palo Alto will need to understand how to route traffic to the internet and how to route traffic to your subnets. As you will see in this section, we will need two separate virtual routers to help handle the processing of health probes submitted from each of the Azure Load Balancers.
Create a new Virtual Router and Static Route to the internet
Select Network -> Virtual Router
Click Add at the bottom
Set the Name to Untrust-VR
Select Static Routes -> IPv4 -> Add
Create a Static Route to egress internet traffic
Name: Internet
Destination: 0.0.0.0/0
Interface: ethernet 1/1
Next Hop: IP Address
IP Address: Use the IP address of the default gateway of your subnet the Untrust interface is deployed on
Note: To find this, navigate to the Azure Portal (portal.azure.com), select All Services -> Virtual Networks -> Your Virtual Network -> Subnets, and use the first IP address of the subnet the untrust interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
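If you'd rather compute the gateway address than look it up, Azure always assigns the first usable address of the subnet (the network address plus one) to the default gateway. A quick PowerShell sketch, assuming the network address's last octet is below 255:

```powershell
# Azure's subnet default gateway is the network address + 1
function Get-AzureGateway([string]$Cidr) {
    $ip = [System.Net.IPAddress]::Parse($Cidr.Split('/')[0])
    $bytes = $ip.GetAddressBytes()
    $bytes[3]++   # valid while the network address's last octet is < 255
    [System.Net.IPAddress]::new($bytes).ToString()
}

Get-AzureGateway "10.5.15.0/24"    # 10.5.15.1
Get-AzureGateway "10.5.15.128/25"  # 10.5.15.129
```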
Create a Static Route to move traffic from the internet to your trusted VR
Name: Internal Routes
Destination: your vnet address space
Interface: None
Next Hop: Next VR
Trust-VR
Click OK
Create a new Virtual Router and Static Route to your Azure Subnets
Select Network -> Virtual Router
Click Add at the bottom
Set the Name to Trust-VR
Select Static Routes -> IPv4 -> Add
Create a Static Route to send traffic to Azure from your Trusted interface
Name: AzureVNet
Destination: your vnet address space
Interface: ethernet 1/2
Next Hop: IP Address
IP Address: Use the IP address of the default gateway of the subnet the Trust interface is deployed on
Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of the subnet the Trust interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
Create a Static Route to move internet traffic received on Trust to your Untrust Virtual Router
Name: Internet
Destination: 0.0.0.0/0
Interface: None
Next Hop: Next VR
Untrust-VR
Click OK
Click Commit in the top right.
Configure Health Probes for Azure Load Balancers
If deploying the Scale-Out scenario, you will need to approve TCP probes from 168.63.129.16, the well-known IP address all Azure Load Balancer health probes originate from. In this case, we need static routes to allow the responses back to the load balancer. For the purpose of this article, we will configure SSH on the Trust and Untrust interfaces strictly for the Azure Load Balancer to contact to validate the Palo Alto instances are healthy.
Configure Palo Alto SSH Service for the interfaces
First we need to create an Interface Management Profile
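If you prefer the CLI over the web UI, a rough equivalent for creating the profile (a sketch assuming PAN-OS 8.x syntax; SSH-MP matches the profile name used in the steps below — verify the syntax against your PAN-OS version):

```
# PAN-OS CLI, configure mode
set network profiles interface-management-profile SSH-MP ssh yes
```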
Next, we need to assign the profile to the Trust interface
Select Network -> Interfaces -> select the link for ethernet1/2
Select the Advanced tab
Set the Management Profile to SSH-MP
Click OK
Next, we need to assign the profile to the Untrust interface
Select Network -> Interfaces -> select the link for ethernet1/1
Select the Advanced tab
Set the Management Profile to SSH-MP
Click OK
Create a Static Route for the Azure Load Balancer Health Probes on the Untrust Interface
Next, we need a route more specific than our 0.0.0.0/0 rule so responses to the health probes flow back out of the Untrust interface.
Select Network -> Virtual Router -> Untrust-VR
Select Static Routes -> IPv4 -> Add
Use the following configuration
Name: AzureLBHealthProbe
Destination: 168.63.129.16/32
Interface: ethernet 1/1
Next Hop: IP Address
IP Address: Use the IP address of the default gateway of the subnet the Untrust interface is deployed on
Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of the subnet the Untrust interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
Click OK
Create a Static Route for the Azure Load Balancer Health Probes on the Trust Interface
Next, we need a route more specific than our 0.0.0.0/0 rule so responses to health probes received on the Trust interface flow back out of the Trust interface rather than being forwarded to the Untrust-VR.
Select Network -> Virtual Router -> Trust-VR
Select Static Routes -> IPv4 -> Add
Use the following configuration
Name: AzureLBHealthProbe
Destination: 168.63.129.16/32
Interface: ethernet 1/2
Next Hop: IP Address
IP Address: Use the IP address of the default gateway of the subnet the Trust interface is deployed on
Note: To find this, navigate to the Azure Portal (portal.azure.com) and select All Services -> Virtual Networks -> Your Virtual Network -> Subnets and use the first IP address of the subnet the Trust interface is on. For example, if the address range of my subnet is 10.5.15.0/24, I would use 10.5.15.1 as my IP address. If my subnet was 10.5.15.128/25, I would use 10.5.15.129 as my IP address
Click OK
Click Commit in the top right.
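For reference, the same two health-probe routes can be entered from the PAN-OS CLI. A sketch assuming PAN-OS 8.x syntax, with 10.5.15.1 and 10.5.16.1 standing in for your Untrust and Trust subnet gateways (replace with your own):

```
set network virtual-router Untrust-VR routing-table ip static-route AzureLBHealthProbe destination 168.63.129.16/32 interface ethernet1/1 nexthop ip-address 10.5.15.1
set network virtual-router Trust-VR routing-table ip static-route AzureLBHealthProbe destination 168.63.129.16/32 interface ethernet1/2 nexthop ip-address 10.5.16.1
commit
```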
Create a NAT rule for internal traffic destined to the internet
You will need to NAT all egress traffic destined to the internet via the address of the Untrust interface, so return traffic from the Internet comes back through the Untrust interface of the device.
Navigate to Policies -> NAT
Click Add
On the General tab use the following configuration
Name: UntrustToInternet
Description: Rule to NAT all trusted traffic destined to the Internet to the Untrust interface
On the Original Packet tab use the following configuration
Source Zone: Click Add and select Trust
Destination Zone: Untrust
Destination Interface: ethernet 1/1
Service: Check Any
Source Address: Click Add, use the Internal Address space of your Trust zones
Destination address: Check Any
On the Translated Packet tab use the following configuration
By default, Palo Alto deploys 8.0.0 for the 8.0.X series and 8.1.0 for the 8.1.X series. Palo Alto will strongly recommend you upgrade the appliance to the latest version of that series before assisting with support cases.
To do this, go to Device -> Dynamic Updates -> click Check Now in the bottom left and download the latest build from the list of available updates.
Please note: the update process will require a reboot of the device and can take 20 minutes or so.
Summary
At this point you should have a working scaled-out Palo Alto deployment. If all went well, I would recommend removing the public IP from the management interface, or at least scoping it down to the single public IP address you are connecting from. You can find your public IP address by navigating here: https://jackstromberg.com/whats-my-ip-address/
TLDR: There are two sections of this article; feel free to scroll down to the titles for the applicable section.
Using VM Extensions with Terraform to Domain Join Virtual Machines
VM Extensions are a fantastic way to yield post-deployment configurations via template as code in Azure. One of Azure's most common VM Extensions is the JsonADDomainExtension, which will join your Azure VM to an Active Directory domain after the machine has successfully been provisioned. For the purposes of this article, we will assume you have a VM called testvm in the East US region.
Typically, VM extensions can be configured via the following block of ARM Template code (a fully working example building the virtual and running the extension can be found here).
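A minimal sketch of that extension block (hedged: the domain, OU path, account, and apiVersion are placeholders; the extension's published type is JsonADDomainExtension from the Microsoft.Compute publisher):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "testvm/joindomain",
  "apiVersion": "2018-06-01",
  "location": "East US",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "Name": "yourdomain.com",
      "OUPath": "OU=Servers,DC=yourdomain,DC=com",
      "User": "yourdomain.com\\domainjoinaccount",
      "Restart": "true",
      "Options": "3"
    },
    "protectedSettings": {
      "Password": "[parameters('domainJoinPassword')]"
    }
  }
}
```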
When looking at Terraform, the syntax is a bit different and there isn't much documentation on how to handle the settings and, most importantly, the password/secret used when joining the machine to the domain. In this case, here is a working translation of the ARM template to Terraform.
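A sketch of that translation (hedged: the resource group and domain values are placeholders, and the syntax assumes the pre-2.0 AzureRM provider with a VM resource named azurerm_virtual_machine.testvm):

```hcl
resource "azurerm_virtual_machine_extension" "domainjoin" {
  name                 = "joindomain"
  location             = "East US"
  resource_group_name  = "testvm-rg"   # placeholder resource group
  virtual_machine_name = "testvm"
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"

  settings = <<SETTINGS
    {
      "Name": "yourdomain.com",
      "OUPath": "OU=Servers,DC=yourdomain,DC=com",
      "User": "yourdomain.com\\domainjoinaccount",
      "Restart": "true",
      "Options": "3"
    }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "Password": "${var.domain_join_password}"
    }
PROTECTED_SETTINGS

  depends_on = ["azurerm_virtual_machine.testvm"]
}
```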
The key pieces here are the SETTINGS and PROTECTED_SETTINGS blocks that allow you to pass the traditional JSON attributes as you would in the ARM template. Luckily, Terraform does a somewhat decent job documenting this in their public docs, so if you have any additional questions on any of the attributes you can find them all here: https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html
The last block of code I have specified at the very end is a depends_on statement. This simply ensures that this resource is not created until the virtual machine itself has successfully been provisioned, which can be very beneficial if you have other scripts that need to run prior to domain join.
Using VM Extensions with Terraform to customize a machine post deployment
Continuing along the lines of customizing a virtual machine post deployment, Azure has a handy dandy extension called CustomScriptExtension. What this extension does is allow you to arbitrarily download and execute files (typically PowerShell) after a virtual machine has been deployed. Unlike the domain join example above, Azure has extensive documentation on this extension and provides support for both Windows and Linux (click the links for Windows or Linux to see the Azure docs on this).
Following suit with the Domain Join example above, within the ARM world we can leverage the following template to execute code post deployment:
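A minimal sketch of such a template block (hedged: the storage account, script name, and apiVersion are placeholders):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "testvm/customscript",
  "apiVersion": "2018-06-01",
  "location": "East US",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://yourstorageaccount.blob.core.windows.net/scripts/postdeploy.ps1" ]
    },
    "protectedSettings": {
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File postdeploy.ps1",
      "storageAccountName": "yourstorageaccount",
      "storageAccountKey": "[parameters('storageAccountKey')]"
    }
  }
}
```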
When we look at the translation over to Terraform, for the most part the structure is the exact same. Similar to our Active Directory Domain Join script above, the tricky piece is knowing to use the PROTECTED_SETTINGS to encapsulate our block of code that in this case authenticates to the Azure Storage Account to pull down our post-deployment script. Now per the Azure documentation, those variables are optional; if the scripts you have don't contain sensitive information, you are more than welcome to simply specify the fileUri and specify the commandToExecute via the regular SETTINGS block.
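A sketch of the Terraform equivalent under the same assumptions (placeholder storage account and script, pre-2.0 AzureRM provider, VM resource named azurerm_virtual_machine.testvm):

```hcl
resource "azurerm_virtual_machine_extension" "customscript" {
  name                 = "customscript"
  location             = "East US"
  resource_group_name  = "testvm-rg"   # placeholder resource group
  virtual_machine_name = "testvm"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
    {
      "fileUris": ["https://yourstorageaccount.blob.core.windows.net/scripts/postdeploy.ps1"]
    }
SETTINGS

  # Keeps the storage key and command out of plain view in the deployment logs
  protected_settings = <<PROTECTED_SETTINGS
    {
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File postdeploy.ps1",
      "storageAccountName": "yourstorageaccount",
      "storageAccountKey": "${var.storage_account_key}"
    }
PROTECTED_SETTINGS

  depends_on = ["azurerm_virtual_machine.testvm"]
}
```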
At this point you should be able to leverage both extensions to join a machine to the domain and then customize virtually any aspect of the machine thereafter.
The only thing I'll leave you with is typically it is recommended to not leave clear-text passwords scattered through your templates. In either case, I highly recommend looking at leveraging Azure Key Vault or an alternative solution that can ensure proper security in handling those secrets.
Notes
Aside from Terraform, one question I've received is what happens if the extension runs against a machine that is already domain joined? A: The VM extension will still install against the Azure Virtual Machine, but will immediately return back the following response: "Join completed for Domain 'yourdomain.com'"
Specifically, the following is returned back to Azure: [{"version":"1","timestampUTC":"2019-03-27T16:30:57.9274393Z","status":{"name":"ADDomainExtension","operation":"Join Domain/Workgroup","status":"success","code":0,"formattedMessage":{"lang":"en-US","message":"Join completed for Domain 'yourdomain.com'"},"substatus":null}}]
What does Options mean for domain join?
A: Copied from here: The options are a set of bit flags that define the join options. Default value of 3 is a combination of NETSETUP_JOIN_DOMAIN (0x00000001) & NETSETUP_ACCT_CREATE (0x00000002) i.e. will join the domain and create the account on the domain. For more information see https://msdn.microsoft.com/en-us/library/aa392154(v=vs.85).aspx
With Azure PowerShell modules changing all the time and the recent introduction of the PowerShell modules being renamed from AzureRm to Az, you may want to totally uninstall all modules and reinstall to make sure you are using the latest and greatest modules.
To do so, StackOverflow user BlueSky, wrote a handy dandy script that will go through and cleanup all the Azure(RM)(AD) modules. Simply open up PowerShell as an Administrator and execute the following PowerShell workflow/commands:
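A sketch of that cleanup workflow (hedged reconstruction; the wildcards assume only Azure-related modules match them on your system):

```powershell
workflow Uninstall-AzureModules {
    # Gather every installed Az/Azure/AzureRM module name, de-duplicated
    $modules = (Get-Module -ListAvailable Az*, Azure*, AzureRM*).Name | Sort-Object -Unique
    # Workflows allow foreach -parallel, so modules are removed concurrently
    foreach -parallel ($module in $modules) {
        Uninstall-Module -Name $module -Force -AllVersions -ErrorAction SilentlyContinue
    }
}
Uninstall-AzureModules
```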
Because the script above is a PowerShell workflow, it can remove all the modules in parallel rather than one-by-one. Here's a screenshot of the script in action.
From a performance perspective, PHP applications running on Azure App Services tend to perform better on Linux than Windows. While Azure provides a Drupal template in their Marketplace, it deploys to a regular Windows-based App Service and installs version 8.3.3 (whereas at the time of writing this article, 9/10/2018, the latest Drupal version is 8.6.1).
In this case, Microsoft has published a set of templates that provide flexibility to choose the Drupal version, deploy nginx, install PHP, and allow flexibility in installing any modules. The templates are currently deployed and maintained on GitHub, which can be found here: https://github.com/Azure/app-service-quickstart-docker-images/tree/master/drupal-nginx-fpm
Download and install Visual Studio Code (free lightweight code editor for Windows, Linux, and Mac)
Note: I'm using a Windows 10 machine while writing this tutorial. There will be some steps, like running Git Bash on Windows vs running Git natively on Linux. Likely, you can just run Git from a regular terminal session and you'll be fine on the Linux/Mac side.
Note: Unfortunately, we cannot just clone a specific directory easily, we have to download all the files. This particular GitHub project contains several projects, so it'll be about a 50MB download as a heads up
Note: The --config core.autocrlf=input is used to prevent Windows from checking files out with CRLF line endings instead of LF. If you don't specify this, you might receive the following error when trying to run your docker container after it is built:
standard_init_linux.go:190: exec user process caused "no such file or directory"
Navigate into the Drupal directory
cd app-service-quickstart-docker-images/drupal-nginx-fpm/0.45
Modify the scripts to your desire
I personally prefer not to have PHPMyAdmin or MariaDB installed as I will leverage Azure MySQL PaaS services for the database. In this case, I went ahead and modified the Dockerfile document accordingly.
Build the Docker container
Execute the following command to build your container:
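For example (hedged: the image tag and registry name here match the portal steps that follow; substitute your own):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t azuredrupal:0.45 .
# Authenticate to the Azure Container Registry, then tag and push the image
az acr login --name jackdrupalregistry
docker tag azuredrupal:0.45 jackdrupalregistry.azurecr.io/azuredrupal:0.45
docker push jackdrupalregistry.azurecr.io/azuredrupal:0.45
```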
Navigate to Create a resource -> Web App. Select Docker as the OS type, select Configure container, and leverage the following settings:
Image Source: Azure Container Registry
Registry: jackdrupalregistry
Image: azuredrupal
Tag: 0.45
Navigate to All Services -> App Services -> Your App Service -> Application settings and set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true, and click Save to help ensure data persists. Essentially, anything you write to /home will persist. Anything else will be reset when the container gets rebuilt.
Create a MySQL Database
Navigate to All Services -> App Services -> Your App Service -> Properties and write down the Outbound IP Addresses; we will use these later.
Select Create a resource -> Azure Database for MySQL -> Create -> create a blank database
Select Connection security and enter the Outbound IP Addresses from your App Service and click Save
Note: I haven't found a way to get Drupal to allow SSL Connections, which would certainly be a best practice. In this case, on the same Connection security blade, go ahead and set Enforce SSL Connection to Disabled. If someone knows how to do this, please put a comment below, so I can update this guide.
Go back to the Overview section and write down the Server admin login name and Server name; we will use these during the Drupal setup
Configure Drupal
At this point, go ahead and browse out to your App Service. You should have all the necessary details to complete the installation setup. Once completed, you should see the Welcome to Drupal Side splash page.
Notes:
Email:
Upon installation of Drupal you'll receive an error that Drupal cannot send email. Azure Web Apps don't allow open relay, so you will need to use a 3rd party mail service like SendGrid or Mailchimp to relay emails.
Helpful docker commands:
docker images # list the images on your machine
docker run -it azuredrupal:test # run a container interactively from an image
docker ps -a # list all containers, running or stopped
docker rm <containerid> # remove a container
docker rmi <image> # remove an image
Other deployment strategies:
In addition to deploying through the portal, you could easily deploy via PowerShell, Azure CLI, or ARM template. Here's an Azure CLI 2.0 example of how to deploy (note: the script below uses PowerShell variables for demonstration, please substitute those as needed):
$resourceGroupName = "Drupal-Test"
$planName = $resourceGroupName
$appName = $planName
$containerName = "appsvcorg/drupal-nginx-fpm:0.45"
$location = "West US"
az group create -l $location -n $resourceGroupName
az appservice plan create `
-n $planName `
-g $resourceGroupName `
--sku S3 --is-linux
az webapp create `
--resource-group $resourceGroupName `
--plan $planName `
--name $appName `
--deployment-container-image-name $containerName
az webapp config appsettings set `
--resource-group $resourceGroupName `
--name $appName `
--settings WEBSITES_ENABLE_APP_SERVICE_STORAGE="true"
az webapp config appsettings set `
--resource-group $resourceGroupName `
--name $appName `
--settings WEBSITES_CONTAINER_START_TIME_LIMIT="600"
# please modify DB settings according to current condition
az webapp config appsettings set `
--resource-group $resourceGroupName `
--name $appName `
--settings DATABASE_HOST="drupaldb.mysql.database.azure.com" `
DATABASE_NAME="drupaldb" `
DATABASE_USERNAME="user@drupaldb" `
DATABASE_PASSWORD="abcdefghijklmnopqrstuvwxyz"
Hiding users from the Global Address List (GAL) is fairly straightforward when the user is a cloud account. Simply select "Hide from address list" in the Exchange Online console or run some quick PowerShell:
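For example, from an Exchange Online PowerShell session (the mailbox identity is a placeholder):

```powershell
# Hide a cloud-only mailbox from the GAL
Set-Mailbox -Identity someuser@yourdomain.com -HiddenFromAddressListsEnabled $true
```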
Hiding users from the GAL is fairly straightforward when the user is synchronized from on-premises as well. Simply edit the attributes of the user object, set msExchHideFromAddressLists to True, and run a sync. The problem though is what happens if you don't have the msExchHideFromAddressLists attribute in Active Directory?
Well, you can either extend your Active Directory Schema for Exchange, which is not something that you can easily roll back if something goes wrong and arguably adds a ton of attributes that likely will be never used. Or, you can simply create a custom sync rule within Azure AD Connect that flows the value from a different attribute.
This article will go over how to sync a custom attribute from on-premises to Azure AD to hide a user from the GAL, without the need of extending your Active Directory schema. In this case, we are going to use an attribute called msDS-cloudExtensionAttributeX (where X is the number of the attribute that is free/not being used within your directory). The msDS-cloudExtensionAttribute(s) were introduced in Windows Server 2012 and come in 20 numbered variants to allow flexibility for these types of scenarios. Now some customers may gravitate towards using a different attribute like showInAddressBook. The problem with showInAddressBook is that this attribute is referenced by very old versions of Exchange (which I'm sure people would never be running 😉 ) and expects the common name of an object (not what we want). In this case, the easiest way to move forward is to simply use the msDS-cloudExtensionAttributes.
Step 1: Scope in the msDS-cloudExtensionAttribute for Azure AD Connect
Open the Azure AD Connect Synchronization Service
Navigate to the Connectors tab, select your Active Directory (not the domain.onmicrosoft.com entry), and select Properties
In the top right, click on Show All, scroll down and find msDS-CloudExtensionAttribute1 (you can use any of the numbers 1-20, just make sure to check the box you are using), and select OK
Step 2: Create a custom sync rule
Open up the Azure AD Connect Synchronization Rules Editor
Click on the Add new rule button (make sure direction in the top left shows Inbound)
Enter the following for the description:
Name: Hide user from GAL
Description: If msDS-CloudExtensionAttribute1 attribute is set to HideFromGAL, hide from Exchange Online GAL
Connected System: Your Active Directory Domain Name
Connected System Object Type: user
Metaverse Object Type: person
Link Type: Join
Precedence: 50 (this can be any number less than 100. Just make sure you don't duplicate numbers if you have other custom rules or you'll receive a dead-lock error from SQL Server)
Click Next > on Scoping filter and Join rules, those can remain blank
Enter the following Transformation page, click the Add transformation button, fill out the form with the values below, and then click Add
FlowType: Expression
Target Attribute: msExchHideFromAddressLists
Source:
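A commonly used expression for this rule looks like the following (a sketch; adjust the attribute number to match the one you scoped in during Step 1):

```
IIF(IsPresent([msDS-cloudExtensionAttribute1]), IIF([msDS-cloudExtensionAttribute1]="HideFromGAL", True, False), NULL)
```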
Open up Windows PowerShell on the Azure AD Connect Server
Execute the following command: Start-ADSyncSyncCycle -PolicyType Initial
Step 4: Hide a user from Active Directory
Open Active Directory Users and Computers, find the user you want to hide from the GAL, right click select Properties
Select the Attributes Editor tab, find msDS-cloudExtensionAttribute1, and enter the value HideFromGAL (note, this is case sensitive), click OK and OK to close out of the editor.
Note: if you don't see the Attribute Editor tab in the previous step, within Active Directory Users and Computers, click on View in the top menu and select Advanced Features
Step 5: Validation
Open the Azure AD Connect Synchronization Service
On the Operations tab, if you haven't seen a Delta Synchronization, manually trigger the Delta sync to pick up the change you made in Active Directory
Select the Export for the domain.onmicrosoft.com connector and you should see 1 update
Select the user account that is listed and click Properties. On the Connector Space Object Properties, you should see Azure AD Connect triggered an add to Azure AD to set msExchHideFromAddressLists set to true
There ya have it! An easy way to hide users from the GAL with minimal risk to ongoing operations. Due to the way Azure AD Connect handles upgrades, our custom sync rule will persist through regular updates/patches.
Per Microsoft: Some packages may not install using pip when run on Azure. It may simply be that the package is not available on the Python Package Index. It could be that a compiler is required (a compiler is not available on the machine running the web app in Azure App Service).
Example, you may receive an error like this when trying to install a specific package (in this case, trying to install Pandas):
Command: "D:\home\site\deployments\tools\deploy.cmd"
Handling python deployment.
KuduSync.NET from: 'D:\home\site\repository' to: 'D:\home\site\wwwroot'
Copying file: 'requirements.txt'
Detected requirements.txt. You can skip Python specific steps with a .skipPythonDeployment file.
Detecting Python runtime from runtime.txt
Detected python-2.7
Found compatible virtual environment.
Pip install requirements.
Downloading/unpacking Flask==0.12.1 (from -r requirements.txt (line 1))
Downloading/unpacking numpy==1.15.0rc2 (from -r requirements.txt (line 2))
Downloading/unpacking pandas==0.22.0 (from -r requirements.txt (line 3))
Running setup.py (path:D:\home\site\wwwroot\env\build\pandas\setup.py) egg_info for package pandas
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable effort
Could not locate executable efc
don't know how to compile Fortran code on platform 'nt'
non-existing path in 'numpy\\distutils': 'site.cfg'
Running from numpy source directory. d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\setup.py:385: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() D:\python27\Lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. self.calc_info() d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. self.calc_info() d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. 
Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. self.calc_info() D:\python27\Lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) Traceback (most recent call last): File "<string>", line 17, in <module> File "D:\home\site\wwwroot\env\build\pandas\setup.py", line 743, in <module> **setuptools_kwargs) File "D:\python27\Lib\distutils\core.py", line 111, in setup _setup_distribution = dist = klass(attrs) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 262, in __init__ self.fetch_build_eggs(attrs['setup_requires']) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 287, in fetch_build_eggs replace_conflicting=True, File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 614, in resolve dist = best[req.key] = env.best_match(req, ws, installer) File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 857, in best_match return self.obtain(req, installer) File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 869, in obtain return installer(requirement) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 338, in fetch_build_egg return cmd.easy_install(req) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 613, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 643, in install_item dists = self.install_eggs(spec, download, tmpdir) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 833, in install_eggs return self.build_and_install(setup_script, setup_base) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 1055, in build_and_install 
self.run_setup(setup_script, setup_base, args) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 1043, in run_setup raise DistutilsError("Setup script exited with %s" % (v.args[0],)) distutils.errors.DistutilsError: Setup script exited with error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27 Complete output from command python setup.py egg_info:Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable effort
Could not locate executable efcdon't know how to compile Fortran code on platform 'nt'non-existing path in 'numpy\\distutils': 'site.cfg'Running from numpy source directory.d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\setup.py:385: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands()D:\python27\Lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg)
d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. self.calc_info()d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info()d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info()d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. self.calc_info()d:\local\temp\easy_install-dsrz9g\numpy-1.15.0rc2\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
self.calc_info()D:\python27\Lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg)Traceback (most recent call last): File "<string>", line 17, in <module> File "D:\home\site\wwwroot\env\build\pandas\setup.py", line 743, in <module> **setuptools_kwargs) File "D:\python27\Lib\distutils\core.py", line 111, in setup _setup_distribution = dist = klass(attrs) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 262, in __init__ self.fetch_build_eggs(attrs['setup_requires']) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 287, in fetch_build_eggs replace_conflicting=True, File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 614, in resolve dist = best[req.key] = env.best_match(req, ws, installer) File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 857, in best_match return self.obtain(req, installer) File "D:\home\site\wwwroot\env\lib\site-packages\pkg_resources.py", line 869, in obtain return installer(requirement) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\dist.py", line 338, in fetch_build_egg return cmd.easy_install(req) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 613, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 643, in install_item dists = self.install_eggs(spec, download, tmpdir) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 833, in install_eggs return self.build_and_install(setup_script, setup_base) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 1055, in build_and_install self.run_setup(setup_script, setup_base, args) File "D:\home\site\wwwroot\env\lib\site-packages\setuptools\command\easy_install.py", line 1043, in run_setup raise DistutilsError("Setup 
script exited with %s" % (v.args[0],))distutils.errors.DistutilsError: Setup script exited with error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27----------------------------------------Cleaning up...Command python setup.py egg_info failed with error code 1 in D:\home\site\wwwroot\env\build\pandasStoring debug log for failure in D:\home\pip\pip.logAn error has occurred during web site deployment.\r\nD:\Program Files (x86)\SiteExtensions\Kudu\75.10629.3460\bin\Scripts\starter.cmd "D:\home\site\deployments\tools\deploy.cmd"
This guide walks through how to use Wheel files to install modules that cannot natively be installed via pip due to a missing compiler in the Azure App Service:
Add the following item as the first line to the document:
--find-links wheelhouse
Note: If you do not have a requirements.txt file, you can simply create a new text document and add this line to it. The requirements.txt file is what allows the Azure App Service to automatically go out and try and download packages you may need for your application. Official documentation on this file is found here: https://docs.microsoft.com/en-us/azure/app-service/web-sites-python-configure#package-management
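Putting that together, a minimal requirements.txt for the deployment shown in the earlier error log might look like this (package versions taken from that log):

```
--find-links wheelhouse
Flask==0.12.1
numpy==1.15.0rc2
pandas==0.22.0
```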
Within the debug console, navigate to your version of Python.
Note: The default Python versions in an Azure App Service are 2.7 and 3.4; however since Wheel will need to install some files, you cannot leverage the default directories of D:\Python27 for v2.7 and D:\Python34 for v3.4
Installing NodeJS on a Raspberry Pi can be a bit tricky. Over the years, the ARM-based processor has gone through several versions (ARMv6, ARMv7, and ARMv8), and there is a different flavor of NodeJS for each of these architectures.
Depending on the version you have, you will need to install NodeJS manually rather than grabbing the packages via a traditional apt-get install nodejs.
Step 1: Validate what version of the ARM chipset you have
First let's find out what ARM version you have for your Raspberry Pi. To do that, execute the following command:
uname -m
You should receive something like: armv6l
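The value from uname -m maps directly to the download you need in the next step. A small sketch of the mapping (the arch value is hardcoded here for illustration; on the Pi you would use arch=$(uname -m)):

```shell
# Map `uname -m` output to the matching Node.js ARM download flavor.
# "armv6l" is hardcoded as an example; on the Pi use: arch=$(uname -m)
arch="armv6l"
case "$arch" in
  armv6l)  flavor="ARMv6" ;;
  armv7l)  flavor="ARMv7" ;;
  aarch64) flavor="ARM64/ARMv8" ;;
  *)       flavor="unknown" ;;
esac
echo "Download the Linux Binaries (ARM) $flavor build"
```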
Step 2: Find the latest package to download from nodeJS's website
Navigate to https://nodejs.org/en/download/ and scroll down to the latest Linux Binaries for ARM. Right-click and copy the address of the download that matches your processor's architecture. For example, if you saw armv6l, you'd copy the download for ARMv6.
Step 3: Download and install nodeJS
Within your SSH/console session on the Raspberry Pi, change to your local home directory and execute the following command, substituting in the URL you copied in the previous step. For example:
cd ~
wget https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-armv6l.tar.xz
Next, extract the tarball (substituting in the name of the tarball you downloaded in the previous step) and change into the extracted directory
tar -xvf node-v8.11.3-linux-armv6l.tar.xz
cd node-v8.11.3-linux-armv6l
Next, remove a few files that aren't used and copy the files to /usr/local
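The exact file names vary by release, but for the v8.11.3/armv6l example above, the cleanup-and-copy step typically looks like the following. A stand-in directory is created here so the commands are runnable as written; on the Pi, skip the mkdir/touch lines and run the rm from inside the real extracted directory:

```shell
# Sketch of the cleanup-and-copy step (directory name from the earlier
# example; substitute your own version).
NODE_DIR="node-v8.11.3-linux-armv6l"
mkdir -p "$NODE_DIR"                                          # stand-in for the extracted tarball
touch "$NODE_DIR/CHANGELOG.md" "$NODE_DIR/LICENSE" "$NODE_DIR/README.md"
cd "$NODE_DIR"
# Remove the docs that shouldn't be copied into /usr/local:
rm -f CHANGELOG.md LICENSE README.md
# On the Pi, then copy what's left (bin/, lib/, share/, include/) system-wide
# and verify the install:
#   sudo cp -R * /usr/local/
#   node -v
echo "cleaned $NODE_DIR"
```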
Growing up it was always common to spin up a "LAMP" box to host a website. The typical setup was:
Linux
Apache
MySQL
PHP
Over the past few years, this model has slightly changed due to new open source technologies bringing new ideas to solve performance and licensing issues at massive scale. In this tutorial, we are going to look at setting up a LEMP box on Debian Stretch (9.1):
Linux
nginx [engine x]
MariaDB
PHP
Please note, MariaDB could easily be swapped out for MySQL in this tutorial; however, many have opted to jump over to MariaDB as an open source alternative (actually designed by the original developers of MySQL) over fears that Oracle may close-source MySQL.
Installing Linux
This tutorial assumes you already have a copy of Ubuntu 14+ or Debian 7+ installed. It probably works on earlier versions as well, but I haven't tested them. On a side note, I typically don't install Linux builds with an interactive desktop environment, so grab yourself a copy of PuTTY and SSH in, or open up Terminal if you have interactive access to the desktop environment. Before continuing, go ahead and update the apt-get repos and upgrade any packages currently installed:
apt-get update && apt-get upgrade
Installing nginx
Grab a copy of nginx
apt-get install nginx
Installing MariaDB
Grab a copy of MariaDB
apt-get install mariadb-server
Installing PHP
In this case, I want to roll with PHP7. You can specify php5 or php7 depending on your application, but PHP7 has some great performance enhancements, so for new apps I'd leverage it. The biggest thing here is to make sure you use the FastCGI Process Manager (FPM) package. If you specify just php or php7, the package manager will pull down apache2 as a dependency, and that is not what we want in our LEMP stack.
apt-get install php7.3-fpm
Once installed, fire up your favorite text editor (it's ok if it's vi :)) and edit the default site for nginx
vi /etc/nginx/sites-enabled/default
Search for the comment # Add index.php to the list if you are using PHP and add index.php to the line below it. For example:
index index.html index.htm index.php index.nginx-debian.html;
Next, find the comment # pass PHP scripts to FastCGI server and change the block of code to the following to tell nginx to process .PHP files with FastCGI-PHP:
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
        include snippets/fastcgi-php.conf;
#
#       # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
#       # With php-cgi (or other tcp sockets):
#       fastcgi_pass 127.0.0.1:9000;
}
Save the file. If using vi, you can do that by executing :wq
Next, validate the configuration syntax and reload the nginx service to pick up the new changes to our configuration:
nginx -t
service nginx reload
Test
At this point, we can create a PHP file to validate things are working well. Go ahead and create a new file, /var/www/html/info.php, and add the following lines:
<?php
phpinfo();
Save the file and browse to http://<your server>/info.php. If you see a page listing the PHP version and the corresponding environment configuration, congratulations, you have finished setting up your new LEMP stack! 🙂