My PowerShell scripts to encrypt Azure VM disks

These are the steps I took, pulled from this very long document.

First, create a Key Vault and an AAD application, then connect the two. Make note of the value of $aadClientID.

$Location="East US"
$ResourceGroupName="YourResourceGroup"
$KeyVaultName="YourKeyVault"

#Create New KeyVault
New-AzureRmKeyVault -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName -Location $Location

#Create New AAD Application
$aadClientSecret = "YourLongSecret"
$azureAdApplication = New-AzureRmADApplication -DisplayName "Encryption-EastUS" -HomePage "https://IThinkAnythingCanGoHere" -IdentifierUris "https://IThinkAnythingCanGoHereURi" -Password $aadClientSecret
$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
$aadClientID = $azureAdApplication.ApplicationId
Set-AzureRmKeyVaultAccessPolicy -VaultName $KeyVaultName -ServicePrincipalName $aadClientID -PermissionsToKeys all -PermissionsToSecrets all -ResourceGroupName $ResourceGroupName;
Set-AzureRmKeyVaultAccessPolicy -VaultName $KeyVaultName -EnabledForDiskEncryption

Once that is set up, you can encrypt a VM:

$Location="East US"
$ResourceGroupName="YourResourceGroup"
$KeyVaultName="YourKeyVault"
$vmName="YourVMName"

$aadClientSecret = "YourLongSecret"
$aadClientID = "YouMadeNoteOfThisAbove"
$KeyVault = Get-AzureRmKeyVault -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $ResourceGroupName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId;

If you did not make note of your $aadClientID, you can look it up again:

Get-AzureRmADApplication -DisplayNameStartWith "Encryption-EastUS"

The ApplicationId property is what you are looking for.

I had forgotten how I set this up, so I went back and made these notes. I hope they help someone.


Using git and a post-receive hook to update production node.js apps

I have been trying to figure out the best way to deploy and maintain node.js apps in development and production. If I have a local git repo on my machine and I want to push it to production, what is the best way to do this? I don't think the .git files should be on the production server. I also don't keep my node modules in the repo, so I need a way to push updates and make sure the newest dependencies are installed on the server.
I found that people use a post-receive hook to update the site. This is what I ended up with. Put it in a file named post-receive in the hooks folder of the repo on the server (not in your local repo), and make it executable.

#!/bin/sh
GIT_WORK_TREE=/var/www/yourapp
git --work-tree=$GIT_WORK_TREE checkout --force
cd $GIT_WORK_TREE
npm install
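The whole flow can be sketched end to end on one machine: a bare "server" repo with a hook like the one above, a local clone that pushes to it, and a work tree that receives the files. The npm install step is left out so the sketch runs anywhere git does, and every path is a throwaway stand-in:

```shell
# Self-contained sketch of the bare-repo + post-receive flow, entirely in a
# temp directory -- all paths are stand-ins for your real server layout.
set -e
DEMO=$(mktemp -d)

# "Server" side: a bare repo, plus a separate work tree the site serves from.
git init -q --bare "$DEMO/app.git"
mkdir "$DEMO/www"

# Install the hook: check the pushed branch out into the work tree.
cat > "$DEMO/app.git/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree=$DEMO/www checkout --force master
EOF
chmod +x "$DEMO/app.git/hooks/post-receive"

# "Local" side: clone, commit a file, and push it up.
git clone -q "$DEMO/app.git" "$DEMO/local" 2>/dev/null
cd "$DEMO/local"
echo "console.log('deployed');" > app.js
git add app.js
git -c user.email=you@example.com -c user.name=you commit -qm "first deploy"
git push -q origin HEAD:master 2>/dev/null

# The hook has populated the work tree: app.js is there, and no .git folder.
ls "$DEMO/www"
```

On a real server you would keep the bare repo somewhere like /opt/repos/app.git, add it as a remote on your machine (git remote add production ssh://server/opt/repos/app.git), and deploy with git push production master.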

I may take this a step further and recycle pm2, but that is another post!


Using PowerShell to extract all contacts from MS CRM 2011

We are moving from MS CRM 2011 to Salesforce, and we need to get our data out so we can import it. Here is the PowerShell script I am using to export contacts to CSV.

$url = "http://YourCrmServer/YourOrg/XRMServices/2011/OrganizationData.svc/ContactSet?`$filter=StatusCode/Value eq 1"

$assembly = [Reflection.Assembly]::LoadWithPartialName("System.Web.Extensions")

function GetData ($url) {
    $webclient = New-Object System.Net.WebClient
    $webclient.UseDefaultCredentials = $true
    $webclient.Headers.Add("Accept", "application/json")
    $webclient.Headers.Add("Content-Type", "application/json; charset=utf-8")
    return $webclient.DownloadString($url)
}

$output = @()
$count = 0
while ($url) {
    $data = GetData($url) | ConvertFrom-Json
    $output += $data
    $count += $data.d.results.Count
    Write-Host $count
    if ($data.d.__next) {
        $url = $data.d.__next
    }
    else {
        $url = $null
    }
}

$output.d.results | Select-Object -Property @{l="ParentCustomerID";e={$_.ParentCustomerId.Id}},* -ExcludeProperty ParentCustomerId,__metadata | Export-Csv -NoTypeInformation C:\Contact.csv

Hope that helps someone.


When using PowerShell to pull REST data from MS CRM, escape `$filter!

Note to self.

When trying to filter a REST response in PowerShell by using the "$filter" parameter in the URL (as with MS CRM 2011), you must escape the "$" with a backtick ("`$"). Inside double quotes, PowerShell otherwise expands $filter as a variable, which is usually empty, so the parameter silently disappears.

For example:

Does not work:
$url="$filter=StateCode/Value eq 0"

Works:
$url="`$filter=StateCode/Value eq 0"

It gets me every time, and I can't figure out why my filters are being ignored!


Using jsforce and node.js to connect to Salesforce

I wanted to write a node.js app to pull data from Salesforce. I found the npm library jsforce and added it to the dependencies in my package.json:

  "dependencies": {
    "express": "*",
    "dotenv": "*",
    "jsforce": "*"
  }

I also added "dotenv", which I am using to load my client secret and all configuration data from a hidden .env file. This is not in my git repo, so I can have different values in production and development.
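Since the file must stay out of version control, it helps to tell git to ignore it explicitly. A minimal sketch (run in a throwaway directory here; in a real project you would run the echo once in the repo root):

```shell
# Demo in a throwaway repo; the directory is a stand-in for your project root.
demo=$(mktemp -d)
cd "$demo"
git init -q
echo ".env" >> .gitignore   # tell git to ignore the secrets file
touch .env
git check-ignore .env       # prints ".env", confirming it is ignored
```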

Here is what I have in my .env file (the values here are placeholders):

LOGINURL=https://login.salesforce.com
CLIENTID=YourConnectedAppClientId
CLIENTSECRET=YourConnectedAppClientSecret
REDIRECTURI=http://localhost:3000/oauth2/callback
USERNAME=you@yourdomain.com
PASSWORD=YourPasswordAndSecurityToken

Here is the code to pull in the .env values, define the OAuth2 connection, and log in.

var dotenv = require('dotenv').load();
var jsforce = require('jsforce');

var conn = new jsforce.Connection({
  oauth2 : {
      loginUrl : process.env.LOGINURL,
      clientId : process.env.CLIENTID,
      clientSecret : process.env.CLIENTSECRET,
      redirectUri : process.env.REDIRECTURI
  }
});

var username = process.env.USERNAME;
var password = process.env.PASSWORD;

conn.login(username, password, function(err, userInfo) {
  if (err) { return console.error(err); }
  console.log("User ID: " + userInfo.id);
  console.log("Org ID: " + userInfo.organizationId);
});

Once connected and logged in, we can query using SOQL. This is a query to pull All Opportunities, their contacts and contact roles, and their team members and the team member roles. If that makes sense. I am using this query to show the relationships between Opportunities and their Contacts and team members using d3.js. More on that later.

    var query = "SELECT Id, Name,(SELECT Contact.Name,Contact.Email,Contact.Id,Contact.AccountId,ContactId,Role,Contact.Account.Name FROM OpportunityContactRoles),(SELECT User.Name,User.Email,User.Id,UserId,TeamMemberRole FROM OpportunityTeamMembers) FROM Opportunity";
    conn.query(query, function(err, results) {
      if (err) { return console.error(err); }
      console.log("Query: " + results.totalSize);
      console.log(JSON.stringify(results, null, 2));
    });

My script/procedure to move Hyper-V VMs to Azure

We have been moving resources from ESXi to Hyper-V to Azure. ESXi to Hyper-V is done via the Microsoft Virtual Machine Converter (MVMC). Here is the checklist/script/procedure I have been using to get from Hyper-V to Azure.

  1. Once the machine is in Hyper-V, make sure the VM's hard disks are VHD and not VHDX
  2. Make sure DHCP is set on the VM
  3. Make sure RDP is enabled (ours is set via group policy)
  4. Power down the VM
  5. Run the PowerShell below to upload the disk(s) (Add-AzureRmVhd) and create a new VM in Azure:
$Location = "East US 2"
$DestinationSystemDiskUri = "https://$StorageAccountName.blob.core.windows.net/vhds/$VMName-System.vhd"
$DestinationDataDiskUri = "https://$StorageAccountName.blob.core.windows.net/vhds/$VMName-Data.vhd"

Add-AzureRmVhd -Destination $DestinationSystemDiskUri -LocalFilePath $SourceSystemLocalFilePath -ResourceGroupName $ResourceGroupName
if ($DataDisk){
    Add-AzureRmVhd -Destination $DestinationDataDiskUri -LocalFilePath $SourceDataLocalFilePath -ResourceGroupName $ResourceGroupName
}

#region Build New VM
$DestinationVM = New-AzureRmVMConfig -VMName $VMName -VMSize $DestinationVMSize -AvailabilitySetId $(Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $DestinationAvailabilitySet).Id
$vnet = Get-AzureRmVirtualNetwork -Name $DestinationNetworkName -ResourceGroupName $ResourceGroupName
$subnet = $vnet.Subnets | Where-Object {$_.Name -eq $DestinationNetworkSubnet}
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $subnet.Id -PrivateIpAddress $PrivateIpAddress
$DestinationVM = Add-AzureRmVMNetworkInterface -VM $DestinationVM -Id $nic.Id
if ($OSType -eq "Windows"){
    $DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Windows -CreateOption Attach
}
if ($DataDisk){
    $DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption Attach -DiskSizeInGB $DataDiskSize
}
New-AzureRmVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $DestinationVM
#endregion

The most important part is to use "-CreateOption Attach" with "Set-AzureRmVMOSDisk".

Hope that helps someone.


Using Let’s Encrypt certbot-auto with Apache on CentOS 6

There are plenty of better documented examples out there, so this is more of a note to self.

cd /opt
mkdir YourDir
cd YourDir/
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

./certbot-auto --apache certonly -d yourdomain.com -d www.yourdomain.com -d sub1.yourdomain.com -d sub2.yourdomain.com

The name on the cert will be the first domain you list in the command above. All the other names will be part of the SAN cert.
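To double-check which names actually ended up on a cert, openssl can print the SAN extension. The sketch below generates a throwaway self-signed cert (placeholder domains) and inspects it; for the real cert, point -in at /etc/letsencrypt/live/<first-domain>/cert.pem instead. The -addext/-ext flags assume OpenSSL 1.1.1 or newer:

```shell
# Generate a throwaway cert with two SAN entries, then print the SAN list.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=yourdomain.com" \
  -addext "subjectAltName=DNS:yourdomain.com,DNS:www.yourdomain.com" 2>/dev/null

# Both names should be listed under "X509v3 Subject Alternative Name".
openssl x509 -in "$tmp/cert.pem" -noout -ext subjectAltName
```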

And to renew, cron this up:
/opt/YourDir/certbot-auto renew
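For example, a crontab entry (added with crontab -e as root) that tries twice a day; certbot-auto only renews certificates that are close to expiry, so the extra runs are no-ops. The schedule below is an arbitrary choice:

```shell
# m  h   dom mon dow  command
0 3,15 *   *   *    /opt/YourDir/certbot-auto renew --quiet
```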


Using ADFS to authenticate Apache-hosted sites

I have been learning ADFS/SAML on the fly. If you come across this, and you see that I am doing it all wrong, then let me know!

I wanted to use my existing ADFS infrastructure to authenticate an Apache resource on CentOS 6. Below is what I figured out (there are a lot of steps).

First, your site has to have HTTPS enabled.

Second, install Shibboleth: add its repo, yum install it, enable it, and start it.

wget -P /etc/yum.repos.d
yum install shibboleth
chkconfig shibd on
service shibd start

This will include the "/etc/httpd/conf.d/shib.conf" file that defines the Apache paths to the shibd service (and enables the module).

Next, I needed to edit the /etc/shibboleth/shibboleth2.xml file. Set your SP's entityID on the ApplicationDefaults element:

<ApplicationDefaults entityID="" REMOTE_USER="eppn persistent-id targeted-id">

And set the SSO element's entityID to your ADFS server's entityID:

<SSO entityID="" discoveryProtocol="SAMLDS" discoveryURL="">

At this point, I ran into trouble. Normally, it looks like you continue editing the /etc/shibboleth/shibboleth2.xml config file and set up the metadata provider to point at your ADFS federation metadata like this:

<MetadataProvider type="XML" uri="" backingFilePath="federation-metadata.xml" reloadInterval="7200">

But I kept getting errors when I restarted shibd (service shibd restart). It seems that Shibboleth and ADFS don't speak quite the same language.
This site talks about it; the solution is to download the metadata document, modify it, store it locally, and finally point the /etc/shibboleth/shibboleth2.xml config file at the "pre-processed" local metadata file.

I processed the metadata file in PowerShell with a script from here. I put the PowerShell code in a file named ADFS2Fed.ps1 and changed the top variables to look like this:

$scope = "";

I downloaded the XML file from "" and saved it as federationmetadata.xml (in the same directory as ADFS2Fed.ps1).

I ran ADFS2Fed.ps1; it found the downloaded metadata file "federationmetadata.xml", pre-processed it, and spit out "federationmetadata.xmlForShibboleth.xml".

I uploaded this file to my /etc/shibboleth/ folder and named it "partner-metadata.xml".

I then uncommented the following line in /etc/shibboleth/shibboleth2.xml:

 <MetadataProvider type="XML" validate="true" file="partner-metadata.xml"/>

That took care of the metadata provider.

Next, I needed to add this to the bottom of the attribute-map.xml file; the UPN that ADFS was sending was being ignored by shibd:

<Attribute name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified" id="upn" />

Next, I needed to allow Shibboleth to work with SELinux (source):

Create a file named mod_shib-to-shibd.te with:

module mod_shib-to-shibd 1.0;

require {
       type var_run_t;
       type httpd_t;
       type initrc_t;
       class sock_file write;
       class unix_stream_socket connectto;
}

#============= httpd_t ==============
allow httpd_t initrc_t:unix_stream_socket connectto;
allow httpd_t var_run_t:sock_file write;

Compile, package and load the module with the following 3 commands:

checkmodule -m -M -o mod_shib-to-shibd.mod mod_shib-to-shibd.te
semodule_package -o mod_shib-to-shibd.pp -m mod_shib-to-shibd.mod
semodule -i mod_shib-to-shibd.pp

Finally, the last step on the Apache/Linux side is to set the Apache virtual host to use Shibboleth for authentication.

        <Directory /var/www/dir/to/site>
          AllowOverride All
          AuthType Shibboleth
          ShibRequireSession On
          require valid-user
          ShibUseEnvironment On
          Order allow,deny
          Allow from all
        </Directory>

On the Windows/ADFS side:

  • In the ADFS Management Console, choose Add Relying Party Trust.
  • Select Import data about the relying party published online or on a local network and enter the URL for the SP metadata (https://yoursite/Shibboleth.sso/Metadata).
  • Continuing the wizard, select Permit all users to access this relying party.
  • In the Add Transform Claim Rule Wizard, select Pass Through or Filter an Incoming Claim.
  • Name the rule (for example, Pass Through UPN) and select the UPN Incoming claim type.
  • Click OK to apply the rule and finalize the setup.

I hope this helped someone. It took me a while to figure this out.
In summary,

  1. Use SSL
  2. Install shibd
  3. Edit /etc/shibboleth/shibboleth2.xml
  4. Process the metadata file
  5. Edit /etc/shibboleth/shibboleth2.xml to point to the local processed metadata file
  6. Modify attribute-map.xml
  7. Allow shibd to work with SELinux
  8. Tell Apache to use shibboleth
  9. Setup ADFS using the wizard