Archive | PowerShell

PowerShell to move a VM to a new Log Analytics workspace

This code uninstalls the Microsoft Monitoring Agent and reinstalls it, pointing it at a new workspace.


# Change your VM name and its resource group
$vm = Get-AzureRmVM -VMName YourVMName -ResourceGroupName VMResourceGroup
Remove-AzureRmVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name MicrosoftMonitoringAgent -Force
# put in your new workspaceId & workspaceKey
$workspaceId = "NewWorksSpaceID"
$workspaceKey = "SupaSecretKey"

$PublicSettings = @{"workspaceId" = $workspaceId;"stopOnMultipleConnections" = $false}
$ProtectedSettings = @{"workspaceKey" = $workspaceKey}

Set-AzureRmVMExtension -ExtensionName "MicrosoftMonitoringAgent" -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name `
-Publisher "Microsoft.EnterpriseCloud.Monitoring" `
-ExtensionType "MicrosoftMonitoringAgent" `
-TypeHandlerVersion "1.0" `
-Settings $PublicSettings `
-ProtectedSettings $ProtectedSettings `
-Location $vm.Location
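
If you want to double-check that the extension came back after the re-install, a quick sketch like this should do it (Get-AzureRmVMExtension with -Status shows the provisioning state):

# Sketch: confirm the extension was re-provisioned
Get-AzureRmVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name "MicrosoftMonitoringAgent" -Status | Select-Object Name,ProvisioningState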

Nothing special; I just thought I would put it here. Maybe it will help someone.


Use the REST API to create a new Project in Azure DevOps

As the title says, I wanted to create a new project in VSTS / Azure DevOps, whatever you want to call it. Here is the code to do that. You need a Personal Access Token (PAT) to authenticate.

$User="[email protected]"
$PAT="YourPAT"
$Organization="YourOrg"
$base64authinfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $User, $PAT)))
$url="https://dev.azure.com/$Organization/_apis/projects?api-version=5.1-preview.4"
$body = @"
{
  "name": "FabrikamTravel",
  "description": "Frabrikam travel app for Windows Phone",
  "capabilities": {
    "versioncontrol": {
      "sourceControlType": "Git"
    },
    "processTemplate": {
      "templateTypeId": "6b724908-ef14-45cf-84f8-768b5384da45"
    }
  }
}
"@
Invoke-RestMethod -Method POST -ContentType "application/json" -Uri $url -Headers @{Authorization=("Basic {0}" -f $base64authinfo)} -Body $body
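
If you are not sure which templateTypeId to use, you can list the available process templates first. A rough sketch reusing the same auth header (the processes endpoint is a plain GET):

# Sketch: list process templates so you can pick the right templateTypeId
$processUrl = "https://dev.azure.com/$Organization/_apis/process/processes?api-version=5.1"
(Invoke-RestMethod -Method GET -Uri $processUrl -Headers @{Authorization=("Basic {0}" -f $base64authinfo)}).value | Select-Object name,id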

Hope that helps someone.


Using PowerShell to query Azure Log Analytics via the REST API

I wanted to pull some data out of Azure Log Analytics using PowerShell and the REST API. Here is the code to pull all errors from the Application event logs on VMs that push their logs into Log Analytics via the Microsoft Monitoring Agent.

Hopefully this may help someone:

$SubscriptionId = "$($env:SubscriptionId)"
$TenantId       = "$($env:TenantId)" 
$ClientID       = "$($env:ClientID)"      
$ClientSecret   = "$($env:ClientSecret)"  
$TenantDomain   = "$($env:TenantDomain)" 
$loginURL       = "https://login.microsoftonline.com/$TenantId/oauth2/token"
$resource       = "https://api.loganalytics.io"         

$body           = @{grant_type="client_credentials";resource=$resource;client_id=$ClientID;client_secret=$ClientSecret}
$oauth          = Invoke-RestMethod -Method Post -Uri $loginURL -Body $body
$headerParams = @{'Authorization'="$($oauth.token_type) $($oauth.access_token)"}

$Workspacename="Your WS Name"
$WorkspaceId="Your WS ID"

$url="https://api.loganalytics.io/v1/workspaces/$WorkspaceId/query"
$body = @{query = 'Event | where EventLog == "Application" and EventLevelName == "Error" | order by TimeGenerated asc | project Computer,EventLog,Source,EventLevelName,EventID,RenderedDescription,TimeGenerated'} | ConvertTo-Json
$webresults=Invoke-RestMethod -UseBasicParsing -Headers $headerParams -Uri $url -Method Post -Body $body -ContentType "application/json"

Notes:

  1. I keep my subscription information in environment variables. It makes it easier for me to switch to a different tenant.
  2. This returns the results in tables. To turn the tables into objects, look at this person’s code at line 60: https://blog.tyang.org/2017/11/14/searching-oms-using-the-new-search-language-kusto-rest-api-in-powershell/
  3. My interpretation of the code in #2:
$resultsTable = $webresults   # Invoke-RestMethod has already converted the JSON for us
$count = 0
foreach ($table in $resultsTable.Tables) {
    $count += $table.Rows.Count
}
$results = New-Object object[] $count
$i = 0
foreach ($table in $resultsTable.Tables) {
    foreach ($row in $table.Rows) {
        # Create a dictionary of properties for this row
        $properties = @{}
        for ($columnNum = 0; $columnNum -lt $table.Columns.Count; $columnNum++) {
            $properties[$table.Columns[$columnNum].name] = $row[$columnNum]
        }
        $results[$i] = (New-Object PSObject -Property $properties)
        $null = $i++
    }
}
$results
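
Once the rows are real objects, normal PowerShell works against them. For example (just an illustration):

# Example: count events per computer from the converted results
$results | Group-Object Computer | Sort-Object Count -Descending | Select-Object Count,Name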




Removing machines from Azure State Configuration (DSC)

I have been provisioning machines over and over trying to learn all the VM extensions. One of the extensions I have been playing with is the DSC extension. Every time I provision with this extension, it adds an additional record into the State Configuration, resulting in many stale machines. I wanted to clear out all the old machines. I couldn’t find a way to do it in PowerShell, so I figured out how to do it via the REST API (and PowerShell).

Here is the code to remove all machines from Azure State Configuration (DSC)

$SubscriptionId = "$($env:SubscriptionId)"
$TenantId       = "$($env:TenantId)" 
$ClientID       = "$($env:ClientID)"      
$ClientSecret   = "$($env:ClientSecret)"  
$TenantDomain   = "$($env:TenantDomain)" 
$loginURL       = "https://login.microsoftonline.com/$TenantId/oauth2/token"
$resource       = "https://management.core.windows.net/"    
$resourceGroupName = "YourResourceGroupName "
$automationAccountsName ="YourAutomationAccountsName "

# get the OAUTH token & prepare header
$body           = @{grant_type="client_credentials";resource=$resource;client_id=$ClientID;client_secret=$ClientSecret}
$oauth          = Invoke-RestMethod -Method Post -Uri $loginURL -Body $body
$headerParams = @{'Authorization'="$($oauth.token_type) $($oauth.access_token)"}

# main query to find all the nodes
$url="https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Automation/automationAccounts/$automationAccountsName/nodes?api-version=2018-01-15"
$results=Invoke-RestMethod -Uri $url -Headers $headerParams -Method Get
# Loop through all the nodes and delete them all.
foreach ($node in $($results.value | Select-Object -ExpandProperty properties | Select-Object nodeid)){
    $url="https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Automation/automationAccounts/$automationAccountsName/nodes/$($node.nodeId)?api-version=2018-01-15"
    Invoke-RestMethod -Uri $url -Headers $headerParams -Method Delete
}
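
After the loop finishes, re-running the GET should confirm the list is empty (same call as above, just checking the count):

# Sketch: confirm all nodes are gone
$url="https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Automation/automationAccounts/$automationAccountsName/nodes?api-version=2018-01-15"
(Invoke-RestMethod -Uri $url -Headers $headerParams -Method Get).value.Count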

Notes:

  1. I put all my SPN info into environment variables (easier to switch if needed)
  2. Put in your resource group name and Automation account name
  3. Warning: this will delete ALL nodes!

HTH


Using PowerShell to extract all contacts from MS CRM 2011

We are moving from MS CRM 2011 to Salesforce. We need to get our data out so we can import it into Salesforce. Here is the PowerShell script I am using to export contacts to CSV.

$url="http://crm.sardverb.com/Company/xrmservices/2011/OrganizationData.svc/ContactSet?`$filter=StatusCode/Value eq 1"

$assembly = [Reflection.Assembly]::LoadWithPartialName("System.Web.Extensions")
$count=0
$output = @()

# Helper to pull one page of results
function GetData ($url) {
    $webclient = New-Object System.Net.WebClient
    $webclient.UseDefaultCredentials = $true
    $webclient.Headers.Add("Accept", "application/json")
    $webclient.Headers.Add("Content-Type", "application/json; charset=utf-8")
    return $webclient.DownloadString($url)
}

# Keep following the __next links until there are no more pages
while ($url){
    $data = GetData($url) | ConvertFrom-Json
    $output += $data
    $count = $count + $data.d.results.length
    Write-Host $count
    if ($data.d.__next){
        $url = $data.d.__next.ToString()
    }
    else {
        $url = $null
    }
}

$output.d.results | Select -ExcludeProperty ParentCustomerId,__metadata @{l="ParentCustomerID";e={$_.ParentCustomerID.Id}},* | Export-Csv -NoTypeInformation C:\Contact.csv
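
The same paged loop should work for any other entity set; in theory you only need to change the starting URL, for example:

# Hypothetical: point the same loop at accounts instead of contacts
$url="http://crm.sardverb.com/Company/xrmservices/2011/OrganizationData.svc/AccountSet?`$filter=StatusCode/Value eq 1"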

Hope that helps someone.


Connecting to the Salesforce REST api using PowerShell

As I said in my previous post, we are starting to use Salesforce, and I like REST APIs, so I wanted to see how to connect to Salesforce with cURL and PowerShell.

cURL was pretty easy; PowerShell was not so much. The biggest issue was that when I queried the standard “https://login.salesforce.com/services/oauth2/token” URL, I would get one response back, but if I tried again, it wouldn’t work. I had to install Fiddler to figure out what was going on. I finally found the error and the solution: use your instance URL. That took me half a day to figure out. Add a typo of missing https in the URL on top of that, and I was not having fun. Once I figured out that you need to use your instance URL and https, I hit this error:

salesforce stronger security is required

So I had to figure out how to force Invoke-WebRequest or Invoke-RestMethod to use TLS 1.2. Here is the code that I finally figured out; it gets an access token and queries accounts.


[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$tokenurl = "https://InstanceName-dev-ed.my.salesforce.com/services/oauth2/token"
$postParams = [ordered]@{
    grant_type="password";
    client_id="ReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongCli";
    client_secret="1234567890123456789";
    username="[email protected]";
    password="PasswordAndTokenNoSpaces";
}

$access_token=(Invoke-RestMethod -Uri $tokenurl -Method POST -Body $postParams).access_token

$url = "https://InstanceName-dev-ed.my.salesforce.com/services/data/v37.0/sobjects/Account"
Invoke-RestMethod -Uri $url -Headers @{Authorization = "Bearer " + $access_token}


You don’t need the [ordered] part of the hash table; I was just using it to troubleshoot.
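
With the same token you can also run SOQL through the query endpoint. A rough sketch (the fields are just examples):

# Sketch: run a SOQL query with the same bearer token
$soql = "SELECT Id, Name FROM Account LIMIT 10"
$queryUrl = "https://InstanceName-dev-ed.my.salesforce.com/services/data/v37.0/query?q=" + [uri]::EscapeDataString($soql)
(Invoke-RestMethod -Uri $queryUrl -Headers @{Authorization = "Bearer " + $access_token}).records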


Hidden or undocumented Network Security Group (NSG) default rule in Azure (DNS)

I have been working to get a Citrix Netscaler up and running in Azure. It has not been easy, as all the documentation is for ASM.

Our network configuration has IPSec tunnels going from OnPrem to Azure, and I have created two subnets in Azure: a DMZ and a LAN. The DMZ has the following outbound NSG rules (ACLs) so the NetScaler can talk to the LAN subnet.

Get-AzureRmNetworkSecurityGroup -ResourceGroupName ResourceGroupName | Select SecurityRules -ExpandProperty SecurityRules | where {$_.Direction -eq "Outbound"} | Select Priority,Name,Protocol,SourceAddressPrefix,SourcePortRange,DestinationAddressPrefix,DestinationPortRange,Access | Sort-Object Priority|ft -AutoSize

DMZ Netscaler = 192.10.8.100
LAN DC = 192.10.9.10

Priority Name                           Protocol SourceAddressPrefix SourcePortRange DestinationAddressPrefix DestinationPortRange Access
-------- ----                           -------- ------------------- --------------- ------------------------ -------------------- ------
     101 LDAP_From_NSIP                 TCP      192.10.8.100        *               192.10.9.10              389                  Allow
     102 DNSUDP_From_NSIP               Udp      192.10.8.100        *               192.10.9.10              53                   Allow
     103 DNSTCP_From_NSIP               TCP      192.10.8.100        *               192.10.9.10              53                   Allow
     104 RADIUS_From_NSIP               Udp      192.10.8.100        *               192.10.9.10              1812                 Allow
    4095 Subnet_To_Internet             *        *                   *               Internet                 *                    Allow
    4096 Deny_All_Outbound              *        *                   *               *                        *                    Deny

As you can see, I add a DenyAll at the end even though there is one in the DefaultSecurityRules. I just like to see it there. I find it comforting.

I found that from the NetScaler, I could do a DNS lookup against my OnPrem DC. How can that be?
Rules 101-104 only allow traffic to the Azure LAN DC, and then I deny everything else with 4096.
How can the NetScaler resolve names via the OnPrem DC?
I am denying all!
I was pulling my hair out.

I realized that I had never changed the DNS server settings on my Virtual Network in Azure (I needed the OnPrem DNS so the local DC could join the domain when I built it!). I forgot to switch it to the local Azure LAN DC.

Therefore, even though there is a DenyAll in my NSG rules, there has to be a hidden or undocumented rule that allows queries to the DNS servers listed in the Virtual Network settings.

As soon as I changed the DNS server settings to the local Azure LAN DC, I could no longer query the OnPrem DC.
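
For reference, the same change from PowerShell would look roughly like this (a sketch; it assumes the VNet already has custom DNS servers configured, as mine did, and "ResourceGroupName"/"VNetName" are placeholders):

# Sketch: point the Virtual Network at the Azure LAN DC for DNS
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "ResourceGroupName" -Name "VNetName"
$vnet.DhcpOptions.DnsServers.Clear()
$vnet.DhcpOptions.DnsServers.Add("192.10.9.10")
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet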

I understand why it is there. If you put in a DenyAll (like I did), Windows Servers will panic. They do not like it if they can’t access a DNS server.

I think Azure needs to move the DNS server settings down to the subnet level, since all VMs use DHCP (with reservations). If they did this, a DMZ and a LAN could use different DNS server settings, or none at all.

Just something I ran across today.


PowerShell to delete blobs in Azure

I was trying to delete a VHD in Azure via PowerShell, and I couldn’t find a good solution. Here is how you delete a blob in Azure:

$resourceGroupName="Default"
$storageAccountname="StorageAccount01"
$storageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountname).Key1
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
$containerName="vhds"
 
# List blobs
Get-AzureStorageBlob -Container $containerName -Context $storageContext
 
# Remove Blob
Get-AzureStorageBlob -Container $containerName -Context $storageContext -Blob "SystemDisk01.vhd" | Remove-AzureStorageBlob
Get-AzureStorageBlob -Container $containerName -Context $storageContext -Blob "DataDisk01.vhd" | Remove-AzureStorageBlob
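
If you need to clear out a whole container of VHDs, you can pipe the listing straight into the remove. A sketch with -WhatIf so you can preview what would be deleted first:

# Sketch: preview deleting every VHD in the container (drop -WhatIf to actually delete)
Get-AzureStorageBlob -Container $containerName -Context $storageContext | Where-Object {$_.Name -like "*.vhd"} | Remove-AzureStorageBlob -WhatIf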

Hope that helps someone.

My Azure ASM to ARM script

This is the “script” I used to move our older classic environment VMs to the new Azure Resource Manager.
It is not a function; I wanted to step through the process and make sure all was well at the different points in the script.
The script assumes that there is only one data disk (or none), and that you have created your availability set beforehand.
I based most of the script off this.

I hope this helps someone.

Add-AzureAccount 
Login-AzureRmAccount 
$VMName="ASMVM01"
$ServiceName="ASMVM01_Service"
$SourceVMSize="Standard_A3"
$DestinationAvailabilitySet="AvailabilitySet01"
$PrivateIpAddress="192.168.1.10"
$ResourceGroupName="ResourceGroup01"
$DestinationNetworkName="Network01"
$DestinationNetworkSubnet="SubNet01"
$Location="East US"
$OSType="Windows"
#$OSType="Linux"
[switch]$DataDisk=$false
$DataDiskSize=100
$SourceStorageAccountName="srcstorageaccount"
$DestinationStorageAccountName="dststorageaccount"

# ---- Edit above
#region Get Source Storage
$SourceStorageAccountKey=(Get-AzureStorageKey -StorageAccountName $SourceStorageAccountName).Primary
$SourceContext = New-AzureStorageContext -StorageAccountName $SourceStorageAccountName -StorageAccountKey $SourceStorageAccountKey
#endregion

#region Get Destination Storage
$DestinationAccountKey=(Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $DestinationStorageAccountName).Key1
$DestinationContext = New-AzureStorageContext -StorageAccountName $DestinationStorageAccountName -StorageAccountKey $DestinationAccountKey
#endregion

#region Get SourceVM
$SourceVM = Get-AzureVm  -ServiceName $ServiceName -Name $VMName
if ($SourceVM.Status -ne "StoppedDeallocated"){
    "You need to stop $VMName first"
    return
}
#endregion

#region Copy SystemDisk
$SourceSystemDisk=Get-AzureDisk | Where-Object { $_.AttachedTo.RoleName -eq "$VMName" } | where {$_.OS -eq $OSType}
$DestinationSystemDiskName="$($VMNAME)_SYSTEM.vhd"
write-host "Copying System Disk"
Write-Host "Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceSystemDisk.MediaLink.AbsoluteUri) -DestContainer ""vhds"" -DestBlob $DestinationSystemDiskName -DestContext $DestinationContext -Verbose"
$SystemBlob = Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceSystemDisk.MediaLink.AbsoluteUri) -DestContainer "vhds" -DestBlob $DestinationSystemDiskName -DestContext $DestinationContext -Verbose 
$SystemBlob | Get-AzureStorageBlobCopyState
While (($SystemBlob | Get-AzureStorageBlobCopyState).Status -ne "Success"){
    Start-Sleep -Seconds 5
    $BlobCopyStatus = $SystemBlob | Get-AzureStorageBlobCopyState
    "$($BlobCopyStatus.Status) ($($BlobCopyStatus.BytesCopied) of $($BlobCopyStatus.TotalBytes) bytes)"
}
#endregion

#region Copy Data Disk
if ($DataDisk){
    $SourceDataDisk = Get-AzureDisk | Where-Object { $_.AttachedTo.RoleName -eq "$VMName" } | Where-Object {! $_.OS}
    $DestinationDataDiskName = "$($VMNAME)_DATA01.vhd"
    Write-Host "Copying Data disk"
    Write-Host "Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceDataDisk.MediaLink.AbsoluteUri) -DestContainer ""vhds"" -DestBlob $DestinationDataDiskName -DestContext $DestinationContext -Verbose"
    $DataDiskBlob = Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceDataDisk.MediaLink.AbsoluteUri) -DestContainer "vhds" -DestBlob $DestinationDataDiskName -DestContext $DestinationContext -Verbose
    $DataDiskBlob | Get-AzureStorageBlobCopyState
    While (($DataDiskBlob | Get-AzureStorageBlobCopyState).Status -ne "Success"){
        Start-Sleep -Seconds 5
        $BlobCopyStatus = $DataDiskBlob | Get-AzureStorageBlobCopyState
        "$($BlobCopyStatus.Status) ($($BlobCopyStatus.BytesCopied) of $($BlobCopyStatus.TotalBytes) bytes)"
    }
}
#endregion

#region Build New VM
$DestinationVM = New-AzureRmVMConfig -vmName $vmName -vmSize $SourceVMSize -AvailabilitySetId $(Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $DestinationAvailabilitySet).Id
$nicName="$($VMName)_NIC01"
$vnet = Get-AzureRmVirtualNetwork -Name $DestinationNetworkName -ResourceGroupName $ResourceGroupName 
$subnet = $vnet.Subnets | where {$_.Name -eq $DestinationNetworkSubnet}
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PrivateIpAddress $PrivateIpAddress
$DestinationVM = Add-AzureRmVMNetworkInterface -VM $DestinationVM -Id $nic.Id 
$DestinationSystemDiskUri = "$($DestinationContext.BlobEndPoint)vhds/$DestinationSystemDiskName"
$DestinationDataDiskUri = "$($DestinationContext.BlobEndPoint)vhds/$DestinationDataDiskName"

If ($OSType -eq "Windows"){
$DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Windows -CreateOption attach
if ($DataDisk){
$DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DatDiskSize
}
}
If ($OSType -eq "Linux"){
$DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $SourceSystemDisk -VhdUri $DestinationOSDiskUri -Linux -CreateOption attach
if ($DataDisk){
$DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DatDiskSize
}
}
 
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $Location -VM $DestinationVM
#endregion
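
Once New-AzureRmVM returns, a quick status check on the new VM does not hurt (just a sketch):

# Sketch: confirm the new ARM VM exists and is running
(Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName -Status).Statuses | Select-Object Code,DisplayStatus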

PowerShell to download and install most recent Azure PowerShell cmdlets

This script pulls down the most recent Azure PowerShell cmdlets from GitHub. It assumes that Microsoft has not renamed the installer file and that the most recent release is at the top of the page.

function JBM-INSTALL-AzurePowerShell {
    ((Invoke-WebRequest https://github.com/Azure/azure-powershell/releases).Links).href |
        Where-Object {$_ -like "https*azure-powershell*msi*"} |
        Select-Object -First 1 |
        ForEach-Object {
            # Download the MSI next to the script, then launch the installer
            Invoke-WebRequest $_ -OutFile "./$([System.IO.Path]::GetFileName($_))"
            Start-Process "./$([System.IO.Path]::GetFileName($_))"
        }
}
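
Calling it is just the function name:

JBM-INSTALL-AzurePowerShell

If you would rather install silently, swapping the Start-Process line for an msiexec call (something like msiexec /i <downloaded.msi> /qn) should work, though I have not tried it.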

Hope that helps someone.