Saw this the other day, and I wanted to post it, so that I can remember it.
How to find a Raspberry Pi’s DHCP address:
arp -na | grep -i "b8:27:eb"
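The same filter can be wrapped in a tiny helper so it's easy to reuse (a sketch — the function name is made up, and note that newer Pi models ship with other OUI prefixes, such as dc:a6:32 on the Pi 4):

```shell
# Filter arp output for Raspberry Pi MAC prefixes (OUIs).
# b8:27:eb = original Raspberry Pi Foundation OUI; dc:a6:32 = newer boards.
find_pi() {
  grep -i -e 'b8:27:eb' -e 'dc:a6:32'
}

# Usage: arp -na | find_pi
```

Pinging the broadcast address (or running a quick nmap sweep) first helps populate the ARP table before you grep it.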
Simple and quick – I had never done it before.

I was working on a different script when I realized that I had never downloaded my billing statements from Azure. A simple one-liner downloads them:

foreach ($invoice in $(Get-AzBillingInvoice -GenerateDownloadUrl)) { irm $invoice.DownloadUrl -OutFile "$($invoice.Name).pdf" }
Hope that helps someone!
I wanted to get the cost of a VM each day. I came up with the following code. Just the VM cost, not the ingress (DataTrIn) or egress (DataTrOut). I hope it helps someone!
$results=Get-AzConsumptionUsageDetail -StartDate 2019-12-01 -EndDate 2019-12-10 -IncludeAdditionalProperties
$subscriptions="YourSubName"
foreach ($sub in $subscriptions){
    $results |
        where {$_.ConsumedService -eq 'Microsoft.Compute'} |
        where {$_.SubscriptionName -eq $sub} |
        Select UsageStart, InstanceName, UsageQuantity, PreTaxCost, @{N="UsageType";E={(ConvertFrom-Json $($_.AdditionalInfo)).UsageType}} |
        where {$_.UsageType -eq 'ComputeHR'} |
        Sort-Object InstanceName, UsageStart | ft
}
I recently ran Get-Module -ListAvailable and saw a lot of older versions:
Get-Module -ListAvailable Az*

ModuleType Version Name                   ExportedCommands
---------- ------- ----                   ----------------
Script     1.6.0   Az.Accounts            {Disable-AzDataCollection, Disable-AzContextAutosave, Enable-AzDataCollection, Enable-AzContextAutosave...}
Script     1.4.0   Az.Accounts            {Disable-AzDataCollection, Disable-AzContextAutosave, Enable-AzDataCollection, Enable-AzContextAutosave...}
Script     1.3.0   Az.Accounts            {Disable-AzDataCollection, Disable-AzContextAutosave, Enable-AzDataCollection, Enable-AzContextAutosave...}
Script     1.0.1   Az.Aks                 {Get-AzAks, New-AzAks, Remove-AzAks, Import-AzAksCredential...}
Script     1.1.0   Az.AnalysisServices    {Resume-AzAnalysisServicesServer, Suspend-AzAnalysisServicesServer, Get-AzAnalysisServicesServer, Remove-AzAnalysisServicesServer...}
Script     1.0.2   Az.AnalysisServices    {Resume-AzAnalysisServicesServer, Suspend-AzAnalysisServicesServer, Get-AzAnalysisServicesServer, Remove-AzAnalysisServicesServer...}
Script     1.0.0   Az.AnalysisServices    {Resume-AzAnalysisServicesServer, Suspend-AzAnalysisServicesServer, Get-AzAnalysisServicesServer, Remove-AzAnalysisServicesServer...}
Script     1.2.0   Az.ApiManagement       {Add-AzApiManagementApiToProduct, Add-AzApiManagementProductToGroup, Add-AzApiManagementRegion, Add-AzApiManagementUserToGroup...}
Script     1.0.0   Az.ApiManagement       {Add-AzApiManagementRegion, Get-AzApiManagementSsoToken, New-AzApiManagementCustomHostnameConfiguration, New-AzApiManagementSystemC...
Script     1.0.0   Az.ApplicationInsights {Get-AzApplicationInsights, New-AzApplicationInsights, Remove-AzApplicationInsights, Set-AzApplicationInsightsPricingPlan...}
Script     1.3.0   Az.Automation          {Get-AzAutomationHybridWorkerGroup, Remove-AzAutomationHybridWorkerGroup, Get-AzAutomationJobOutputRecord, Import-AzAutomationDscNo...
Script     1.1.2   Az.Automation          {Get-AzAutomationHybridWorkerGroup, Remove-AzAutomationHybridWorkerGroup, Get-AzAutomationJobOutputRecord, Import-AzAutomationDscNo...
Script     1.1.0   Az.Automation          {Get-AzAutomationHybridWorkerGroup, Remove-AzAutomationHybridWorkerGroup, Get-AzAutomationJobOutputRecord, Import-AzAutomationDscNo...
Script     1.1.0   Az.Batch               {Remove-AzBatchAccount, Get-AzBatchAccount, Get-AzBatchAccountKey, New-AzBatchAccount...}
Script     1.0.0   Az.Batch               {Remove-AzBatchAccount, Get-AzBatchAccount, Get-AzBatchAccountKeys, New-AzBatchAccount...}
Script     1.0.0   Az.Billing             {Get-AzBillingInvoice, Get-AzBillingPeriod, Get-AzEnrollmentAccount, Get-AzConsumptionBudget...}
Script     0.2.0   Az.Blueprint           {Get-AzBlueprint, Get-AzBlueprintAssignment, New-AzBlueprintAssignment, Remove-AzBlueprintAssignment...}
Script     1.3.0   Az.Cdn                 {Get-AzCdnProfile, Get-AzCdnProfileSsoUrl, New-AzCdnProfile, Remove-AzCdnProfile...}
Script     1.1.0   Az.Cdn                 {Get-AzCdnProfile, Get-AzCdnProfileSsoUrl, New-AzCdnProfile, Remove-AzCdnProfile...}
Script     1.0.1   Az.Cdn                 {Get-AzCdnProfile, Get-AzCdnProfileSsoUrl, New-AzCdnProfile, Remove-AzCdnProfile...}
Script     1.0.1   Az.CognitiveServices   {Get-AzCognitiveServicesAccount, Get-AzCognitiveServicesAccountKey, Get-AzCognitiveServicesAccountSkus, Get-AzCognitiveServicesAcco...
Script     1.0.0   Az.CognitiveServices   {Get-AzCognitiveServicesAccount, Get-AzCognitiveServicesAccountKey, Get-AzCognitiveServicesAccountSkus, Get-AzCognitiveServicesAcco...
Script     2.4.0   Az.Compute             {Remove-AzAvailabilitySet, Get-AzAvailabilitySet, New-AzAvailabilitySet, Update-AzAvailabilitySet...}
Script     1.5.0   Az.Compute             {Remove-AzAvailabilitySet, Get-AzAvailabilitySet, New-AzAvailabilitySet, Update-AzAvailabilitySet...}
Script     1.3.0   Az.Compute             {Remove-AzAvailabilitySet, Get-AzAvailabilitySet, New-AzAvailabilitySet, Update-AzAvailabilitySet...}
I wanted to remove all versions except the most recent. Here is my script:
foreach ($module in (Get-Module -ListAvailable Az*).Name | Get-Unique) {
    if ((Get-Module -ListAvailable $module).Count -gt 1) {
        $Latest_Version = (Get-Module -ListAvailable $module | select Version | Sort-Object Version)[-1]
        write-host "Latest $module version $Latest_Version"
        Get-Module -ListAvailable $module |
            Where-Object {$_.Version -ne $Latest_Version.Version} |
            foreach {Uninstall-Module -Name $_.Name -RequiredVersion $_.Version -Verbose}
    } else {
        Write-Output "Only one version installed for $module"
    }
}
Hope that helps someone.
Ran into this one the other day. I suspect it may become an issue for some people, as their Service Principal secrets are going to expire soon (the default lifetime is one year).
I used the following method below to build an AKS cluster:
## Create a new SP
az ad sp create-for-rbac --name AKS01 --skip-assignment

## Give the new SP rights to my ACR
az role assignment create --assignee AppID --role acrpull --scope /subscriptions/{SubScriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.ContainerRegistry/registries/{ACRName}

## Create the cluster:
az aks create --resource-group {ResourceGroupName} --name AKS01 --ssh-key-value /path/tomy/.ssh/id_rsa.pub --service-principal SPAppID --client-secret PASSWORD

## Add an app from my ACR:
kubectl run mysimplenodeapp --image=myacr.azurecr.io/mysimplenodeapp:latest --port=5000
kubectl expose deployment mysimplenodeapp --type=LoadBalancer --name=mysimplenodeapp --port 5000

## Update the cluster:
az aks upgrade --resource-group {ResourceGroupName} --name AKS01 --kubernetes-version 1.12.6
Standard Stuff.
The trick is, when you need to update your SP credentials, how are you going to do it? There are two ways to update the credentials: in the portal and via the command line.
Long story short: use the command-line method! I ran into a problem when the secret was created in the portal. The portal generated a very complex password, and after updating the AKS cluster:
az aks update-credentials \
    --name "AKS01" \
    --resource-group {ResourceGroupName} \
    --client-secret 'sJk_^/(_{##)-!&@(:|&&;:}*/G[{i$m+(}JC@?;]).X+3(Vb:>>%z?_J:' \
    --service-principal {SPAppID} \
    --reset-service-principal
Images couldn’t be pulled from the ACR:
Failed to pull image "myacr.azurecr.io/mysimplenodeapp:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://myacr.azurecr.io/v2/mysimplenodeapp/manifests/latest: unauthorized: authentication required
When I updated the SP credentials using the CLI:
SP_ID=$(az aks show -g {ResourceGroupName} -n AKS01 --query servicePrincipalProfile.clientId -o tsv)
SP_SECRET=$(az ad sp credential reset --name $SP_ID --query password -o tsv)
az aks update-credentials \
    --resource-group {ResourceGroupName} \
    --name AKS01 \
    --reset-service-principal \
    --service-principal $SP_ID \
    --client-secret $SP_SECRET
My image pulled from the ACR right away!
It seems that when you reset the credential via the CLI, it generates a GUID as the secret, which doesn't contain any of the non-alphanumeric characters the portal produces. With that secret, everything works.
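A quick way to spot a secret that might bite you is to check it for characters the shell treats specially before you pass it on the command line (a sketch — the helper name is made up; the allowed set here is just letters, digits, and a few URL-safe punctuation characters):

```shell
# Warn when a secret contains characters the shell (or YAML/Kubernetes
# manifests) may interpret, e.g. $ ` " \ ( ) | ; etc.
check_secret() {
  case "$1" in
    *[!A-Za-z0-9._~-]*) echo "contains special characters" ;;
    *) echo "safe" ;;
  esac
}
```

A CLI-generated GUID secret passes this check; the portal-generated password in the post above does not.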
So if it is time for you to run update-credentials, use the CLI method:
SP_ID=$(az aks show -g {ResourceGroupName} -n AKS01 --query servicePrincipalProfile.clientId -o tsv)
SP_SECRET=$(az ad sp credential reset --name $SP_ID --query password -o tsv)
az aks update-credentials \
    --resource-group {ResourceGroupName} \
    --name AKS01 \
    --reset-service-principal \
    --service-principal $SP_ID \
    --client-secret $SP_SECRET
Hope that helps someone!
In the past I have always run Get-AzureRmResourceGroup and then looped through all the resource groups and the VMs inside them. This can be slow, so I put together this script to leverage Azure Resource Graph for your inventory. I hope this helps someone:
$everything=$(Search-AzureRmGraph -Query "where type != ''" -First 5000)
while ($($everything.Count) % 5000 -eq 0) {
    $everything=$everything + $(Search-AzureRmGraph -Query "where type != ''" -Skip $($everything.Count))
}
$VMs=$everything | Where {$_.type -contains 'Microsoft.Compute/virtualMachines'}
$NICs=$everything | Where {$_.type -contains 'microsoft.network/networkinterfaces'}
$pubIPs=$everything | Where {$_.type -contains 'microsoft.network/publicipaddresses'}
$NSGs=$everything | Where {$_.type -contains 'microsoft.network/networksecuritygroups'}

$VMSizes = @()
$locations=$VMs | Select location -Unique
foreach ($location in $($locations.location)){
    $sizes=get-azurermvmsize -location $location | Select @{Name="Location";Expression={$location}},Name,NumberOfCores,MemoryInMB,MaxDataDiskCount,OSDiskSizeInMB,ResourceDiskSizeInMB
    $VMSizes+=$sizes
}

$output=$VMs `
    | select *,@{N='vmSize';E={$_.properties.hardwareProfile.vmSize}} `
    | select *,@{N='CurrentSku';E={$s=$_.VMSize;$l=$_.location;$VMSizes | where {$_.Location -eq $l -and $_.Name -eq $s}}} `
    | select *,@{N='NumberOfCores';E={$_.CurrentSku.NumberOfCores}} `
    | select *,@{N='MemoryInMB';E={$_.CurrentSku.MemoryInMB}} `
    | select *,@{N='MaxDataDiskCount';E={$_.CurrentSku.MaxDataDiskCount}} `
    | select *,@{N='ResourceDiskSizeInMB';E={$_.CurrentSku.ResourceDiskSizeInMB}} `
    | select *,@{N='NICInfo';E={$NICId=$_.id;$NICs | Where {$_.properties.virtualMachine.id -eq $NICId}}} `
    | select *,@{N='NicName';E={(($_.NICInfo).Name)}} `
    | select *,@{N='NSGID';E={(($_.NICInfo).properties).networkSecurityGroup.id}} `
    | select *,@{N='NSGInfo';E={$NSGID=$_.NSGID;($NSGs | Where {$_.Id -eq $NSGID}).Properties}} `
    | select *,@{N='securityRules';E={(($_.NSGInfo).securityRules).Name}} `
    | select *,@{N='PrivIP';E={(((($_.NICInfo).Properties).ipConfigurations[0]).properties).privateIPAddress}} `
    | select *,@{N='PubIPID';E={(((($_.NICInfo).Properties).ipConfigurations[0]).properties).publicIPAddress.id}} `
    | select *,@{N='PubIPInfo';E={$PUBIPID=$_.PubIPID;($pubIPs | Where {$_.Id -eq $PUBIPID}).Properties}} `
    | select *,@{N='publicIPAllocationMethod';E={(($_.PubIPInfo)).publicIPAllocationMethod}} `
    | select *,@{N='publicIPAddress';E={(($_.PubIPInfo).ipAddress)}}
This pulls back everything and then you can pull out what you want.
There are a lot of examples out there on how to POST a document to Cosmos DB, but they weren't working for me. I kept getting a 400 Bad Request. After far too long, I finally got it to work: I needed the "x-ms-documentdb-partitionkey" header.
Code for anyone who needs it (I hacked the original code here that wasn’t working for me):
# [System.Web.HttpUtility] lives in System.Web, which isn't loaded by default
Add-Type -AssemblyName System.Web

Function Generate-MasterKeyAuthorizationSignature{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true)][String]$verb,
        [Parameter(Mandatory=$true)][String]$resourceLink,
        [Parameter(Mandatory=$true)][String]$resourceType,
        [Parameter(Mandatory=$true)][String]$dateTime,
        [Parameter(Mandatory=$true)][String]$key,
        [Parameter(Mandatory=$true)][String]$keyType,
        [Parameter(Mandatory=$true)][String]$tokenVersion
    )
    $hmacSha256 = New-Object System.Security.Cryptography.HMACSHA256
    $hmacSha256.Key = [System.Convert]::FromBase64String($key)
    If ($resourceLink -eq $resourceType) {
        $resourceLink = ""
    }
    $payLoad = "$($verb.ToLowerInvariant())`n$($resourceType.ToLowerInvariant())`n$resourceLink`n$($dateTime.ToLowerInvariant())`n`n"
    $hashPayLoad = $hmacSha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($payLoad))
    $signature = [System.Convert]::ToBase64String($hashPayLoad)
    [System.Web.HttpUtility]::UrlEncode("type=$keyType&ver=$tokenVersion&sig=$signature")
}
Code above just creates the auth header and is called below:
Function Post-CosmosDocuments{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true)][String]$EndPoint,
        [Parameter(Mandatory=$true)][String]$DBName,
        [Parameter(Mandatory=$true)][String]$CollectionName,
        [Parameter(Mandatory=$true)][String]$MasterKey,
        [String]$Verb="POST",
        [Parameter(Mandatory=$true)][String]$JSON
    )
    $ResourceType = "docs"
    $ResourceLink = "dbs/$DBName/colls/$CollectionName"
    # Partition key value must match the document's partition key property (here: id)
    $partitionkey = "[""$(($JSON | ConvertFrom-Json).id)""]"
    $dateTime = [DateTime]::UtcNow.ToString("r")
    $authHeader = Generate-MasterKeyAuthorizationSignature -verb $Verb -resourceLink $ResourceLink -resourceType $ResourceType -key $MasterKey -keyType "master" -tokenVersion "1.0" -dateTime $dateTime
    $header = @{authorization=$authHeader;"x-ms-version"="2015-12-16";"x-ms-documentdb-partitionkey"=$partitionkey;"x-ms-date"=$dateTime}
    $contentType = "application/json"
    $queryUri = "$EndPoint$ResourceLink/docs"
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    $result = Invoke-RestMethod -Method $Verb -ContentType $contentType -Uri $queryUri -Headers $header -Body $JSON
    return $result
}
And to run the functions above:
$CosmosDBEndPoint = "https://YourDBAccount.documents.azure.com:443/"
$DBName = "yourDB"
$CollectionName = "YourCollection"
$MasterKey = "YourPrimaryKey"
Post-CosmosDocuments -EndPoint $CosmosDBEndPoint -MasterKey $MasterKey -DBName $DBName -CollectionName $CollectionName -JSON ($SomeObject | ConvertTo-Json)
The key was to set the correct contentType and add "x-ms-documentdb-partitionkey" to the headers. This needs to match the partition key you set your DB up with; I am using "id".
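For reference, the signature that Generate-MasterKeyAuthorizationSignature computes can be reproduced from a shell with openssl — a sketch with made-up inputs, showing that the payload is just the lowercased verb, resource type, resource link, and lowercased date, with two trailing newlines, HMAC-SHA256'd with the base64-decoded master key:

```shell
# args: verb resourceType resourceLink dateTime base64MasterKey
# (the result still needs URL-encoding into "type=master&ver=1.0&sig=...")
cosmos_auth_sig() {
  # decode the base64 master key to hex for openssl's -macopt hexkey:
  hexkey=$(printf '%s' "$5" | base64 -d | od -An -tx1 | tr -d ' \n')
  printf '%s\n%s\n%s\n%s\n\n' \
    "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" \
    "$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')" \
    "$3" \
    "$(printf '%s' "$4" | tr '[:upper:]' '[:lower:]')" |
    openssl dgst -sha256 -mac HMAC -macopt "hexkey:$hexkey" -binary |
    base64
}
```

The output is always a 44-character base64 string (a 32-byte SHA-256 digest), which is a quick sanity check when debugging 401s.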
As a bonus, here is the code to query a DB. Leveraging the same first function to create the auth header:
Function Query-CosmosDocuments{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true)][String]$EndPoint,
        [Parameter(Mandatory=$true)][String]$DBName,
        [Parameter(Mandatory=$true)][String]$CollectionName,
        [Parameter(Mandatory=$true)][String]$MasterKey,
        [Parameter(Mandatory=$true)][String]$JSON,
        [String]$Verb="POST"
    )
    $ResourceType = "docs"
    $ResourceLink = "dbs/$DBName/colls/$CollectionName"
    $query=@"
{
    "query": "SELECT * FROM contacts c WHERE c.id = @id",
    "parameters": [
        { "name": "@id", "value": "$(($JSON | ConvertFrom-Json).id)" }
    ]
}
"@
    $dateTime = [DateTime]::UtcNow.ToString("r")
    $authHeader = Generate-MasterKeyAuthorizationSignature -verb $Verb -resourceLink $ResourceLink -resourceType $ResourceType -key $MasterKey -keyType "master" -tokenVersion "1.0" -dateTime $dateTime
    $header = @{authorization=$authHeader;"x-ms-version"="2015-12-16";"x-ms-documentdb-isquery"="True";"x-ms-date"=$dateTime}
    $contentType = "application/query+json"
    $queryUri = "$EndPoint$ResourceLink/docs"
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    $result = Invoke-RestMethod -Method $Verb -ContentType $contentType -Uri $queryUri -Headers $header -Body $query
    return $result
}
Hope that helps someone
I was looking for data that I couldn’t find in a PowerShell command, so I needed an access token to run a query against an Azure API.
I was stuck with the basic problem of how to query the Azure REST endpoints from a RunBook. In my last post, I had just learned that you can use the RunAs account for the Automation Account in a "Login-AzureRmAccount" session:
$connection = Get-AutomationConnection -Name AzureRunAsConnection
$loginresults=Login-AzureRmAccount -ServicePrincipal -Tenant $connection.TenantID `
    -ApplicationId $connection.ApplicationID -CertificateThumbprint $connection.CertificateThumbprint
Taking that a step further, I can then get an access token from the logged-in context and use it in a REST API call:
$connection = Get-AutomationConnection -Name AzureRunAsConnection
$loginresults=Login-AzureRmAccount -ServicePrincipal -Tenant $connection.TenantID `
    -ApplicationId $connection.ApplicationID -CertificateThumbprint $connection.CertificateThumbprint

$context = Get-AzureRmContext
$SubscriptionId = $context.Subscription
$cache = $context.TokenCache
$cacheItem = $cache.ReadItems()
$AccessToken=$cacheItem[$cacheItem.Count -1].AccessToken

$resourceGroup="MyResourceGroup"
$headerParams = @{'Authorization'="Bearer $AccessToken"}
$url="https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Compute/virtualMachines?api-version=2018-06-01"
$results=Invoke-RestMethod -Uri $url -Headers $headerParams -Method Get
Write-Output $results.value
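Outside a RunBook, the same REST call can be made from bash — a sketch assuming a signed-in Azure CLI session; the vm_list_url helper name and resource group are made up for illustration:

```shell
# Build the ARM URL for listing VMs in a resource group (hypothetical helper)
# args: subscriptionId resourceGroup
vm_list_url() {
  echo "https://management.azure.com/subscriptions/$1/resourceGroups/$2/providers/Microsoft.Compute/virtualMachines?api-version=2018-06-01"
}

# With 'az login' done, the CLI can mint the bearer token for you:
#   TOKEN=$(az account get-access-token --query accessToken -o tsv)
#   curl -s -H "Authorization: Bearer $TOKEN" "$(vm_list_url "$SUB_ID" "MyResourceGroup")"
```

Handy for poking at an API version before wiring it into a RunBook.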
Hope that helps someone!
I have an Azure lab subscription (as do you, I am sure). In this lab, I am always provisioning, deleting, and scaling VMs. To keep my costs down, I want to enable the Auto-Shutdown feature on all new VMs. I can do that easily with an ARM template, but for machines that aren't provisioned by one of my templates, I always forget to enable it.
Here is code to loop through all of your machines and enable the Auto-Shutdown setting. I know there are other ways to do this, but I wanted it to be visible when looking at the VM (in the Auto-Shutdown section).
Put this code in an Azure Automation RunBook and it will run every night (I run it an hour before the Auto-Shutdown time!)

Also, I had never used RunBooks before, so I learned that you need the first two lines to connect to Azure as the Azure Automation RunAs account (no passwords in code!)
Here is the code:
$connection = Get-AutomationConnection -Name AzureRunAsConnection
$loginresults=Login-AzureRmAccount -ServicePrincipal -Tenant $connection.TenantID `
    -ApplicationId $connection.ApplicationID -CertificateThumbprint $connection.CertificateThumbprint

foreach ($rg in $((Get-AzureRmResourceGroup).ResourceGroupName)){
    foreach ($vm in $(Get-AzureRmVM -ResourceGroupName $rg)){
        $shutdown_time = "22:00"
        $shutdown_timezone = "Eastern Standard Time"
        $properties = @{
            "status" = "Enabled";
            "taskType" = "ComputeVmShutdownTask";
            "dailyRecurrence" = @{"time" = $shutdown_time};
            "timeZoneId" = $shutdown_timezone;
            "notificationSettings" = @{
                "status" = "Disabled";
                "timeInMinutes" = 30
            }
            "targetResourceId" = $vm.Id
        }
        $resourceId = ("/subscriptions/{0}/resourceGroups/{1}/providers/microsoft.devtestlab/schedules/shutdown-computevm-{2}" -f (Get-AzureRmContext).Subscription.Id, $rg, $vm.Name)
        $Status = $null
        try{
            $Status = (Get-AzureRmResource -ResourceId $resourceId -ErrorAction stop).Properties.Status
        } Catch {
            write-output "Setting $($vm.Name) to auto shutdown @ $shutdown_time (was never enabled)"
            New-AzureRmResource -ResourceId $resourceId -Location $vm.Location -Properties $properties -Force
        }
        if ($Status -eq "Disabled"){
            write-output "Setting $($vm.Name) to auto shutdown @ $shutdown_time (was disabled)"
            New-AzureRmResource -ResourceId $resourceId -Location $vm.Location -Properties $properties -Force
        } elseif ($Status -eq "Enabled") {
            write-output "$($vm.Name) is already set to auto shutdown"
        }
    }
}
As you can see, I am using some "Write-Output"s in the code. How can I see them without having to navigate to the history of the job? Log Analytics! To enable Azure Automation to write to Log Analytics:
https://docs.microsoft.com/en-us/azure/automation/automation-manage-send-joblogs-log-analytics
And here is a Kusto query to see the output of the Automation job:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.AUTOMATION"
| where RunbookName_s == "MyJobName"
| where Category == "JobLogs" or Category == "JobStreams"
| order by TimeGenerated
| project TimeGenerated, CorrelationId, RunbookName_s, ResultDescription, ResultType
| where TimeGenerated > now() - 1d
Now you can set up a Logic App to run the Kusto query and email you the results!
Note: This will set all your VMs to Auto-Shutdown. Make sure you don’t run this against your production environment!
Hope that helps someone.
This is a NoteToSelf. Often I paste into vi/vim, and the tabs go crazy and fly across the screen, leaving the text starting in the middle of the line. The next line is even further! I can never remember how to fix the issue.
To fix:
:set paste
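And `:set nopaste` turns it back off when you're done. A toggle key can also be set in .vimrc (F2 is just an arbitrary choice here; note that some newer Vim/Neovim builds rely on bracketed paste instead and may no longer support this option):

```vim
" Toggle paste mode with F2 before/after pasting
set pastetoggle=<F2>
```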