Author Archive | jbmurphy

Using git and a post-receive hook to update production node.js apps

I have been trying to figure out the best way to deploy and maintain node.js apps in development and production. If I have a local git repo on my machine and I want to push it to production, what is the best way to do that? I don't think the .git files should be on the production server, and I don't keep my modules in the repo, so I need a way to push updates and make sure the newest dependencies end up on the server.
I figured out that people use a post-receive hook to update the site. This is what I ended up with. Put it in a file named post-receive in the hooks folder of the repo on the server (not in your local repo):

#!/bin/sh
# Runs on the server after every push: check the pushed code out into the
# app directory, then refresh dependencies.
GIT_WORK_TREE=/opt/node/nodapp
git --work-tree=$GIT_WORK_TREE checkout --force
cd $GIT_WORK_TREE
npm install

I may take this a step further and have the hook restart the app with pm2, but that is another post!
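For reference, this setup assumes a bare repo on the server with the hook above in its hooks folder, and a remote on the local machine that points at it. The paths and hostname below are placeholders, not my real ones:

# On the server: create a bare repo to push to (the checked-out code lives in /opt/node/nodapp)
git init --bare /opt/git/nodeapp.git
# drop the post-receive script above into /opt/git/nodeapp.git/hooks/post-receive
chmod +x /opt/git/nodeapp.git/hooks/post-receive

# On the local machine: add the server as a remote and push to deploy
git remote add production ssh://user@yourserver/opt/git/nodeapp.git
git push production master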


Using PowerShell to extract all contacts from MS CRM 2011

We are moving from MS CRM 2011 to Salesforce and need to get our data out so we can import it. Here is the PowerShell script I am using to export contacts to CSV.

$url="http://crm.sardverb.com/Company/xrmservices/2011/OrganizationData.svc/ContactSet?`$filter=StatusCode/Value eq 1"

$assembly = [Reflection.Assembly]::LoadWithPartialName("System.Web.Extensions")
$count = 0
$output = @()

# Download one page of the OData feed as JSON, using the current user's Windows credentials
function GetData ($url) {
    $webclient = New-Object System.Net.WebClient
    $webclient.UseDefaultCredentials = $true
    $webclient.Headers.Add("Accept", "application/json")
    $webclient.Headers.Add("Content-Type", "application/json; charset=utf-8")
    return $webclient.DownloadString($url)
}

# The service pages its results; keep following the __next link until there are no more pages
while ($url) {
    $data = GetData $url | ConvertFrom-Json
    $output += $data
    $count = $count + $data.d.results.Length
    Write-Host $count
    if ($data.d.__next) {
        $url = $data.d.__next.ToString()
    }
    else {
        $url = $null
    }
}

$output.d.results | Select-Object -Property @{l="ParentCustomerID";e={$_.ParentCustomerID.Id}},* -ExcludeProperty ParentCustomerId,__metadata | Export-Csv -NoTypeInformation C:\Contact.csv

Hope that helps someone.


When using PowerShell to pull REST data from MS CRM, escape `$filter!

Note to self.

When trying to filter a REST response in PowerShell by using the "$filter" parameter in the URL (as with MS CRM 2011), you must escape the "$" with "`$". In a double-quoted string, PowerShell otherwise tries to expand $filter as a variable, which is usually empty, so the filter quietly disappears from the request.

For example:

Does not work:
$url="http://crmserver.company.com/Organization/xrmservices/2011/OrganizationData.svc/ContactSet?$filter=StateCode/Value eq 0"

Works:
$url="http://crmserver.company.com/Organization/xrmservices/2011/OrganizationData.svc/ContactSet?`$filter=StateCode/Value eq 0"

Gets me every time, and I can’t figure out why my filters are being ignored!
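Another way to avoid the problem is to build the URL with single quotes; PowerShell does not interpolate single-quoted strings, so the "$" needs no escaping:

# Single-quoted strings are treated literally, so $filter survives untouched
$url='http://crmserver.company.com/Organization/xrmservices/2011/OrganizationData.svc/ContactSet?$filter=StateCode/Value eq 0'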


Using jsforce and node.js to connect to Salesforce

I wanted to write a node.js app to pull data from Salesforce. I found the npm library jsforce and added it to the dependencies in my package.json:

  "dependencies": {
    "express": "*",
    "dotenv": "*",
    "jsforce": "*"
  }

I also added "dotenv", which I am using to load my client secret and the rest of my configuration from a hidden .env file. That file is not in my git repo, so I can have different values in production and development.

Here is what I have in my .env file:

CLIENTID=zWHRIM8F87FChMcfHpZKS9LhQeeLwfthDbaiL9iXNO7ZBwfUwFPFqpDzC2HruNkJfIxrOdeITtftxBg20WEIm
CLIENTSECRET=123456789987654
REDIRECTURI=localhost
USERNAME=username@yourdomain.com
PASSWORD=PASSWORDANDCODE
LOGINURL=https://sitename-dev-ed.my.salesforce.com

Here is the code to pull in the .env values, define the OAuth2 connection, and log in.

var jsforce = require('jsforce');
require('dotenv').load();   // loads the .env values into process.env
var conn = new jsforce.Connection({
  oauth2 : {
      loginUrl : process.env.LOGINURL,
      clientId : process.env.CLIENTID,
      clientSecret : process.env.CLIENTSECRET,
      redirectUri : process.env.REDIRECTURI
    }
});
var username = process.env.USERNAME;
var password = process.env.PASSWORD;
conn.login(username, password, function(err, userInfo) {
  if (err) { return console.error(err); }
  console.log(conn.accessToken);
  console.log(conn.instanceUrl);
  console.log("User ID: " + userInfo.id);
  console.log("Org ID: " + userInfo.organizationId);
});

Once connected and logged in, we can query using SOQL. This query pulls all Opportunities, each with its contacts and their contact roles, and its team members and their roles. I am using it to show the relationships between Opportunities, Contacts, and team members with d3.js. More on that later.

    var query = "SELECT Id, Name,(SELECT Contact.Name,Contact.Email,Contact.Id,Contact.AccountId,ContactId,Role,Contact.Account.Name FROM OpportunityContactRoles),(SELECT User.Name,User.Email,User.Id,UserId,TeamMemberRole FROM OpportunityTeamMembers) FROM Opportunity"
    conn.query(query, function(err, results) {
      if (err) { return console.error(err); }
      console.log("Query: " + results.totalSize)
      console.log(JSON.stringify(results, null, 2))
    });
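One caveat that is not shown above: a plain query call only returns the first batch of records (typically 2,000). If there are more Opportunities than that, jsforce's event-based query with autoFetch will page through the rest automatically; the maxFetch value here is an arbitrary cap I picked for illustration:

    var records = [];
    var q = conn.query(query)
      .on("record", function(record) {
        records.push(record);        // collect each Opportunity as it streams in
      })
      .on("end", function() {
        console.log("total in org: " + q.totalSize);
        console.log("total fetched: " + q.totalFetched);
      })
      .on("error", function(err) {
        console.error(err);
      })
      .run({ autoFetch: true, maxFetch: 10000 });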

My script/procedure to move Hyper-V VMs to Azure

We have been moving resources from ESXi to Hyper-V to Azure. ESXi to Hyper-V is done via the Microsoft Virtual Machine Converter (MVMC). Here is the checklist/script/procedure I have been using to get from Hyper-V to Azure.

  1. Once the machine is in Hyper-V, make sure the VM's hard disks are VHD and not VHDX (see the Convert-VHD sketch below the script)
  2. Make sure DHCP is set on the VM
  3. Make sure RDP is enabled (ours is set via group policy)
  4. Power down VM
  5. Run the PowerShell below to upload the disk(s) (Add-AzureRmVhd) and create a new VM in Azure:
Login-AzureRmAccount
$VMName="NAMEOFMACHINE"
$DestinationVMSize="Standard_A1"
$DestinationAvailabilitySet="AvailabilitySetName"
$PrivateIpAddress="192.168.5.55"
$ResourceGroupName="YourResourceGroup"
$DestinationNetworkName="YourNetwork"
$DestinationNetworkSubnet="YourLanSubnet"
$Location="East US 2"
$OSType="Windows"
[switch]$DataDisk=$false
$SourceSystemLocalFilePath="C:\PathToYour\VHDs\$($VMName)-System.vhd"
$SourceDataLocalFilePath="C:\PathToYour\VHDs\$($VMName)-Data.vhd"
$DestinationStorageAccountName="yourstorageaccount"
$DestinationSystemDiskUri= "http://$DestinationStorageAccountName.blob.core.windows.net/vhds/$VMName-System.vhd"
$DestinationDataDiskUri= "http://$DestinationStorageAccountName.blob.core.windows.net/vhds/$VMName-Data.vhd"
$DestinationSystemDiskName="$($VMNAME)_SYSTEM.vhd"
$DestinationDataDiskName="$($VMNAME)_DATA01.vhd"
$DataDiskSize=128   # size in GB of the existing data disk; only used when $DataDisk is $true
 
Add-AzurermVhd -Destination $DestinationSystemDiskUri -LocalFilePath $SourceSystemLocalFilePath -ResourceGroupName $ResourceGroupName
if ($DataDisk){
Add-AzurermVhd -Destination $DestinationDataDiskUri -LocalFilePath $SourceDataLocalFilePath -ResourceGroupName $ResourceGroupName
}
 
#region Build New VM
$DestinationVM = New-AzureRmVMConfig -vmName $vmName -vmSize $DestinationVMSize -AvailabilitySetId $(Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $DestinationAvailabilitySet).Id
$nicName="$($VMName)_NIC01" 
$vnet = Get-AzureRmVirtualNetwork -Name $DestinationNetworkName -ResourceGroupName $ResourceGroupName
$subnet = $vnet.Subnets | where {$_.Name -eq $DestinationNetworkSubnet}
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PrivateIpAddress $PrivateIpAddress
$DestinationVM = Add-AzureRmVMNetworkInterface -VM $DestinationVM -Id $nic.Id
 
If ($OSType -eq "Windows"){
$DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Windows -CreateOption attach
if ($DataDisk){
$DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DataDiskSize
}
}
 
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $Location -VM $DestinationVM

The most important part is to use "-CreateOption attach" with "Set-AzureRmVMOSDisk" (and "Add-AzureRmVMDataDisk"), so Azure attaches the uploaded VHD instead of provisioning a new disk from an image.
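For step 1, if a source disk is still VHDX, Convert-VHD (part of the Hyper-V PowerShell module) will convert it before the upload. The paths here are placeholders, and -VHDType Fixed is my assumption because Azure expects fixed-size VHDs:

Convert-VHD -Path "C:\PathToYour\VHDXs\NAMEOFMACHINE-System.vhdx" `
            -DestinationPath "C:\PathToYour\VHDs\NAMEOFMACHINE-System.vhd" `
            -VHDType Fixed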

Hope that helps someone.


Using Let’s Encrypt certbot-auto with Apache on CentOS 6

There are plenty of better documented examples out there, so this is more of a note to self.

cd /opt
mkdir YourDir
cd YourDir/
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

./certbot-auto --apache certonly -d www.FirstDomain.com -d FirstDomain.com -d www.SecondDomain.com -d SecondDomain.com -d www.ThirdDomain.com -d ThirdDomain.com -d www.FourthDomain.com -d FourthDomain.com

The name on the cert will be the first domain you list in the command above; all of the other names will be included as SANs on the certificate.

And to renew, cron this up:
/opt/YourDir/certbot-auto renew
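For example, a crontab entry might look like the line below. The schedule and log path are just my choices; certbot-auto only renews certificates that are close to expiry, so running it frequently is harmless:

0 3,15 * * * /opt/YourDir/certbot-auto renew --quiet >> /var/log/certbot-renew.log 2>&1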


Using ADFS to authenticate apache-hosted sites

I have been learning ADFS/SAML on the fly. If you come across this, and you see that I am doing it all wrong, then let me know!

I wanted to use my existing ADFS infrastructure to authenticate an apache-hosted resource on CentOS 6. Below is what I figured out (there are a lot of steps).

First, your site has to have HTTPS enabled.

Second, install Shibboleth: add it to your repos, yum install it, enable it, and start it.

wget http://download.opensuse.org/repositories/security:/shibboleth/CentOS_CentOS-6/security:shibboleth.repo -P /etc/yum.repos.d
yum install shibboleth
chkconfig shibd on
service shibd start

The package installs the "/etc/httpd/conf.d/shib.conf" file, which loads the Shibboleth module into apache and maps the handler paths to the shibd service.

Next, I needed to edit the /etc/shibboleth/shibboleth2.xml file

Change:
<ApplicationDefaults entityID="https://sp.example.org/shibboleth" REMOTE_USER="eppn persistent-id targeted-id">
To:
<ApplicationDefaults entityID="https://www.SiteYouWantToProtect.com/shibboleth" REMOTE_USER="eppn persistent-id targeted-id">

And

Change:
<SSO entityID="https://idp.example.org/idp/shibboleth" discoveryProtocol="SAMLDS" discoveryURL="https://ds.example.org/DS/WAYF">
 SAML2 SAML1
</SSO>
To:
<SSO entityID="http://your.sitename.com/adfs/services/trust" discoveryProtocol="SAMLDS" discoveryURL="https://ds.example.org/DS/WAYF">
 SAML2 SAML1
</SSO>

At this point, I ran into trouble. Normally, it looks like you continue editing the /etc/shibboleth/shibboleth2.xml config file and set up the MetadataProvider to point at the ADFS federation metadata, like this:

<MetadataProvider type="XML" uri="https://your.sitename.com/FederationMetadata/2007-06/FederationMetadata.xml" backingFilePath="federation-metadata.xml" reloadInterval="7200">

But I kept getting errors when I restarted shibd (service shibd restart); it seems that ADFS publishes federation metadata with WS-Federation elements that Shibboleth cannot parse, so the two don't quite speak the same language.
This site talks about it, and the solution is to download the metadata document, modify it, store it locally, and finally point the /etc/shibboleth/shibboleth2.xml config file at the "pre-processed" local metadata file.

I processed the metadata file in PowerShell with a script from here. I put the PowerShell code in a file named ADFS2Fed.ps1 and changed the top variables to look like this:

$idpUrl="https://your.sitename.com";
$scope = "sitename.com";

I downloaded the XML file from "https://your.sitename.com/FederationMetadata/2007-06/FederationMetadata.xml" and saved it as federationmetadata.xml (in the same directory as ADFS2Fed.ps1).

I ran the script ADFS2Fed.ps1; it found the downloaded metadata file "federationmetadata.xml", pre-processed it, and spit out "federationmetadata.xmlForShibboleth.xml".

I uploaded this file to my /etc/shibboleth/ folder and named it "partner-metadata.xml".

I then uncommented the following line in /etc/shibboleth/shibboleth2.xml:

 <MetadataProvider type="XML" validate="true" file="partner-metadata.xml"/>

That took care of the metadata provider.

Next, I needed to add the following to the bottom of the attribute-map.xml file; without it, the UPN claim that ADFS was sending was ignored by shibd.

<Attribute name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified" id="upn" />

Next, I needed to allow Shibboleth to work with SELinux (source):

Create a file named mod_shib-to-shibd.te with the following contents:

module mod_shib-to-shibd 1.0;
require {
       type var_run_t;
       type httpd_t;
       type initrc_t;
       class sock_file write;
       class unix_stream_socket connectto;
}
#============= httpd_t ==============
allow httpd_t initrc_t:unix_stream_socket connectto;
allow httpd_t var_run_t:sock_file write;

Compile, package and load the module with the following 3 commands:

checkmodule -m -M -o mod_shib-to-shibd.mod mod_shib-to-shibd.te
semodule_package -o mod_shib-to-shibd.pp -m mod_shib-to-shibd.mod
semodule -i mod_shib-to-shibd.pp

Finally, the last step on the apache/Linux side is to set the apache virtual host to use Shibboleth for authentication.

        <Directory /var/www/dir/to/site>
          AllowOverride All
          AuthType Shibboleth
          ShibRequireSession On
          require valid-user
          ShibUseEnvironment On
          Order allow,deny
          Allow from all
        </Directory>

On the Windows/ADFS side:

  • In the ADFS Management Console, choose Add Relying Party Trust.
  • Select Import data about the relying party published online or on a local network and enter the URL for the SP metadata (https://www.SiteYouWantToProtect.com/Shibboleth.sso/Metadata).
  • Continuing the wizard, select Permit all users to access this relying party.
  • In the Add Transform Claim Rule Wizard, select Pass Through or Filter an Incoming Claim.
  • Name the rule (for example, Pass Through UPN) and select the UPN Incoming claim type.
  • Click OK to apply the rule and finalize the setup.

I hope this helped someone. It took me a while to figure this out.
In summary,

  1. Use SSL
  2. Install shibd
  3. Edit /etc/shibboleth/shibboleth2.xml
  4. Process the metadata file
  5. Edit /etc/shibboleth/shibboleth2.xml to point to the local processed metadata file
  6. Modify attribute-map.xml
  7. Allow shibd to work with SELinux
  8. Tell apache to use Shibboleth
  9. Set up ADFS using the wizard

Problems with Citrix Receiver over VPN: ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE

I was working on my home lab, specifically setting up a Citrix XenDesktop environment. Since I didn’t have a NetScaler in place (yet), I connected to my home network from a Mac over a Cisco AnyConnect VPN.

While tunneling through the VPN connection, I could connect to the StoreFront and launch resources via HTML5, but I could never get the Receiver client to connect: I could authenticate, but it would never connect to the store (error: "Citrix Receiver cannot connect to the server. Check your network connection."). I rebuilt the environment several times.

After some debugging of “Library/Logs/com.citrix.AuthManager.log” I figured it out. The error I was getting was:

CMacServiceRecordConnector::CallARGetNetworkLocationForStore url=https://storefront.domain.com/Citrix/Main/discovery
Thu Jul 28 14:33:09 2016     > T:00006A3F api    .   .   .   .   .   .   .   {
Thu Jul 28 14:33:09 2016     < T:00006A3F api    .   .   .   .   .   .   .   }
Thu Jul 28 14:33:09 2016       T:00006A3F api    .   .   .   .   .   .   .   Receiver status = success
Thu Jul 28 14:33:09 2016       T:00006A3F api    .   .   .   .   .   .   .   location=NETWORK_LOCATION_NONE
Thu Jul 28 14:33:09 2016 <<<<< T:00006A3F api    .   .   .   .   .   .   .   Throwable created: CHttpException: ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE; server URL: 'https://storefront.domain.com/Citrix/Main/discovery'

---

Processing exception, type='HTTP exception' description='ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE; server URL: 'https://https://storefront.domain.com/Citrix/Main/discovery''

The "location=NETWORK_LOCATION_NONE" line was the issue: Citrix Receiver could not tell whether it was inside or outside the network. I figured the problem was the beacons, but setting them to obvious values did not fix the issue.

It wasn’t until I set the StoreFront’s internal beacon to an IP address rather than a DNS name that everything started working.

My conclusion is that the Receiver client uses different DNS settings (most likely /etc/resolv.conf) than the browser. A browser (or any other networking app) on a Mac uses the settings shown by "scutil --dns".

From here:

Note: AnyConnect does not change the resolv.conf file on Macintosh OS X, but rather changes OS X-specific DNS settings. 
Macintosh OS X keeps the resolv.conf file current for compatibility reasons. 
Use the scutil --dns command in order to view the DNS settings on Macintosh OS X.
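A quick way to see the difference yourself while the VPN is up (my own check, not part of the Cisco note) is to compare the two views:

# the file-based view that some clients appear to read directly
cat /etc/resolv.conf

# the OS X resolver configuration that AnyConnect actually updates
scutil --dns | head -20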

I believe this is a bug in the way the receiver is programmed.

