Archive | March, 2011

My PowerShell PowerCLI VMware guest provisioning script

This script will provision a Server 2008 R2 VM with 4 GB of RAM and a 40 GB disk, set the CD drive to an OSD ISO, set the boot delay to 5 seconds, and start the machine.


# Target host, datastore, resource pool, network, guest type, and ISO
$vmhost = Get-VMHost "server.name.local"
$ds = Get-Datastore "server:storage1"
$rp = Get-ResourcePool -Id "ResourcePool-resgroup-22"
$nn = "NetworkName"
$gi = "windows7Server64Guest"    # guest OS identifier for Server 2008 R2
$iso = "[server:ISOs] Folder/OSD.iso"

####
$vmname = "VMGuest01"
# Create the VM: 1 vCPU, 40 GB disk, 4 GB RAM, with a CD drive attached
New-VM -Name $vmname -VMHost $vmhost -NumCpu 1 -DiskMB 40960 -MemoryMB 4096 -Datastore $ds -GuestID $gi -ResourcePool $rp -CD -NetworkName $nn
# Point the CD drive at the OSD ISO and connect it at power-on
Get-VM $vmname | Get-CDDrive | Set-CDDrive -IsoPath $iso -StartConnected $true -Confirm:$false

# Set the BIOS boot delay (in milliseconds) through the vSphere API
$value = 5000
$vm = Get-VM $vmname | Get-View
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
$vmConfigSpec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions
$vmConfigSpec.BootOptions.BootDelay = $value
$vm.ReconfigVM_Task($vmConfigSpec)

Start-VM -VM $vmname
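To sanity-check the boot delay afterwards, the same view object can be read back (a quick sketch, not part of the original script):

# BootDelay is reported in milliseconds
(Get-VM $vmname | Get-View).Config.BootOptions.BootDelay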

OS X: Move files based on creation date

I wanted to organize my Flip videos by moving each one into a folder named after its creation date.

One of the scripts I found uses ls’s “--time-style” parameter. BSD’s ls (what OS X ships) does not support this option, but MacPorts’ coreutils port contains gls (GNU’s ls), which does. So here is my script to move each movie into a subfolder based on its date (strictly, ls -l prints the modification time, which for these camera files is when the video was shot).

# Loop over the movies, deriving a YYYY-MM-DD folder name from each file
for filename in *.MP4; do
    # gls is GNU ls from MacPorts coreutils; column 6 is the formatted date
    datepath="$(gls -l --time-style=+%Y-%m-%d "$filename" | awk '{print $6}')"
    echo "$datepath"
    # Create the dated folder if it does not exist yet
    if ! test -e "$datepath"; then
        mkdir -pv "$datepath"
    fi
    mv -v "$filename" "$datepath"
done
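Looking back, the MacPorts dependency may be avoidable: stock OS X ships a BSD stat that can print a file’s birth (creation) time directly. A hedged sketch of a drop-in replacement for the gls line above, which I have not tested on every file system:

# %SB = birth (creation) time as a string, -t sets the strftime format
datepath="$(stat -f "%SB" -t "%Y-%m-%d" "$filename")"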

Mailbox moves and error no. c1034ad6

We have been moving mailboxes to a new EqualLogic iSCSI SAN volume. The old information store is almost empty, except for one mailbox that did not get removed at the end of its move. It was sitting there with a red “X”. When we tried to delete it, we received the error:

The Operation cannot be performed because this mailbox was already reconnected to an existing user.
ID no. c1034ad6
Exchange System Manager

I found an article suggesting that it should be removed automatically once “Exchange maintenance” runs. Our maintenance runs over the weekend, and our “Keep deleted mailboxes for (days)” is set to 1 day, so two weeks later the mailbox should have been removed. But it wasn’t.

The problem ended up being that we had the “Do not permanently delete mailboxes and items until the store has been backed up” check box checked. This is great, but since there weren’t any mailboxes left in the information store, we had stopped backing it up!

If you get the error above, and

  • the last mailbox has been moved,
  • you have stopped backing up your information store, and
  • the “Do not permanently delete mailboxes and items until the store has been backed up” check box is checked,

then your last mailbox will never be permanently deleted!

Obvious, but maybe someone will run across this and it will save them a couple of minutes scratching their head!

OS X Finder Service to convert Grab.app TIFF files to JPG

It drives me nuts that Grab.app defaults to .tif as the file type. And no matter how many times I try the recommended:

defaults write com.apple.screencapture type jpg

I cannot get it to default to saving as a JPG (and I tried “jpeg” too). All I can guess is that this command does not work in 10.6 – or perhaps it only affects the Cmd-Shift-3/4 screenshot shortcuts and Grab.app simply ignores it. I don’t know. It drives me nuts.

So I sat down and created a quick OS X Service in Automator that takes files as input and converts them to JPEG. First, in Automator, I selected “Service” as my workflow type.

Next, I set “Service receives selected” to “files or folders” in “any application”. Then I added the “Change Type of Images” action and set “To Type” to JPEG.

I saved it as “ChangeToJPEG”, and now when I right-click an annoying .tif, I have a ChangeToJPEG option under Services.

Grab.app is still annoying, but this makes it more tolerable.
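For one-off conversions, the same thing can be done from Terminal with sips, which ships with OS X (a minimal sketch; the file names are just examples):

# Convert one Grab.app capture to JPEG, writing a new file alongside it
sips -s format jpeg screenshot.tif --out screenshot.jpg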

Reporting Services report from related data in SQL and a SharePoint list via SSIS

This is an older project that I wanted to talk about. I worked on it last June; it made for a longer post, so I procrastinated.

I wanted to create a “mash up” of data from a SharePoint list and a set of SQL tables. I tried several approaches; something that should have been easy was not (at least for me).

  • First I tried using SharePoint as a data source in SQL Server Reporting Services. That worked, but I could not figure out how to combine it with a different data source in one report. Basically, I could not figure out how to do “SQL joins” between two different data sources in SSRS.
  • Second, I decided to look into combining my data inside SQL Server. If I could establish my relationships inside SQL, then the report would be easy. The most efficient way (AFAIK) would be a linked server pointing at the SharePoint list. That way the data stays in the SharePoint list, and I am just querying it. I found several articles about ODBC or OLEDB connections to a SharePoint list, but I just could not get it to work, and I saw another article strongly recommending against it (I can’t find that link right now). So I scrapped that idea.
  • I finally settled on grabbing the data out of the list and putting it into a SQL table. I am not a SQL guru, and I could not figure out how to use a temp table with SSRS, so I ended up just adding a regular table. This table is an exact replica of the data in the SharePoint list (not many rows); its contents are erased and repopulated every X hours. The question was how to get the data into SQL.

I decided to use SQL Server Integration Services (SSIS) to pull the data from the SharePoint list. I found an SSIS add-in that makes it easy to get data from a SharePoint list – SharePointListAdaptersSetupForSqlServer2005.msi. I ended up with a two-step SSIS control flow.

The first part deletes the contents of the destination table; the second is the Data Flow piece that copies the list rows into it.

The Data Conversion step changes a “double-precision float” to a “unicode string”. All of this comes together in a dtsx package (LoadSPList.dtsx) that can be executed by a scheduled task:

DTEXEC.EXE /FILE "C:\LoadSPList.dtsx" /CONNECTION "connectionName";"\"Data Source=DBName;Initial Catalog=DBName;Provider=SQLNCLI.1;Integrated Security=SSPI;Auto Translate=False;\"" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REPORTING EWCDI
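To run the package every few hours, a scheduled task along these lines should do it (a sketch – the task name, interval, and account are assumptions, and the /CONNECTION override is omitted for brevity):

rem Task name, 4-hour interval, and SYSTEM account are all examples
schtasks /create /tn "LoadSPList" /tr "DTEXEC.EXE /FILE C:\LoadSPList.dtsx" /sc hourly /mo 4 /ru SYSTEM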

Finally, I created a report containing a SQL query that relates the two SQL tables.

A fun project, using three different technologies: SSIS, SQL, and SharePoint.

Verify if a Recipient Policy is being applied to a user

We use the excellent Dell EMS Email Continuity product. It is fantastic. All our mail is replicated (via an SMTP sink and a vaultbox) to their data center. In the event of an outage, our mail routes to their system, and they provide a way for it to be delivered to users. One of the benefits is that we only need to keep a limited amount of mail on our Exchange server (the rest lives in their archives).

We purge any mail that is over a year old. Maybe that is radical, but it works for us. Users prefer a snappy Exchange server to messages from 10 years ago.

I do this using Recipient Policies: I can apply “Delete immediately” to messages older than a certain date for users in particular AD groups. For example, I use the following in my filter rule:

memberOf=CN=MailBoxPurge-12M,DC=DOMAIN,DC=LOCAL
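To preview which users the filter matches before trusting the policy, dsquery can evaluate the same LDAP filter (a sketch, reusing the redacted DN from above):

rem The group DN is the redacted example from the filter rule above
dsquery * domainroot -filter "(memberOf=CN=MailBoxPurge-12M,DC=DOMAIN,DC=LOCAL)" -attr sAMAccountName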

To verify that a user is receiving the policy, I do the following:

  1. Open adsiedit.msc.
  2. In the “Select a well known Naming Context” box, select Configuration.
  3. Navigate to Services -> Microsoft Exchange -> Organization Name -> Recipient Policies, and open the properties of the Recipient Policy you want to verify.
  4. Note the objectGUID value.
  5. Switch the “Select a well known Naming Context” box to Default naming context.
  6. Navigate to the employee’s user object and open its properties.
  7. Look for the msExchPoliciesIncluded value.

If the GUID you looked up in step 4 is in there, then you are good to go.
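If you would rather not click through ADSI Edit for every user, the same attribute can be read from PowerShell via ADSI (a sketch – the user DN is a placeholder):

# Hypothetical DN; list the policy GUIDs stamped on the user and
# look for the objectGUID you noted in step 4
$user = [ADSI]"LDAP://CN=Some User,OU=Staff,DC=DOMAIN,DC=LOCAL"
$user.msExchPoliciesIncluded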

My adventures in Exchange Log Replaying.

I wanted to restore our Exchange environment in an offline VM. One of my goals was to take an information store and mount it – just to prove that I could access the data.

At our off-site business continuity location, we have a Double-Take replica of our production Exchange 2003 server, including replicas of all our information stores. Those replicated stores are on a Dell EqualLogic SAN, which has snapshot capabilities.

I figured I could take a snapshot of the volume that contains the information store I want, present it to my offline Exchange server, and just mount the store. When I tried to mount the store, I realized this would not be easy. Time to do some learning.
First error:

The log version stamp of logfile Drive:\PATH\mdbdata\E01.log does not match the database engine version stamp. The logfiles may be the wrong version for the database.

This one was easy: I had been too lazy to install SP2 on the recovery server, and it turns out the logs are stamped with a version number. Quick fix – install Exchange 2003 SP2.

Next error:

ESA: The Exchange Virtual Server needs to be upgraded before coming online. From the Cluster Administrator program, select ‘Upgrade Exchange Virtual Server’ from the resource’s context menu to upgrade this Exchange virtual server.

Again, another easy one: I just had to go into Cluster Administrator and right-click “Upgrade Exchange Virtual Server”.

Next error:

An internal processing error has occurred. Try restarting the Exchange System Manager or the Microsoft Exchange Information Store service, or both

I was using a standby Exchange cluster as described here. This error suggested that I did not have all my drives and paths right. For some reason this inherited Exchange 2003 cluster had the Exchange program files installed on an “E:” drive. I added another drive and re-installed Exchange with SP2.

Next error:

Database recovery/restore failed with unexpected error -515.

Alright! This was a good one. I suspected my snapshot of a replicated information store might not be happy. How do I make it happy? So began my adventures in log replaying. Here is a summary of what I learned: I understood what needed to happen, I just did not know how to do it!

If you suspect you are having an issue with an information store that you cannot mount, run this (here is your only warning – back up before you try anything):

eseutil.exe /mh MailStore.edb

If you see: State: Clean Shutdown – there is something else going on.
If you see: State: Dirty Shutdown – you need to get the mail store to a clean state.

AFAIK, you can get a mail store to a clean state one of two ways: the first is to replay the missing logs, and the second is to repair. The second results in loss of data.

To replay logs, look at the output of the above command again and you will see :
Log Required: 30012-30013 (0x753c-0x753d).

If you have these log files (they will have a prefix like E00 or E01), all you should need to do is run

eseutil.exe /r E01

This will find the files and replay them into the MailStore.
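If the log files and database are not in the current directory, eseutil can be pointed at them explicitly with the /l (log path) and /d (database path) switches (a sketch; the paths are placeholders):

rem Paths below are placeholders for your own log and database locations
eseutil.exe /r E01 /l"D:\MDBDATA" /d"D:\MDBDATA"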

I tried this but I was missing a log file. I received the following error:

Operation terminated with error -515 (JET_errInvalidLogSequence, Timestamp in next log does not match expected) after 4.16 seconds.

Since I was missing a log file and had no other way to get it (like from a backup – this is all dev, so there was no need to panic), I had to resort to the “I lose data” method:

eseutil.exe" /p MailStore.edb

This took several hours and the output looks like this:

Initiating REPAIR mode...
Database: MailStore.edb
Streaming File: MailStore.STM
Temp. Database: TEMPREPAIR3692.EDB

Checking database integrity.

The database is not up-to-date. This operation may find that
this database is corrupt because data from the log files has
yet to be placed in the database.

To ensure the database is up-to-date please use the 'Recovery' operation.

Scanning Status (% complete)

0    10   20   30   40   50   60   70   80   90  100
|----|----|----|----|----|----|----|----|----|----|
...................................................

Scanning the database.

Scanning Status (% complete)

0    10   20   30   40   50   60   70   80   90  100
|----|----|----|----|----|----|----|----|----|----|
...................................................

Repairing damaged tables.

Scanning Status (% complete)

0    10   20   30   40   50   60   70   80   90  100
|----|----|----|----|----|----|----|----|----|----|
.........................
Deleting unicode fixup table.
..........................

Repair completed. Database corruption has been repaired!

Repaired!

Now when I run:

eseutil.exe /mh MailStore.edb

I see “State: Clean Shutdown” and I can mount my MailStore. That was fun.
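For completeness: after a /p hard repair, the usual advice is to also run isinteg against the store before trusting it, along these lines (the server name is a placeholder):

rem SERVERNAME is a placeholder for the server hosting the store
isinteg -s SERVERNAME -fix -test alltests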

Exchange 2010 SP1 and New-DatabaseAvailabilityGroup

I was progressing along with my offline Exchange install, and I ran into a problem when creating my Database Availability Group (DAG). I wanted to put the witness directory on a non-Exchange server, and the instructions say:

If the witness server you specify isn’t an Exchange 2010 server, you must add the Exchange Trusted Subsystem universal security group to the local Administrators group on the witness server.
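On the witness server, that group membership can be added like so (a sketch; the domain name is a placeholder):

rem DOMAIN is a placeholder for your AD domain
net localgroup Administrators "DOMAIN\Exchange Trusted Subsystem" /add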

I added the correct group to the correct group and ran:

New-DatabaseAvailabilityGroup DAGNAME -WitnessServer nonexchange.domain.local -WitnessDirectory C:\DAGFSW

I received the following error:

WARNING: The Exchange Trusted Subsystem is not a member of the local Administrators group on specified witness server nonexchange.domain.local.

But it is in there; believe me, I triple-checked. I also tried:

  • I rebooted the server, deleted the DAG, and tried again. Nada.
  • I added the witness machine$ account to the local Administrators group – since all it is doing is creating a shared directory. Nope.
  • I looked on the witness server and did not see a shared folder.
  • But at that point I had never added a member to the DAG, because I thought the shared folder should already be there.

So I started reading and came across this article. Devin suggests that all you need to do, as the documentation says, is add the Exchange Trusted Subsystem to the local Administrators group, and NOT add the witness machine$ account to the Exchange Trusted Subsystem group. I agree with his argument as to why the latter is not necessary.

BUT. I think there might be a bug in SP1. My findings are:

  • If you run the New-DatabaseAvailabilityGroup command and ONLY have the Exchange Trusted Subsystem as a member of the witness’s local Administrators group:
    1. You will receive this error: WARNING: The Exchange Trusted Subsystem is not a member of the local Administrators group on specified witness server nonexchange.domain.local.
    2. If your witness folder is a directory or two deep, parent directories will be created
    3. The witness shared folder will not be created until you add a member to the DAG
  • If you run the New-DatabaseAvailabilityGroup command AND have the witness machine$ account in the Exchange Trusted Subsystem:
    1. You will NOT receive an error.
    2. The witness shared folder will not be created until you add a member to the DAG

So, in summary, it seems:

  • There is a bug in SP1’s New-DatabaseAvailabilityGroup command: it incorrectly reports that “The Exchange Trusted Subsystem is not a member of the local Administrators group” when it is.
  • New-DatabaseAvailabilityGroup creates the DAG, and even though it spits back that warning, everything seems to function once a DAG member has been added – at which point the witness folder is created.
  • Devin’s article is still a valid recommendation: you do not need to add the non-Exchange witness machine$ account to the Exchange Trusted Subsystem group to get a DAG up and running.

Of course there could be other things at play, but as of now, this is what I have found.
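For what it is worth, the witness settings can be checked from the shell once the DAG has a member (a quick sketch using the SP1 cmdlet):

# Read back where the DAG thinks its witness server and directory are
Get-DatabaseAvailabilityGroup DAGNAME -Status | Format-List Name,WitnessServer,WitnessDirectory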