• How to update ESXi 4.1 without vCenter

    I wanted to update a standalone ESXi box from 4.1 to 4.1 Update 1. Here is how I went about it:

    1. Downloaded the update on a Windows box from here and unzipped it
    2. Open the vSphere Client datastore browser and upload the unzipped folder. If you put it in the root of your datastore, the path will be:
      1. /vmfs/volumes/datastore1/update-from-esxi4.1-4.1_update01
    3. Install the VMware vSphere PowerCLI – which is a Windows PowerShell interface to the vSphere API
    4. Add the VMware cmdlets to your PowerShell session: Add-PSSnapin "VMware.VimAutomation.Core"
    5. Put the ESXi server into maintenance mode.
    6. In PowerShell, connect to the ESXi server:  Connect-VIServer servername.domain.local
    7. In PowerShell: Install-VMHostPatch -HostPath /vmfs/volumes/datastore1/update-from-esxi4.1-4.1_update01/metadata.zip
    8. The result was: WARNING: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
    9. Reboot!

    The summary below was also returned:

    Id                                            VMHostId   IsInstalled IsApplicable NeedsRestart NeedsReconnect
    --                                            --------   ----------- ------------ ------------ --------------
    cross_oem-vmware-esx-drivers-scsi-3w-9xxx_... ...ha-host False       True         True         False
    cross_oem-vmware-esx-drivers-net-vxge_400.... ...ha-host False       True         True         False
    deb_vmware-esx-firmware_4.1.0-1.4.348481      ...ha-host False       False        True         False
    deb_vmware-esx-tools-light_4.1.0-1.4.348481   ...ha-host True        True         False        False
    

  • Script to compare RPMs on two different CentOS servers

    I wanted to make sure that the same RPMs were installed on several servers. I wasn’t worried about versions of RPMs because everything should be kept up to date via yum. So I sat down and wrote the script below. It has been on my ToDo list for quite a while!

    REMOTESERVER=$1   # pass the remote hostname as the first argument
    RRPM=$(ssh $REMOTESERVER "rpm -qa --queryformat '%{NAME}\n'")
    LRPM=$(rpm -qa --queryformat '%{NAME}\n')

    echo "*** Missing from $REMOTESERVER ***"
    grep -vxFf <(echo "$RRPM" | sort) <(echo "$LRPM" | sort)   # -x -F: match whole lines as fixed strings
    echo
    echo "*** Missing from Local system ***"
    grep -vxFf <(echo "$LRPM" | sort) <(echo "$RRPM" | sort)
    echo
    

    This script connects to a remote machine and compares RPMs installed there to the RPMs that are installed locally.
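    The same comparison can also be done with comm(1), which is built for exactly this set-difference job and compares whole lines, so a package like gcc can never accidentally match gcc-c++. A sketch with stand-in package lists:

```shell
# comm(1) expects sorted input; -13 prints lines only in the second
# list, -23 prints lines only in the first.
RRPM=$(printf 'bash\ncoreutils\nhttpd\n')   # stand-in for the ssh output
LRPM=$(printf 'bash\ncoreutils\nvim\n')     # stand-in for the local rpm -qa

echo "*** Missing from remote ***"
comm -13 <(echo "$RRPM" | sort) <(echo "$LRPM" | sort)
echo "*** Missing from Local system ***"
comm -23 <(echo "$RRPM" | sort) <(echo "$LRPM" | sort)
```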


  • AutoSSH on CentOS

    I have been interested in MySQL replication over ssh and I wanted a way to make sure that the tunnel is always up. Everyone says to use AutoSSH. AutoSSH is not in EPEL, but is in rpmforge (Is that the same as DAG? Didn’t they merge?). I installed rpmforge:

    rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm

    I don’t like to do the RepoDance, so I disabled rpmforge:

    sed -i "s/enabled = 1/enabled = 0/" /etc/yum.repos.d/rpmforge.repo

    Next I installed AutoSSH:

    yum install --enablerepo=rpmforge autossh
    

    And finally my Bash function to create an AutoSSH tunnel:
     

    function StartAutoSSH {
        . /etc/rc.d/init.d/functions
        AUTOSSH_PIDFILE=/var/run/autossh.pid # we are assuming only one autossh tunnel
        if [ ! -e $AUTOSSH_PIDFILE ]; then
            export AUTOSSH_PIDFILE   # autossh writes its pid here
            autossh -M29001 -f -N -L7777:127.0.0.1:3306 [email protected]
        else
            status -p $AUTOSSH_PIDFILE autossh
        fi
    }
    

    If you call this function, it will create the specified tunnel, or if it is already up and running, it will spit back the PID.
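    The create-or-report pattern can be demonstrated without a real tunnel. A minimal sketch, with sleep standing in for the autossh call and a throwaway pidfile path:

```shell
# Demo of the pidfile pattern; sleep is a stand-in for autossh.
PIDFILE=/tmp/demo-tunnel.pid
if [ ! -e "$PIDFILE" ]; then
    sleep 300 &                # stand-in for: autossh -M29001 -f -N -L7777:...
    echo $! > "$PIDFILE"       # the real autossh writes this itself via AUTOSSH_PIDFILE
    echo "tunnel started, pid $(cat "$PIDFILE")"
else
    kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo "running, pid $(cat "$PIDFILE")"
fi
kill "$(cat "$PIDFILE")" 2>/dev/null; rm -f "$PIDFILE"   # demo cleanup
```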


  • Mirror all MySQL DBs to the local machine

    Continuing on my quest to get MySQL replicating over ssh, I am using the following bash function to replicate all remote DBs locally:

     

    function MirrorAllRemoteDBsToLocal {
        for REMOTEDB in $(mysql -h 127.0.0.1 -P 7777 --batch --skip-column-names -e "SHOW DATABASES")
        do
            LOCALDBEXISTS=$(mysql --batch --skip-column-names -e "SHOW DATABASES LIKE '"$REMOTEDB"';" | grep "$REMOTEDB" > /dev/null; echo "$?")
            if [ "$LOCALDBEXISTS" -ne 0 ]; then
                echo "adding $REMOTEDB to local MySQL"
                mysql -e "create database $REMOTEDB;"
                echo "getting a dump"
                mysqldump -h 127.0.0.1 -P 7777 $REMOTEDB | mysql $REMOTEDB
                echo "adding $REMOTEDB to my.cnf"
                sed -i '/master-port/a\\treplicate-do-db='$REMOTEDB'' /etc/my.cnf
            fi
        done
    }
    
    

    The mysql call at the top of the for loop connects to the local AutoSSH tunnel and gets a list of all the remote DBs.

    Then we loop through the DBs, and if there is not a DB locally with that name, the script creates one. Next the mysqldump line gets a dump of the remote DB and pipes it into the newly created DB.

    And finally the sed line adds the DB to /etc/my.cnf.

    All that should have to happen is to issue a slave stop and then slave start, and all DBs should be mirrored locally.
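    Spelled out, that final step would look something like the following (assuming the local mysql client connects as a user with sufficient privileges). One caveat: in older MySQL versions, replicate-do-db changes in my.cnf only take effect after restarting mysqld itself, so a full service restart may be needed rather than just stopping and starting the slave threads.

```shell
# Restart the slave threads so the new replicate-do-db entries take effect.
mysql -e "STOP SLAVE;"
mysql -e "START SLAVE;"
# Both Slave_IO_Running and Slave_SQL_Running should report Yes:
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'
```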


  • Google Reader Starred items to Together.app

    I use Together.app from Reinvented Software as my archiving solution – my knowledge base. I like the product because it leaves the pdfs I create on the filesystem, and the db contains the tags and links associated with each file. I used to use Yojimbo, but it keeps all the files in its database. I am not sure it is that big of an issue (especially because we are considering SharePoint as a document management system!), but I am living with Together.app. I just need a way to get my Together.app data to my iPhone – but that is another issue.

    My information consumption workflow starts in Google Reader, and Reeder for the iPad and iPhone, and ends in Together.app. Interesting items are “Starred” in Google Reader, and I needed a way to get the starred items into Together.app. I could not find a way to do it in bulk until I ran across this post explaining how to dump your starred items to an HTML document. I took the script a little further and used AppleScript to import each URL into Together.app:

     

    require "rubygems"
    require "open-uri"
    require "simple-rss"

    feed = "http://www.google.com/reader/public/atom/user%0000000000000000000000%2Fstate%2Fcom.google%2Fstarred?n=50"
    rss = SimpleRSS.parse open(feed)
    rss.entries.each do |item|
      puts "Downloading: #{item.title.sub( ":", "-" )}\n"
      %x(osascript -e 'tell application \"Together\" to import url \"#{item.link}\" as web PDF')
    end
    

     
    Make your starred items public, and change the “0000000000000000000000” to your user id (as described in the original post). Run it, and 50 starred items at a time will be added to your Together.app
     
    My colleague suggested that I unstar each item automatically after it is added to Together, but I will have to sit down and figure that out.


  • WordPress TwentyTen Custom Header setting in the db

    When we move a WordPress site from development to production we update the URL in the following db values:

    • in the GUID value of each post in wp_posts
    • in the wp_options table, the option_name of home
    • in wp_options, the option_name of siteurl

    In a recent move, we found that the custom header in the TwentyTen theme was not displaying correctly when we moved across servers. Seems that when you use a TwentyTen theme or child theme, a wp_option is added to the table – theme_mods_twentyten. The value of this contains all the theme mods, including the URL of the header image. The query below would update the URL in this value:

    • mysql --batch --skip-column-names -e "use $CURRENTDB;UPDATE wp_options SET option_value = replace(option_value, '"$OLDSITENAME"', '"http://$NEWSITENAME"') WHERE option_name = 'theme_mods_twentyten';"

    Note: When using a copy of the TwentyTen theme, the option_name value will be theme_mods_NameOfTheTheme.
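    One caveat worth noting: theme_mods_* values are PHP-serialized arrays, and serialized strings embed their byte length (s:NN:"..."), so a plain SQL replace() is only safe when the old and new URLs are exactly the same length. A quick length check (the URLs here are hypothetical):

```shell
# If the lengths differ, use a serialize-aware search/replace tool
# instead of a raw SQL replace().
OLDSITENAME="http://dev.example.com"   # hypothetical
NEWSITENAME="http://www.example.com"   # hypothetical
if [ ${#OLDSITENAME} -eq ${#NEWSITENAME} ]; then
    echo "same length - replace() is safe"
else
    echo "lengths differ - use a serialize-aware tool"
fi
```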


  • MySQL replication of WordPress dbs over ssh

    I wanted to set up MySQL replication over ssh for a small WordPress database. I have a VM that lives in my house; I wanted to be able to bring it up, make it current, disconnect, and then hack away. Here is my procedure.

    On the master:

    Set up the replication user (I had to use 127.0.0.1 because % did not let me connect over the ssh tunnel):

    grant replication slave on *.* TO repl@"127.0.0.1" identified by '[repl password]';
    

    Edit /etc/my.cnf:

    server-id=1
    log-bin=mysql-bin
    binlog-do-db=DB_TO_REPLICATE   #the name of the db you want to replicate (binlog-do-db is the master-side filter; replicate-do-db belongs on the slave)
    

    From the slave:

    Edit /etc/my.cnf:

    server-id = 2
    master-host = 127.0.0.1
    master-user = repl
    master-password = [repl password]
    master-port = 7777
    replicate-do-db = DB_TO_REPLICATE   #the name of the db you want to replicate.
    

    Next, tunnel an ssh connection from the slave to the source, and then make sure you can see the source databases:

    ssh -f -N -L7777:127.0.0.1:3306 [email protected]
    mysql -h 127.0.0.1 -P 7777 -e "show databases;"
    

    Next create a local database:

    mysql -e "create database DB_TO_REPLICATE;"
    

    Next we want to find where the master’s log is and its position:

    MASTERLOGPOS=$(mysql -h 127.0.0.1 -P 7777 --batch --skip-column-names -e "SHOW MASTER STATUS;" | cut -f2)
    MASTERLOGFILE=$(mysql -h 127.0.0.1 -P 7777 --batch --skip-column-names -e "SHOW MASTER STATUS;" | cut -f1)
    

    Next we want to seed the local db:

    mysqldump -h 127.0.0.1 -P 7777 DB_TO_REPLICATE | mysql -u root DB_TO_REPLICATE;
    

    and finally, tell the db to use the source as the Master:

    mysql -e "use DB_TO_REPLICATE;CHANGE MASTER TO MASTER_LOG_FILE='"$MASTERLOGFILE"',MASTER_LOG_POS=$MASTERLOGPOS;"
    mysql -e "start slave;"
    

    To test I run:

    mysql -e "show slave status\G"
    

    and I compare the following on the master and the client:

    mysql -e "select id,post_parent,post_modified,post_title from jbmurphy_com.wp_posts"
    
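    A quicker consistency check than eyeballing rows is CHECKSUM TABLE on both ends, running through the tunnel for the master side (the table name matches the query above):

```shell
# Compare table checksums on master (via the tunnel) and slave.
MASTERSUM=$(mysql -h 127.0.0.1 -P 7777 --batch --skip-column-names -e "CHECKSUM TABLE jbmurphy_com.wp_posts;" | cut -f2)
SLAVESUM=$(mysql --batch --skip-column-names -e "CHECKSUM TABLE jbmurphy_com.wp_posts;" | cut -f2)
[ "$MASTERSUM" = "$SLAVESUM" ] && echo "in sync" || echo "checksums differ"
```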

  • DrivesMeNuts: Windows shutdown tracker

    You know that drop down box that looks like this (called the Shutdown Tracker):

    Every month when I patch servers, it DrivesMeNuts that Microsoft did not put in a “Planned” option for “Operating System: Patching”.

    Every time I click the shutdown tracker, I think: “Microsoft intentionally left ‘Patching’ off the selection list, so that the logs don’t track the amount of patching we have to do!”