Friday, June 21, 2013

Freeing Disk Space on C:\ Windows Server 2008

 

I just spent the last little while trying to clear space on our servers in order to install .NET 4.5. I decided to post the details so my future self can find them the next time I have to do this.

I performed all the usual tasks:

  • Delete any files/folders from C:\windows\temp and C:\Users\%UserName%\AppData\Local\Temp

  • Delete all Event Viewer logs

    • Save to another Disk if you want to keep them

  • Remove any unused programs, e.g. Firefox

  • Remove anything in C:\inetpub\logs

  • Remove any files/folders from C:\Windows\System32\LogFiles

  • Remove any files/folders from C:\Users\%UserName%\Downloads

  • Remove any files/folders able to be removed from C:\Users\%UserName%\Desktop

  • Remove any files/folders able to be removed from C:\Users\%UserName%\My Documents

  • Stop the Windows Update service and remove all files/folders from C:\Windows\SoftwareDistribution (see the command sketch after this list)

  • Run COMPCLN.exe to remove the files superseded by Service Pack installs
  • Move the Virtual Memory (paging) file to another disk
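
For reference, here's roughly what the Windows Update step looks like from an elevated command prompt. This is a sketch: the rd command permanently deletes the folder contents, and the service will recreate the folder once it's restarted.

net stop wuauserv
rd /s /q C:\Windows\SoftwareDistribution
net start wuauserv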

However this wasn’t enough & I found the most space was reclaimed by using the Disk Cleanup tool “cleanmgr.exe”, but of course this isn’t installed by default on Windows Server 2008.


In order to get Disk Cleanup you need to go to Server Manager > Add Features and turn on “Desktop Experience”.

desktop_experience

After running Disk Cleanup I found this gem: the “Hibernation File Cleaner” was accounting for almost 7.45 GB. I’m pretty sure I don’t need hibernate functionality on an always-on web server.

hibernation_file_cleaner
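
Since I don’t need hibernation at all on this server, the cleaner option is to disable it outright from an elevated command prompt, which also removes the hiberfil.sys file:

powercfg -h off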

This was a big win & I could continue installing .NET 4.5.

Tuesday, June 28, 2011

Windows Azure AppFabric Service Bus Queues API

 

Recently Microsoft released a new AppFabric SDK 2.0 CTP including some great new features.

You can grab all the bits and pieces from here and/or read the release notes.

Some of the highlights include: 

  • Publish/Subscribe, known as Topics
  • Message Queues
  • Visual Studio Tools
  • AppFabric Application Manager
  • Support for running WCF & WF

The part that interested me the most is the Queues feature and that is what I’m going to be exploring in this post.

Overview of Queues

Message Queues are not a new concept; they allow for more reliable and scalable communication between distributed systems than pure request/response. Solutions like MSMQ and NServiceBus already exist to solve this problem for locally connected systems. What the Queues API provides is similar features, but with the messages transported across the internet and persisted in the cloud.

There is currently a Message Buffer available in Azure but this has serious limitations:

  • Messages only persisted for a maximum of 10 minutes
  • Maximum of 50 messages
  • Requires the client to be connected
  • Charged per connection not per message
  • Complex API
  • Max message size of 8 KB

The Queues API addresses a number of these issues by adding:

  • Long term persistence (No clarification on the SLA yet)
  • Simpler API
  • Max message size of 256 KB
  • Sessions
  • Larger size of queue (100 MB currently but set to increase)

To start testing out the Queues feature you first need to log in to the AppFabricLabs portal and create a test Service Namespace.

Then you will need to add Microsoft.ServiceBus and Microsoft.ServiceBus.Messaging dlls to your project. They can be found in C:\Program Files\Windows Azure AppFabric SDK\V2.0\Assemblies\NET4.0

Wrapping It Up

Before starting on the implementation I’m going to define a simple interface around the ServiceBus bits and also add an IMessage interface which all Messages will need to implement.

IServiceBus

    public interface IServiceBus
    {
        void Send(string queueName, IMessage message);
        
        T Receive<T>(string queueName) where T : IMessage; 
    }

 

IMessage

Although not required by the API, it’s good practice to have a unique identifier for each message.
When architecting message-based systems you need to allow for idempotence, meaning that an operation should be able to be executed multiple times without changing the result. Most message queue systems guarantee that messages will be delivered “at least once”, so you need to allow for this in your code, and having a unique identifier on the message makes implementing this a trivial exercise (see the sketch after the interface).

    public interface IMessage
    {
        Guid Id { get; set; }
    }
Note: All messages must be Serializable
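
To illustrate how the Id enables idempotent handling, here is a minimal sketch. It keeps the processed Ids in memory purely for brevity; a real system would persist them somewhere durable:

    public class IdempotentHandler
    {
        // Ids of messages that have already been processed.
        private readonly HashSet<Guid> processedIds = new HashSet<Guid>();

        public void Handle(IMessage message, Action<IMessage> process)
        {
            // Add returns false when the Id is already in the set,
            // i.e. the message is a redelivery and can safely be skipped.
            if (!processedIds.Add(message.Id))
            {
                return;
            }

            process(message);
        }
    }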

 

Authentication

In order to authenticate against the service bus you need three values:

  • Issuer Name
  • Issuer Key
  • Service Namespace

The service namespace is just the name you created in the portal; in this example it’s “appfabricdemo1”.

The issuer name & key can be found under the Default Key section in the Portal.

default_key

 

There is a little bit of ceremony in setting up the correct objects but I’ve put this into a single method called InitClient which is called from the Send and Receive methods.

       private readonly string issuerKey;
       private readonly string issuerName;
       private readonly string serviceNamespace;
       private TransportClientCredentialBase clientCredentials;
       private MessagingFactory messagingFactory;
       private ServiceBusNamespaceClient namespaceClient;
       private IList<Queue> queueList; 

       public AzureServiceBus(string issuerName, string issuerKey, string serviceNamespace)
       {
           this.issuerName = issuerName;
           this.issuerKey = issuerKey;
           this.serviceNamespace = serviceNamespace;
       }
       private void InitClient()
       {
           clientCredentials = TransportClientCredentialBase
               .CreateSharedSecretCredential(issuerName, issuerKey);

           var uri = ServiceBusEnvironment
               .CreateServiceUri("https", serviceNamespace, string.Empty);
           
           var runtimeUri = ServiceBusEnvironment
               .CreateServiceUri("sb", serviceNamespace, string.Empty);

           namespaceClient = new ServiceBusNamespaceClient(uri, clientCredentials);

           messagingFactory = MessagingFactory.Create(runtimeUri, clientCredentials);
       }

The key classes here are:

  • TransportClientCredentialBase – wraps the shared secret credentials used to authenticate
  • ServiceBusNamespaceClient – the management client, used later for creating and listing queues
  • MessagingFactory – creates the runtime QueueClient objects used to send and receive

 

Creating the Queue

Before you can start sending and receiving messages you first have to create a queue. The API doesn’t currently have a method to check whether a queue already exists, so this has to be hand rolled, as calling CreateQueue throws an exception if the queue is already there.

This is fairly easily achieved with two simple methods:

       private Queue GetOrCreateQueue(string path)
       {
           path = path.ToLower();

           var queues = GetQueues();

           var queueExists = queues.Any(q => q.Path == path);

           if (queueExists)
           {
               return queues.FirstOrDefault(q => q.Path == path);
           }

           var queue = namespaceClient.CreateQueue(path);

           queues.Add(queue);

           return queue;
       }

       private IList<Queue> GetQueues()
       {
           if (queueList != null)
           {
               return queueList;
           }

           queueList = new List<Queue>();
           var queues = namespaceClient.GetQueues();

           foreach (var queue in queues)
           {
               queueList.Add(queue);
           }

           return queueList;
       }

As the GetQueues method needs to make a remote call, I keep an in-memory collection of the Queue objects so that the remote call is only made the first time.

Note: Queue names are converted to lowercase when created.

Sending a Message

In order to send a message you need to create a QueueClient and a MessageSender, and then convert the message to a BrokeredMessage.

        public void Send(string queueName, IMessage message)
        {
            InitClient();

            var queue = GetOrCreateQueue(queueName);

            var queueClient = messagingFactory.CreateQueueClient(queue);

            using (var messageSender = queueClient.CreateSender())
            {
                var brokeredMessage = ConvertToBrokeredMessage(message);

                messageSender.Send(brokeredMessage);
            }

            queueClient.Close();
            messagingFactory.Close();
        }

        private static BrokeredMessage ConvertToBrokeredMessage(IMessage message)
        {
            var brokeredMessage = BrokeredMessage.CreateMessage(message);

            brokeredMessage.MessageId = message.Id.ToString();

            return brokeredMessage;
        }

 

The BrokeredMessage has a bunch of useful properties, the most notable of which are listed below (a usage sketch follows the list):

  • ContentType
  • CorrelationId
  • DeliveryCount
  • Properties – which is a Dictionary<string, object>
  • MessageId
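
As a rough sketch of how these can be set when sending (the Priority property here is hypothetical, purely to show custom metadata):

    var brokeredMessage = BrokeredMessage.CreateMessage(message);

    brokeredMessage.MessageId = message.Id.ToString();

    // CorrelationId lets a reply be matched back to the original request.
    brokeredMessage.CorrelationId = message.Id.ToString();

    // Properties is a Dictionary<string, object> for custom metadata.
    brokeredMessage.Properties["Priority"] = 1;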

 

Receiving a Message

Receiving a message is much the same as sending a message in that it requires a QueueClient & MessageReceiver.

There are two different modes when receiving a message: ReceiveAndDelete, which deletes the message immediately after reading, or PeekLock, which only locks the message and leaves you to manage the delete. The implementation below uses ReceiveAndDelete; a PeekLock sketch follows it.

       public T Receive<T>(string queueName) where T : IMessage
       {
           InitClient();

           var queue = GetOrCreateQueue(queueName);

           var queueClient = messagingFactory.CreateQueueClient(queue);

           BrokeredMessage brokeredMessage;

           using (var messageReceiver = queueClient.CreateReceiver(ReceiveMode.ReceiveAndDelete))
           {
               brokeredMessage = messageReceiver.Receive();
           }

           queueClient.Close();
           messagingFactory.Close();

           // Receive returns null when no message is available, so close the
           // client first and then bail out with the default value.
           if (brokeredMessage == null)
           {
               return default(T);
           }

           var message = brokeredMessage.GetBody<T>();

           return message;
       }
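
And here is a hedged sketch of the PeekLock equivalent. I’m assuming the CTP’s BrokeredMessage exposes Complete and Abandon for settling the lock, so treat this as an outline rather than tested code:

       using (var messageReceiver = queueClient.CreateReceiver(ReceiveMode.PeekLock))
       {
           var brokeredMessage = messageReceiver.Receive();

           if (brokeredMessage != null)
           {
               try
               {
                   var message = brokeredMessage.GetBody<T>();

                   // ... process the message here ...

                   // Settle the lock so the message is deleted from the queue.
                   brokeredMessage.Complete();
               }
               catch
               {
                   // Release the lock so the message becomes visible for redelivery.
                   brokeredMessage.Abandon();
                   throw;
               }
           }
       }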

 

The Demo App

I wanted to pull this together, so I created an example application made up of two console apps, one being the client and one being the server. The server sends a message to the client on one queue, and when the client receives the message it sends a reply message on another queue.

Server

    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("Welcome to the Queue Demo Server");
            Console.WriteLine("Press any key to start:");
            Console.ReadLine();


            var serverQueue = "serverqueue";
            var clientQueue = "clientqueue"; 

            var serviceBus = new AzureServiceBus(ConfigurationManager.AppSettings["IssuerName"], 
                ConfigurationManager.AppSettings["IssuerKey"], 
                ConfigurationManager.AppSettings["ServiceNamespace"]);

            var message = new TestMessage("I've travelled a long way just to get here.");

            
            serviceBus.Send(serverQueue, message);

            Console.WriteLine("Sent Message:" + message.Id + " at " + DateTime.Now);


            while (true)
            {
                message = serviceBus.Receive<TestMessage>(clientQueue);

                if (message == null || message.Id == Guid.Empty)
                {
                    Console.WriteLine("No messages found yet I'll keep trying");
                }
                else
                {
                    Console.WriteLine("Read Message: " + message.Id + " at " + DateTime.Now);
                    Console.WriteLine(message.Message);
                    break;
                }
            }


            Console.ReadLine();
        }
    }
server_console

Client

   internal class Program
   {
       private static void Main(string[] args)
       {
           Console.WriteLine("Welcome to the Queue Demo Client");
           Console.WriteLine("Press any key to start:");
           Console.ReadLine();
           
           var serverQueue = "serverqueue";
           var clientQueue = "clientqueue";

           var serviceBus = new AzureServiceBus(ConfigurationManager.AppSettings["IssuerName"], 
               ConfigurationManager.AppSettings["IssuerKey"], 
               ConfigurationManager.AppSettings["ServiceNamespace"]);


           while (true)
           {
               var message = serviceBus.Receive<TestMessage>(serverQueue);

               if (message == null || message.Id == Guid.Empty)
               {
                   Console.WriteLine("No messages found yet I'll keep trying"); 
               }
               else
               {
                   Console.WriteLine("Read Message: " + message.Id + " at " + DateTime.Now);
                   Console.WriteLine(message.Message);
                   break;
               }
           }


           var responseMessage = new TestMessage("And so have I");

           serviceBus.Send(clientQueue, responseMessage);

           Console.WriteLine("Sent Message:" + responseMessage.Id + " at " + DateTime.Now);



           Console.ReadLine();
       }
   }

client_console

 

Fiddler

If you’re curious you can see what’s going on under the covers using Fiddler.

fiddler

 

API Change Requests

Having played around with the API, it’s pretty good and much better than the MessageBuffer API, which is really awkward.

I’d like to see on the ServiceBusNamespaceClient a method that tells you if a Queue exists or not, something like:

bool QueueExists(string path)

Currently MessagingFactory, ServiceBusNamespaceClient & QueueClient do not implement interfaces, which makes them much harder to unit test. Creating an abstraction around these classes would be a big win IMO.

 

Conclusion

All in all the Queues API is a much needed and welcome addition to the Azure offerings, and it greatly simplifies messaging between not-always-connected clients and the server. It also appears to be very fast.

Grab the code.

In my next post I’m going to be looking at the Publish/Subscribe (Topics) features.

Tuesday, June 21, 2011

Getting started with SpecFlow, WatiN, ATDD and BDD

After completing my Scrum Master course I felt it was high time I reviewed some of our existing engineering practices and considered ways to improve.

Whilst we practice strict test-first TDD and have 100% code coverage as part of our “definition of done”, the one area that we’ve been missing is automated UI testing and acceptance testing. After all, high code coverage doesn’t mean a thing if the requirements are not met.

ATDD & BDD

This motivated me to rediscover what is known as Acceptance Test Driven Development (ATDD), which is about defining automatable tests from a customer perspective that closely reflect the requirements described in the user stories. This is closely associated with Behaviour-Driven Development (BDD).

The cool kids in the Ruby world caught on to this way back when we were still doing web forms development.

Some of the Tools available in this space are:

RSpec (Ruby)
Cucumber (Ruby)
Robot Framework (Java & Python)
FitNesse
Watir (Web UI testing framework for Ruby)

Of all these tools & frameworks Cucumber is the most interesting, in that it lets you describe behavioural tests in plain (non-technical) language, allowing the tests to be authored by the customers or customer proxies.

But What About .NET?

So I’ve mentioned a couple of the leading tools and frameworks in this area which are available in the Java and dynamic language world. In the .NET world there are a number of frameworks around which are either inspired by or ports of their counterparts.

NSpec (Inspired by RSpec)
SpecFlow (Inspired by Cucumber and conforms to Gherkin syntax)
WatiN (Inspired by Watir)

NSpec and RSpec focus on BDD-style unit testing, whereas Cucumber and SpecFlow are more suited to acceptance tests.

For that reason I’m going to be looking at SpecFlow and WatiN for the remainder of this post.

SpecFlow

For those of you without any knowledge of Cucumber or SpecFlow, essentially how it works is that you specify Features, which are files in your project.
Features first define the requirement from a customer’s perspective using non-techie vocabulary, and then specify the scenarios and their steps using the Given, When, Then keywords.

Here is a simple example of how a feature can be written (I’m not an expert on this language yet).

Feature: Authentication
    In order to allow site personalization
    As a user
    I want to be able to login 

Scenario: Navigation to Home 
    When I navigate to /Home/Index 
    Then I should be on the home page

Scenario: Logging in
    Given I am on the home page
    And I have entered will into the username field
    And I have entered test into the password field 
    When I press submit
    Then the result should be a Logged In title on the screen 

 

Relating this back to Agile you can think of a feature as a user story. 

It should be noted that developers should not be writing these; they should be written by either the customer or a customer proxy such as a tester or business analyst. If these are written by non-technical people then you can ensure that they are focusing on the requirement and not the technical implementation.

Setting up the Test Project

After downloading and installing SpecFlow (make sure to close Visual Studio when doing so) you’ll need to create a standard Unit Test project, e.g. SpecFlowDemo.AcceptanceTests.

Add a reference to TechTalk.SpecFlow.dll which you can find in C:\Program Files\TechTalk\SpecFlow.

If you’re using MSTest then you’ll need to add the following to your App.Config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow"/>
  </configSections>
  <specFlow>
    <unitTestProvider name="MsTest" />
  </specFlow>
</configuration>

 

I like to create a separate folder for the Features and Steps.

At this point your project should look something like this.

project

 

Defining a Feature

As I mentioned earlier a feature should try and map to a single user story.

For this example I’m going to create a Search feature which searches Google for BDD-related articles.

Add New Item > SpecFlow Feature File

addnewitem_search_feature

This will add a Feature file, which consists of the .feature part and the .cs code-behind. Make sure you don’t touch the .cs file, as it contains designer-generated code.

The Feature

For this fictitious example I’m going to use a Search feature. This is described below using the Gherkin language.

Feature: Search
    In order to find articles on BDD
    As a BDD fanatic 
    I want to enter a keyword into a search engine and be shown a list of related websites


Scenario: Navigate to Search Engine
    When I enter http://www.google.com in the address bar
    Then I should be on the home page


Scenario: Perform search
    Given I am on the home page
    And I have entered BDD into the keyword textbox
    When I press the btnG button
    Then I should see a list of articles related to BDD 

Adding the Steps

Now we are going to add a Step definition for each of the scenarios described in the Feature file.

Right Click on Project > Add New Item > SpecFlow Step Definition

addnewitem_step_definition

Once you’ve added the Step you have to add the methods for each of the Given, When, Then lines in your scenario and annotate them with the correct attribute.

The key thing here is that the class needs to be annotated with the Binding attribute.

You can use the (.*) expression to represent variables coming from your Feature file. 

NavigateToSearchEngine Step

[Binding]
 public class NavigateToSearchEngine
 {
     [When("I enter (.*) in the address bar")]
     public void when_i_enter_the_url(string url)
     {
     }

     [Then("I should be on the home page")]
     public void then_i_should_be_on_the_home_page()
     {
     }
 }

 

PerformSearch Step

    [Binding]
    public class PerformSearch
    {
        [Given("I am on the home page")]
        public void given_i_am_on_the_home_page()
        {
           
        }

        [Given("I have entered (.*) into the keyword textbox")]
        public void and_i_have_entered(string keyword)
        {

        }

       [When("I press the (.*) button")]
       public void when_i_press_the_button(string button)
       {

       }
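
       // The feature's final step, stubbed here and implemented in the WatiN section below.
       [Then("I should see a list of articles related to (.*)")]
       public void then_i_should_see_a_list_articles_related_to(string keyword)
       {
       }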
    }

 

WatiN

Now that we have our Feature and Steps defined, we’re going to need WatiN in order to execute the UI tests. For more info on WatiN check out the documentation.

Download WatiN

Add references to WatiN.Core.dll & Interop.SHDocVw.dll, which can be found in C:\Program Files\WatiN Test Recorder.

Ensure that Interop.SHDocVw has Embed Interop Types set to False

Static Browser Class

This class is a static helper for accessing the browser. WatiN has support for Internet Explorer & Firefox, but as I haven’t been able to get it to work with Firefox 4, I just use the IE implementation.

#region

using TechTalk.SpecFlow;
using WatiN.Core;

#endregion

namespace SpecFlowDemo.AcceptanceTests
{
    [Binding]
    public class WebBrowser
    {
        public static Browser Current
        {
            get
            {
                if (!ScenarioContext.Current.ContainsKey("browser"))
                {
                    ScenarioContext.Current["browser"] = new IE();
                }
                return (Browser) ScenarioContext.Current["browser"];
            }
        }

        [AfterScenario]
        public static void Close()
        {
            if (ScenarioContext.Current.ContainsKey("browser"))
            {
                Current.Close();
            }
        }
    }
}

 

Implementing the Steps

 

NavigateToSearchEngine

    [Binding]
    public class NavigateToSearchEngine
    {
        [When("I enter (.*) in the address bar")]
        public void when_i_enter_the_url(string url)
        {
            WebBrowser.Current.GoTo(url);
        }

        [Then("I should be on the home page")]
        public void then_i_should_be_on_the_home_page()
        {
            Assert.AreEqual(WebBrowser.Current.Title, "Google");
            Assert.IsTrue(WebBrowser.Current.TextFields.Exists(tf => tf.Name == "q"));
        }
    }

 

PerformSearch

    [Binding]
    public class PerformSearch
    {
        [Given("I am on the home page")]
        public void given_i_am_on_the_home_page()
        {
            WebBrowser.Current.GoTo("http://www.google.com");
        }

       
        [Given("I have entered (.*) into the keyword textbox")]
        public void and_i_have_entered(string keyword)
        {
            var field = WebBrowser.Current.TextField(tf => tf.Name == "q");
            
            field.TypeText(keyword);

        }

       [When("I press the (.*) button")]
       public void when_i_press_the_button(string buttonName)
       {
           var button = WebBrowser.Current.Button(b => b.Name == buttonName);

           button.Click();
           
           WebBrowser.Current.WaitForComplete(); 

       }

       [Then("I should see a list of articles related to (.*)")]
       public void then_i_should_see_a_list_articles_related_to(string keyword)
       {
           WebBrowser.Current.WaitUntilContainsText("Advanced search");
           
           var searchDiv = WebBrowser.Current.Div("res");

           Assert.IsTrue(searchDiv.Children().Count == 3); 
           
       }
    }

 

Running the Tests

Now you’re ready to run the tests and with a bit of luck they should both pass. You will notice during the tests that Internet Explorer will open and you can actually watch the UI test happen.

test_results

 

Console Output

You also get nice, descriptive console output.

When I enter http://www.google.com in the address bar
-> done: NavigateToSearchEngine.when_i_enter_the_url("http://www.google...") (15.5s)
Then I should be on the home page
-> done: NavigateToSearchEngine.then_i_should_be_on_the_home_page() (2.9s)

Given I am on the home page
-> done: PerformSearch.given_i_am_on_the_home_page() (6.4s)
And I have entered BDD into the keyword textbox
-> done: PerformSearch.and_i_have_entered("BDD") (3.2s)
When I press the btnG button
-> done: PerformSearch.when_i_press_the_button("btnG") (0.3s)
Then I should see a list of articles related to BDD
-> done: PerformSearch.then_i_should_see_a_list_articles_related_to("BDD") (1.4s)

 

The Code

You can download the code here.

Conclusion

If you’re not doing ATDD or automated UI testing then I strongly urge you to consider SpecFlow and WatiN, which can add huge value to your testing process. I’m really looking forward to expanding our definition of done to include these.

There is also a WatiN Test Recorder which can assist in automating some of your tests, or at the least show you the capabilities of WatiN. At last look, though, it doesn’t seem to be an active project anymore.

Tuesday, June 14, 2011

IE Security Warning with QuickTime Object tags

 

I just received a bug report about the infamous Internet Explorer Security Warning for one of the pages in our application that serves video content over HTTPS.

iesecuritywarning

 

After viewing the source I found the offender, which turned out to be the codebase attribute set to http://www.apple.com/qtactivex/qtplugin.cab: a plain HTTP URL on a page served over HTTPS, which is exactly the mixed content that triggers the warning.

<object id="videoObject" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B" 
            codebase="http://www.apple.com/qtactivex/qtplugin.cab" 
            width="330" height="292"> 
            <param name="src" value="https://securedomain/video.mp4" /> 
            <param name="controller" value="true" /> 
            <param name="autoplay" value="False" /> 
            <param name="scale" value="aspect" /> 
            <param name="cache" value="true"/>
            <param name="saveembedtags" value="true"/>
            <param name="postdomevents" value="true"/> 
               
            <!--[if IE] --> 
            <EMBED name="movie"
                height="292"
                width="330"
                scale="aspect"
                src="https://securedomain/video.mp4"
                type="video/quicktime"
                pluginspage="www.apple.com/quicktime/download"
                controller="true"
                autoplay="False"
            /> 
            <!--[endif]--> 
        </object> 

The fix was just to change this to https://www.apple.com/qtactivex/qtplugin.cab so that everything on the page is served securely.

<object id="videoObject" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B" 
            codebase="https://www.apple.com/qtactivex/qtplugin.cab" 
            width="330" height="292"> 
            <param name="src" value="https://securedomain/video.mp4" /> 
            <param name="controller" value="true" /> 
            <param name="autoplay" value="False" /> 
            <param name="scale" value="aspect" /> 
            <param name="cache" value="true"/>
            <param name="saveembedtags" value="true"/>
            <param name="postdomevents" value="true"/> 
               
            <!--[if IE] --> 
            <EMBED name="movie"
                height="292"
                width="330"
                scale="aspect"
                src="https://securedomain/video.mp4"
                type="video/quicktime"
                pluginspage="www.apple.com/quicktime/download"
                controller="true"
                autoplay="False"
            /> 
            <!--[endif]--> 
        </object> 

Fortunately our users have the option of choosing the HTML5 video player, meaning they don’t need to install any 3rd-party plugins to view videos.

Tuesday, May 17, 2011

Entity Framework 4 with Amazon RDS

 

In my last post I demonstrated how you can use MySQL with Entity Framework 4.

In this post I’m going to show you how to use Amazon RDS. Amazon RDS is a Relational Database Service similar to SQL Azure, except that it supports MySQL (with Oracle support coming soon). This is actually the first time I’ve attempted to use the service, and I’m going to be writing this as I go.

Amazon RDS takes care of all the critical database management tasks like software updates, backups & replication.

Signing Up

This post assumes you already have an AWS (Amazon Web Services) account; if you don’t, go to the Sign In page.

As with all Amazon Web Services you have to explicitly sign up; you can do this by going to http://aws.amazon.com/rds/ and clicking the “Sign Up For Amazon RDS” button.

Note: Signing up is not instant; it took about 12 hours for me to receive the confirmation email.

Launching the DB Instance

Log into the AWS console and go to the Amazon RDS tab.

Step 1 – Launch DB Instance

Click on the big Launch DB Instance button.

launchdb

Step 2 – DB Instance Details

After clicking the Launch DB Instance button you will get the screen below.

For information on the DB Instance Class go to the AWS RDS page.

Multi-AZ Deployment is the ability to have the DB instance replicated in another availability zone with Amazon managing the failover; we’re not going to worry about that for now. This can be changed at any time after the instance is created.

launchdb_step1

Note: See the maximum size of 1024 GB, compared with SQL Azure’s offering of 50 GB.

Step 3 – Additional Configuration

On this screen you can specify the Database name, port & availability zone.

launch_db_step2

Step 4 – Management Options

On this screen you set your preferences for DB backups and maintenance. You’ll want to tweak this depending on where your customers are geographically located.

launch_db_step3

Step 5 – Review

Finally you can review all of your settings, and when you’re satisfied click Launch DB Instance and let Amazon work their magic.

launch_db_step_4

Done

After about 5 minutes or so the instance will be available on the DB Instances page.

DBInstances

 

Security

Now that we’ve created our instance we need to set some security rules in order to be able to access it from EC2 instances and any other external machines. I’m going to add a rule to allow access from my machine so that I can do the data & schema import.

You can choose either CIDR/IP or EC2 Security Group when setting your rule. When selecting EC2 Security Group, you enter the name of the security group used in EC2.

security_groups

For more information on security groups go to Working with DB Security Groups.

Importing the Data & Schema

The preferred method for importing schema and data into your DB Instance, for datasets of 1 GB and less, is to extract the data with mysqldump and pipe it directly to your instance. If your dataset is larger than 1 GB, check out the Amazon RDS Customer Data Import Guide for MySQL.

Connection

First you need to get the Endpoint for the instance from the AWS console. This can be found by going to the DB Instances page and clicking on your DB Instance.

endpoint

Before starting you’ll want to check that you can connect to the instance properly using telnet:

telnet efdemo.cauwsk2cfjqz.ap-southeast-1.rds.amazonaws.com 3306

If you experience any problems here then check your security group rules and local firewall settings.

Import

Here is the command for the import, run from your local MySQL bin directory (add your local credentials to the mysqldump half if your local server needs them):

cd "C:\Program Files\MySQL\MySQL Server 5.1\bin"
mysqldump efdemo | mysql efdemo --host=efdemo.cauwsk2cfjqz.ap-southeast-1.rds.amazonaws.com --port=3306 --user=willbt --password

If everything went to plan the command will complete without any errors in the command prompt, and we’re ready to test the connectivity from EC2.

If you have issues, make sure the version of your DB Instance and local MySQL Installation are the same.

Of course you could also just connect to your instance using MySQL Administrator & MySQL Query Browser.

 

Accessing from EC2

Once you’ve got your instance set up in EC2 you can deploy your application to it.

If you’ve added the same security group that your EC2 instance uses to the DB Instance security groups then you should have no problems connecting to your DB Instance. Make sure the port you specified when creating the DB Instance is configured in any firewall rules.

To double check go onto your EC2 instance and run the telnet command again, if you have any issues check the instance firewall rules and the security group settings again.

You will also need to install the MySQL Connector on your EC2 instance, or make sure you set Copy Local on the MySql.Data and MySql.Data.Entity assemblies.

Finally, update the connection string by setting the server attribute to the Endpoint of your DB Instance.

<connectionStrings>
    <add name="EFDemoEntities" connectionString="metadata=res://*/EFDemo.csdl|res://*/EFDemo.ssdl|res://*/EFDemo.msl;provider=MySql.Data.MySqlClient;provider connection string=&quot;server=efdemo.cauwsk2cfjqz.ap-southeast-1.rds.amazonaws.com;User Id=willbt;Password=password;database=efdemo;Persist Security Info=True&quot;" providerName="System.Data.EntityClient" />
  </connectionStrings>
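
As a quick sanity check from the EC2 instance, a minimal sketch like the following (assuming the MySql.Data assembly is deployed with your app) will confirm connectivity before Entity Framework gets involved:

    using System;
    using MySql.Data.MySqlClient;

    internal class ConnectivityCheck
    {
        private static void Main()
        {
            // Plain ADO.NET connection string; substitute your own endpoint and credentials.
            var connectionString =
                "server=efdemo.cauwsk2cfjqz.ap-southeast-1.rds.amazonaws.com;" +
                "User Id=willbt;Password=password;database=efdemo";

            using (var connection = new MySqlConnection(connectionString))
            {
                // Open throws if the endpoint, security group or credentials are wrong.
                connection.Open();

                Console.WriteLine("Connected to MySQL " + connection.ServerVersion);
            }
        }
    }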

Pricing

When you compare the pricing between a MySQL solution and the SQL Server solution, it’s very easy to see the benefits of going with MySQL.

                               Annual USD    Monthly USD
MySQL RDS (Large)              $3854.40      $321.20
SQL Server Standard (Large)    $9460.80      $788.40

 

That’s a pretty big difference, & also bear in mind that with a self-managed SQL Server solution you’re not going to get automated backups, updates & failover.

Conclusion

Writing this blog was my first experience with Amazon RDS and I have to say I am very impressed with just how easy it is to get set up.

Amazon EC2 & RDS with MySQL is a very attractive deployment option for .NET applications, and on the next greenfield project I work on I’ll be seriously considering it.

I’ve got a few pet projects kicking around which would be well suited to this platform, so stay tuned.

Tuesday, May 10, 2011

Using Entity Framework 4 with MySQL

 

If you’re on the .NET platform then MS SQL Server is usually the de-facto choice for the RDBMS. However if you’re at all cost conscious then you will realize that scaling and replication are going to cost you a fair chunk of change in licensing fees.

For that reason open source RDBMSs and in particular MySQL offer a much cheaper alternative.

In this post I’d like to demonstrate how you can use Entity Framework 4 with MySQL.

MySQL Connector Net 6.3.6

The first thing you’ll need to do is download and install the latest version of the MySQL Connector for .NET from http://dev.mysql.com/downloads/connector/net/

Make sure that Visual Studio is closed when you install.

Pascal Case Table Names

Because we are going to generate our Entity Framework Model from an existing database, we want to make sure that the entity names use Pascal casing. By default MySQL on Windows forces lowercase table names.

You can change this behaviour by adding lower_case_table_names=2 to your my.ini file, which will be located in
C:\Program Files\MySQL\MySQL Server <YOURVERSION>\

Read more about Identifier case sensitivity on the MySQL site.

NOTE: This will need to be done before you create your Schema and Tables.
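
For reference, the setting belongs under the [mysqld] section of my.ini, and the MySQL service needs a restart before it takes effect:

[mysqld]
lower_case_table_names=2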

 

Creating the Entity Framework Model

Right click on the solution and go to Add New Item then select the ADO.NET Entity Data Model. 

AddNewItem

Then choose “Generate from Database”.

GenerateFromDatabase

You will then want to create a connection that points to your MySQL Database.

Go to New Connection. Note: by default the Data Source will be set to Microsoft SQL Server (SqlClient)

Click “Change” and select the Data source and Data Provider as shown below.

NewConnectionString

After doing this you enter your connection properties, shown below.

ConnectionProperties

Once you press OK you will be presented with your database objects, like below. Select all the Tables, Views & Stored Procedures you want to include in your model and press Finish.

DatabaseObjects

Follow the “Next” buttons through the wizard until your Model has been created.

EFModel

 

Testing the Object Context

To make sure that we can use the ObjectContext correctly I wrote a quick test in a console app.

    internal class Program
    {
        private static void Main(string[] args)
        {
            using (var context = new EFDemoEntities())
            {
                var category = new Category
                                   {
                                       Name = "Developer Tools"
                                   };

                var product = new Product
                                  {
                                      Name = "Visual Studio 2010"
                                  };

                category.Products.Add(product);

                context.Categories.AddObject(category); 

                context.SaveChanges();

                category = context.Categories.FirstOrDefault();

                Console.WriteLine("CategoryId: " + category.Id);

                product = context.Products.FirstOrDefault();

                Console.WriteLine("ProductId: " + product.Id);

            }
        }
    }
 
And the output.
commandline

If you receive “The specified value is not an instance of a valid constant type Parameter name: value” when trying to insert a related entity, then make sure that the foreign key column is not UNSIGNED. The MySQL connector does not support UNSIGNED columns as foreign keys.

 

Conclusion

I’ve only covered the basic steps to using MySQL with Entity Framework 4.

I have not used this in production yet; however, it seems that MySQL with Entity Framework 4 is now a viable solution for those not married to SQL Server.

Till next time.

Friday, April 29, 2011

Downloading Amazon Kindle app for iPad2 in Singapore

 

Last week I got my brand spanking new iPad 2.

One of the major reasons I wanted this piece of technical indulgence is to use it as an e-book reader on the daily commute.

So after plugging it in for the first time and going through the arduous ceremony of setting it up and doing the software update, you can imagine my dismay when I discovered that the Kindle App was not available in the Singapore App Store.

Now I’m already used to the fact that there is no Music or Video content in the Singapore version of iTunes and get by quite happily without it. But disallowing the downloading of a free app is taking it a bit too far.

Fortunately for those in Singapore there is a workaround.

VPost

The first step is to create a VPost account.

For those who don’t know, VPost is a service run by Singapore Post that effectively gives you a shipping address in either Japan, Europe or the United States. You can then have online shopping orders delivered to this address and they will forward the items to your Singapore-based address.

It’s a great service which has many uses, and I thoroughly recommend it to anyone.

Another iTunes Account

Once you’ve got your VPost address in the US, you’ll need to create a new iTunes account using your US-based address, not linked to your credit card, as iTunes validates the billing address of the credit card against the address entered in the sign-up form and prevents you from signing up if the countries don’t match.

The full list of steps are listed on the Apple Support site http://support.apple.com/kb/ht2534.

Downloading the Kindle App

Now that you have your US-based iTunes account, go back to your iPad, go to Settings > Store, then sign out of your existing account and sign in with your new account.

Next up, go to the App Store and search for “kindle”; lo and behold, it should appear and be available to download.

Because your US account has no payment information tied to it, you’ll need to remember to sign out and sign back in with your Singapore-based account whenever you want to buy any apps.

Amazon Account

Now that you have the Kindle app you’ll need to create an Amazon account in order to be able to buy books.

It is possible to use your Singapore address for this account, but bear in mind that there are quite a lot of books which are not available in the Asia-Pacific region. To get around this you can use your US-based VPost address, and you will get access to all the titles available in the US store.

To get started go to the sign in page and select “No, I am a new customer”.

Conclusion

Despite the hoops you have to jump through to download the app and then get the content, this workaround does make it all possible.

Hope this helps someone.

Tuesday, February 22, 2011

Specification Pattern, Entity Framework & LINQ

 

Firstly, just to clarify, I am going to be talking about the OOP Specification Pattern, not the data pattern commonly found in the SID (Shared Information & Data) model.

Much has been said about the specification pattern so I’m not going to go into that; if you want an overview, check out these posts:

http://www.lostechies.com/blogs/chrismissal/archive/2009/09/10/using-the-specification-pattern-for-querying.aspx
http://devlicio.us/blogs/jeff_perrin/archive/2006/12/13/the-specification-pattern.aspx

In this post I’m going to demonstrate how you can make use of the specification pattern to query Entity Framework and create reusable, testable query objects and eliminate inline LINQ queries.

The Smell

When I first got started with Entity Framework way back in 2008, when EF was still in its infancy, we had lots of inline LINQ all over the code base and specific methods on our repositories for each querying requirement (which any OOP purist will tell you is bad).

We had a service layer method which more or less looked something like:

     public IEnumerable<Product> FindAllActiveProducts(string keyword)
     {
         return productRepository.FindAllActive(keyword); 
     }
And then on our repository a method which looked like:
        public IEnumerable<Product> FindAllActive(string keyword)
        {
            var query = context.CreateObjectSet<Product>().Include("Category");


            var products = from p in query
                           where p.IsActive
                                 && p.Name.Contains(keyword)
                           select p;

            return products.ToList();

        }

Now whilst this is not all bad it does present a few code smells:

  • You end up with lots of methods on your repositories to handle different query scenarios
  • There’s no way to test the query in isolation
  • Magic strings for the Include path

I’m not going to touch on whether or not you should be using repositories for this type of query scenario because that’s a whole other topic and a quick Google search will yield many posts debating this very subject. 

However I will say that if you find yourself with lots of methods on your repositories that only perform query operations then this is a big code smell.

The Specification Interface

The first step is to define an interface for our Specification.

   public interface ISpecification<T>
   {
       Expression<Func<T, bool>> Predicate { get; }

       IFetchStrategy<T> FetchStrategy { get; set; }

       bool IsSatisifedBy(T entity);
   }

If you’re familiar with this pattern you will notice the addition of two properties: Predicate and FetchStrategy.

The Predicate is ultimately what will be used to perform the query. You will notice this is read-only, which forces it to be defined within the specification implementation.

The FetchStrategy is an abstraction which defines the child objects that should be retrieved when loading the entity. More on this below.

Fetch Strategy

For those of you who don’t know, when you load an entity from EF & other ORMs you can choose either to load just the root properties or to load the related entities at the same time. The way to do this in EF is by using the .Include method on the ObjectQuery.

This works fine; however, fetch strategies are likely to be reused in different places, so having .Include with magic strings everywhere becomes a real maintenance headache.

In order to alleviate this pain I’ve created an abstraction on the concept. 

   public interface IFetchStrategy<T>
   {
       IEnumerable<string> IncludePaths { get; }

       IFetchStrategy<T> Include(Expression<Func<T, object>> path);

       IFetchStrategy<T> Include(string path);
   }

 

Here is a generic implementation of this IFetchStrategy.

    public class GenericFetchStrategy<T> : IFetchStrategy<T>
    {
        private readonly IList<string> properties;

        public GenericFetchStrategy()
        {
            properties = new List<string>();
        }

        #region IFetchStrategy<T> Members

        public IEnumerable<string> IncludePaths
        {
            get { return properties; }
        }

        public IFetchStrategy<T> Include(Expression<Func<T, object>> path)
        {
            properties.Add(path.ToPropertyName());
            return this;
        }

        public IFetchStrategy<T> Include(string path)
        {
            properties.Add(path);
            return this;
        }

        #endregion
    }

    public static class Extensions
    {
        public static string ToPropertyName<T>(this Expression<Func<T, object>> selector)
        {
            var me = selector.Body as MemberExpression;
            if (me == null)
            {
                throw new ArgumentException("MemberExpression expected.");
            }

            var propertyName = me.ToString().Remove(0, 2);
            return propertyName;
        }
    }

This is all fairly self-explanatory: all it does is maintain a list of the include paths and provide a fluent interface. All the ToPropertyName extension does is take a LINQ expression and return the name of the property.

Do note, however, that there is still one Include method that takes a string as the parameter.
This is here to support really deep object hierarchies which can’t be represented as an expression, as sketched below.
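
Here is a quick usage sketch (the Parent property is hypothetical, purely to show a deep path):

    var strategy = new GenericFetchStrategy<Product>()
        .Include(p => p.Category)               // expression include -> "Category"
        .Include("Category.Parent.Products");   // string include for a deeper path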

You could easily create your own implementation for each scenario, e.g. a FullProductFetchStrategy, and use that; however, I tend to define the fetch strategy within the specification itself, as you will soon see.

The Specification Implementation

First off we have a base class which contains the basic functionality and implements the ISpecification interface.

    public abstract class SpecificationBase<T> : ISpecification<T>
    {
        protected IFetchStrategy<T> fetchStrategy;
        protected Expression<Func<T, bool>> predicate;

        protected SpecificationBase()
        {
            fetchStrategy = new GenericFetchStrategy<T>();
        }

        public Expression<Func<T, bool>> Predicate
        {
            get { return predicate; }
        }

        public IFetchStrategy<T> FetchStrategy
        {
            get { return fetchStrategy; }
            set { fetchStrategy = value; }
        }

        public bool IsSatisifedBy(T entity)
        {
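            // Wrap the single entity in an in-memory IQueryable so the same
            // expression tree used for querying can evaluate it directly.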
            return new[] {entity}.AsQueryable().Any(predicate); 
        }
    }

I have given the fetch strategy a getter & setter as it provides a bit of flexibility to the consumers, but arguably you could make this read-only and force instantiation in the constructor.

Now, returning to the original example of finding active products whose name matches the keyword provided, here is the ActiveProductsByNameSpec.

   public class ActiveProductsByNameSpec : SpecificationBase<Product>
   {
       public ActiveProductsByNameSpec(string keyword)
       {
           predicate = p => p.Name.Contains(keyword) && p.IsActive;

           fetchStrategy = new GenericFetchStrategy<Product>().Include(p => p.Category);
       }
   }

As you can see everything is defined in the constructor.

Now, you could expose a property for the keyword argument and have a single method which builds the predicate.

My preference, however, is to do everything in the constructor, as it is immediately obvious what the requirements of this class are; by exposing properties you risk having required values not set, leading to subtle bugs.

 

Testing the Specification

Testability for me is one of the greatest benefits of using this pattern. With deep object graphs, LINQ queries can soon grow in size and complexity. More code == More chance of bugs.

I can count 4 different test cases for this spec and here’s how we can test them.

 
        [TestMethod]
        public void When_Product_Not_Active_Predicate_Should_Find_No_Match()
        {
            var product = new Product {IsActive = false, Name = "Resharper"};
            
            var spec = new ActiveProductsByNameSpec("Resharper");

            var actual = spec.IsSatisifedBy(product); 

            Assert.IsFalse(actual);

        }

        [TestMethod]
        public void 
            When_Product_IsActive_But_Does_Not_Contain_Keyword_Predicate_Should_Find_No_Match()
        {
            var product = new Product { IsActive = true, Name = "Visual Studio" };

            var spec = new ActiveProductsByNameSpec("Resharper");

            var actual = spec.IsSatisifedBy(product);

            Assert.IsFalse(actual);

        }

        [TestMethod]
        public void 
            When_Product_Does_Not_Contain_Keyword_And_Is_Not_Active_Predicate_Should_Find_No_Match()
        {
            var product = new Product { IsActive = false, Name = "Visual Studio" };

            var spec = new ActiveProductsByNameSpec("Resharper");

            var actual = spec.IsSatisifedBy(product);

            Assert.IsFalse(actual);
        }

        [TestMethod]
        public void 
            When_Product_IsActive_And_Contains_Keyword_Predicate_Should_Find_Match()
        {
            var product = new Product { IsActive = true, Name = "Resharper" };

            var spec = new ActiveProductsByNameSpec("Resharper");

            var actual = spec.IsSatisifedBy(product);

            Assert.IsTrue(actual);
        }

 

The Generic Repository

Now that we have our spec we need to create a generic repository which takes the ISpecification interface and returns some Entities.

   public interface IGenericQueryRepository
   {
       T Load<T>(ISpecification<T> spec);

       IEnumerable<T> LoadAll<T>(ISpecification<T> spec);

       bool Matches<T>(ISpecification<T> spec);
   }

And this is implemented by:

    public class GenericQueryRepository : IGenericQueryRepository
    {
        private ObjectContext context;

        #region IGenericQueryRepository Members

        public T Load<T>(ISpecification<T> spec)
        {
            var query = GetQuery(spec.FetchStrategy);

            return query.FirstOrDefault(spec.Predicate);
        }

        public IEnumerable<T> LoadAll<T>(ISpecification<T> spec)
        {
            var query = GetQuery(spec.FetchStrategy);

            // Apply the specification's predicate; without it this would return every entity.
            return query.Where(spec.Predicate).ToList();
        }

        public bool Matches<T>(ISpecification<T> spec)
        {
            var query = GetQuery(spec.FetchStrategy);

            return query.Any(spec.Predicate);
        }

        #endregion

        private IQueryable<T> GetQuery<T>(IFetchStrategy<T> fetchStrategy)
        {
            ObjectQuery<T> query = context.CreateObjectSet<T>();

            if (fetchStrategy == null)
            {
                return query;
            }

            foreach (var path in fetchStrategy.IncludePaths)
            {
                query = query.Include(path);
            }

            return query;
        }
    }

 

Pulling it all together

Now that we have a generic repository we can safely get rid of the FindAllActive method on our ProductRepository and instead change our service layer to depend on the IGenericQueryRepository and instantiate the specification like so.

       public IEnumerable<Product> FindAllActiveProducts(string keyword)
       {
           var spec = new ActiveProductsByNameSpec(keyword);

           return queryRepository.LoadAll(spec); 
       }

 

Conclusion

Well that’s it. I hope I’ve demonstrated how you can reduce the number of inline LINQ queries and improve the testability of such queries. On my team it is now a rule that all LINQ to Entities queries are defined as Specifications.

The only element of this approach that is specific to Entity Framework is the fetch strategy implementation, and I’m sure this could easily be adapted to fit other ORMs that support LINQ.

As always I’m happy to hear any feedback and feel free to contact me should you need any clarification.