The experiences of a software developer as he wades through the dynamic world of technology. Discussions of new industry developments and current technologies he finds himself wrapped up in.

Thursday, August 24, 2006

Using Microsoft's Service Factory

It's obvious that developing a new system is a lot of work. There are many things to consider when designing it, and that can be very time consuming. Since many of the architectural hurdles you face have already been tackled by others in the industry, it only makes sense to learn from their experiences. Doing so saves you from making the same mistakes they already made, which saves you time, and that in turn saves you money.

Microsoft's Patterns & Practices team provides recommendations on how to "design, develop, deploy, and operate architecturally sound applications for the Microsoft application platform." By leveraging their 'software factories', architects and developers can spend their time on business opportunities and on building an application that addresses the needs of their organization.

Microsoft patterns & practices contain deep technical guidance and tested source code based on real-world experience. The technical guidance is created, reviewed, and approved by Microsoft architects, product teams, consultants, product support engineers, and by Microsoft partners and customers. The result is a thoroughly engineered and tested set of recommendations that you can follow with confidence when building your applications.


I recently started a .NET design for a new system, and while researching 'best practices' I came across Microsoft's newest addition to their Patterns & Practices software factories - the Web Service Software Factory (a.k.a. the Service Factory). Since I was designing a service-oriented application, this software factory fit the bill.

Without getting into the specifics of the system I'm designing, its main component is a web service that exposes various web methods for placing orders. Having a strong understanding of SOA (service-oriented architecture), I know the general architectural requirements for a web-service-driven application. What I wasn't familiar with were the slight nuances of designing one using the .NET Framework (which I am new to). Discovering the Web Service Software Factory gave me the boost I needed.

Once you download and install the Service Factory and related Guidance Packages, you will have a new project type available in Visual Studio. Under 'Project Types' there will be a subcategory:


  • Guidance Packages
    • Web Service Software Factory (ASMX)
      • ASMX Service (this is an installed template)

When you create a new Visual Studio project using the ASMX Service template, a default project structure is created for you, following the Service Factory architecture. Three specific layers are created, separating the components of your solution. Each of the three components should be cohesive, at about the same level of abstraction, and loosely coupled. Below are descriptions of each layer as defined by Microsoft.

Service Interface Layer: This layer defines the operations of a service, the messages required to interact with each operation, and the patterns by which these messages interact—these patterns are referred to as message exchange patterns. The service interface layer contains a service contract, which describes the behavior of a service and the messages required as the basis for interaction. The service interface layer may also contain a service adapter, which is used to implement the service contract and to expose this functionality on a certain endpoint.

Business Layer: This layer incorporates components that implement the business logic of the application. Simple services often require only a very simple business action, but services with more complex requirements may implement a Controller pattern or business rules for implementing service behavior. The business layer also includes business entities that are used to represent objects specific to a business domain. These entities can incorporate both state and behavior. This layer can also include business workflows. These define and coordinate long-running, multi-step business processes.

Resource Access Layer: This layer contains the logic necessary to access data. The layer also contains service agents, which isolate the nuances of calling diverse services from an application and can provide additional services, such as basic mapping between the format of the data exposed by the service and the format an application requires.
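
Although the Service Factory itself targets .NET, the layering it prescribes is language-agnostic. As a purely illustrative sketch (written in Java only because that's the language used elsewhere on this blog, and with every name below being hypothetical rather than anything generated by the factory), the separation of concerns looks roughly like this:

// Hypothetical sketch only - these names are not produced by the Service Factory.

// Service interface layer: the operations and messages a caller sees.
interface OrderService {
    String placeOrder(String customerId, int quantity);
}

// Resource access layer: isolates how and where the data is stored.
interface OrderRepository {
    void save(String orderId, String customerId, int quantity);
}

// Business layer: implements the contract, applies the business rules,
// and delegates persistence to the resource access layer.
class OrderManager implements OrderService {
    private final OrderRepository repository;

    OrderManager(OrderRepository repository) {
        this.repository = repository;
    }

    public String placeOrder(String customerId, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        String orderId = java.util.UUID.randomUUID().toString();
        repository.save(orderId, customerId, quantity);
        return orderId;
    }
}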


Under each layer, the Guidance Package creates separate projects representing the architecture of the Service Factory. For example, if your solution were named 'MyService', it would have a structure like the following:


Solution 'MyService'
   Source
      Service
         MyService.DataTypes
         MyService.ServiceContracts
         MyService.ServiceImplementation
         C:\MyProjects\MyService.Host\
      Business Component
         MyService.BusinessEntities
         MyService.BusinessLogic
      Data Access Layer
         MyService.DataAccess

By right-clicking on most of the projects, such as 'MyService.DataTypes' or 'MyService.BusinessEntities', you are able to invoke a wizard that creates particular components of the application. Making use of these wizards makes it easy to stay within the Service Factory's architecture. The wizards can be a little confusing the first few times you use them, leaving you wondering what information you are supposed to enter. I made it through by referencing the included help system, as well as the examples provided.

The Service Factory is fairly flexible, allowing you to create your web service as you see fit, and to whatever level of complexity you need. For example, you can define your message types following an implicit (parameter programming model) or explicit (message contract design programming model) approach, depending on what your organization requires. And since services designed using the .NET platform default to SOAP document-based message formatting, your web service maintains its message-based architecture.

So far (in my early development using the Service Factory) I've been able to get away with using the unmodified version of the Software Factory. However, it's good to know that I am able to customize it if the need arises. There are detailed instructions on how to:


  • Modify Responsibilities for the Data Access Layer

  • Modify Responsibilities for the Service Layer

  • Create Custom Solution Structures

  • Create New Guidance Packages that Include Your Configuration Settings

  • Modify the Documentation


With this level of customization available, many architects and developers may feel less apprehensive about using the Service Factory.

Like I said, I'm still early in the development of the web service I've been referring to. I haven't actually created any 'real' code yet, but have been using the Service Factory to aid in the design phase of development. As I create the data models and service contracts for the service, the Service Factory has been indispensable in helping me define the core components of my application. I'll be sure to add a follow-up post documenting my experiences once I start creating the meat of the service.


Thursday, August 17, 2006

Using Google's Blogger Data API

Google's Blogger Data API allows you to write client applications to manipulate Blogger content using GData feeds. For those not familiar with GData (Google Data API), Google defines it as follows:


A simple standard protocol for reading and writing data on the web. GData combines common XML-based syndication formats (Atom and RSS) with a feed-publishing system based on the Atom publishing protocol, plus some extensions for handling queries.


Google provides client libraries in both Java and C#, allowing for easy retrievals, additions, and updates of Blogger feeds. You can also manipulate the feeds manually, providing XML representing an entry. Unless you already have a client that constructs this data, I can see no reason not to use the client library Google provides - it really makes working with the feeds easy.

One thing you'll have to keep in mind is that Blogger is currently going through a transition phase. As it stands now, Blogger uses its own credentials, but there is a new version, still in beta, that uses Google accounts. This can pose problems during any authentication phase (such as an update to an entry of a feed). The API docs do a good job of explaining how to handle this issue, so be sure to take note of it.

Like I stated earlier, using the Google Blogger API is easy. There is a general flow your application will have to follow to communicate with Blogger:

  1. Retrieve the feed URL for a blog.

  2. Using that feed URL, make a request to retrieve the blog's feed.

  3. Manipulate the feed, that is, add, update, or query a post.


Retrieving the feed URL can be done by making an HTTP GET request to the blog's URL, that is, the one you would use in your web browser. Once you receive the response, you will have to parse the returned HTML for the <head> section. There you will find the <link rel="alternate"> tags. Depending on which version of Blogger you're talking to (current, or the new beta), there will be different feed URLs. Regardless, you should always use the Atom 1.0 version - may as well stay current.
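
As a rough sketch of that first step (using a plain java.net connection and a naive line-by-line scan rather than a real HTML parser; the blog address below is just a placeholder), it might look something like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FeedUrlFinder {
    public static void main(String[] args) throws Exception {
        // The blog address is just an example; substitute your own.
        URL blogUrl = new URL("http://example.blogspot.com/");
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(blogUrl.openStream(), "UTF-8"));

        String line;
        while ((line = reader.readLine()) != null) {
            // Naive scan: look for the alternate Atom link from the <head> section.
            if (line.contains("rel=\"alternate\"")
                    && line.contains("type=\"application/atom+xml\"")) {
                System.out.println("Feed link element: " + line.trim());
            }
        }
        reader.close();
    }
}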

Once you have the feed URL, you can use the Blogger API to get the actual feed. As Google explains in their documentation, you can write the following Java code to do this. Obviously you'll have to replace the URL string with the one you retrieved in the previous step. The credentials, too, will have to match the Blogger account you are referring to.

// Requires the GData Java client library on the classpath.
import com.google.gdata.client.GoogleService;
import com.google.gdata.data.Feed;
import java.net.URL;

URL feedUrl = new URL("http://www.blogger.com/feeds/blogID/posts/full");
GoogleService myService = new GoogleService("blogger", "exampleCo-exampleApp-1");

// Set up authentication (optional for beta Blogger, required for current Blogger):
myService.setUserCredentials("liz@gmail.com", "mypassword");

// Send the request and receive the response:
Feed myFeed = myService.getFeed(feedUrl, Feed.class);

// Print the title of the returned feed:
System.out.println(myFeed.getTitle().getPlainText());

To add a post to the blog using the client library, you can use the following Java code as a guideline, based on Google's example (once again replacing the URL with the appropriate value).

// GoogleService, Entry, PlainTextConstruct, and Person all come from the GData client library.
GoogleService myService = new GoogleService("blogger", "exampleCo-exampleApp-1");

// sessionToken is an AuthSub session token obtained earlier through the AuthSub web flow:
myService.setAuthSubToken(sessionToken, null);

URL postUrl = new URL("http://www.blogger.com/feeds/blogID/posts/full");
Entry myEntry = new Entry();

myEntry.setTitle(new PlainTextConstruct("Marriage!"));
myEntry.setContent(new PlainTextConstruct(
      "<p>Mr. Darcy has <em>proposed marriage</em> to me!</p>"
    + "<p>He is the last man on earth I would ever desire to marry.</p>"
    + "<p>Whatever shall I do?</p>"));

Person author = new Person("Elizabeth Bennet", null, "liz@gmail.com");
myEntry.getAuthors().add(author);

// Send the request and receive the response:
Entry insertedEntry = myService.insert(postUrl, myEntry);

The GData API also allows you to make queries against a feed based on a set of criteria you provide. This can range from searching for entries containing a specific word, to posts that fall between two dates. As Google has illustrated, the following code returns a feed containing the blog posts that fall within a specified date range.

URL feedUrl = new URL("http://www.blogger.com/feeds/blogID/posts/full");

Query myQuery = new Query(feedUrl);
myQuery.setUpdatedMin(DateTime.parseDateTime("2006-03-16T00:00:00"));
myQuery.setUpdatedMax(DateTime.parseDateTime("2006-03-24T23:59:59"));

GoogleService myService = new GoogleService("blogger", "exampleCo-exampleApp-1");

// Send the request and receive the response:
Feed resultFeed = myService.query(myQuery, Feed.class);
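
From there you would typically walk the entries in the returned feed. As a small follow-on sketch, assuming the same GData client library classes as in the earlier examples, you might print each matching post's title like this:

// Walk the entries in the result feed and print each post's title.
for (Entry entry : resultFeed.getEntries()) {
    System.out.println(entry.getTitle().getPlainText());
}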

From the examples I have provided, I hope to have demonstrated how simple it can be to communicate with the Blogger service using the Google Blogger and GData APIs. For those who wish to integrate a blogging solution into their applications, these APIs provide a backbone to build upon. If you are serious about diving into these libraries, I highly suggest reading the full docs, as this post was simply meant to act as a primer and get you excited. Keep in mind that Google also provides an API for Google Calendar. I've hacked around with it as well, and using it is very similar to the Blogger API. A GoogleBase API is also on the way, and I have the privilege of being a 'trusted tester', giving me early access. As the suite of Google APIs continues to grow, developers are getting the opportunity to push out some really kickin' apps.


Wednesday, August 09, 2006

Selective Copy Using Recursion in XSLT

As you continue to work with XML and XSLT, you may find yourself needing to copy an entire XML file while suppressing certain elements. Take a credit card transaction represented in an XML file, for example. Let's say that once the transaction completes, a confirmation email is sent to the user. We'll assume that the confirmation email is generated by forwarding the transaction XML to some external web service as a message (perhaps via JMS - the Java Message Service). You feel that the email should contain all of the transaction information, but it would be more secure not to include the credit card number, for obvious reasons.

If you're using XPath 2.0, you have a fairly easy way to suppress nodes by using the 'except' operator (for example, something like select="* except CreditCardNumber"). But if you're still using an older version (as the majority of the industry is at the moment), this task, which sounds as if it should be relatively simple, can become quite difficult, especially if you're working with a complex XML document.

You may be able to get away with using:


<xsl:copy-of select="node1|node2|node4|node5"/>

The above would copy all nodes except 'node3'. As you can imagine, this becomes very tiresome, and difficult to write, for complex documents. While fighting with this exact issue, I discovered a simple way to exclude nodes using recursion in my XSLT documents. Keeping with our credit card transaction example, let's consider the following XML.


<?xml version="1.0" encoding="UTF-8"?>
<transaction orderID="100">
   <LastName>Smith</LastName>
   <FirstName>John</FirstName>
   <CreditCardType>VISA</CreditCardType>
   <CreditCardExpMonth>10</CreditCardExpMonth>
   <CreditCardExpYear>2010</CreditCardExpYear>
   <CreditCardNumber>4111111111111111</CreditCardNumber>
</transaction>

Obviously, this is a very simple document and it would be easy enough to state which nodes to copy, as I did previously. But to make things easier to follow, let's use it to illustrate the recursive solution.

By using the XSLT below, the complete XML file representing the transaction will be copied.


<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
   <xsl:template match="/">
      <!-- XML is case-sensitive, so this must match the root element exactly -->
      <xsl:apply-templates select="transaction" mode="RecursiveDeepCopy" />
   </xsl:template>
   <!-- Recursively copy each node, its attributes, and its children -->
   <xsl:template match="@*|node()" mode="RecursiveDeepCopy">
      <xsl:copy>
         <xsl:copy-of select="@*" />
         <xsl:apply-templates mode="RecursiveDeepCopy" />
      </xsl:copy>
   </xsl:template>
</xsl:stylesheet>

But we're not done yet. We still need to exclude the credit card number. This can be accomplished by adding another 'template' whose 'match' value is the name of the node we want to suppress. Since we leave the template empty, the node will not be copied.


<xsl:template match="CreditCardNumber" mode="RecursiveDeepCopy" />

You can add these empty templates for any node you want to exclude. If credit card type was considered to be sensitive, we could simply add another template.


<xsl:template match="CreditCardType" mode="RecursiveDeepCopy" />
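
To run the finished stylesheet, the standard javax.xml.transform API that ships with the JDK is all you need. Here is a minimal sketch, assuming the stylesheet and transaction above are saved as suppress-card.xsl and transaction.xml (all three file names are just placeholders):

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SuppressCardNumber {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet containing the recursive deep copy
        // and the empty suppression templates.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("suppress-card.xsl")));

        // Transform the transaction document; the output is the same transaction
        // minus the elements matched by the empty templates.
        transformer.transform(
                new StreamSource(new File("transaction.xml")),
                new StreamResult(new File("transaction-for-email.xml")));
    }
}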

I personally found this technique for performing a selective deep copy extremely handy. In my case I was generating multiple versions of a feed using the same XSLT, but certain ones needed to have specific private information blocked (such as email addresses). I had struggled with this problem and settled for writing complex XSLT templates; once I discovered the recursive method, everything was smooth sailing.
