Friday, October 4, 2013

How to Create a Custom Binding in KnockoutJS


Why Custom Binding?

This question might arise when you read the title of this article. You should reach for a custom binding when you want more control and flexibility over the elements and observables you use. If you want to create your own interactive controls, custom binding is the right way to go. The KnockoutJS documentation itself provides a good example of a custom binding.

All the bindings available in KnockoutJS are sub-properties of the "ko.bindingHandlers" object.
 
Creating Custom Binding

To create a custom binding, add a property with your custom binding name to ko.bindingHandlers and assign it an object with two callback functions.
 
ko.bindingHandlers.myCustomBindingName = {
    init: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
    },
    update: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
    }
};
 

Here you can see that we have added a property myCustomBindingName assigned an object that has init and update callbacks. Let us look at the callbacks in detail.
 
The init callback

The init callback function is executed when the binding is applied for the very first time. In this callback we can perform the initial setup necessary for the custom binding, such as attaching event handlers, setting up initial state for variables and so on.

Note: the init callback function will be called only once, when the binding is first applied.

The update callback

The update callback function is called whenever the associated observable changes. If, when binding your custom binding handler to the element, you assigned an observable to it, the update callback executes every time that observable changes.

For example, suppose you have a custom binding handler, say "animateItem", that performs an animation whenever an item is added to a collection. The animateItem binding handler accepts an observable whose true/false value decides whether the added item should be animated on the screen.
 
Whenever you update the observable, the update callback of your binding handler will be called with the updated value.
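To make this lifecycle concrete, here is a small self-contained JavaScript simulation; makeObservable and applyBinding are hypothetical stand-ins for illustration only, not Knockout's real internals. It shows that init runs exactly once, while update runs once at binding time and again on every change:

```javascript
// Toy observable: call with no arguments to read, with one argument to write.
function makeObservable(initial) {
  var value = initial;
  var subscribers = [];
  var observable = function (newValue) {
    if (arguments.length === 0) { return value; } // read
    value = newValue;                             // write
    subscribers.forEach(function (fn) { fn(value); });
  };
  observable.subscribe = function (fn) { subscribers.push(fn); };
  return observable;
}

// Toy applyBinding: mimics when Knockout invokes a handler's callbacks.
function applyBinding(handler, element, observable) {
  var valueAccessor = function () { return observable; };
  handler.init(element, valueAccessor);       // called exactly once
  handler.update(element, valueAccessor);     // initial update
  observable.subscribe(function () {          // re-run update on every change
    handler.update(element, valueAccessor);
  });
}

var calls = { init: 0, update: 0 };
var logEntry = makeObservable("hello");

applyBinding({
  init: function () { calls.init += 1; },
  update: function (element, valueAccessor) {
    calls.update += 1;
    var text = valueAccessor()(); // unwrap the observable's current value
  }
}, {}, logEntry);

logEntry("first change");
logEntry("second change");
// calls is now { init: 1, update: 3 }
```

Real Knockout behaves the same way: one init per element, plus one update at binding time and one per change of the associated observable.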
 
The parameters of callback functions
 
These init and update callback functions take the same set of parameters: element, valueAccessor, allBindingsAccessor, viewModel and bindingContext. Let us discuss them one by one.
 
  • element - The DOM element on which our custom binding is applied.
  • valueAccessor - A JavaScript function that returns the value assigned to the binding. You can use the "ko.unwrap" utility function to get the underlying value.
  • allBindingsAccessor - A JavaScript function that returns the values of all the bindings bound to the current DOM element. Suppose the element also has other KO bindings, say value and visible; allBindingsAccessor will give you the values of those bindings as well.
  • viewModel - The view model object bound to the entire page using "ko.applyBindings()". When your binding is inside a with binding or foreach binding, the viewModel parameter holds the $data context instead of the main view model.
  • bindingContext - The binding context available to the DOM element. It exposes the other binding contexts ($parent, $parents, $root and so on) as special properties.

Example
 
We will discuss custom bindings with an example: a custom binding that displays a change log for an input box.
 
<p>Change this value: <input data-bind="value: logEntry, valueUpdate: 'keyup'" /></p>
<div data-bind="yourBindingName: logEntry">
    <ul>
    </ul>
</div>

In this example view, we have an input element bound to the logEntry observable via the value binding. And we have a div to which we have bound our custom binding "yourBindingName". Since we have associated the logEntry observable with our custom binding, the update callback function will be executed whenever the text in the input box is changed.
ko.bindingHandlers.yourBindingName = {
    init: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
        // Set up the events, variables...
        $(element).css("background-color", "rgb(228, 222, 222)");
    },
    update: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
        var text = ko.unwrap(valueAccessor()); // Get the updated log text
        $('ul', element).append('<li>The new text is : ' + text + '</li>'); // Add it to the ul as an li
    }
};

var viewModel = { logEntry: ko.observable("Type something") };
ko.applyBindings(viewModel);

 
In the init callback function, as initial setup, we set the background color of the change-log container. When you start changing the text, the update callback is called. In the update callback we unwrap the valueAccessor to get the value passed to our custom binding, and then append the changed text to the ul element as an li item.

Creating a Bootstrapper For Prism Application


Introduction
This article shows how to create a bootstrapper for your Prism application using WPF. Let's start with the definition of what the bootstrapper is, how it works and its relation to Prism.

Bootstrapper
A bootstrapper is a class responsible for the initialization of an application built using the Prism Library. By using a bootstrapper, you have more control of how the Prism Library components are wired to your application.

The Prism Library includes a default abstract Bootstrapper base class that can be specialized for use with any container. Many of the methods on the bootstrapper classes are virtual methods. You can override these methods as appropriate in your own custom bootstrapper implementation.

Basic stages of the bootstrapping process

  1. Create a LoggerFacade.
  2. Create and configure the module catalog.
  3. Create and configure the container.
  4. Configure default region adapter mappings.
  5. Configure default region behaviors.
  6. Register framework exception types.
  7. Create the shell.
  8. Initialize the shell.
  9. Initialize the modules.
Creating and Configuring the Module Catalog
If you are building a module application then you will need to create and configure a module catalog. Prism uses a concrete IModuleCatalog instance to track what modules are available to the application, which modules may need to be downloaded, and where the modules reside.

The Bootstrapper provides a protected ModuleCatalog property to reference the catalog as well as a base implementation of the virtual CreateModuleCatalog method. The base implementation returns a new ModuleCatalog; however, this method can be overridden to provide a different IModuleCatalog instance instead, as shown in the following code from the QuickStartBootstrapper in the Modularity with MEF for Silverlight QuickStart.


protected override IModuleCatalog CreateModuleCatalog()
{
    // When using MEF, the existing Prism ModuleCatalog is still
    // the place to configure modules via configuration files.
    return ModuleCatalog.CreateFromXaml(new Uri(
   "/ModularityWithMef.Silverlight;component/ModulesCatalog.xaml",
    UriKind.Relative));
}

In both the UnityBootstrapper and MefBootstrapper classes, the Run method calls the CreateModuleCatalog method and then sets the class's ModuleCatalog property using the returned value. If you override this method then it is not necessary to call the base class's implementation because you will replace the provided functionality.

Creating and Configuring the Container
Containers play a key role in an application created with the Prism Library. Both the Prism Library and the applications built on top of it depend on a container for injecting required dependencies and services. During the container configuration phase, several core services are registered. In addition to these core services, you may have application-specific services that provide additional functionality as it relates to composition.

Dependency Injection
Applications built with the Prism Library rely on dependency injection provided by a container. The library provides assemblies that work with the Unity Application Block (Unity) or Managed Extensibility Framework (MEF), and it allows you to use other dependency injection containers. Part of the bootstrapping process is to configure this container and register types with the container.

The Prism Library includes the UnityBootstrapper and MefBootstrapper classes, which implement most of the functionality necessary to use either Unity or MEF as the dependency injection container in your application. In addition to the stages listed above, each bootstrapper adds some steps specific to its container.

Core Services
The following list details the core, non-application-specific services in the Prism Library.
  1. IModuleManager: Defines the interface for the service that will retrieve and initialize the application's modules.
  2. IModuleCatalog: Contains the metadata about the modules in the application. The Prism Library provides several different catalogs.
  3. IModuleInitializer: Initializes the modules.
  4. IRegionManager: Registers and retrieves regions, which are visual containers for layout.
  5. IEventAggregator: A collection of events that are loosely coupled between the publisher and the subscriber.
  6. ILoggerFacade: A wrapper for a logging mechanism, so you can choose your own logging mechanism. The Stock Trader Reference Implementation (StockTrader RI) uses the Enterprise Library Logging Application Block, via the EnterpriseLibraryLoggerAdapter class, as an example of how you can use your own logger. The logging service is registered with the container by the bootstrapper's Run method, using the value returned by the CreateLogger method. Registering another logger with the container will not work; instead, override the CreateLogger method on the bootstrapper.
  7. IServiceLocator: Allows the Prism Library to access the container. If you want to customize or extend the library, this may be useful.
Application-Specific Services
The following list details the application-specific services used in the Stock Trader RI. It can be used as an example to understand the types of services your application may provide.

  1. IMarketFeedService: Provides real-time (mocked) market data. The PositionSummaryPresentationModel updates the position screen based on notifications it receives from this service.
  2. IMarketHistoryService: Provides historical market data used for displaying the trendline for the selected fund.
  3. IAccountPositionService: Provides the list of funds in the portfolio.
  4. IOrdersService: Persists submitted buy/sell orders.
  5. INewsFeedService: Provides a list of news items for the selected fund.
  6. IWatchListService: Handles when new watch items are added to the watch list.
There are two Bootstrapper-derived classes available in Prism, the UnityBootstrapper and the MefBootstrapper. Creating and configuring the various containers involve similar concepts that are implemented differently.

Creating and Configuring the Container in the UnityBootstrapper
The UnityBootstrapper class's CreateContainer method simply creates and returns a new instance of a UnityContainer. In most cases, you will not need to change this functionality; however, the method is virtual, thereby allowing that flexibility.

After the container is created, it probably needs to be configured for your application. The ConfigureContainer implementation in the UnityBootstrapper registers a number of core Prism services by default, as shown here.

An example of this is when a module registers module-level services in its Initialize method.

protected virtual void ConfigureContainer()
{
   ...
   if (useDefaultConfiguration)
   {
         RegisterTypeIfMissing(typeof(IServiceLocator), typeof(UnityServiceLocatorAdapter), true);
         RegisterTypeIfMissing(typeof(IModuleInitializer), typeof(ModuleInitializer), true);
         RegisterTypeIfMissing(typeof(IModuleManager), typeof(ModuleManager), true);
         RegisterTypeIfMissing(typeof(RegionAdapterMappings), typeof(RegionAdapterMappings), true);
         RegisterTypeIfMissing(typeof(IRegionManager), typeof(RegionManager), true);
         RegisterTypeIfMissing(typeof(IEventAggregator), typeof(EventAggregator), true);
         RegisterTypeIfMissing(typeof(IRegionViewRegistry), typeof(RegionViewRegistry), true);
         RegisterTypeIfMissing(typeof(IRegionBehaviorFactory), typeof(RegionBehaviorFactory), true);
    }
}

The bootstrapper's RegisterTypeIfMissing method determines whether a service has already been registered and, if so, does not register it again. This allows you to override the default registration through configuration. You can also turn off the default registration of all services; to do this, use the overloaded Bootstrapper.Run method, passing in false. You can also override the ConfigureContainer method and disable services that you do not want to use, such as the event aggregator.
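The first-registration-wins behavior can be sketched in a few lines of JavaScript (a hypothetical illustration of the idea only, not Prism's actual C# implementation):

```javascript
var registrations = new Map();

// Register a factory under a service name only if nothing is registered yet.
function registerTypeIfMissing(name, factory) {
  if (registrations.has(name)) {
    return false; // already registered: leave the existing registration alone
  }
  registrations.set(name, factory);
  return true;
}

// Your own registration runs first...
registerTypeIfMissing("IEventAggregator", function () { return "custom aggregator"; });
// ...so the later default registration is skipped.
registerTypeIfMissing("IEventAggregator", function () { return "default aggregator"; });

var service = registrations.get("IEventAggregator")(); // "custom aggregator"
```

Because the default registrations only fill gaps, registering your own implementation before the defaults run is enough to override them.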

If you turn off the default registration then you will need to manually register the required services. To extend the default behavior of ConfigureContainer, simply add an override to your application's bootstrapper and optionally call the base implementation, as shown in the following code from the QuickStartBootstrapper from the Modularity for WPF (with Unity) QuickStart. This implementation calls the base class's implementation, registers the ModuleTracker type as the concrete implementation of IModuleTracker, and registers the callbackLogger as a singleton instance of CallbackLogger with Unity.
protected override void ConfigureContainer()
{
   base.ConfigureContainer();
   this.RegisterTypeIfMissing(typeof(IModuleTracker), typeof(ModuleTracker), true);
   this.Container.RegisterInstance<CallbackLogger>(this.callbackLogger);
}

Creating and Configuring the Container in the MefBootstrapper
The MefBootstrapper class's CreateContainer method does several things. First, it creates an AssemblyCatalog and a CatalogExportProvider. The CatalogExportProvider allows the MefExtensions assembly to provide default exports for a number of Prism types and still allows you to override the default type registration. Then CreateContainer creates and returns a new instance of a CompositionContainer using the CatalogExportProvider. In most cases, you will not need to change this functionality; however, the method is virtual, thereby allowing that flexibility.

After the container is created, it needs to be configured for your application. The ConfigureContainer implementation in the MefBootstrapper registers a number of core Prism services by default, as shown in the following code example. If you override this method, consider carefully whether you should invoke the base class's implementation to register the core Prism services, or if you will provide these services in your implementation.
 
protected virtual void ConfigureContainer()
{
   this.RegisterBootstrapperProvidedTypes();
}

protected virtual void RegisterBootstrapperProvidedTypes()
{
     this.Container.ComposeExportedValue<ILoggerFacade>(this.Logger);
     this.Container.ComposeExportedValue<IModuleCatalog>(this.ModuleCatalog);
     this.Container.ComposeExportedValue<IServiceLocator>(new MefServiceLocatorAdapter(this.Container));
     this.Container.ComposeExportedValue<AggregateCatalog>(this.AggregateCatalog);
}

In the MefBootstrapper, the core services of Prism are added to the container as singletons so they can be located through the container throughout the application. In addition to providing the CreateContainer and ConfigureContainer methods, the MefBootstrapper also provides two methods to create and configure the AggregateCatalog used by MEF. The CreateAggregateCatalog method simply creates and returns an AggregateCatalog object. Like the other methods in the MefBootstrapper, CreateAggregateCatalog is virtual and can be overridden if necessary.

The ConfigureAggregateCatalog method allows you to add type registrations to the AggregateCatalog imperatively. For example, the QuickStartBootstrapper from the Modularity with MEF for Silverlight QuickStart explicitly adds ModuleA and ModuleC to the AggregateCatalog, as shown here.
protected override void ConfigureAggregateCatalog()
{
    base.ConfigureAggregateCatalog();

    // Add this assembly to export ModuleTracker.
    this.AggregateCatalog.Catalogs.Add(
        new AssemblyCatalog(typeof(QuickStartBootstrapper).Assembly));

    // Module A is referenced in the project and directly in code.
    this.AggregateCatalog.Catalogs.Add(
        new AssemblyCatalog(typeof(ModuleA.ModuleA).Assembly));

    // Module C is referenced in the project and directly in code.
    this.AggregateCatalog.Catalogs.Add(
        new AssemblyCatalog(typeof(ModuleC.ModuleC).Assembly));
}

Creating the Shell
In a traditional Windows Presentation Foundation (WPF) application, a startup Uniform Resource Identifier (URI) is specified in the App.xaml file that launches the main window. In an application created with the Prism Library, it is the bootstrapper's responsibility to create the shell or the main window. This is because the shell relies on services, such as the Region Manager, that need to be registered before the shell can be displayed.

The CreateShell method allows a developer to specify the top-level window for a Prism application. The shell is usually the MainWindow or MainPage. Implement this method by returning an instance of your application's shell class. In a Prism application, you can create the shell object, or resolve it from the container, depending on your application's requirements.

An example of using the ServiceLocator to resolve the shell object is shown in the following code example.

protected override DependencyObject CreateShell()
{
    return ServiceLocator.Current.GetInstance<Shell>();
}

You will often see the ServiceLocator being used to resolve instances of types instead of the specific dependency injection container. The ServiceLocator is implemented by calling the container, so it makes a good choice for container agnostic code. You can also directly reference and use the container instead of the ServiceLocator.
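The container-agnostic point can be illustrated with a hypothetical sketch (all names invented for illustration, not the actual ServiceLocator source): the locator is just a thin wrapper that forwards resolution calls to whatever container it was given.

```javascript
// A minimal container with register/resolve.
function createContainer() {
  var registrations = new Map();
  return {
    register: function (name, instance) { registrations.set(name, instance); },
    resolve: function (name) { return registrations.get(name); }
  };
}

// A locator that delegates to the container; callers only ever see
// generic resolution, never the concrete container type.
function createServiceLocator(container) {
  return {
    getInstance: function (name) { return container.resolve(name); }
  };
}

var container = createContainer();
container.register("Shell", { title: "Main window" });

var locator = createServiceLocator(container);
var shell = locator.getInstance("Shell"); // { title: "Main window" }
```

Swapping in a different container implementation changes nothing for code written against the locator, which is the whole point of using it.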

InitializeShell Method
After you create a shell, you may need to run initialization steps to ensure that the shell is ready to be displayed. Depending on whether you are writing a WPF or Silverlight application, the InitializeShell method implementations will vary. For Silverlight applications, you will set the shell as the application's visual root, as shown here.

protected override void InitializeShell()
{
    Application.Current.RootVisual = Shell;
}

For WPF applications, you will create the shell application object and set it as the application's main window, as shown here (from the Modularity QuickStarts for WPF).
protected override void InitializeShell()
{
    Application.Current.MainWindow = Shell;
    Application.Current.MainWindow.Show();
}

The base implementation of InitializeShell does nothing. It is safe to not call the base class implementation.

New Conversion Functions in SQL Server 2012

Introduction

SQL Server 2012 introduced four new conversion functions: TRY_CAST, TRY_CONVERT, PARSE and TRY_PARSE.

TRY_CAST

The TRY_CAST function returns a value cast to the specified data type if the cast succeeds; otherwise it returns NULL.

Syntax

TRY_CAST ( expression AS data_type [ ( length ) ] )

Argument / parameter

  • Expression - Any valid expression to be cast.
  • Data type - The target data type to which the expression is cast. You can also specify an optional length for the target data type.
TRY_CAST takes the value and tries to convert it to the given data type. If the cast succeeds, TRY_CAST returns the value in the specified data type; otherwise it returns NULL. In other words, if any error occurs during the cast, the function returns NULL.
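As an analogy only (this is JavaScript, not T-SQL, and tryCastToNumber is a made-up helper), TRY_CAST's contract looks like this:

```javascript
function tryCastToNumber(expression) {
  var result = Number(expression);
  // TRY_CAST-style contract: converted value on success, null on failure,
  // never a thrown error for an unconvertible value.
  return Number.isNaN(result) ? null : result;
}

tryCastToNumber("10.25");        // 10.25
tryCastToNumber("This my test"); // null
```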

Example

--Simple TRY_CAST example
SELECT TRY_CAST('10.25' AS FLOAT)
--output
--10.25
-- TRY_CAST example with NULL value return
SELECT TRY_CAST ('This my test' AS INT) AS output;
--Output
-- NULL
SELECT CASE WHEN TRY_CAST('This my test' AS INT) IS NULL
THEN 'Cast is not valid'
ELSE 'Cast Valid'
END;
--Output
-- Cast is not Valid

The TRY_CAST function raises an error when the explicit conversion is not permitted.
--Simple TRY_CAST with error
SELECT TRY_CAST( 15.34 AS XML)
--output
--Explicit conversion from data type float to xml is not allowed.
TRY_CONVERT

The TRY_CONVERT function returns a value converted to the given data type if the conversion succeeds; otherwise it returns NULL. However, TRY_CONVERT throws an exception if the explicit conversion is not permitted at all.

Syntax

TRY_CONVERT ( data_type [ ( length ) ], expression [, style ] )

Argument / parameter
  • Expression: Any valid expression to be converted.
  • Data type: The target data type to which the expression is converted. You can also specify an optional length for the target data type.
  • Style: An optional integer expression that specifies how the function is to translate the given expression.
Example
--Simple TRY_CONVERT example
SELECT TRY_CONVERT(FLOAT , '10.25')
--output
--10.25
-- TRY_CONVERT example with NULL value return
SELECT TRY_CONVERT (INT , 'This my test') AS output;
--Output
-- NULL
SELECT CASE WHEN TRY_CONVERT(INT, 'This my test') IS NULL
THEN 'Cast is not valid'
ELSE 'Cast Valid'
END;
--Output
-- Cast is not Valid
--Simple TRY_CONVERT with error
SELECT TRY_CONVERT(XML , 15.34)
--output
--Explicit conversion from data type float to xml is not allowed.
PARSE

The PARSE function translates an expression into the requested data type.

Syntax

PARSE ( string value AS data type [ USING culture ] )

Argument / parameter
  • String value: An nvarchar(4000) value representing the expression to be parsed into the specified data type. This must be a valid representation of the requested data type; otherwise PARSE raises an error.
  • Data type: Any valid data type supported by SQL Server; this is the result's data type.
  • Culture: An optional string value that specifies the culture in which the given string value is formatted.
The culture argument is optional; if you do not provide a culture value, the language of the current session is used. The language can be set implicitly or explicitly (using a SET LANGUAGE statement). The culture argument accepts all cultures supported by the .NET Framework; it is not limited to the languages supported by SQL Server. If the culture argument is invalid, the function throws an error.

Use the PARSE function for converting string values into date/time and number types. For other conversions, use CAST or CONVERT. Note that the PARSE function depends on the .NET Framework CLR (Common Language Runtime).

Example

--Simple PARSE example with date/time
SELECT PARSE('Thursday, 3 October 2013' AS datetime2 USING 'en-US')
--output
-- 2013-10-03 00:00:00.0000000
-- PARSE example with currency symbol
SELECT PARSE('$128.34' AS FLOAT USING 'en-US')
--Output
-- 128.34
 
TRY_PARSE

The TRY_PARSE function is similar to the PARSE function, except that TRY_PARSE does not throw an error if the conversion fails; instead, it returns NULL.

Syntax

TRY_PARSE ( string value AS data type [ USING culture ] )

Argument / parameter

All parameters are same as PARSE function.

Example
--Simple TRY_PARSE example with date/time
SELECT TRY_PARSE('Thursday, 3 October 2013' AS datetime2 USING 'en-US')
--output
-- 2013-10-03 00:00:00.0000000
-- TRY_PARSE example with currency symbol
SELECT TRY_PARSE('$128.34' AS FLOAT USING 'en-US')
--Output
-- 128.34
SELECT TRY_PARSE('Test Data' AS datetime2 USING 'en-US')
--output
-- NULL

Friday, September 27, 2013

Abstract Factory Design Pattern

I know there are hundreds of articles available on the web on this topic, but this is such an interesting design pattern that I decided to write about it as well. It took a lot of searching and a deep understanding of the concepts of one of the most important design patterns, the Abstract Factory Pattern, and I will try to explain it. At first, it may sound quite similar to the Factory Pattern, but they are not the same: the Abstract Factory Pattern is one level above the Factory Pattern.

When I began trying to get an idea of this pattern, I scratched my head for days over many articles on the internet, but all of them had the same definition, as in the following:

"Results in an interface that provides families of related or dependent objects without providing their concrete classes."

I was not able to figure out what exactly "families of related or dependent objects" in the definition means. Finally, I was able to understand it by comparing articles that provided good real-world examples and relating their examples to an example of my own from the real world. The result I developed is explained in the examples below.

What is Abstract Factory Design Pattern?

According to GoF's definition, this pattern "Provides an interface for creating families of related or dependent objects without specifying their concrete classes".

This pattern is generally referred to as a super-set of the Factory Pattern.

Since it is a broader, or a super-set, of the Factory Pattern, we will try to explain this with an example of the Factory Pattern and try to move on to the Abstract factory. So let's start with the Factory Pattern example.

A real-world example for Factory Pattern

Suppose your company gets an assignment in .NET technologies. You have a team of 20 members in .NET, split into one Silverlight team and one .NET team. Your business development (BD) team says: provide us a team with the required skill set. So your manager receives the required skill set from the BD team, selects one of the two teams and gives them the assignment. So here:
  • The BD team acts as the client who requests a team to work on a project.
  • Your manager acts as the Factory, who decides which team will be selected.
  • The Silverlight and .NET teams are the final outputs/resources received by the client (the BD team) to work on the project.
So our factory structure will be something like the following:

image1.gif

And our BD team or client will be sending the request as:

image2.gif

Run the code and see the results.

So this was the Factory Pattern, where you have a single requirement (or a single product to be more precise, in terms of the definition of this pattern).

So now suppose your company receives a new project that requires two different teams, one of .NET and one of mobile technologies (Windows- or Android-based), to work on it.

So here we have more than one option to meet this requirement.

Option One

Add another method to the same factory we created above, to get the team for the mobile technologies. So our code becomes:

image3.gif

But this will not be a good approach, because we have simply bolted on one more piece of functionality. We should make it more abstract, since we might get another project in the future that wants its own criteria for selecting a team, or that adds another requirement, such as including a Java team.

Option Two

We create another factory, like the one we created for .NET, and then get the required team from the mobile-technologies side. But this is also not a good approach, since it may result in more and more factories as requirements to add more teams arise. Also, for the same reason as above, we may have more projects in the future that also need to select a mobile and a .NET technology team.

The concept that we have discussed until now provides us the following points:
  • We may have more requirements in the future (in other words we may need a third type of team like Java or any other technology team), for other projects. So we may be required to create another factory and may increase in number, as more requirements are added.
  • Right now, we have only one project and based on it we send the requirements and get the required output. But another project, may have different requirements. That project may have its own way of deciding the basic criteria for selecting the required team.
So, these are the two main points, that are addressed by the Abstract Factory Pattern. Let's see, how to do this technically in code and then we will discuss the preceding two points, handled by our code.

So our first job will be to create a Base Factory, that will be encapsulating the definition of the factories (or Team deciding factories).

image4.gif
Now comes the main point of the explanation. We will create two different factories that implement the base factory in their own ways. These factories are nothing but the projects that we receive. So both of these projects will decide their requirements for getting the two different teams in their own way. This is nothing but point 2 from above.

So our code becomes the following, for the two different projects. For project one:
image5.gif
For project two:

image6.gif

As we can see, we have no specific requirements for selecting the teams for project two. Both of these projects provide their own implementations.

Finally, further class implementations will remain the same, like the following:
image7.gif
Now the BD team will be getting the requirements as:

image8.gif

Now, run the code and see the results.

image9.gif

The following explains how the code above covers the two points that we covered earlier.
  1. For the first point we discussed above, since more technology teams are to be added, we have the same requirements being addressed by the use of a common base interface IBaseFactory. So we can add more teams whenever required, in the base factory (like we added the IMobileTechnologiesTeamFactory team).
  2. The second point we discussed was regarding different requirements of different projects. So class ProjectOne specifies its own requirements for the teams required and class ProjectTwo, does not have any specific requirements. So both have their own definitions.
These are, what I feel, the reasons for the use of this design pattern and what I understand about it from my study. One last point: remember the definition I mentioned at the start of the article:

"Results in an interface that provides families of related or dependent objects without providing their concrete classes."

This is also in sync with the example we discussed, since we have the interface IBaseFactory, with related groups of functionality in a single unit.
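Since the article's code appears only as images, here is a hypothetical JavaScript sketch of the same idea; every name in it is invented for illustration. Each project is a concrete factory implementing the same base-factory contract, and the BD team (the client) depends only on that contract:

```javascript
function dotNetTeam() { return ".NET team"; }
function mobileTeam() { return "Mobile team"; }

// Concrete factory one: project one applies its own selection criteria.
var projectOne = {
  getDotNetTeam: function () { return dotNetTeam() + " (Silverlight skills)"; },
  getMobileTeam: function () { return mobileTeam() + " (Android skills)"; }
};

// Concrete factory two: no special criteria, hand back the teams as-is.
var projectTwo = {
  getDotNetTeam: dotNetTeam,
  getMobileTeam: mobileTeam
};

// The client asks a factory for the whole family of related teams
// without knowing which concrete project it is talking to.
function staffProject(factory) {
  return [factory.getDotNetTeam(), factory.getMobileTeam()];
}

staffProject(projectOne); // [".NET team (Silverlight skills)", "Mobile team (Android skills)"]
staffProject(projectTwo); // [".NET team", "Mobile team"]
```

Adding a third project (or a third kind of team on the contract) touches only the factories, never the client, which is exactly the benefit the two points above describe.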
Any more suggestions/points you would like to add to this article are welcome.

Extended Properties Introduction

I imagine that most SQL Server database developers and DBAs have used extended properties to add comments to the database objects which they create. However, my experience is that this surprisingly useful aspect of the database metadata is often either underused or avoided altogether. This could be for a variety of reasons, some of which could include:
  • The system stored procedures which manipulate extended properties are initially unintuitive to use.
  • Collections of extended properties once they have been added to a database are not immediately accessible.
  • Adding extended properties is perceived as laborious and not suited to documenting hundreds or thousands of database objects quickly and easily.
  • Updating multiple extended properties in the database metadata can be equally hard work.
  • Exporting extended properties to use in external documentation (which normally means Word or Excel) can be a long-winded process.
So, in order to dispel any negative perceptions which may exist or persist, I intend to take the reader on a tour of extended properties which, I hope, will convince you that they are:
  • Easy to understand
  • Simple to create, modify and delete.
  • Not at all difficult to extract and use in other programs.
Given the amount of information and code to cover, this will be spread over four short articles:
  • An introduction to extended properties.
  • Extracting extended properties, and a macro to generate a Word document of extended properties.
  • Modifying and deleting extended properties.
  • Industrial-strength documentation with extended properties.
Yes, I know that there are plenty of resources about the system views, stored procedures and functions which SQL Server uses to manage extended properties, and there are some excellent articles available (not least David Poole's recent article on SQLServerCentral). However, I thought that there was room for a step-by-step introduction to the subject with lots of practical examples - so here goes.
Let's begin at the start: what exactly are we talking about?
We are talking about ensuring the longevity of your database by making sure that all the hard work spent creating and modifying a complex system is commented and noted inside the database itself. Nothing can write the documentation for you, but SQL Server can store comments on a specified subset of database objects inside the database metadata. There is no limit (as far as I can see) to the number of separate comments that you can add to a database object, and each one can have a separate heading, or classification. Each comment can be up to 7500 characters (whether you are using Unicode characters or not). Each classification is a sysname data type, so it has a 128 Unicode-character limit. Names can also include blank or non-alphanumeric character strings, and binary values - but in practice, simple alphanumeric one- or two-word titles seem to be the most useful.
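As a quick sketch of where all this ends up, the comments described above are stored in the sys.extended_properties catalog view, so you can list everything that has been added to the current database with a simple query like this (the value column is a sql_variant, hence the CAST for display):

```sql
-- List every extended property in the current database, showing the
-- kind of object commented, the classification (name) and the comment (value).
SELECT  class_desc,
        name AS Classification,
        CAST(value AS NVARCHAR(4000)) AS Comment
FROM    sys.extended_properties
ORDER BY class_desc, name;
```

We will return to extracting extended properties properly in the next article; this is just to show that nothing here is hidden away.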
If you are feeling adventurous, you can use extended properties in many other imaginative ways. How about:
  • Storing data lineage information.
  • Storing versioning information (last modified by, succinct history of change reasons).
  • To store tooltips for user interfaces.
  • To store tooltips in multiple languages, using the name as a linguistic classification.
  • To store column headings for user interfaces.
  • To store column headings in multiple languages, using the name as a linguistic classification.
  • And many, many more!
However, I will presume that most developers and DBAs use extended properties for database documentation, and so this will be the main focus of these articles.

A quick run through extended properties and SSMS

Before we delve into code, and into code tricks and traps, it is worth noting, in the interest of completeness, that you can enter extended properties for many (but not all) objects using SSMS. This has the following advantages:
  • Quick for ad-hoc creation and modification of extended properties.
  • Easy to see all the extended properties for a single object.
And the following disadvantages:
  • Very slow if you want to document dozens - or hundreds - of objects.
  • Not all objects are accessible for extended property creations and modification this way.
  • Not easy to get a more holistic view of extended properties, as you are operating only at a detail level.
The objects that you can document using SSMS are:
  • Database
  • Table
  • Table Column
  • Table Index
  • View
  • View Column
  • View Index
  • Stored Procedures
  • User-defined Functions
So, to document an object using SSMS, simply:
  1. Right-click on the object in the object explorer
  2. Select Properties
  3. Select "Extended Properties" in the left pane
  4. Add a new property by entering a name (or classification) and value (or comment).
  5. And, of course, confirm with "OK"
Yes, I know that this is probably too easy and you know it already, but in the interest of thoroughness, I thought it best to include it. Besides, it can be really useful for tweaking an extended property (and the ellipsis button, which opens a text window to edit in, is very handy)!
So, with the GUI approach dealt with, it is time to dig deeper into extended properties using T-SQL. I will begin with the most basic object, the database itself. When you document the database, you are including the following in the database metadata:
Extremely simple Extended Properties

  Object type | Classification | Comment
  ------------|----------------|------------------------------------------------
  Database    | Description    | This is my database, and I am very proud of it!
  Database    | Developer      | Adam
As the metadata is limited to the database itself, you do not need to specify the database - it is a given.
It is fundamental to note that the only things that you can change are the "Classification" and the "Comment". The object is specified by the interface to the metadata (three stored procedures which we will start to look at in a few seconds), and there are only a limited set of objects which allow self-documentation using the extended properties built into SQL Server. However, when we look at all these objects, I am sure that you will agree that nearly everything that you could want to document has been made accessible by the development team.
The entire extended properties paradigm is based very simply on this model. Namely that you add a classification and a comment to any object that supports extended properties. There is no limit to the number of classifications (or property names as SQL Server calls them) that you can create, and nothing enforces their use across objects - it is up to you to apply a coherent naming convention to your extended properties. As you will see later, having a coherent set of classifications will help you create robust and resilient extended properties which can become the core of your database's documentation.
Let's take this one step further, and suppose that you want to add comments to a table. This could be explained schematically as:
Table Extended Properties

  Object type | Object Name | Classification | Comment
  ------------|-------------|----------------|------------------
  Table       | myTable     | Description    | This is my table!
  Table       | myTable     | Developer      | Adam
And finally, to describe a column:
Column Extended Properties

  Object type | Object Name | Object Sub-Name | Classification | Comment
  ------------|-------------|-----------------|----------------|-----------------------
  Table       | myTable     | ID              | Description    | This is the ID column
  Table       | myTable     | ID              | Developer      | Adam

As you can see, there is a clear hierarchy to extended properties. As it is this which seems to cause the most confusion amongst the database developers I have spoken to, let's look at it in a little more detail.
Firstly, the good news: there are only four levels.
  1. Database level (as shown in the example above)
  2. Level 0 (the first sub-level, called Level0 by SQL Server)
  3. Level 1 (the second sub-level, called Level1 by SQL Server)
  4. Level 2 (the third sub-level, called Level2 by SQL Server)
These levels correspond to a drill-down hierarchy of database objects, which is extremely coherent once understood, but which can seem a little peculiar at first sight.
So when adding extended properties, it is simply a question of understanding at which level an object is situated (as far as the developers of the extended properties stored procedures are concerned), and how to reference the object.
Rather than start with the system stored procedures which let you add extended properties, I want to take a different tack and start with the hierarchy of objects. Then I will give sample code for nearly every database object which can be documented, to give you reusable code snippets to document your databases.

The extended properties object hierarchy

This is the extended properties object hierarchy, which also covers all the extended properties that I will be covering in this article. Please note that this is not an exhaustive selection, but a subset based on my experience.
Extended Properties Hierarchy

  Top Level:    Database
  First Level:  Schema; Filegroup; Partition Function; Partition Scheme; DDL Trigger
  Second Level: Table; View; Stored Procedure; User-Defined Function (scalar, inline and table-valued); Filegroup logical file name
  Third Level:  Column (table or view); Table Constraint (unique constraint, default, primary key, foreign key); Index (table or view); Trigger (table or view); Parameter (stored procedure or user-defined function)

So once you know which object you want to document, all you have to do is find out the appropriate level, and add the comment using the built-in system stored procedure, adding the appropriate reference for the required levels.
The next step, therefore, is actually to do this, using the system stored procedure "sp_addextendedproperty", and seeing how it is used for the objects listed in the table above. I realise that you may not need all the code snippets given below, but preferred to give a reasonable selection so that you will, hopefully, find the one that you need to document your database objects ready to copy and paste.

Documenting the database itself

This is the simplest of all the extended properties to add, because there is no complexity about which level the object is at.
EXECUTE sys.sp_addextendedproperty 
@name = N'Overview' -- The "Classification"
,@value = N'Database comment' -- The "Comment"
;
As there is only one database - and the script has to be run in the context of that database for its metadata to be extended - you do not even have to specify the database name, or even the fact that you are referring to a database object. So please note that if you add extended properties without specifying any levels using "sp_addextendedproperty", then you will be adding comments at database level.
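To check that the comment really has landed at database level, a quick sketch is to query the sys.extended_properties catalog view; database-level properties have a class of 0 (class_desc = 'DATABASE'):

```sql
-- Database-level extended properties have class = 0;
-- major_id and minor_id are both 0 for these rows.
SELECT  name AS Classification,
        CAST(value AS NVARCHAR(4000)) AS Comment
FROM    sys.extended_properties
WHERE   class = 0;
```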
Note:
  • You can, of course, use positional rather than named parameters to add extended properties, merely supplying the parameter values in the correct order. I will keep named parameters in my examples to clarify the code.
  • If you are not using Unicode characters, then you can omit the "N" prefix before the text that you are adding.
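To illustrate the first note above, here is the same database-level comment added with positional parameters; for "sp_addextendedproperty" the order is @name (the classification), then @value (the comment), followed by the level type/name pairs:

```sql
-- Equivalent to the named-parameter example above:
-- first parameter is the name (classification), second the value (comment).
EXECUTE sys.sp_addextendedproperty N'Overview', N'Database comment';
```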

First-level objects

A basic first-level object is a schema, so here's how to add an extended property to an existing database schema:
EXECUTE sys.sp_addextendedproperty 
@level0type = N'SCHEMA' -- The object type
,@level0name = [Sales] -- The object name
,@name = N'Overview' -- The "Classification"
,@value = N'Main Schema' -- The "Comment"
;
The other main objects you can document with this variation on a theme of "sp_addextendedproperty" are:

Filegroup

EXEC sys.sp_addextendedproperty 
@level0type = N'FILEGROUP' 
,@level0name = [PRIMARY] 
,@name = N'Overview' 
,@value = N'Filegroup comment' 
;

Partition Function

EXEC sys.sp_addextendedproperty 
@level0type = N'PARTITION FUNCTION' 
,@level0name = [Partition1] 
,@name = N'Overview' 
,@value = N'Partition Function Comment' 
;

Partition Scheme

EXEC sys.sp_addextendedproperty 
@level0type = N'PARTITION SCHEME' 
,@level0name = [PS2] 
,@name = N'Overview' 
,@value = N'Partition Scheme Comment' 
;

DDL Trigger

EXECUTE sys.sp_addextendedproperty 
@level0type = N'TRIGGER' 
,@level0name = [trg_PreventDDL]
,@name = N'Overview'
,@value = N'Comment on DDL trigger'
;
There are a couple of points worth noting here, that will also apply to the database objects to come in this article:
  • The object type which is specified as "level0type" in the above examples is predefined, and though you may change the case of the object specification (unless your database objects are case-sensitive), you must use the exact object name, including any spaces.
  • You do not have to enclose an object name in square brackets or single quotes - unless the object name contains spaces or other sources of object-naming confusion.

Second-level objects

Probably the most documented object in a database is the table, so let's begin with one of those:
EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' -- The object schema
,@level0name = [Sales] -- The object schema name
,@level1type = N'TABLE' -- The object type
,@level1name = [Stock] -- The object name
,@name = N'Overview' -- The "Classification"
,@value = N'This is a stock table' -- The "Comment" 
;
You need to note here that all objects which are potentially schema-qualified need to have the schema specified, even if they use the default schema. So if an object is in the dbo schema - you need to specify this.
The other main objects you can document with this variation on a theme of "sp_addextendedproperty" are:

View

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'VIEW' 
,@level1name = [vw_Invoices]
,@name = N'Overview' 
,@value = N'View Comment' 
;

Stored Procedure

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [dbo] 
,@level1type = N'PROCEDURE' 
,@level1name = [pr_ListClients]
,@name = N'Overview' 
,@value = N'Procedure comment' 
;

User-defined Function

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [dbo] 
,@level1type = N'FUNCTION' 
,@level1name = [ufn_GetTownCounty]
,@name = N'Overview' 
,@value = N'Scalar Function' 
;
This code will work for all types of user-defined function: scalar, inline table-valued and multi-statement table-valued.

Filegroup and logical file name

EXEC sys.sp_addextendedproperty 
@level0type = N'FILEGROUP' 
,@level0name = Secondary 
,@level1type = N'LOGICAL FILE NAME' 
,@level1name = [ExtendedProperties_2]
,@name = N'Overview' 
,@value = N'Filegroup and logical file name 2' 
;

Third-level objects

A frequently documented object is a table column, so here is how to add an extended property for a column:
EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' -- The object schema
,@level0name = [Sales] -- The object schema name
,@level1type = N'TABLE' -- The object type
,@level1name = [Stock] -- The object name
,@level2type = N'COLUMN' -- The object attribute
,@level2name = [Make] -- The object attribute name
,@name = N'Overview' -- The "Classification"
,@value = N'Basic column definition' -- The "Comment"
;
Other third-level objects are:

View Column

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales]
,@level1type = N'VIEW'
,@level1name = [vw_Invoices] 
,@level2type = N'COLUMN' 
,@level2name = [ClientName]
,@name = N'Overview' 
,@value = N'Basic column definition' 
;

Table Index

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Stock]
,@level2type = N'INDEX' 
,@level2name = [IX_Stock_Registration_Year]
,@name = N'Overview' 
,@value = N'Index comment' 
;

View Index

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'VIEW' 
,@level1name = [vw_Stock]
,@level2type = N'INDEX' 
,@level2name = [CX_vw_Stock]
,@name = N'Overview' 
,@value = N'Comment on Index of Indexed view' 
;

Table Constraints

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Stock]
,@level2type = N'CONSTRAINT' 
,@level2name = [UQ_VehicleRegistration]
,@name = N'Overview' 
,@value = N'Unique constraint' 
;
EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Invoice]
,@level2type = N'CONSTRAINT' 
,@level2name = [DF_Invoice_InvoiceDate]
,@name = N'Overview' 
,@value = N'Default constraint' 
;
EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Invoice_Lines]
,@level2type = N'CONSTRAINT' 
,@level2name = [PK_Invoice_Lines]
,@name = N'Overview' 
,@value = N'Primary Key' 
;
EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Invoice_Lines]
,@level2type = N'CONSTRAINT' 
,@level2name = [FK_Invoice_Lines_Invoice]
,@name = N'Overview' 
,@value = N'Foreign Key' 
;
Although the code is the same for each of the four types of constraint (Unique, Default, Primary Key and Foreign Key) I have given the code for each so you can test them and see that they all work.

Table Trigger

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'TABLE' 
,@level1name = [Invoice_Lines]
,@level2type = N'TRIGGER' 
,@level2name = [trg_Invoice_Lines]
,@name = N'Overview' 
,@value = N'Comment on table trigger' 
;

View Trigger

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [Sales] 
,@level1type = N'VIEW' 
,@level1name = [vw_InvoiceLineSalePrice]
,@level2type = N'TRIGGER' 
,@level2name = [NoViewDeletes]
,@name = N'Overview' 
,@value = N'Comment on view trigger' 
;

Stored Procedure Parameter

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [dbo] 
,@level1type = N'PROCEDURE' 
,@level1name = [pr_ListClients]
,@level2type = N'PARAMETER' 
,@level2name = '@ID'
,@name = N'Overview' 
,@value = N'Procedure parameter comment' 
;

User-defined Function parameter

EXEC sys.sp_addextendedproperty 
@level0type = N'SCHEMA' 
,@level0name = [dbo] 
,@level1type = N'FUNCTION' 
,@level1name = [ufn_GetTownCounty]
,@level2type = N'PARAMETER' 
,@level2name = '@ID'
,@name = N'Overview' 
,@value = N'Function Parameter' 
;
Notes:
  • Indexes on views only apply to indexed views - and so you can only add these extended properties if you have the Enterprise or Developer edition of SQL Server.
  • You need the rights to read the database metadata tables - but I am assuming that as a SQL Server developer, this is a given.
  • Avoid adding extended property names beginning with "MS_" - this is used in views to describe the graphical positioning of query tables, for instance. It is probably best to consider this a kind of reserved term.
  • You need the rights to write to the database metadata tables - but I am assuming that as a SQL Server developer, this is a given.
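Following on from the note about the "MS_" prefix, you can check whether any such reserved-style names already exist in your database before settling on your own classifications; a quick sketch:

```sql
-- Find extended properties whose name starts with the reserved-style
-- "MS_" prefix (the underscore is escaped so it is matched literally,
-- not as the single-character LIKE wildcard).
SELECT  class_desc, name
FROM    sys.extended_properties
WHERE   name LIKE N'MS!_%' ESCAPE N'!';
```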
There are a few more extended properties which can be created, and they apply to the following objects:
  • Aggregate
  • Asymmetric Key
  • Assembly
  • Certificate
  • Service Broker Contract
  • Event Notification
  • Message Type
  • Plan Guide
  • Service Broker Queue
  • Remote Service Binding
  • Service Broker Route
  • Rule
  • Service
  • Symmetric Key
  • Synonym
  • Type
  • XML Schema Collection
The decision to exclude them from this explanation is a purely personal one based on the need not to swamp the reader in code, and to focus on essential database documentation. However, as these objects follow strictly the same logic as those described above, you can easily adapt the code snippets in this article to add extended properties to them also. If you are ever overcome by the need to do this, your first port of call should be: http://msdn.microsoft.com/en-us/library/ms190243.aspx.
The attached script contains all the code given above, grouped slightly differently: it collects the code snippets by object (Table, View, etc.) to help you document all of an object's attributes. The next article will deal with extracting, updating and deleting extended properties.
Have fun - and I hope that you found this useful.