Category Archives: C#

Don’t run production ASP.NET Applications with debug=”true” enabled

One of the things you want to avoid when deploying an ASP.NET application into production is to accidentally (or deliberately) leave the <compilation debug="true"/> switch on within the application's web.config file.

 

Doing so causes a number of non-optimal things to happen including:

 

1) The compilation of ASP.NET pages takes longer (since some batch optimizations are disabled)

2) Code can execute slower (since some additional debug paths are enabled)

3) Much more memory is used within the application at runtime

4) Scripts and images downloaded from the WebResource.axd handler are not cached

 

This last point is particularly important, since it means that all client-side JavaScript libraries and static images that are deployed via WebResource.axd will be continually downloaded by clients on each page view request and not cached locally within the browser.  This can slow down the user experience quite a bit for things like Atlas, controls like TreeView/Menu/Validators, and any other third-party control or custom code that deploys client resources.  Note that the reason these resources are not cached when debug is set to true is so that developers don't have to continually flush their browser cache and restart the browser every time they make a change to a resource handler (our assumption is that when you have debug=true set you are in active development on your site).

 

When <compilation debug="false"/> is set, the WebResource.axd handler will automatically set a long cache policy on resources retrieved via it – so that the resource is only downloaded once to the client and cached there forever (it will also be cached on any intermediate proxy servers). If you have Atlas installed for your application, it will also automatically compress the content from the WebResource.axd handler for you when <compilation debug="false"/> is set – reducing the size of any client-script JavaScript library or static resource for you (and not requiring you to write any custom code or configure anything within IIS to get it).

 

What about binaries compiled with debug symbols?

 

One scenario that several people find very useful is to compile/pre-compile an application or associated class libraries with debug symbols so that more detailed stack trace and line error messages can be retrieved from it when errors occur.

 

The good news is that you can do this without having to have the <compilation debug="true"/> switch enabled in production.  Specifically, you can use either a web deployment project or a web application project to pre-compile the code for your site with debug symbols, and then change the <compilation debug="true"/> switch to false right before you deploy the application on the server.
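For example, the web.config you deploy alongside the pre-compiled assemblies (which still carry their debug symbols) would simply have the flag switched off. A minimal sketch; the rest of your configuration stays as it is:

<configuration>
    <system.web>
        <compilation debug="false"/>
    </system.web>
</configuration>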

 

The debug symbols and metadata in the compiled assemblies will increase the memory footprint of the application, but this can sometimes be an ok trade-off for more detailed error messages.

 

The <deployment retail="true"/> Switch in Machine.config

 

If you are a server administrator and want to ensure that no one accidentally deploys an ASP.NET application in production with the <compilation debug="true"/> switch enabled within the application's web.config file, one trick you can use with ASP.NET V2.0 is to take advantage of the <deployment> section within your machine.config file.

 

Specifically, by setting this within your machine.config file:

 

<configuration>
    <system.web>
        <deployment retail="true"/>
    </system.web>
</configuration>

 

You will disable the <compilation debug="true"/> switch, disable the ability to output trace output in a page, and turn off the ability to show detailed error messages remotely.  Note that these last two items are security best practices you really want to follow (otherwise hackers can learn a lot more about the internals of your application than you should show them).

 

Setting this switch to true is probably a best practice that any company with formal production servers should follow to ensure that an application always runs with the best possible performance and no security information leakages.  There isn’t a ton of documentation on this switch – but you can learn a little more about it here.

Protection against SQL injection

Protection against SQL injection needs to take place on the server side, regardless of where the incoming call comes from.

JavaScript-based sanitization methods are useless on their own, because JavaScript runs on the client side and can therefore be forged.

This also applies to AJAX calls: the client doesn't even need to turn JavaScript off; they just need to manipulate the JavaScript code they download from your site to fake validation.
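A minimal server-side sketch using a parameterized query – the Products table, column name and method are only illustrations, not from the original post:

using System;
using System.Data;
using System.Data.SqlClient;

public static class ProductQueries
{
    // The user input is passed as a parameter value and is never concatenated into
    // the SQL text, so input such as "' OR 1=1 --" cannot change the query structure.
    public static bool ProductExists(string connectionString, string userInput)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Products WHERE Name = @name", connection))
        {
            command.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = userInput;
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}

The same principle applies to stored procedures and ORM queries: as long as user input travels as data (parameters) rather than as part of the SQL string, an injection attempt is harmless.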

CDN in ASP.NET MVC bundling

Once you create an ASP.NET MVC project and look into the BundleConfig.cs file in the App_Start folder, you will find the following code for the jQuery bundle.

bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
        "~/Scripts/jquery-{version}.js"));

Here you can see that a ScriptBundle object is created with new. ScriptBundle also has another constructor which takes the path of a CDN as a parameter.

Now let's first run the application without adding a CDN and see how it works.

[Screenshot: scripts served from the local bundle, without a CDN]

You can see the script is loaded from the local bundle, not from any CDN. So let's see how we are going to use a CDN. There are lots of CDNs available, but I'm going to use the Microsoft-hosted CDN files in our sample application.

You can find all the CDN-hosted files at the following location:

http://www.asp.net/ajaxlibrary/cdn.ashx

and here is the jQuery CDN listing for all versions:

http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0

I'm going to use jQuery version 1.10.2, as the ASP.NET MVC application template comes with that version by default. Following is the code for that.

bundles.UseCdn = true;
bundles.Add(new ScriptBundle("~/bundles/jquery",
    @"//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.10.2.js"
    ).Include(
        "~/Scripts/jquery-{version}.js"));

Here you can see that I have used a protocol-relative path (without http:), so it works whether the page is served over HTTP or HTTPS. Also note that I have set the UseCdn property to true, which tells the bundling framework to use the CDN whenever one is available.

The CDN is only used in release mode, so to test the application on a local machine we need to set <compilation debug="false"/> in web.config. Once you have done that and run the application in the browser, you can see in the image below that it is using the CDN instead of the local file.

[Screenshot: browser loading jQuery from the CDN instead of the local bundle]

Bundle files in a specific order

By default the bundling framework may reorder the files you include (known libraries such as jQuery are moved to the front), so the custom IBundleOrderer below simply returns the files in the order they were registered.

internal class AsIsBundleOrderer : IBundleOrderer
{
    public virtual IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        return files;
    }
}

internal static class BundleExtensions
{
    public static Bundle ForceOrdered(this Bundle sb)
    {
        sb.Orderer = new AsIsBundleOrderer();
        return sb;
    }
}

Usage:

    bundles.Add(new ScriptBundle("~/content/js/site")
        .Include("~/content/scripts/jquery-{version}.js")
        .Include("~/content/scripts/bootstrap-{version}.js")
        .Include("~/content/scripts/jquery.validate-{version}")
        .ForceOrdered());

I like using the fluent syntax, but it also works with a single Include call and all the scripts passed as parameters, as shown below.
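For reference, the same registration with a single Include call (the paths are the same hypothetical ones as above):

    bundles.Add(new ScriptBundle("~/content/js/site")
        .Include(
            "~/content/scripts/jquery-{version}.js",
            "~/content/scripts/bootstrap-{version}.js",
            "~/content/scripts/jquery.validate-{version}.js")
        .ForceOrdered());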

Allow only alphanumeric and space keys

The table below lists the first 128 ASCII characters. Some of the characters are shown escaped in the ASCII column. Many of them are control characters, which are not widely used and are not easy to display.

ASCII table

Decimal   ASCII     Hex
0         control   00
1         control   01
2         control   02
3         control   03
4         control   04
5         control   05
6         control   06
7         control   07
8         control   08
9         \t        09
10        \n        0A
11        \v        0B
12        \f        0C
13        \r        0D
14        control   0E
15        control   0F
16        control   10
17        control   11
18        control   12
19        control   13
20        control   14
21        control   15
22        control   16
23        control   17
24        control   18
25        control   19
26        control   1A
27        control   1B
28        control   1C
29        control   1D
30        control   1E
31        control   1F
32        space     20
33        !         21
34        "         22
35        #         23
36        $         24
37        %         25
38        &         26
39        '         27
40        (         28
41        )         29
42        *         2A
43        +         2B
44        ,         2C
45        -         2D
46        .         2E
47        /         2F
48        0         30
49        1         31
50        2         32
51        3         33
52        4         34
53        5         35
54        6         36
55        7         37
56        8         38
57        9         39
58        :         3A
59        ;         3B
60        <         3C
61        =         3D
62        >         3E
63        ?         3F
64        @         40
65        A         41
66        B         42
67        C         43
68        D         44
69        E         45
70        F         46
71        G         47
72        H         48
73        I         49
74        J         4A
75        K         4B
76        L         4C
77        M         4D
78        N         4E
79        O         4F
80        P         50
81        Q         51
82        R         52
83        S         53
84        T         54
85        U         55
86        V         56
87        W         57
88        X         58
89        Y         59
90        Z         5A
91        [         5B
92        \         5C
93        ]         5D
94        ^         5E
95        _         5F
96        `         60
97        a         61
98        b         62
99        c         63
100       d         64
101       e         65
102       f         66
103       g         67
104       h         68
105       i         69
106       j         6A
107       k         6B
108       l         6C
109       m         6D
110       n         6E
111       o         6F
112       p         70
113       q         71
114       r         72
115       s         73
116       t         74
117       u         75
118       v         76
119       w         77
120       x         78
121       y         79
122       z         7A
123       {         7B
124       |         7C
125       }         7D
126       ~         7E
127       control   7F

private string RemoveNonAlphanumeric(string text)
{
    StringBuilder sb = new StringBuilder(text.Length);

    for (int i = 0; i < text.Length; i++)
    {
        char c = text[i];
        if (c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' || c == ' ')
            sb.Append(c);
    }

    return sb.ToString();
}
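A quick usage sketch (the input string is just an illustration):

string raw = "Order #42: 10% off!";
string clean = RemoveNonAlphanumeric(raw);
// clean == "Order 42 10 off"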

Performance Test between SortedList vs. SortedDictionary vs. Dictionary vs. Hashtable

Please note that the advantage of Hashtable over the generic Dictionary for insert and search operations demonstrated here is actually because the tests are based on the non-generic IDictionary interface, so each insert or search action is accompanied by a check of the key type. For more information see Sean's comment below. (Thanks Sean!) Without that, Dictionary seems to perform better than Hashtable. The rest of the test conclusions are not affected. So to summarize, the generic Dictionary is the absolute winner for insert and search operations if used through the generic dictionary interface or directly.

This is a sequence of tests comparing the performance results for four different implementations of IDictionary: the generic Dictionary, the generic SortedDictionary, and the old non-generic Hashtable and SortedList.

I performed several tests, comparing the following parameters: memory used in bytes, time for insertion in ticks, time for an item search in ticks, and time for looping with foreach in ticks. The test was performed 8000 times across all four implementations, and the order was random, so each implementation was tested at least 1000 times.

I performed the tests in five stages, to observe the relationship between the number of entries and the performance. In the first stage the collections had 50 items, in the second 500, in the third 5,000, in the fourth 50,000, and in the fifth 100,000 items.

In this particular test, lower numbers for memory usage or execution time mean better performance. So if we want to present a performance chart visually for each of the parameters, we have to derive a performance coefficient from the raw data. I have used the following formula to calculate the performance coefficient:

Performance Coefficient for value x = min(value 1, value 2, … value n) / value x

This way, because the best-performing value is the lowest one, it is transformed into a coefficient of 1, and every other value becomes a fraction of that.
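For example (hypothetical numbers, not taken from the raw data below): if the insert times for the four implementations were 100, 125, 200 and 400 ticks, the coefficients would be 100/100 = 1.00, 100/125 = 0.80, 100/200 = 0.50 and 100/400 = 0.25, so the best performer always scores 1.00.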

This is the chart of the memory usage:

[Chart: Memory Allocation]

The results stay consistent as the number of items in the collections increases. The best memory footprint is seen in SortedList, followed by Hashtable and SortedDictionary, while Dictionary has the highest memory usage. Despite that, the differences are not significant, and unless your solution is extremely sensitive to memory usage you should treat the other two parameters, time taken for insert operations and time taken for searching a key, as more important. It is also important to note that this test does not take the effects of garbage collection into consideration and so can only be read in a very general way.

This is the chart for the time taken for insert operations:

[Chart: Insertion]

When the number of records is small, the differences between all four implementations are not significant, but as the number of items in the collection grows the performance of SortedList drops dramatically. SortedDictionary is better, but still takes significantly more time for inserts than the other two implementations. Hashtable is next, and the ultimate leader is the generic Dictionary.

This is the chart for the time taken for search operations:

[Chart: Searching]

The absolute leader is Hashtable, but the test does not consider the type of item being stored. That could be a possibility for a future test. The next best performer is the generic Dictionary followed by the other two implementations. The differences here between SortedList and SortedDictionary are not significant.

This is the chart for the time taken for foreach collection loop operations:

[Chart: For Each]

Here the leader is SortedList, then Dictionary, consistently better than Hashtable, and the worst performer is SortedDictionary.

Here you can see the Task Manager during the test; it gives a picture of the memory usage. Area A is the insertion phase, when more and more memory is being allocated. Area B is from the end of the insertion until the garbage collection of the objects.

[Screenshot: Task Manager memory usage during the test]

This is the code I used for this test. The variable NumberInsertedKeys was changed to 50, 500, 5,000, 50,000, or 100,000 for the different stages.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
/*
to compile enter in the command prompt:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc IDictTest.cs
to run enter in the command prompt:
IDictTest
*/
namespace IDictTest{

    public class RunResult{
        public decimal MemoryUsed;
        public decimal InsertTicks;
        public decimal SearchTicks;
        public decimal ForEachTicks;
    }

    public class Program
    {

        private static int SearchIndex = 27;
        //private const int NumberInsertedKeys = 50;
        //private const int NumberInsertedKeys = 500;
        //private const int NumberInsertedKeys = 5000;
        private const int NumberInsertedKeys = 50000;
        //private const int NumberInsertedKeys = 100000;
        private const int NumberTests = 8000;

        private static readonly string[] Letters = 
                {"A","B","C","D","E","F","G","H","I","J"};

        public static void Main(string[] args)  {
            try{
            // TRY STARTS HERE ----------

                List<RunResult> listDictionary = new List<RunResult>();
                List<RunResult> listSortedDictionary = new List<RunResult>();
                List<RunResult> listHashtable = new List<RunResult>();
                List<RunResult> listSorderList = new List<RunResult>();
                Stopwatch watch = Stopwatch.StartNew();

                for(int i = 0; i < NumberTests; i++){
                    SearchIndex += 1;
                    Random rand = new Random();
                    int randInt = rand.Next(0, 4);
                    if(randInt == 0){
                      listDictionary.Add(
                          Test("Dictionary", new Dictionary<string, string>(), i));
                    }else if(randInt == 1){
                      listSortedDictionary.Add(
                          Test("SortedDictionary", 
                              new SortedDictionary<string, string>(), i));
                    }else if(randInt == 2){
                      listHashtable.Add(
                          Test("Hashtable", new Hashtable(), i));
                    }else if(randInt == 3){
                      listSorderList.Add(
                          Test("SortedList", new SortedList(), i));
                    }
                }

                Console.Clear();
                Msg("Time taken (minutes): {0} or about {1} minutes and {2} seconds", 
                    watch.Elapsed.TotalMinutes,
                    watch.Elapsed.Minutes,
                    watch.Elapsed.Seconds);
                
                RunResult resultDict = CalculateAvg(listDictionary);
                RunResult resultSortDict = CalculateAvg(listSortedDictionary);
                RunResult resultHash = CalculateAvg(listHashtable);
                RunResult resultSortList = CalculateAvg(listSorderList);
                
                RunResult min = 
                    new RunResult{
                        MemoryUsed = Math.Min(Math.Min(Math.Min(resultDict.MemoryUsed, resultSortDict.MemoryUsed),resultHash.MemoryUsed),resultSortList.MemoryUsed), 
                        InsertTicks = Math.Min(Math.Min(Math.Min(resultDict.InsertTicks, resultSortDict.InsertTicks), resultHash.InsertTicks), resultSortList.InsertTicks), 
                        SearchTicks = Math.Min(Math.Min(Math.Min(resultDict.SearchTicks, resultSortDict.SearchTicks), resultHash.SearchTicks), resultSortList.SearchTicks),
                        ForEachTicks = Math.Min(Math.Min(Math.Min(resultDict.ForEachTicks, resultSortDict.ForEachTicks), resultHash.ForEachTicks), resultSortList.ForEachTicks)
                    }; 
                
                // print the results
                PrintResults(resultDict, listDictionary.Count, min, "Dictionary");
                PrintResults(resultSortDict, listSortedDictionary.Count, min, "SortedDictionary");
                PrintResults(resultHash, listHashtable.Count, min, "Hashtable");
                PrintResults(resultSortList, listSorderList.Count, min, "SortedList");

            // TRY ENDS HERE ----------

            }catch(Exception ex){
                Msg("{0}", ex);
            }
        }
        
        private static RunResult CalculateAvg(List<RunResult> list){
            decimal sumMemory = 0;
            decimal sumInsert = 0;
            decimal sumSearch = 0;
            decimal sumForEach = 0;
            for(int i = 0; i < list.Count; i++){
                RunResult curr = list[i];
                sumMemory += curr.MemoryUsed;
                sumInsert += curr.InsertTicks;
                sumSearch += curr.SearchTicks;
                sumForEach += curr.ForEachTicks;
                // uncomment to print each line
                //Msg("{0,11} {1,13} {2,14}", 
                    //curr.MemoryUsed, curr.InsertTicks, curr.SearchTicks);
            }
            return new RunResult{
                      MemoryUsed = sumMemory / list.Count, 
                      InsertTicks = sumInsert / list.Count, 
                      SearchTicks = sumSearch / list.Count,
                      ForEachTicks = sumForEach / list.Count,
                    };
        }

        private static void PrintResults(RunResult result, int count, RunResult min, string name){
            Msg("--------- Results for {0}", name);
            Msg("# Tests {0}", count);
            Msg("Memory Used    Insert Ticks    Search Ticks    ForEach Ticks");
            Msg("Average Values:");
            Msg("{0,11:N} {1,13:N} {2,14:N} {3,14:N}", 
                result.MemoryUsed, 
                result.InsertTicks, 
                result.SearchTicks, 
                result.ForEachTicks);
            Msg("Performance Coefficient:");
            Msg("{0,11:N} {1,13:N} {2,14:N} {3,14:N}", 
                min.MemoryUsed/result.MemoryUsed, 
                min.InsertTicks/result.InsertTicks, 
                min.SearchTicks/result.SearchTicks, 
                min.ForEachTicks/result.ForEachTicks);
            Msg("");
        }

        private static void Msg(string name, params object[] args){
            Console.WriteLine(name, args);
        }

        private static RunResult Test(string name, IDictionary dict, int n){
            Console.Clear();
            Msg("Currently executing test {1} of {2} for {0} object", 
                name, n + 1, NumberTests);
            RunResult rr = new RunResult();
            Stopwatch watch;
            Random rand = new Random( );
            long memoryStart = System.GC.GetTotalMemory(true);
            long insertTicksSum = 0;
            for(int i = 0; i < NumberInsertedKeys; i++){
                string key = GetRandomLetter(rand, i)+"_key"+i;
                string value = "value"+i;
                
                watch = Stopwatch.StartNew();
                dict.Add(key, value);
                watch.Stop();
                
                insertTicksSum += watch.ElapsedTicks;                
            }
            rr.MemoryUsed = System.GC.GetTotalMemory(true) - memoryStart;
            

            rr.InsertTicks = insertTicksSum;

            watch = Stopwatch.StartNew();
            object searchResult = dict["C_key"+SearchIndex];
            watch.Stop();
            
            rr.SearchTicks = watch.ElapsedTicks;
            
            watch = Stopwatch.StartNew();
            foreach(var curr in dict){}
            watch.Stop();

            rr.ForEachTicks = watch.ElapsedTicks;

            return rr;
        }

        private static string GetRandomLetter(Random rand, int i){
            if(i == SearchIndex){
                return "C";
            }
            return Letters[rand.Next(0, 10)];
        }

    }

}

The computer used for the test has the following characteristics:

Processor: Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00 GHz
Memory (RAM): 2.00 GB
System type: 32-bit Operating System
Windows Vista

This is the raw data. Memory Used is in bytes, and the insert, search and looping times are in ticks:

8000 tests 50 collection entries
Time taken (minutes): 0.324318411666667 or about 0 minutes and 19 seconds
--------- Results for Dictionary
# Tests 2063
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
   4,748.48      3,019.60         126.40         198.21
Performance Coefficient:
       0.73          0.92           0.83           0.80

--------- Results for SortedDictionary
# Tests 2063
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
   4,345.04      5,000.54         193.05         377.99
Performance Coefficient:
       0.80          0.55           0.55           0.42

--------- Results for Hashtable
# Tests 2063
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
   4,096.38      2,775.27         105.36         209.91
Performance Coefficient:
       0.85          1.00           1.00           0.75

--------- Results for SortedList
# Tests 2063
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
   3,488.50      4,350.48         167.89         158.41
Performance Coefficient:
       1.00          0.64           0.63           1.00

8000 tests 500 collection entries
Time taken (minutes): 1.26410570333333 or about 1 minutes and 15 seconds
--------- Results for Dictionary
# Tests 2158
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
  53,353.52     28,345.86         123.94       1,051.16
Performance Coefficient:
       0.73          0.96           0.92           0.71

--------- Results for SortedDictionary
# Tests 2158
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
  48,949.40     67,007.68         232.22       2,562.93
Performance Coefficient:
       0.80          0.41           0.49           0.29

--------- Results for Hashtable
# Tests 2158
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
  48,053.46     27,341.10         114.07       1,303.74
Performance Coefficient:
       0.81          1.00           1.00           0.57

--------- Results for SortedList
# Tests 2158
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
  39,077.27     54,255.06         209.31         744.81
Performance Coefficient:
       1.00          0.50           0.54           1.00

8000 tests 5000 collection entries
Time taken (minutes): 11.14550681 or about 11 minutes and 8 seconds
--------- Results for Dictionary
# Tests 1578
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
 527,352.00    274,339.83         171.81       9,724.85
Performance Coefficient:
       0.80          0.98           0.85           0.72

--------- Results for SortedDictionary
# Tests 1578
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
 498,944.56    853,251.69         288.00      22,777.03
Performance Coefficient:
       0.85          0.32           0.50           0.31

--------- Results for Hashtable
# Tests 1578
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
 480,048.17    268,945.01         145.19      11,911.69
Performance Coefficient:
       0.88          1.00           1.00           0.59

--------- Results for SortedList
# Tests 1578
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
 424,512.12    913,408.34         247.98       7,011.65
Performance Coefficient:
       1.00          0.29           0.59           1.00

8000 tests 50000 collection entries
Time taken (minutes): 299.723793791667 or about 4 hours, 59 minutes and 43 seconds
--------- Results for Dictionary
# Tests 1574
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
5,427,604.19  2,818,130.50         302.05     100,556.45
Performance Coefficient:
       0.82          1.00           0.87           0.73

--------- Results for SortedDictionary
# Tests 1574
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
5,318,944.00 10,506,892.94         525.14     294,219.13
Performance Coefficient:
       0.84          0.27           0.50           0.25

--------- Results for Hashtable
# Tests 1574
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
5,005,100.00  2,867,786.62         263.39     111,733.71
Performance Coefficient:
       0.89          0.98           1.00           0.66

--------- Results for SortedList
# Tests 1574
Memory Used    Insert Ticks    Search Ticks    ForEach Ticks
Average Values:
4,443,275.91 82,146,134.48         493.06      73,798.76
Performance Coefficient:
       1.00          0.03           0.53           1.00

 

Differences between Hashtable and Dictionary

Dictionary:

  • Dictionary throws an exception (KeyNotFoundException) if we try to read a key which does not exist.
  • Dictionary is faster than a Hashtable for value types because there is no boxing and unboxing.
  • Dictionary is a generic type, which means we can use it with any data type.

Hashtable:

  • Hashtable returns null if we try to read a key which does not exist.
  • Hashtable is slower than Dictionary for value types because it requires boxing and unboxing.
  • Hashtable is not a generic type, so keys and values are stored as Object.

If you care about always reading objects back in the order they were inserted, note that a Dictionary does not guarantee that; you may have a look at:

  • OrderedDictionary – values can be accessed via an integer index (in the order in which items were added)
  • SortedDictionary – items are automatically sorted by key

MSDN Article: “The Dictionary class has the same functionality as the Hashtable class. A Dictionary of a specific type (other than Object) has better performance than a Hashtable for value types because the elements of Hashtable are of type Object and, therefore, boxing and unboxing typically occur if storing or retrieving a value type”. Link: http://msdn.microsoft.com/en-us/library/4yh14awz(v=vs.90).aspx

Dictionary is faster than Hashtable because it is a strongly typed generic type. Hashtable is slower because it stores keys and values as object, which leads to boxing and unboxing.
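A small sketch showing both differences in code (the keys and values are illustrative):

using System;
using System.Collections;
using System.Collections.Generic;

class HashtableVsDictionary
{
    static void Main()
    {
        // Hashtable stores keys and values as object, so the int values are boxed.
        Hashtable table = new Hashtable();
        table.Add("a", 1);
        int fromTable = (int)table["a"];    // cast required: unboxing
        object missing = table["no-such"];  // missing key: returns null

        // Dictionary<TKey, TValue> is strongly typed: no boxing for the int values.
        Dictionary<string, int> dict = new Dictionary<string, int>();
        dict.Add("a", 1);
        int fromDict = dict["a"];           // no cast, no unboxing
        // int oops = dict["no-such"];      // missing key: throws KeyNotFoundException

        Console.WriteLine("{0} {1} {2}", fromTable, missing == null, fromDict);
    }
}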

 

Keeping domain models and view models separate with ASP.NET MVC and WCF

The location, role and responsibility of objects within a software system is a common topic in the forums with plenty of disagreement about what is and isn’t correct. But first, let’s just define some commonly found objects and their roles:

Domain repository entity: Object which is an in-memory representation of a record persisted in some sort of backing data store. In most business scenarios, a repository entity will be mapped to a physical database table via an ORM tool such as NHibernate or the Microsoft Entity Framework. 

Domain model entity: An object which lives in the service layer and typically contains both logic and/or data. This is where your business rules should be defined. Often, a domain model entity and a domain repository entity will be the same thing. In other words, an object which contains business logic and is also mapped to a data store (e.g. an Order domain entity may be mapped to an Order database table, but also contains business logic to calculate the total cost of the order, including tax). This arrangement allows data to be processed by business code and persisted via the same object.

Data contract: This is a DTO (data transfer object) which is decorated with the WCF DataContract attribute. It is essentially a 'dumb' object containing just data and no logic, and is used to pass data across service boundaries via serialisation. Data contracts will usually be mapped to domain models to copy data from one to the other.

View model: Another type of DTO but limited to just the MVC UI layer. View model objects are used to pass data between MVC controller action methods and the views which display and capture model data.

Passing data between services and the web UI
So with these definitions in mind, it is OK (and necessary) for data contracts which are defined in your WCF service layer to be passed to and from your MVC web application. However for this to be possible, the MVC web app must have access to the data contract type declarations. The way you achieve this is to always have your WCF service implementation and WCF service & data contracts in separate assemblies. Then from your MVC web app, you can reference the project/assembly containing the service & data contracts (but do not reference the service implementation assembly).  Now your MVC web app can happily use data contracts defined in the service layer, and will still compile.

However it’s not OK for domain entities to be passed across the same service boundaries because then you blur the line between what should be a simple data transfer object and something containing logic which should never be directly exposed to your MVC views and controllers. Ignore this at your peril otherwise you will end up with business logic contaminating your UI layer which will lead to no end of problems in the future and result in a lot of refactoring and maintenance.

But how do you keep them apart?  Well from a coding point of view, one approach is that only classes that have absolutely no business logic should be decorated with DataContract and DataMember attributes. This prevents serialisation of domain entities, meaning that they can't accidentally or intentionally be used as a parameter in a service contract. Another more basic check is to make sure that the web UI project never references an assembly containing domain entities (sounds obvious but I've seen it happen). This will keep them safely isolated from the UI. However from a physical point of view, the simple answer is that you can't absolutely guarantee this won't happen. It only takes one developer to unwittingly do the wrong thing and the rot starts to set in.

So if you can’t physically keep them apart, what can you do to contain the problem?  Well you have to rely on some fundamental development techniques which have been around for years: discipline, team communication and code reviews. OK I’m not going to win any awards for innovation here, but building a reliable system is more than just writing a lot of tests and then assuming everything’s OK. You have to enforce design rules and best practices which the development team sticks to, and the appropriate use of domain entities, view models, data contracts, etc. is all part of that.

Passing data within the UI
Any application with an MVC web client and WCF services will reach a point where types defined in the service layer and types defined in the web UI meet, and in most cases that will happen in your MVC controllers. But for most scenarios, that’s the only time they will share the same space. It will also help if you give data contracts and view models different names. View model names typically should reflect the view they relate to, and I usually add a ‘Model’ suffix to them for clarity (but it’s down to personal preference how you do it). However if you are using the SvcUtil tool to generate service proxies, I recommend you specify a namespace so that it’s clear which models are defined in the service layer (see this post for generating service proxies).
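For example, if you generate the proxies with SvcUtil you can supply a CLR namespace on the command line (the service URL and namespace here are only illustrative):

svcutil http://localhost:8080/ProductService.svc?wsdl /namespace:*,MyApp.ServiceProxies /out:ProductServiceProxy.cs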

In a typical case where you need to get data from a view model object into a data contract object so that it can be sent to a service, all you have to do is map the properties between the two via an object initialiser, in a constructor, or using a mapping tool such as AutoMapper, although AutoMapper can be quite hungry on memory resources, so be aware of this when you decide what to use. Writing your own mapping code is trivial (see the sketch below), so why use anything else?
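A hand-written mapping of that kind might look like the following sketch. CustomerModel and CustomerDataContract are hypothetical types invented for this example, not classes from the article:

// Hypothetical view model used by an MVC view.
public class CustomerModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Hypothetical WCF data contract (in the real service layer it would be
// decorated with [DataContract] and [DataMember] attributes).
public class CustomerDataContract
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class CustomerMappings
{
    // Trivial hand-written mapping via an object initialiser; no mapping library required.
    public static CustomerDataContract ToDataContract(this CustomerModel model)
    {
        return new CustomerDataContract
        {
            Id = model.Id,
            Name = model.Name,
            Email = model.Email
        };
    }
}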

There's a bone of contention about whether data contracts defined in the service layer should be used as view models. Personally I don't have a problem with it, and actually prefer to have my view model types declared as data contracts in the service layer rather than in the MVC app. This is so I only have to define data annotation validation rules once, rather than defining them against data contracts in the service layer and then again for view models in the MVC application. This makes unit/integration testing easier and reduces the chance of a property not being validated properly. As a result, data contracts declared in the service layer are passed directly into my views (so there are very few actual view models declared in the MVC app).

However you may not like this and prefer to have separate, dedicated view models, because it avoids the situation where views end up being strongly typed to classes defined in a different layer of your architecture. How you do it is entirely down to personal choice.

Original article by Phil Munro

Validating with a Service Layer

Original article by Stephen Walther, March 2, 2009

http://www.asp.net/mvc/tutorials/older-versions/models-(data)/validating-with-a-service-layer-cs

Learn how to move your validation logic out of your controller actions and into a separate service layer. In this tutorial, Stephen Walther explains how you can maintain a sharp separation of concerns by isolating your service layer from your controller layer.

The goal of this tutorial is to describe one method of performing validation in an ASP.NET MVC application. In this tutorial, you learn how to move your validation logic out of your controllers and into a separate service layer.

Separating Concerns

When you build an ASP.NET MVC application, you should not place your database logic inside your controller actions. Mixing your database and controller logic makes your application more difficult to maintain over time. The recommendation is that you place all of your database logic in a separate repository layer.

For example, Listing 1 contains a simple repository named the ProductRepository. The product repository contains all of the data access code for the application. The listing also includes the IProductRepository interface that the product repository implements.

Listing 1 — Models\ProductRepository.cs

using System.Collections.Generic;
using System.Linq;

namespace MvcApplication1.Models
{
    public class ProductRepository : MvcApplication1.Models.IProductRepository
    {
        private ProductDBEntities _entities = new ProductDBEntities();


        public IEnumerable<Product> ListProducts()
        {
            return _entities.ProductSet.ToList();
        }


        public bool CreateProduct(Product productToCreate)
        {
            try
            {
                _entities.AddToProductSet(productToCreate);
                _entities.SaveChanges();
                return true;
            }
            catch
            {
                return false;
            }
        }


    }

    public interface IProductRepository
    {
        bool CreateProduct(Product productToCreate);
        IEnumerable<Product> ListProducts();
    }


}

The controller in Listing 2 uses the repository layer in both its Index() and Create() actions. Notice that this controller does not contain any database logic. Creating a repository layer enables you to maintain a clean separation of concerns. Controllers are responsible for application flow control logic and the repository is responsible for data access logic.

Listing 2 – Controllers\ProductController.cs

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        private IProductRepository _repository;

        public ProductController():
            this(new ProductRepository()) {}


        public ProductController(IProductRepository repository)
        {
            _repository = repository;
        }


        public ActionResult Index()
        {
            return View(_repository.ListProducts());
        }


        //
        // GET: /Product/Create

        public ActionResult Create()
        {
            return View();
        }

        //
        // POST: /Product/Create

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create([Bind(Exclude="Id")] Product productToCreate)
        {
            _repository.CreateProduct(productToCreate);
            return RedirectToAction("Index");
        }


    }
}

Creating a Service Layer

So, application flow control logic belongs in a controller and data access logic belongs in a repository. In that case, where do you put your validation logic? One option is to place your validation logic in a service layer.

A service layer is an additional layer in an ASP.NET MVC application that mediates communication between a controller and repository layer. The service layer contains business logic. In particular, it contains validation logic.

For example, the product service layer in Listing 3 has a CreateProduct() method. The CreateProduct() method calls the ValidateProduct() method to validate a new product before passing the product to the product repository.

Listing 3 – Models\ProductService.cs

using System.Collections.Generic;
using System.Web.Mvc;

namespace MvcApplication1.Models
{
    public class ProductService : MvcApplication1.Models.IProductService
    {

        private ModelStateDictionary _modelState;
        private IProductRepository _repository;

        public ProductService(ModelStateDictionary modelState, IProductRepository repository)
        {
            _modelState = modelState;
            _repository = repository;
        }

        protected bool ValidateProduct(Product productToValidate)
        {
            if (productToValidate.Name.Trim().Length == 0)
                _modelState.AddModelError("Name", "Name is required.");
            if (productToValidate.Description.Trim().Length == 0)
                _modelState.AddModelError("Description", "Description is required.");
            if (productToValidate.UnitsInStock < 0)
                _modelState.AddModelError("UnitsInStock", "Units in stock cannot be less than zero.");
            return _modelState.IsValid;
        }

        public IEnumerable<Product> ListProducts()
        {
            return _repository.ListProducts();
        }

        public bool CreateProduct(Product productToCreate)
        {
            // Validation logic
            if (!ValidateProduct(productToCreate))
                return false;

            // Database logic
            try
            {
                _repository.CreateProduct(productToCreate);
            }
            catch
            {
                return false;
            }
            return true;
        }


    }

    public interface IProductService
    {
        bool CreateProduct(Product productToCreate);
        IEnumerable<Product> ListProducts();
    }
}

The Product controller has been updated in Listing 4 to use the service layer instead of the repository layer. The controller layer talks to the service layer. The service layer talks to the repository layer. Each layer has a separate responsibility.

Listing 4 – Controllers\ProductController.cs

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        private IProductService _service;

        public ProductController()
        {
            _service = new ProductService(this.ModelState, new ProductRepository());
        }

        public ProductController(IProductService service)
        {
            _service = service;
        }


        public ActionResult Index()
        {
            return View(_service.ListProducts());
        }


        //
        // GET: /Product/Create

        public ActionResult Create()
        {
            return View();
        }

        //
        // POST: /Product/Create

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create([Bind(Exclude = "Id")] Product productToCreate)
        {
            if (!_service.CreateProduct(productToCreate))
                return View();
            return RedirectToAction("Index");
        }


    }
}

Notice that the product service is created in the product controller constructor. When the product service is created, the model state dictionary is passed to the service. The product service uses model state to pass validation error messages back to the controller.

Decoupling the Service Layer

We have failed to isolate the controller and service layers in one respect. The controller and service layers communicate through model state. In other words, the service layer has a dependency on a particular feature of the ASP.NET MVC framework.

We want to isolate the service layer from our controller layer as much as possible. In theory, we should be able to use the service layer with any type of application and not only an ASP.NET MVC application. For example, in the future, we might want to build a WPF front-end for our application. We should find a way to remove the dependency on ASP.NET MVC model state from our service layer.

In Listing 5, the service layer has been updated so that it no longer uses model state. Instead, it uses any class that implements the IValidationDictionary interface.

Listing 5 – Models\ProductService.cs (decoupled)

using System.Collections.Generic;

namespace MvcApplication1.Models
{
    public class ProductService : IProductService
    {

        private IValidationDictionary _validatonDictionary;
        private IProductRepository _repository;

        public ProductService(IValidationDictionary validationDictionary, IProductRepository repository)
        {
            _validatonDictionary = validationDictionary;
            _repository = repository;
        }

        protected bool ValidateProduct(Product productToValidate)
        {
            if (productToValidate.Name.Trim().Length == 0)
                _validatonDictionary.AddError("Name", "Name is required.");
            if (productToValidate.Description.Trim().Length == 0)
                _validatonDictionary.AddError("Description", "Description is required.");
            if (productToValidate.UnitsInStock < 0)
                _validatonDictionary.AddError("UnitsInStock", "Units in stock cannot be less than zero.");
            return _validatonDictionary.IsValid;
        }

        public IEnumerable<Product> ListProducts()
        {
            return _repository.ListProducts();
        }

        public bool CreateProduct(Product productToCreate)
        {
            // Validation logic
            if (!ValidateProduct(productToCreate))
                return false;

            // Database logic
            try
            {
                _repository.CreateProduct(productToCreate);
            }
            catch
            {
                return false;
            }
            return true;
        }


    }

    public interface IProductService
    {
        bool CreateProduct(Product productToCreate);
        System.Collections.Generic.IEnumerable<Product> ListProducts();
    }
}

The IValidationDictionary interface is defined in Listing 6. This simple interface has a single method and a single property.

Listing 6 – Models\IValidationDictionary.cs

namespace MvcApplication1.Models
{
    public interface IValidationDictionary
    {
        void AddError(string key, string errorMessage);
        bool IsValid { get; }
    }
}

The class in Listing 7, named the ModelStateWrapper class, implements the IValidationDictionary interface. You can instantiate the ModelStateWrapper class by passing a model state dictionary to the constructor.

Listing 7 – Models\ModelStateWrapper.cs

using System.Web.Mvc;

namespace MvcApplication1.Models
{
    public class ModelStateWrapper : IValidationDictionary
    {

        private ModelStateDictionary _modelState;

        public ModelStateWrapper(ModelStateDictionary modelState)
        {
            _modelState = modelState;
        }

        #region IValidationDictionary Members

        public void AddError(string key, string errorMessage)
        {
            _modelState.AddModelError(key, errorMessage);
        }

        public bool IsValid
        {
            get { return _modelState.IsValid; }
        }

        #endregion
    }
}

Finally, the updated controller in Listing 8 uses the ModelStateWrapper when creating the service layer in its constructor.

Listing 8 – Controllers\ProductController.cs

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        private IProductService _service;

        public ProductController()
        {
            _service = new ProductService(new ModelStateWrapper(this.ModelState), new ProductRepository());
        }

        public ProductController(IProductService service)
        {
            _service = service;
        }


        public ActionResult Index()
        {
            return View(_service.ListProducts());
        }


        //
        // GET: /Product/Create

        public ActionResult Create()
        {
            return View();
        }

        //
        // POST: /Product/Create

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create([Bind(Exclude = "Id")] Product productToCreate)
        {
            if (!_service.CreateProduct(productToCreate))
                return View();
            return RedirectToAction("Index");
        }


    }
}

Using the IValidationDictionary interface and the ModelStateWrapper class enables us to completely isolate our service layer from our controller layer. The service layer is no longer dependent on model state. You can pass any class that implements the IValidationDictionary interface to the service layer. For example, a WPF application might implement the IValidationDictionary interface with a simple collection class.
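For instance, a non-MVC client could satisfy the interface with a plain collection-backed class along these lines (a hypothetical sketch, not part of the original tutorial):

using System.Collections.Generic;

namespace MvcApplication1.Models
{
    // Simple collection-backed implementation of IValidationDictionary that a
    // WPF (or any non-MVC) client could pass to ProductService.
    public class SimpleValidationDictionary : IValidationDictionary
    {
        private readonly Dictionary<string, string> _errors = new Dictionary<string, string>();

        public void AddError(string key, string errorMessage)
        {
            _errors[key] = errorMessage;
        }

        public bool IsValid
        {
            get { return _errors.Count == 0; }
        }

        // Convenience member so the client UI can display the collected errors.
        public IDictionary<string, string> Errors
        {
            get { return _errors; }
        }
    }
}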

Programmatically check in a file to TFS using C#

Original article at: http://blogs.msdn.com/b/buckh/archive/2012/03/10/team-foundation-version-control-client-api-example-for-tfs-2010-and-newer.aspx

Over six years ago, I posted a sample on how to use the version control API.  The API changed in TFS 2010, but I hadn’t updated the sample.  Here is a version that works with 2010 and newer and is a little less aggressive on clean up in the finally block.

This is a really simple example that uses the version control API.  It shows how to create a workspace, pend changes, check in those changes, and hook up some important event listeners.  This sample doesn’t do anything useful, but it should get you going.

You have to supply a Team Project as an argument.

The only real difference in this version is that it uses the TeamFoundationServer constructor (in beta 3, you were forced to use the factory class).

You’ll need to add references to the following TFS assemblies to compile this example.

Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.Client.dll

 

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Text;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

namespace BasicSccExample
{
    class Example
    {
        static void Main(string[] args)
        {
            // Verify that we have the arguments we require.
            if (args.Length < 2)
            {
                String appName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);
                Console.Error.WriteLine("Usage: {0} collectionURL teamProjectPath", appName);
                Console.Error.WriteLine("Example: {0} http://tfsserver:8080/tfs/DefaultCollection $/MyProject", appName);
                Environment.Exit(1);
            }

            // Get a reference to our Team Foundation Server.
            TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(args[0]));

            // Get a reference to Version Control.
            VersionControlServer versionControl = tpc.GetService<VersionControlServer>();

            // Listen for the Source Control events.
            versionControl.NonFatalError += Example.OnNonFatalError;
            versionControl.Getting += Example.OnGetting;
            versionControl.BeforeCheckinPendingChange += Example.OnBeforeCheckinPendingChange;
            versionControl.NewPendingChange += Example.OnNewPendingChange;

            // Create a workspace.
            Workspace workspace = versionControl.CreateWorkspace("BasicSccExample", versionControl.AuthorizedUser);

            String topDir = null;

            try
            {
                String localDir = @"c:\temp\BasicSccExample";
                Console.WriteLine("\r\n--- Create a mapping: {0} -> {1}", args[1], localDir);
                workspace.Map(args[1], localDir);

                Console.WriteLine("\r\n--- Get the files from the repository.\r\n");
                workspace.Get();

                Console.WriteLine("\r\n--- Create a file.");
                topDir = Path.Combine(workspace.Folders[0].LocalItem, "sub");
                Directory.CreateDirectory(topDir);
                String fileName = Path.Combine(topDir, "basic.cs");
                using (StreamWriter sw = new StreamWriter(fileName))
                {
                    sw.WriteLine("revision 1 of basic.cs");
                }

                Console.WriteLine("\r\n--- Now add everything.\r\n");
                workspace.PendAdd(topDir, true);

                Console.WriteLine("\r\n--- Show our pending changes.\r\n");
                PendingChange[] pendingChanges = workspace.GetPendingChanges();
                Console.WriteLine("  Your current pending changes:");
                foreach (PendingChange pendingChange in pendingChanges)
                {
                    Console.WriteLine("    path: " + pendingChange.LocalItem +
                                      ", change: " + PendingChange.GetLocalizedStringForChangeType(pendingChange.ChangeType));
                }

                Console.WriteLine("\r\n--- Checkin the items we added.\r\n");
                int changesetNumber = workspace.CheckIn(pendingChanges, "Sample changes");
                Console.WriteLine("  Checked in changeset " + changesetNumber);

                Console.WriteLine("\r\n--- Checkout and modify the file.\r\n");
                workspace.PendEdit(fileName);
                using (StreamWriter sw = new StreamWriter(fileName))
                {
                    sw.WriteLine("revision 2 of basic.cs");
                }

                Console.WriteLine("\r\n--- Get the pending change and check in the new revision.\r\n");
                pendingChanges = workspace.GetPendingChanges();
                changesetNumber = workspace.CheckIn(pendingChanges, "Modified basic.cs");
                Console.WriteLine("  Checked in changeset " + changesetNumber);
            }
            finally
            {
                if (topDir != null)
                {
                    Console.WriteLine("\r\n--- Delete all of the items under the test project.\r\n");
                    workspace.PendDelete(topDir, RecursionType.Full);
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    if (pendingChanges.Length > 0)
                    {
                        workspace.CheckIn(pendingChanges, "Clean up!");
                    }

                    Console.WriteLine("\r\n--- Delete the workspace.");
                    workspace.Delete();
                }
            }
        }

        internal static void OnNonFatalError(Object sender, ExceptionEventArgs e)
        {
            if (e.Exception != null)
            {
                Console.Error.WriteLine("  Non-fatal exception: " + e.Exception.Message);
            }
            else
            {
                Console.Error.WriteLine("  Non-fatal failure: " + e.Failure.Message);
            }
        }

        internal static void OnGetting(Object sender, GettingEventArgs e)
        {
            Console.WriteLine("  Getting: " + e.TargetLocalItem + ", status: " + e.Status);
        }

        internal static void OnBeforeCheckinPendingChange(Object sender, ProcessingChangeEventArgs e)
        {
            Console.WriteLine("  Checking in " + e.PendingChange.LocalItem);
        }

        internal static void OnNewPendingChange(Object sender, PendingChangeEventArgs e)
        {
            Console.WriteLine("  Pending " + PendingChange.GetLocalizedStringForChangeType(e.PendingChange.ChangeType) +
                              " on " + e.PendingChange.LocalItem);
        }
    }
}