Monthly Archives: April 2016

ui-router ui-sref

ui-sref

directive in module ui.router.state

Description

A directive that binds a link (<a> tag) to a state. If the state has an associated URL, the directive will automatically generate & update the href attribute via the $state.href() method. Clicking the link will trigger a state transition with optional parameters.

Also middle-clicking, right-clicking, and ctrl-clicking on the link will be handled natively by the browser.

You can also use relative state paths within ui-sref, just like the relative paths passed to $state.go(). You just need to be aware that the path is relative to the state that the link lives in, in other words the state that loaded the template containing the link.

You can specify options to pass to $state.go() using the ui-sref-opts attribute. Options are restricted to location, inherit, and reload.

Dependencies

Usage

as attribute

<ANY ui-sref="{string}"
     ui-sref-opts="{Object}">
</ANY>

Parameters

Param          Type     Details
ui-sref        string   'stateName' can be any valid absolute or relative state
ui-sref-opts   Object   options to pass to $state.go()

Example

Here’s an example of how you’d use ui-sref and how it would compile. If you have the following template:

<a ui-sref="home">Home</a> | <a ui-sref="about">About</a> | <a ui-sref="{page: 2}">Next page</a>

<ul>
    <li ng-repeat="contact in contacts">
        <a ui-sref="contacts.detail({ id: contact.id })">{{ contact.name }}</a>
    </li>
</ul>

Then the compiled html would be (assuming Html5Mode is off and current state is contacts):

<a href="#/home" ui-sref="home">Home</a> | <a href="#/about" ui-sref="about">About</a> | <a href="#/contacts?page=2" ui-sref="{page: 2}">Next page</a>

<ul>
    <li ng-repeat="contact in contacts">
        <a href="#/contacts/1" ui-sref="contacts.detail({ id: contact.id })">Joe</a>
    </li>
    <li ng-repeat="contact in contacts">
        <a href="#/contacts/2" ui-sref="contacts.detail({ id: contact.id })">Alice</a>
    </li>
    <li ng-repeat="contact in contacts">
        <a href="#/contacts/3" ui-sref="contacts.detail({ id: contact.id })">Bob</a>
    </li>
</ul>

<a ui-sref="home" ui-sref-opts="{reload: true}">Home</a>

HTTP Status Codes

Each class of status code is described below, including a description of which method(s) it can follow and any metainformation required in the response.

1xx – Informational

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. There are no required headers for this class of status code. Since HTTP/1.0 did not define any 1xx status codes, servers MUST NOT send a 1xx response to an HTTP/1.0 client except under experimental conditions.

A client MUST be prepared to accept one or more 1xx status responses prior to a regular response, even if the client does not expect a 100 (Continue) status message. Unexpected 1xx status responses MAY be ignored by a user agent.

Proxies MUST forward 1xx responses, unless the connection between the proxy and its client has been closed, or unless the proxy itself requested the generation of the 1xx response. (For example, if a proxy adds a “Expect: 100-continue” field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).)

2xx – Successful

This class of status code indicates that the client’s request was successfully received, understood, and accepted.

3xx – Redirection

This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request.  The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.

Note: previous versions of this specification recommended a maximum of five redirections. Content developers should be aware that there might be clients that implement such a fixed limitation.

4xx – Client Error

The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user.

If the client is sending data, a server implementation using TCP SHOULD be careful to ensure that the client acknowledges receipt of the packet(s) containing the response, before the server closes the input connection. If the client continues sending data to the server after the close, the server’s TCP stack will send a reset packet to the client, which may erase the client’s unacknowledged input buffers before they can be read and interpreted by the HTTP application.

5xx – Server Error

Response status codes beginning with the digit “5” indicate cases in which the server is aware that it has erred or is incapable of performing the request. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. User agents SHOULD display any included entity to the user. These response codes are applicable to any request method.

Binding & posting to a Dictionary in MVC 3 using jQuery

Binding, posting to a Dictionary in MVC 3 using jQuery ajax

In this post we are going to discuss how to post data to a Dictionary using jQuery ajax. Let's take the simple model from the previous post and pass values to the action method as a dictionary of that model; this gives us the following controller method.

[HttpPost]
public ActionResult Binding_posting_to_a_dictionary_using_ajax(Dictionary<string, UserModel> userList)
{
    return PartialView("Success");
}
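
The UserModel itself comes from the previous post; a minimal sketch that matches the fields bound below (FirstName, LastName, Age) would look something like this (the original class may of course contain more members):

public class UserModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}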

To bind to a dictionary, the default model binder expects the posted data in a particular format. If we consider the key of the first element of the dictionary to be firstkey, then each element needs an input named [0].key holding the key and inputs named [0].value.ModelPropertyName holding the model's property values. So, for the first item we can give the names like this:

<input name="[0].key" value="firstkey" type="hidden">
<input name="[0].value.FirstName" value="" type="text">
<input name="[0].value.LastName" value="" type="text">
<input name="[0].value.Age" value="" type="text">

Looking at the code block above, there is a hidden field for maintaining the key of each dictionary element, and the value of that hidden field is nothing but the key of the dictionary element.

<div class="data">
    <h4>
        First User</h4>
    <input type="hidden" name="[0].key" value="first" />
    First Name: @Html.TextBox("[0].value.FirstName")
    Last Name: @Html.TextBox("[0].value.LastName")
    Age: @Html.TextBox("[0].value.Age")
</div>
<div class="data">
    <h4>
        Second User</h4>
    <input type="hidden" name="[1].key" value="second" />
    First Name: @Html.TextBox("[1].value.FirstName")
    Last Name: @Html.TextBox("[1].value.LastName")
    Age: @Html.TextBox("[1].value.Age")
</div>
<div class="data">
    <h4>
        Third User</h4>
    <input type="hidden" name="[2].key" value="third" />
    First Name: @Html.TextBox("[2].value.FirstName")
    Last Name: @Html.TextBox("[2].value.LastName")
    Age: @Html.TextBox("[2].value.Age")
</div>
<input type="button" id="submitData" value="Submit data" />
<script type="text/javascript">
    $(document).ready(function () {
        $("#submitData").click(function () {
            var datatopost = new Object();
            $(".data").each(function (i, item) {
                datatopost[$(item).find("input[name*=FirstName]").attr("name")] = $(item).find("input[name*=FirstName]").val();
                datatopost[$(item).find("input[name*=LastName]").attr("name")] = $(item).find("input[name*=LastName]").val();
                datatopost[$(item).find("input[name*=Age]").attr("name")] = $(item).find("input[name*=Age]").val();
                datatopost[$(item).find("input[name*=key]").attr("name")] = $(item).find("input[name*=key]").val();
            });
            $.ajax({
                url: '@Url.Action("Binding_posting_to_a_dictionary_using_ajax")',
                type: 'POST',
                traditional: true,
                data: datatopost,
                // the action returns a partial view (HTML), so don't ask jQuery for JSON
                dataType: "html",
                success: function (response) {
                    alert(response);
                },
                error: function (xhr) {
                    alert(xhr.status);
                }
            });
        });
    });
</script>

MICROSERVICES

Introduction To Microservices

Microservice architecture, or simply microservices, is a distinctive method of developing software systems that has grown in popularity in recent years.  In fact, even though there isn’t a whole lot out there on what it is and how to do it, for many developers it has become a preferred way of creating enterprise applications.  Thanks to its scalability, this architectural method is considered particularly ideal when you have to enable support for a range of platforms and devices—spanning web, mobile, Internet of Things, and wearables—or simply when you’re not sure what kind of devices you’ll need to support in an increasingly cloudy future.

While there is no standard, formal definition of microservices, there are certain characteristics that help us identify the style.  Essentially, microservice architecture is a method of developing software applications as a suite of independently deployable, small, modular services in which each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal.

How the services communicate with each other depends on your application’s requirements, but many developers use HTTP/REST with JSON or Protobuf.  DevOps professionals are, of course, free to choose any communication protocol they deem suitable, but in most situations, REST (Representational State Transfer) is a useful integration method because of its comparatively lower complexity over other protocols.

To begin to understand microservices architecture, it helps to consider its opposite: the monolithic architectural style.  Unlike microservices, a monolith application is always built as a single, autonomous unit.  In a client-server model, the server-side application is a monolith that handles the HTTP requests, executes logic, and retrieves/updates the data in the underlying database.  The problem with a monolithic architecture, though, is that all change cycles usually end up being tied to one another.  A modification made to a small section of an application might require building and deploying an entirely new version.  If you need to scale specific functions of an application, you may have to scale the entire application instead of just the desired components.  This is where creating microservices can come to the rescue.

SOA vs. Microservices

“Wait a minute,” some of you may be murmuring over your morning coffee, “isn’t this just another name for SOA?”  Service-Oriented Architecture (SOA) sprung up during the first few years of this century, and microservice architecture (abbreviated by some as MSA) bears a number of similarities.  Traditional SOA, however, is a broader framework and can mean a wide variety of things.  Some microservices advocates reject the SOA tag altogether, while others consider microservices to be simply an ideal, refined form of SOA.  In any event, we think there are clear enough differences to justify a distinct “microservice” concept (at least as a special form of SOA, as we’ll illustrate later).

The typical SOA model, for example, usually has more dependent ESBs, with microservices using faster messaging mechanisms.  SOA also focuses on imperative programming, whereas microservices architecture focuses on a responsive-actor programming style.  Moreover, SOA models tend to have an outsized relational database, while microservices frequently use NoSQL or micro-SQL databases (which can be connected to conventional databases).  But the real difference has to do with the architecture methods used to arrive at an integrated set of services in the first place.

Since everything changes in the digital world, agile development techniques that can keep up with the demands of software evolution are invaluable.  Most of the practices used in microservices architecture come from developers who have created software applications for large enterprise organizations, and who know that today’s end users expect dynamic yet consistent experiences across a wide range of devices.  Scalable, adaptable, modular, and quickly accessible cloud-based applications are in high demand.  And this has led many developers to change their approach.

Examples of Microservices

As Martin Fowler points out, Netflix, eBay, Amazon, the UK Government Digital Service, realestate.com.au, Forward, Twitter, PayPal, Gilt, Bluemix, Soundcloud, The Guardian, and many other large-scale websites and applications have all evolved from monolithic to microservices architecture.

Netflix has a widespread architecture that has evolved from monolithic to SOA.  It receives more than one billion calls every day, from more than 800 different types of devices, to its streaming-video API.  Each API call then prompts around five additional calls to the backend service.

Amazon has also migrated to microservices.  They get countless calls from a variety of applications—including applications that manage the web service API as well as the website itself—which would have been simply impossible for their old, two-tiered architecture to handle.

The auction site eBay is yet another example that has gone through the same transition.  Their core application comprises several autonomous applications, with each one executing the business logic for different function areas.

Understanding Microservice Architecture

Just as there is no formal definition of the term microservices, there’s no standard model that you’ll see represented in every system based on this architectural style.  But you can expect most microservice systems to share a few notable characteristics.

First, software built as microservices can, by definition, be broken down into multiple component services.  Why?  So that each of these services can be deployed, tweaked, and then redeployed independently without compromising the integrity of an application.  As a result, you might only need to change one or more distinct services instead of having to redeploy entire applications.  But this approach does have its downsides, including expensive remote calls (instead of in-process calls), coarser-grained remote APIs, and increased complexity when redistributing responsibilities between components.

Second, the microservices style is usually organized around business capabilities and priorities.  Unlike a traditional monolithic development approach—where different teams each have a specific focus on, say, UIs, databases, technology layers, or server-side logic—microservice architecture utilizes cross-functional teams.  The responsibilities of each team are to make specific products based on one or more individual services communicating via message bus.  That means that when changes are required, there won’t necessarily be any reason for the project, as a whole, to take more time or for developers to have to wait for budgetary approval before individual services can be improved.  Most development methods focus on projects: a piece of code that has to offer some predefined business value, must be handed over to the client, and is then periodically maintained by a team.  But in microservices, a team owns the product for its lifetime, as in Amazon’s oft-quoted maxim “You build it, you run it.”

Third, microservices act somewhat like the classical UNIX system: they receive requests, process them, and generate a response accordingly.  This is opposite to how many other products such as ESBs (Enterprise Service Buses) work, where high-tech systems for message routing, choreography, and applying business rules are utilized.  You could say that microservices have smart endpoints that process info and apply logic, and dumb pipes through which the info flows.

Fourth, since microservices involve a variety of technologies and platforms, old-school methods of centralized governance aren’t optimal.  Decentralized governance is favored by the microservices community because its developers strive to produce useful tools that can then be used by others to solve the same problems.  A practical example of this is Netflix—the service responsible for about 30% of traffic on the web.  The company encourages its developers to save time by always using code libraries established by others, while also giving them the freedom to flirt with alternative solutions when needed.  Just like decentralized governance, microservice architecture also favors decentralized data management.  Monolithic systems use a single logical database across different applications.  In a microservice application, each service usually manages its unique database.

Fifth, like a well-rounded child, microservices are designed to cope with failure.  Since several unique and diverse services are communicating together, it’s quite possible that a service could fail, for one reason or another (e.g., when the supplier isn’t available).  In these instances, the client should allow its neighboring services to function while it bows out in as graceful a manner as possible.  For obvious reasons, this requirement adds more complexity to microservices as compared to monolithic systems architecture.

Finally, microservices architecture is an evolutionary design and, again, is ideal for evolutionary systems where you can’t fully anticipate the types of devices that may one day be accessing your application.  This is because the style’s practitioners see decomposition as a powerful tool that gives them control over application development.  A good instance of this scenario could be seen with The Guardian’s website (prior to the late 2014 redesign).  The core application was initially based on monolithic architecture, but as several unforeseen requirements surfaced, instead of revamping the entire app the developers used microservices that interact over an older monolithic architecture through APIs.

To sum up: Microservice architecture uses services to componentize and is usually organized around business capabilities; focuses on products instead of projects; has smart end points but not-so-smart info flow mechanisms; uses decentralized governance as well as decentralized data management; is designed to accommodate service interruptions; and, last but not least, is an evolutionary model.

Custom AngularJS Directives

In the first post in this series I introduced custom directives in AngularJS and showed a few simple examples of getting started. In this post we’re going to explore the topic of Isolate Scope and see how important it is when building directives.

What is Isolate Scope?

Directives have access to the parent scope by default in AngularJS applications. For example, the following directive relies on the parent scope to write out a customer object’s name and street properties:

 

angular.module('directivesModule').directive('mySharedScope', function () {
    return {
        template: 'Name: {{customer.name}} Street: {{customer.street}}'
    };
});

Although this code gets the job done, you have to know a lot about the parent scope in order to use the directive and could just as easily use ng-include and an HTML template to accomplish the same thing (this was discussed in the first post). If the parent scope changes at all the directive is no longer useful.

If you want to make a reusable directive you can't rely on the parent scope and must use something called Isolate Scope instead. Here's a diagram that compares shared scope with isolate scope:

 

[Diagram: shared scope vs. isolate scope]

 

Looking at the diagram you can see that shared scope allows the parent scope to flow down into the directive. Isolate scope on the other hand doesn’t work that way. With isolate scope it’s as if you’re creating a wall around your directive that can’t be penetrated by the parent scope. Here’s a (silly) visual of that concept just in case you need further clarification:

[Image: a wall separating the directive's isolate scope from the parent scope]

Creating Isolate Scope in a Directive

Isolating the scope in a directive is a simple process. Start by adding a scope property into the directive as shown next. This automatically isolates the directive's scope from any parent scope(s).

angular.module('directivesModule').directive('myIsolatedScope', function () {
    return {
        scope: {},
        template: 'Name: {{customer.name}} Street: {{customer.street}}'
    };
});

Now that the scope is isolated, the customer object from the parent scope will no longer be accessible. When the directive is used in a view it’ll result in the following output (notice that the customer name and street values aren’t rendered):

Name: Street:

Since the directive is now completely cut off from the parent scope, how do we pass data into the directive for data binding purposes? You use the @, =, and & characters, which seems a bit strange at first glance but isn't too bad once you understand what the characters represent. Let's take a look at how these characters can be used in directives.

 

Introducing Local Scope Properties

Directives that use isolate scope provide 3 different options to interact with the outside world (the world on the other side of the wall). The 3 options are referred to as Local Scope Properties and can be defined using the @, =, and & characters mentioned earlier. Here’s how they work.

@ Local Scope Property

The @ local scope property is used to access string values that are defined outside the directive. For example, a controller may define a name property on the $scope object and you need to get access to that property within the directive. To do that, you can use @ within the directive's scope property. Here's a high level example of that concept with a step-by-step explanation:

[Diagram: passing a string value into a directive's isolate scope with @]

  1. A controller defines $scope.name.
  2. The $scope.name property value needs to be passed into the directive.
  3. The directive creates a custom local scope property within its isolate scope named name (note that the property can be named anything and doesn't have to match the $scope object's property name). This is done using scope: { name: '@' }.
  4. The @ character tells the directive that the value passed into the new name property will be accessed as a string. If the outside value changes the name property in the directive’s isolate scope will automatically be updated.
  5. The template within the directive can now bind to the isolate scope’s name property.

Here’s an example of putting all of the steps together. Assume that the following controller is defined within an app:

 

var app = angular.module('directivesModule', []);

app.controller('CustomersController', ['$scope', function ($scope) {
    var counter = 0;
    $scope.customer = {
        name: 'David',
        street: '1234 Anywhere St.'
    };
            
    $scope.customers = [
        {
            name: 'David',
            street: '1234 Anywhere St.'
        },
        {
            name: 'Tina',
            street: '1800 Crest St.'
        },
        {
            name: 'Michelle',
            street: '890 Main St.'
        }
    ];

    $scope.addCustomer = function () {
        counter++;
        $scope.customers.push({
            name: 'New Customer' + counter,
            street: counter + ' Cedar Point St.'
        });
    };

    $scope.changeData = function () {
        counter++;
        $scope.customer = {
            name: 'James',
            street: counter + ' Cedar Point St.'
        };
    };
}]);

The directive code creates an isolate scope that allows a name property to be bound to a value that is passed in from the “outside world”:

angular.module('directivesModule').directive('myIsolatedScopeWithName', function () {
    return {
        scope: {
            name: '@'
        },
        template: 'Name: {{ name }}'
    };
});

The directive can be used in the following way:

<div my-isolated-scope-with-name name="{{ customer.name }}"></div>

Notice how the $scope.customer.name value from the controller is bound to the name property of the directive’s isolate scope. The content rendered by the directive is shown next:

Name: David

As mentioned earlier, if the $scope.customer.name value changes, the directive will automatically pick up the change. However, if the directive changes its name property, the outside value in $scope.customer.name will not change in this case. If you need to keep properties in-sync check out the = local scope property that’s covered next.

It's important to note that if you want the local scope property name to be different from the attribute name used when the directive is defined in a view, you can use the following alternate syntax:

angular.module('directivesModule').directive('myIsolatedScopeWithName', function () {
    return {
        scope: {
            name: '@someOtherName'
        },
        template: 'Name: {{ name }}'
    };
});

In this case the name property will be used internally within the directive, while someOtherName (written as some-other-name in the view, since AngularJS normalizes attribute names) will be used by the consumer of the directive to pass data into it:

<div my-isolated-scope-with-name some-other-name="{{ customer.name }}"></div>

I prefer to keep local scope property names the same as the attributes defined on the directive in a view so I don’t typically use this approach. However, there may be situations where you need this flexibility. It’s available when using @, =, and & to define local scope properties.

= Local Scope Property

The @ character works well for accessing a string value that is passed into a directive as shown in the previous example. However, it won’t keep changes made in the directive in-sync with the external/outer scope. In cases where you need to create a two-way binding between the outer scope and the directive’s isolate scope you can use the = character. Here’s a high level example of that concept with a step-by-step explanation:

 

[Diagram: two-way binding between the outer scope and the directive's isolate scope with =]

 

  1. A controller defines a $scope.person object.
  2. The $scope.person object needs to be passed into the directive in a way that creates a two-way binding.
  3. The directive creates a custom local scope property within its isolate scope named customer. This is done using scope: { customer: '=' }.
  4. The = character tells the directive that the object passed into the customer property should be bound using a two-way binding. If the outside property value changes then the directive's customer property should automatically be updated. If the directive's customer property changes then the object in the external scope should automatically be updated.
  5. The template within the directive can now bind to the isolate scope’s customer property.

Here’s an example of a directive that uses the = local scope property to define a property in the isolate scope.

angular.module('directivesModule').directive('myIsolatedScopeWithModel', function () {
    return {
        scope: {
            customer: '=' //Two-way data binding
        },
        template: '<ul><li ng-repeat="prop in customer">{{ prop }}</li></ul>'
    };
});

In this example the directive takes the object provided to the customer property and iterates through all of the properties in the object using ng-repeat. It then writes out the property values using <li> elements.

To pass data into the directive the following code can be used:

<div my-isolated-scope-with-model customer="customer"></div>

Notice that with the = local scope property you don’t pass {{ customer }} as with @. You instead pass the name of the property directly. In this example the customer object passed into the directive’s customer local scope property comes from the controller shown earlier. The directive iterates through all of the properties in the customer object using ng-repeat and writes them out. This yields the following output:

  • David
  • 1234 Anywhere St.

& Local Scope Property

At this point you've seen how to use local scope properties to pass values into a directive as strings (@) and how to bind to external objects using a two-way data binding technique (=). The final local scope property uses the & character and is used to bind to external functions.

The & local scope property allows the consumer of a directive to pass in a function that the directive can invoke. For example, let's assume you're writing a directive, and when the end user clicks a button that the directive emits, you want to notify the controller. You can't hard-code the click function to use in the directive since the controller would never know that anything happened. Raising an event could get the job done (using $emit or $broadcast) but the controller would have to know the specific event name to listen for, which isn't optimal either.

A better approach would be to let the consumer of the directive pass in a function that the directive can invoke as it needs. Whenever the directive does useful work (such as detecting when a user clicked on a button) it can then invoke the function that was passed into it. That way the consumer of the directive has 100% control over what happens and the directive simply delegates control to the function that was passed in. Here's a high-level example of that concept with a step-by-step explanation:

[Diagram: passing a function into a directive's isolate scope with &]

  1. A controller defines a $scope.click function.
  2. The $scope.click function needs to be passed into the directive so that the directive can invoke it when a button is clicked.
  3. The directive creates a custom local scope property within its isolate scope named action. This is done using scope: { action: '&' }. In this example action is really just an alias for click. When action is invoked, the click function that was passed in will be called.
  4. The & character is basically saying, “Hey, pass me a function that I can invoke when something happens inside of the directive!”.
  5. The template within the directive may contain a button. When the button is clicked the action (which is really a reference to the external function that was passed in) can then be invoked.

Here’s an example of the & local scope property in action:

angular.module('directivesModule').directive('myIsolatedScopeWithModelAndFunction', function () {
    return {
        scope: {
            datasource: '=',
            action: '&'
        },
        template: '<ul><li ng-repeat="prop in datasource">{{ prop }}</li></ul> ' +
                  '<button ng-click="action()">Change Data</button>'
    };
});

Notice that the following code from the directive’s template refers to the action local scope property and invokes it (since it’s simply a reference to a function) as the button is clicked. This delegates control to whatever function was passed to action.

<button ng-click="action()">Change Data</button>

Here’s an example of using the directive. I’d of course recommend picking a shorter name for your “real life” directives.

<div my-isolated-scope-with-model-and-function 
     datasource="customer" 
     action="changeData()">
</div>

The changeData() function passed into the action property is defined in the controller shown at the beginning of this post. It changes the name and address when invoked:

$scope.changeData = function () {
      counter++;
      $scope.customer = {
          name: 'James',
          street: counter + ' Cedar Point St.'
      };
};

Conclusion

At this point in the custom AngularJS directives series you’ve seen several of the key aspects available in directives such as templates, isolate scope, and local scope properties. As a review, isolate scope is created in a directive by using the scope property and assigning it an object literal. Three types of local scope properties can be added into isolate scope including:

  • @ – Used to pass a string value into the directive
  • = – Used to create a two-way binding to an object that is passed into the directive
  • & – Allows an external function to be passed into the directive and invoked

LINQ to Entities does not recognize the method 'System.DateTime AddDays(Double)' method, and this method cannot be translated into a store expression.

Some of the operations you can perform in code don't translate (at least not cleanly) to SQL; you'll need to move certain operations around so they translate better.
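
For context, here is a minimal sketch of the query shape that produces this exception (reusing the context.Attendances and LoginDate names from the working example below). The AddDays call sits inside the expression tree, so the Entity Framework provider tries, and fails, to translate it to SQL:

//Throws NotSupportedException: AddDays cannot be translated into a store expression.
var recentLogins = context.Attendances
                          .Where(a => a.LoginDate > DateTime.Now.Date.AddDays(-7))
                          .Count();

Computing the cut-off date ahead of time, outside the query, leaves only a plain date comparison for the provider to translate: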

//Figure out the day you want in advance
var sevenDaysAgo = DateTime.Now.Date.AddDays(-7);
var results = users.Select(user => new {
     user.Key,
     user.Value,
     Count = users.Join(context.Attendances,
                        u => u.Key,
                        attendance => attendance.EmployeeId,
                        (u, attendance) => attendance.LoginDate)
                  .Where(date => date > sevenDaysAgo)
                  .Select(date => date.Day)
                  .Distinct()
                  .Count()
});

foreach (var result in results)
{        
    _attendanceTable.Rows.Add(result.Key, result.Value, result.Count);
}

Mocks vs Stubs

Style

Mocks vs Stubs = Behavioral testing vs State testing

Principle

According to the principle of Test only one thing per test, there may be several stubs in one test, but generally there is only one mock.

Lifecycle

Test lifecycle with stubs:

  1. Setup – Prepare the object that is being tested and its stub collaborators.
  2. Exercise – Test the functionality.
  3. Verify state – Use asserts to check object’s state.
  4. Teardown – Clean up resources.
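
Here is a minimal state-based sketch of that lifecycle in C#, using NUnit-style attributes; the IPriceService, OrderCalculator, and PriceServiceStub types are made up purely for illustration:

using NUnit.Framework;

// Hypothetical collaborator and class under test, defined here only for the example.
public interface IPriceService
{
    decimal GetPrice(string productId);
}

public class OrderCalculator
{
    private readonly IPriceService _prices;
    public OrderCalculator(IPriceService prices) { _prices = prices; }

    public decimal Total(string[] productIds)
    {
        decimal total = 0m;
        foreach (var id in productIds)
            total += _prices.GetPrice(id);
        return total;
    }
}

// Stub: supplies canned data so the object under test can be exercised; it records nothing.
public class PriceServiceStub : IPriceService
{
    public decimal GetPrice(string productId) { return 10m; }
}

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void Total_is_sum_of_item_prices()
    {
        // 1. Setup: the object under test plus its stub collaborator
        var calculator = new OrderCalculator(new PriceServiceStub());

        // 2. Exercise
        var total = calculator.Total(new[] { "a", "b" });

        // 3. Verify state: assert on the result, not on how the stub was used
        Assert.AreEqual(20m, total);

        // 4. Teardown: nothing to clean up here
    }
}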

Test lifecycle with mocks:

  1. Setup data – Prepare object that is being tested.
  2. Setup expectations – Prepare expectations on the mock that is used by the primary object.
  3. Exercise – Test the functionality.
  4. Verify expectations – Verify that the correct methods have been invoked on the mock.
  5. Verify state – Use asserts to check object’s state.
  6. Teardown – Clean up resources.
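
And a behaviour-based sketch of the mock lifecycle, again with hand-rolled, hypothetical types (IEmailSender, OrderProcessor, and EmailSenderMock are made up for illustration; a mocking library such as Moq could record the calls for you instead):

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical collaborator and class under test, defined here only for the example.
public interface IEmailSender
{
    void Send(string to, string subject);
}

public class OrderProcessor
{
    private readonly IEmailSender _mailer;
    public OrderProcessor(IEmailSender mailer) { _mailer = mailer; }

    public void Place(string customerEmail)
    {
        // ...order-handling logic would go here...
        _mailer.Send(customerEmail, "Order confirmation");
    }
}

// Hand-rolled mock: records the calls it receives so expectations can be verified later.
public class EmailSenderMock : IEmailSender
{
    public readonly List<string> Recipients = new List<string>();
    public void Send(string to, string subject) { Recipients.Add(to); }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Placing_an_order_sends_exactly_one_confirmation_mail()
    {
        // 1. Setup data / 2. Setup expectations: the mock will record what gets sent
        var mailer = new EmailSenderMock();
        var processor = new OrderProcessor(mailer);

        // 3. Exercise
        processor.Place("customer@example.com");

        // 4. Verify expectations: the correct method was invoked on the mock
        Assert.AreEqual(1, mailer.Recipients.Count);
        Assert.AreEqual("customer@example.com", mailer.Recipients[0]);

        // 5. Verify state / 6. Teardown: nothing further needed here
    }
}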

Liskov Substitution Principle with a good C# example

LSP is about following the contract of the base class.

You can, for instance, not throw new exception types in the subclasses, as code using the base class would not expect that. The same goes for argument handling: if the base class throws ArgumentNullException when an argument is missing but the subclass allows the argument to be null, that is also an LSP violation.

Here is an example of a class structure which violates LSP:

public interface IDuck
{
   void Swim();
   // contract says that IsSwimming should be true if Swim has been called.
   bool IsSwimming { get; }
}
public class OrganicDuck : IDuck
{
   bool _isSwimming;

   public void Swim()
   {
      //do something to swim
      _isSwimming = true;
   }

   public bool IsSwimming { get { return _isSwimming; } }
}

public class ElectricDuck : IDuck
{
   bool _isSwimming;

   public bool IsTurnedOn { get; private set; }

   public void TurnOn()
   {
      IsTurnedOn = true;
   }

   public void Swim()
   {
      if (!IsTurnedOn)
         return;

      _isSwimming = true;
      //swim logic
   }

   public bool IsSwimming { get { return _isSwimming; } }
}

And the calling code

void MakeDuckSwim(IDuck duck)
{
    duck.Swim();
}

As you can see, there are two examples of ducks: one organic duck and one electric duck. The electric duck can only swim if it's turned on. This breaks the LSP, since the electric duck must be turned on before it can swim, and IsSwimming (which is also part of the contract) won't be set the way callers of IDuck expect.

You can of course solve it by doing something like this

void MakeDuckSwim(IDuck duck)
{
    if (duck is ElectricDuck)
        ((ElectricDuck)duck).TurnOn();
    duck.Swim();
}

But that would break the Open/Closed Principle and would have to be implemented everywhere the ducks are used (and therefore still generates unstable code).

The proper solution would be to automatically turn on the duck in the Swim method, and by doing so make the electric duck behave exactly as defined by the IDuck interface.

The solution of turning on the duck inside the Swim method can have side effects when working with the actual implementation (ElectricDuck). But that can be solved by using an explicit interface implementation. In my opinion, you are more likely to get problems by NOT turning it on in Swim, since it's expected that the duck will swim when used through the IDuck interface.

EXAMPLE OF A VALID LISKOV SUBSTITUTION PRINCIPLE IMPLEMENTATION

public interface IDuck
{
    bool IsTurnedOn { get; set; }
    void Swim();
    void MakeDuckSwim();
}

public class Duck : IDuck
{
    public bool IsTurnedOn
    {
        get { return true; }
        set { /* an organic duck is always "on", so the value is ignored */ }
    }

    public void Swim()
    {
        //do something to swim
    }
    private void TurnOn() 
    {
        //do nothing already on
    }
    public void MakeDuckSwim() 
    {
        Swim();
    }
}

public class ElectricDuck : IDuck
{
    public bool IsTurnedOn {get; set;}

    public void Swim()
    {
        if (!IsTurnedOn)
            TurnOn();

        //swim logic  
    }
    private void TurnOn()
    {
        IsTurnedOn = true;
    }
    public void MakeDuckSwim() 
    {
        if (!IsTurnedOn)
            TurnOn();

        Swim();
    }       
}

Then this will always work and follows all SOLID principles:

    duck.MakeDuckSwim();
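
For example, the consuming code never has to care which kind of duck it receives; both implementations can be substituted through the interface without any type checks:

IDuck organicDuck = new Duck();
IDuck electricDuck = new ElectricDuck();

//Both calls behave as the IDuck contract promises; no 'is ElectricDuck' checks are needed.
organicDuck.MakeDuckSwim();
electricDuck.MakeDuckSwim();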