
HTTP Status Codes

Each Status-Code is described below, including a description of which method(s) it can follow and any metainformation required in the response.

1xx – Informational

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. There are no required headers for this class of status code. Since HTTP/1.0 did not define any 1xx status codes, servers MUST NOT send a 1xx response to an HTTP/1.0 client except under experimental conditions.

A client MUST be prepared to accept one or more 1xx status responses prior to a regular response, even if the client does not expect a 100 (Continue) status message. Unexpected 1xx status responses MAY be ignored by a user agent.

Proxies MUST forward 1xx responses, unless the connection between the proxy and its client has been closed, or unless the proxy itself requested the generation of the 1xx response. (For example, if a proxy adds a “Expect: 100-continue” field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).)

2xx – Successful

This class of status code indicates that the client’s request was successfully received, understood, and accepted.

3xx – Redirection

This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request.  The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.

Note: previous versions of this specification recommended a maximum of five redirections. Content developers should be aware that there might be clients that implement such a fixed limitation.

4xx – Client Error

The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user.

If the client is sending data, a server implementation using TCP SHOULD be careful to ensure that the client acknowledges receipt of the packet(s) containing the response, before the server closes the input connection. If the client continues sending data to the server after the close, the server’s TCP stack will send a reset packet to the client, which may erase the client’s unacknowledged input buffers before they can be read and interpreted by the HTTP application.

5xx – Server Error

Response status codes beginning with the digit “5” indicate cases in which the server is aware that it has erred or is incapable of performing the request. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. User agents SHOULD display any included entity to the user. These response codes are applicable to any request method.
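The grouping above is easy to capture in code. Here is a minimal C# sketch (illustrative only, not part of the specification) that maps a status code to its class by its first digit:

```csharp
using System;

// The class of a status code is determined by its first digit.
static string ClassOf(int statusCode) => (statusCode / 100) switch
{
    1 => "Informational",
    2 => "Successful",
    3 => "Redirection",
    4 => "Client Error",
    5 => "Server Error",
    _ => "Unknown"
};

Console.WriteLine(ClassOf(100)); // Informational
Console.WriteLine(ClassOf(404)); // Client Error
Console.WriteLine(ClassOf(503)); // Server Error
```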

Binding & posting to a Dictionary in MVC 3 using jQuery


In this post we are going to discuss how to post data to a Dictionary using jQuery ajax. Let's take the simple model from the previous post; passing values to the action method as a dictionary of that model, we get the following controller method.

[HttpPost]
public ActionResult Binding_posting_to_a_dictionary_using_ajax(Dictionary<string, UserModel> userList)
{
    return PartialView("Success");
}
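The UserModel class comes from the previous post and is not shown there; a minimal sketch consistent with the form field names used below would be (the property names are inferred from the inputs; everything else is an assumption):

```csharp
// Hypothetical model matching the input names FirstName, LastName, Age
// used in the form markup below.
public class UserModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}
```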

To bind to a dictionary, the default model binder expects the data in a particular format. If we consider the key of the first element of the dictionary to be firstkey, then we have to name the inputs [0].key for the key and [0].value.ModelPropertyName for each model property. So, for the first item we can give names like:

<input name="[0].key" value="firstkey" type="hidden">
<input name="[0].value.FirstName" value="" type="text">
<input name="[0].value.LastName" value="" type="text">
<input name="[0].value.Age" value="" type="text">

Looking at the code block above, we can see there is a hidden field for maintaining the key of each dictionary element, and the value of that hidden field is nothing but the key of the dictionary element:

<div class="data">
    <h4>
        First User</h4>
    <input type="hidden" name="[0].key" value="first" />
    First Name: @Html.TextBox("[0].value.FirstName")
    Last Name: @Html.TextBox("[0].value.LastName")
    Age: @Html.TextBox("[0].value.Age")
</div>
<div class="data">
    <h4>
        Second User</h4>
    <input type="hidden" name="[1].key" value="second" />
    First Name: @Html.TextBox("[1].value.FirstName")
    Last Name: @Html.TextBox("[1].value.LastName")
    Age: @Html.TextBox("[1].value.Age")
</div>
<div class="data">
    <h4>
        Third User</h4>
    <input type="hidden" name="[2].key" value="third" />
    First Name: @Html.TextBox("[2].value.FirstName")
    Last Name: @Html.TextBox("[2].value.LastName")
    Age: @Html.TextBox("[2].value.Age")
</div>
<input type="button" id="submitData" value="Submit data" />
<script type="text/javascript">
    $(document).ready(function () {
        $("#submitData").click(function () {
            var datatopost = new Object();
            $(".data").each(function (i, item) {
                datatopost[$(item).find("input[name*=FirstName]").attr("name")] = $(item).find("input[name*=FirstName]").val();
                datatopost[$(item).find("input[name*=LastName]").attr("name")] = $(item).find("input[name*=LastName]").val();
                datatopost[$(item).find("input[name*=Age]").attr("name")] = $(item).find("input[name*=Age]").val();
                datatopost[$(item).find("input[name*=key]").attr("name")] = $(item).find("input[name*=key]").val();
            });
            $.ajax({
                url: '@Url.Action("Binding_posting_to_a_dictionary_using_ajax")',
                type: 'POST',
                traditional: true,
                data: datatopost,
                dataType: "html", // the action returns a partial view (HTML), not JSON
                success: function (response) {
                    alert(response);
                },
                error: function (xhr) {
                    alert(xhr);
                }
            });
        });
    });
</script>

MICROSERVICES

Introduction To Microservices

Microservice architecture, or simply microservices, is a distinctive method of developing software systems that has grown in popularity in recent years.  In fact, even though there isn’t a whole lot out there on what it is and how to do it, for many developers it has become a preferred way of creating enterprise applications.  Thanks to its scalability, this architectural method is considered particularly ideal when you have to enable support for a range of platforms and devices—spanning web, mobile, Internet of Things, and wearables—or simply when you’re not sure what kind of devices you’ll need to support in an increasingly cloudy future.

While there is no standard, formal definition of microservices, there are certain characteristics that help us identify the style.  Essentially, microservice architecture is a method of developing software applications as a suite of independently deployable, small, modular services in which each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal.

How the services communicate with each other depends on your application’s requirements, but many developers use HTTP/REST with JSON or Protobuf.  DevOps professionals are, of course, free to choose any communication protocol they deem suitable, but in most situations, REST (Representational State Transfer) is a useful integration method because of its comparatively lower complexity over other protocols.

To begin to understand microservices architecture, it helps to consider its opposite: the monolithic architectural style.  Unlike microservices, a monolith application is always built as a single, autonomous unit.  In a client-server model, the server-side application is a monolith that handles the HTTP requests, executes logic, and retrieves/updates the data in the underlying database.  The problem with a monolithic architecture, though, is that all change cycles usually end up being tied to one another.  A modification made to a small section of an application might require building and deploying an entirely new version.  If you need to scale specific functions of an application, you may have to scale the entire application instead of just the desired components.  This is where creating microservices can come to the rescue.

SOA vs. Microservices

“Wait a minute,” some of you may be murmuring over your morning coffee, “isn’t this just another name for SOA?”  Service-Oriented Architecture (SOA) sprang up during the first few years of this century, and microservice architecture (abbreviated by some as MSA) bears a number of similarities.  Traditional SOA, however, is a broader framework and can mean a wide variety of things.  Some microservices advocates reject the SOA tag altogether, while others consider microservices to be simply an ideal, refined form of SOA.  In any event, we think there are clear enough differences to justify a distinct “microservice” concept (at least as a special form of SOA, as we’ll illustrate later).

The typical SOA model, for example, usually has more dependent ESBs, with microservices using faster messaging mechanisms.  SOA also focuses on imperative programming, whereas microservices architecture focuses on a responsive-actor programming style.  Moreover, SOA models tend to have an outsized relational database, while microservices frequently use NoSQL or micro-SQL databases (which can be connected to conventional databases).  But the real difference has to do with the architecture methods used to arrive at an integrated set of services in the first place.

Since everything changes in the digital world, agile development techniques that can keep up with the demands of software evolution are invaluable.  Most of the practices used in microservices architecture come from developers who have created software applications for large enterprise organizations, and who know that today’s end users expect dynamic yet consistent experiences across a wide range of devices.  Scalable, adaptable, modular, and quickly accessible cloud-based applications are in high demand.  And this has led many developers to change their approach.

Examples of Microservices

As Martin Fowler points out, Netflix, eBay, Amazon, the UK Government Digital Service, realestate.com.au, Forward, Twitter, PayPal, Gilt, Bluemix, Soundcloud, The Guardian, and many other large-scale websites and applications have all evolved from monolithic to microservices architecture.

Netflix has a widespread architecture that has evolved from monolithic to SOA.  It receives more than one billion calls every day, from more than 800 different types of devices, to its streaming-video API.  Each API call then prompts around five additional calls to the backend service.

Amazon has also migrated to microservices.  They get countless calls from a variety of applications—including applications that manage the web service API as well as the website itself—which would have been simply impossible for their old, two-tiered architecture to handle.

The auction site eBay is yet another example that has gone through the same transition.  Their core application comprises several autonomous applications, with each one executing the business logic for different function areas.

Understanding Microservice Architecture

Just as there is no formal definition of the term microservices, there’s no standard model that you’ll see represented in every system based on this architectural style.  But you can expect most microservice systems to share a few notable characteristics.

First, software built as microservices can, by definition, be broken down into multiple component services.  Why?  So that each of these services can be deployed, tweaked, and then redeployed independently without compromising the integrity of an application.  As a result, you might only need to change one or more distinct services instead of having to redeploy entire applications.  But this approach does have its downsides, including expensive remote calls (instead of in-process calls), coarser-grained remote APIs, and increased complexity when redistributing responsibilities between components.

Second, the microservices style is usually organized around business capabilities and priorities.  Unlike a traditional monolithic development approach—where different teams each have a specific focus on, say, UIs, databases, technology layers, or server-side logic—microservice architecture utilizes cross-functional teams.  The responsibilities of each team are to make specific products based on one or more individual services communicating via message bus.  That means that when changes are required, there won’t necessarily be any reason for the project, as a whole, to take more time or for developers to have to wait for budgetary approval before individual services can be improved.  Most development methods focus on projects: a piece of code that has to offer some predefined business value, must be handed over to the client, and is then periodically maintained by a team.  But in microservices, a team owns the product for its lifetime, as in Amazon’s oft-quoted maxim “You build it, you run it.”

Third, microservices act somewhat like the classical UNIX system: they receive requests, process them, and generate a response accordingly.  This is opposite to how many other products such as ESBs (Enterprise Service Buses) work, where high-tech systems for message routing, choreography, and applying business rules are utilized.  You could say that microservices have smart endpoints that process info and apply logic, and dumb pipes through which the info flows.

Fourth, since microservices involve a variety of technologies and platforms, old-school methods of centralized governance aren’t optimal.  Decentralized governance is favored by the microservices community because its developers strive to produce useful tools that can then be used by others to solve the same problems.  A practical example of this is Netflix—the service responsible for about 30% of traffic on the web.  The company encourages its developers to save time by always using code libraries established by others, while also giving them the freedom to flirt with alternative solutions when needed.  Just like decentralized governance, microservice architecture also favors decentralized data management.  Monolithic systems use a single logical database across different applications.  In a microservice application, each service usually manages its unique database.

Fifth, like a well-rounded child, microservices are designed to cope with failure.  Since several unique and diverse services are communicating together, it’s quite possible that a service could fail, for one reason or another (e.g., when the supplier isn’t available).  In these instances, the client should allow its neighboring services to function while it bows out in as graceful a manner as possible.  For obvious reasons, this requirement adds more complexity to microservices as compared to monolithic systems architecture.

Finally, microservices architecture is an evolutionary design and, again, is ideal for evolutionary systems where you can’t fully anticipate the types of devices that may one day be accessing your application.  This is because the style’s practitioners see decomposition as a powerful tool that gives them control over application development.  A good instance of this scenario could be seen with The Guardian’s website (prior to the late 2014 redesign).  The core application was initially based on monolithic architecture, but as several unforeseen requirements surfaced, instead of revamping the entire app the developers used microservices that interact over an older monolithic architecture through APIs.

To sum up: Microservice architecture uses services to componentize and is usually organized around business capabilities; focuses on products instead of projects; has smart end points but not-so-smart info flow mechanisms; uses decentralized governance as well as decentralized data management; is designed to accommodate service interruptions; and, last but not least, is an evolutionary model.

LINQ to Entities does not recognize the method ‘System.DateTime AddDays(Double)’ method, and this method cannot be translated into a store expression.


Some of the operations you can perform in code don't translate (at least not cleanly) to SQL, so you'll need to move certain operations around so that they translate better. Here, the cutoff date is computed before the query, which keeps the AddDays call out of the expression tree:

//Figure out the day you want in advance
var sevenDaysAgo = DateTime.Now.Date.AddDays(-7);
var results = users.Select(user => new {
     user.Key,
     user.Value,
     // the inner lambda parameter is named u: reusing "user" here
     // would not compile (CS0136, name already in scope)
     Count = users.Join(context.Attendances,
                        u => u.Key,
                        attendance => attendance.EmployeeId,
                        (u, attendance) => attendance.LoginDate)
                  .Where(date => date > sevenDaysAgo)
                  .Select(date => date.Day)
                  .Distinct()
                  .Count()
});

foreach (var result in results)
{        
    _attendanceTable.Rows.Add(result.Key, result.Value, result.Count);
}

Mocks vs Stubs

Style

Mocks vs Stubs = Behavioral testing vs State testing

Principle

According to the principle of Test only one thing per test, there may be several stubs in one test, but generally there is only one mock.

Lifecycle

Test lifecycle with stubs:

  1. Setup – Prepare the object that is being tested and its stub collaborators.
  2. Exercise – Test the functionality.
  3. Verify state – Use asserts to check object’s state.
  4. Teardown – Clean up resources.

Test lifecycle with mocks:

  1. Setup data – Prepare object that is being tested.
  2. Setup expectations – Prepare expectations in mock that is being used by primary object.
  3. Exercise – Test the functionality.
  4. Verify expectations – Verify that the correct methods have been invoked on the mock.
  5. Verify state – Use asserts to check object’s state.
  6. Teardown – Clean up resources.
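The two lifecycles can be made concrete with hand-rolled test doubles (no mocking framework; Greeter and IMailSender are invented for illustration). The stub test asserts only on state; the mock test verifies the expected interaction:

```csharp
using System;
using System.Collections.Generic;

public interface IMailSender { void Send(string to, string body); }

// The object under test (hypothetical).
public class Greeter
{
    private readonly IMailSender _sender;
    public Greeter(IMailSender sender) { _sender = sender; }
    public string LastGreeting { get; private set; }

    public void Greet(string name)
    {
        LastGreeting = "Hello, " + name;
        _sender.Send(name, LastGreeting);
    }
}

// Stub: an inert collaborator; the test then asserts on Greeter's state.
public class MailSenderStub : IMailSender
{
    public void Send(string to, string body) { /* do nothing */ }
}

// Mock: records calls so the test can verify expectations afterwards.
public class MailSenderMock : IMailSender
{
    public List<string> Calls = new List<string>();
    public void Send(string to, string body) { Calls.Add(to + ":" + body); }
}

public static class Demo
{
    public static void Main()
    {
        // State testing with a stub.
        var stubbed = new Greeter(new MailSenderStub());
        stubbed.Greet("Bob");
        Console.WriteLine(stubbed.LastGreeting == "Hello, Bob"); // True

        // Behavioral testing with a mock: verify the method was invoked.
        var mock = new MailSenderMock();
        new Greeter(mock).Greet("Alice");
        Console.WriteLine(mock.Calls.Count == 1); // True
    }
}
```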

Liskov Substitution Principle with a good C# example

LSP is about following the contract of the base class.

For instance, you cannot throw new exception types in the subclasses, as a caller using the base class would not expect that. The same goes for when the base class throws ArgumentNullException if an argument is missing but the subclass allows the argument to be null; that is also an LSP violation.

Here is an example of a class structure which violates LSP:

public interface IDuck
{
   void Swim();
   // contract says that IsSwimming should be true if Swim has been called.
   bool IsSwimming { get; }
}
public class OrganicDuck : IDuck
{
   public void Swim()
   {
      //do something to swim
      IsSwimming = true;
   }

   public bool IsSwimming { get; private set; }
}
public class ElectricDuck : IDuck
{
   bool _isSwimming;

   public void Swim()
   {
      if (!IsTurnedOn)
        return;

      _isSwimming = true;
      //swim logic  

   }

   public bool IsSwimming { get { return _isSwimming; } }
}

And the calling code

void MakeDuckSwim(IDuck duck)
{
    duck.Swim();
}

As you can see, there are two examples of ducks: one organic duck and one electric duck. The electric duck can only swim if it's turned on. This breaks the LSP, since the electric duck must be turned on to be able to swim, and IsSwimming (which is also part of the contract) won't be set as the contract promises.

You can of course solve it by doing something like this

void MakeDuckSwim(IDuck duck)
{
    if (duck is ElectricDuck)
        ((ElectricDuck)duck).TurnOn();
    duck.Swim();
}

But that would break the Open/Closed Principle and has to be implemented everywhere (and therefore still generates unstable code).

The proper solution would be to automatically turn on the duck in the Swim method, and by doing so make the electric duck behave exactly as defined by the IDuck interface.

The solution of turning on the duck inside the Swim method can have side effects when working with the actual implementation (ElectricDuck), but that can be solved by using an explicit interface implementation. In my opinion, it's more likely that you get problems by NOT turning it on in Swim, since it's expected that the duck will swim when used through the IDuck interface.
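One way to sketch that explicit-interface idea (with a reduced IDuck, not the author's exact code): callers going through the interface get the auto-turn-on behavior, while callers holding the concrete ElectricDuck get the strict one.

```csharp
public interface IDuck
{
    void Swim();
}

public class ElectricDuck : IDuck
{
    public bool IsTurnedOn { get; set; }

    // Callers working with the concrete ElectricDuck get the strict behavior.
    public void Swim()
    {
        if (!IsTurnedOn)
            return;
        // swim logic
    }

    // Callers working through IDuck get the auto-turn-on behavior,
    // so the interface contract ("calling Swim makes the duck swim") holds.
    void IDuck.Swim()
    {
        if (!IsTurnedOn)
            IsTurnedOn = true;
        Swim();
    }
}
```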

EXAMPLE OF A VALID LISKOV SUBSTITUTION PRINCIPLE IMPLEMENTATION

public interface IDuck
{
    bool IsTurnedOn { get; set; }
    void Swim();
    void MakeDuckSwim();
}

public class Duck : IDuck
{
    public bool IsTurnedOn
    {
        get { return true; }
        set { /* an organic duck is always "on"; ignore the value */ }
    }

    public void Swim()
    {
        //do something to swim
    }
    private void TurnOn() 
    {
        //do nothing already on
    }
    public void MakeDuckSwim() 
    {
        Swim();
    }
}

public class ElectricDuck : IDuck
{
    public bool IsTurnedOn {get; set;}

    public void Swim()
    {
        if (!IsTurnedOn)
            TurnOn();

        //swim logic  
    }
    private void TurnOn()
    {
        IsTurnedOn = true;
    }
    public void MakeDuckSwim() 
    {
        if (!IsTurnedOn)
            TurnOn();

        Swim();
    }       
}

Then this will always work and follows all SOLID principles:

    duck.MakeDuckSwim();

Injecting Repository into Custom Action Filter using Unity

Create a UnityFilterAttributeFilterProvider.cs class:
//https://github.com/jakubgarfield/Bonobo-Git-Server/blob/master/Bonobo.Git.Server/FilterProviders/UnityFilterAttributeFilterProvider.cs
public class UnityFilterAttributeFilterProvider : FilterAttributeFilterProvider
{
    private readonly IUnityContainer _container;

    public UnityFilterAttributeFilterProvider(IUnityContainer container)
    {
        _container = container;
    }

    protected override IEnumerable<FilterAttribute> GetControllerAttributes(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
        var attributes = base.GetControllerAttributes(controllerContext, actionDescriptor).ToList();
        foreach (var attribute in attributes)
        {
            _container.BuildUp(attribute.GetType(), attribute);
        }

        return attributes;
    }

    protected override IEnumerable<FilterAttribute> GetActionAttributes(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
        var attributes = base.GetActionAttributes(controllerContext, actionDescriptor).ToList();
        foreach (var attribute in attributes)
        {
            _container.BuildUp(attribute.GetType(), attribute);
        }

        return attributes;
    }
}

In Global.asax:
protected void Application_Start()
{
    // ...

    var oldProvider = FilterProviders.Providers.Single(
        f => f is FilterAttributeFilterProvider
    );
    FilterProviders.Providers.Remove(oldProvider);

    var container = new UnityContainer();
    var provider = new UnityFilterAttributeFilterProvider(container);
    FilterProviders.Providers.Add(provider);

    // ...
}

Since Unity is not instantiating the filters, it cannot inject them. You would have to resort to the service locator pattern, as in the following. Authenticate is the custom action filter where I am injecting the IAuthenticate and ILibrary repositories:
public class Authenticate : AuthorizeAttribute
{
    public IAuthenticate AuthenticateLibrary { get; private set; }

    public ILibrary BaseLibrary { get; private set; }

    public Authenticate()
    {
        AuthenticateLibrary = DependencyResolver.Current.GetService<IAuthenticate>();
        BaseLibrary = DependencyResolver.Current.GetService<ILibrary>();
    }
    ...
}

The Concurrent Dictionary: Multithreading

Converting a ‘foreach’ loop into a parallel foreach loop

Let’s say you have the following foreach loop, which sends an output of results to Grasshopper:

var results = new List<bool>(pts.Count); // capacity only; values are added below
foreach (Point3d pt in pts)
{
    bool isIntersecting = CheckIntersection(pt, shape);
    results.Add(isIntersecting);
}

DA.SetDataList(0, results);

We set up a list to take our results, we cycle through our points, do a calculation on each, and then save the result. When finished, we send the result as an output back to Grasshopper. Simple enough.

The best thing is to show you the parallelised version first, and then explain what's happening. Here's the parallel version of the above code:

var results = new System.Collections.Concurrent.ConcurrentDictionary<Point3d, bool>(Environment.ProcessorCount, pts.Count);
foreach (Point3d pt in pts) results[pt] = false; //initialise dictionary

Parallel.ForEach(pts, pt =>
{
    bool isIntersecting = CheckIntersection(pt, shape);
    results[pt] = isIntersecting; //save your output here
}
);

DA.SetDataList(0, results.Values); //send back to GH output

Going from ‘foreach’ to ‘Parallel.ForEach’

The Parallel.ForEach method is what does the bulk of the work for us in parallelising our foreach loop. Rather than waiting for each iteration to finish, as in the single-threaded version, Parallel.ForEach hands out iterations to the cores as they become free. It is very simple to use: we don't have to worry about dividing tasks between the processors, as it handles all of that for us.

Aside from the slightly different notation, the information we provide is the same when we set up the loop. We are reading from collection ‘pts’, and the current item we are reading within ‘pts’ we will call ‘pt’.

The concurrent dictionary

One real problem with multithreading the same task, though, is that sometimes two or more threads can attempt to overwrite the same piece of data. This creates data errors, which may or may not be silent. This is why we don't use a List for the multithreaded example: List provides no protection against multiple threads attempting to write to it at the same time.

Instead, we use the ConcurrentDictionary. The ConcurrentDictionary is a member of the Concurrent namespace, which provides a range of classes that allow us to hold and process data without having to worry about concurrent access problems. The ConcurrentDictionary works just about the same as regular Dictionaries, but for our purposes we can use it much like a list.

It is a little more complicated to set up, as you need to specify a key for each value in the dictionary, as well as the value itself. Rather than having a collection of items in an order that we can look up by index (the first item is 0, the second item is 1, and so on), in a dictionary we look up items by referring to a specific key. Here, I am using the points as the keys, and then saving each point's associated value (the result we are trying to calculate) as a bool.
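As a tiny standalone illustration of that key/value pattern (plain .NET types, no Grasshopper geometry), here is a ConcurrentDictionary being used much like the one in the loop above:

```csharp
using System;
using System.Collections.Concurrent;

var flags = new ConcurrentDictionary<string, bool>();

// The indexer reads and writes like a regular Dictionary, but is thread-safe.
flags["a"] = false;

// TryAdd only adds if the key is not present yet.
flags.TryAdd("b", true);

// Look up by key, not by index.
flags.TryGetValue("a", out bool value);
Console.WriteLine(value); // False
```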

How much faster is parallel computation?

Basic maths suggests that your code should speed up in proportion to the number of cores you have. With my 4-core machine, I might expect a 60-second single-threaded loop to complete in about 15 seconds.

However, there are extra overheads in parallel processing. Dividing up the work, waiting for processes to finish towards the end, and working with the concurrent dictionary, all take up processor power that would otherwise be used in computing the problem itself.

In reality, you will still likely get a substantial improvement compared to single-threaded performance. With 4 cores, I am finding I am getting a performance improvement of around 2.5 to 3.5 times. So a task that would normally take 60 seconds is now taking around 20 seconds.
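If you want to check the ratio on your own machine, timing both versions with a Stopwatch is enough. The workload below is an arbitrary maths loop I made up (not Grasshopper code), so your numbers will differ:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// An arbitrary CPU-bound workload to time.
static double Work(int n)
{
    double x = 0;
    for (int i = 1; i < 200_000; i++) x += Math.Sqrt(i + n);
    return x;
}

var sw = Stopwatch.StartNew();
for (int i = 0; i < 500; i++) Work(i);
Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

sw.Restart();
Parallel.For(0, 500, i => Work(i));
Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");
```

Dividing the first time by the second gives the speed-up factor for your core count.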

C# Multithreading Loop with Parallel.For or Parallel.ForEach

Loops are trivial and frequently used, and sometimes a loop is exactly what kills the performance of the whole application. I want the loop to perform faster, but what can I do?

Starting from .NET Framework 4 we can run loops in parallel using the Parallel.For method instead of a regular for, or Parallel.ForEach instead of a regular foreach. And the most beautiful thing is that the .NET Framework automatically manages the threads that service the loop!
But don't expect a performance boost every time you replace a for loop with Parallel.For, especially if your loop body is relatively simple and short. And keep in mind that the results will not arrive in their original order, because of the parallelism.

Now let’s see the code. Here is our regular loop:

for (int i = 1; i < 1000; i++)
{
    var result = HeavyFoo(i);
    Console.WriteLine("Line {0}, Result : {1} ", i, result);
}

And here is how we write it with Parallel.For:

Parallel.For(1, 1000, (i) =>
{
    var result = HeavyFoo(i);
    Console.WriteLine("Line {0}, Result : {1} ", i, result);
});

Same story with Parallel.ForEach:

foreach(string s in stringList)
{
    var result = HeavyFoo(s);
    Console.WriteLine("Initial string: {0}, Result : {1} ", s, result);
}

Parallel.ForEach(stringList, s =>
{
    var result = HeavyFoo(s);
    Console.WriteLine("Initial string: {0}, Result : {1} ", s, result);
});