The concurrent dictionary – multithreading

Convert a ‘foreach’ loop into a parallel foreach loop

Let’s say you have the following foreach loop, which sends an output of results to Grasshopper:

var results = new List<bool>(new bool[pts.Count]);
int i = 0;
foreach (Point3d pt in pts)
{
    bool isIntersecting = CheckIntersection(pt, shape);
    results[i++] = isIntersecting; //save the result
}

DA.SetDataList(0, results);

We set up a list to take our results, we cycle through our points, do a calculation on each, and then save the result. When finished, we send the result as an output back to Grasshopper. Simple enough.

The easiest thing to do is to show you how to parallelise this, and then explain what’s happening. Here’s the parallel version of the above code:

var results = new System.Collections.Concurrent.ConcurrentDictionary<Point3d, bool>(Environment.ProcessorCount, pts.Count);
foreach (Point3d pt in pts) results[pt] = false; //initialise dictionary

Parallel.ForEach(pts, pt =>
{
    bool isIntersecting = CheckIntersection(pt, shape);
    results[pt] = isIntersecting; //save your output here
});

DA.SetDataList(0, results.Values); //send back to GH output

Going from ‘foreach’ to ‘Parallel.ForEach’

The Parallel.ForEach method does the bulk of the work in parallelising our foreach loop. Rather than waiting for each iteration to finish, as in the single-threaded version, Parallel.ForEach hands out iterations to the cores as each becomes free. It is very simple – we don’t have to worry about dividing tasks between the processors, as it handles all of that for us.

Aside from the slightly different notation, the information we provide when we set up the loop is the same. We are reading from the collection ‘pts’, and we call the current item being read within ‘pts’ ‘pt’.

The concurrent dictionary

One real problem with multithreading the same task is that two or more threads can attempt to overwrite the same piece of data. This creates data errors, which may or may not be silent. This is why we don’t use a List for the multithreading example – List provides no protection against multiple threads attempting to write to it at the same time.

Instead, we use the ConcurrentDictionary. The ConcurrentDictionary is a member of the System.Collections.Concurrent namespace, which provides a range of classes that let us hold and process data without having to worry about concurrent access problems. The ConcurrentDictionary works much the same as a regular Dictionary, but for our purposes we can use it much like a list.

It is a little more complicated to set up, as you need to specify a key for each value in the dictionary, as well as the value itself. (I have used dictionaries once before here.) Rather than holding a collection of items in an order we can look up by index (the first item is 0, the second item is 1, and so on), a dictionary looks up items by referring to a specific ‘key’. Here, I am using the points as keys, and saving each point’s associated value (the result we are trying to calculate) as a bool.
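As a minimal, generic illustration of the key-based lookup (using strings as keys rather than points – the names here are just examples):

```csharp
using System.Collections.Concurrent;

var results = new ConcurrentDictionary<string, bool>();
results["A"] = false;      // the indexer adds the entry if the key is new...
results["A"] = true;       // ...or overwrites the value stored under that key
bool value = results["A"]; // look up by key, not by position
```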

How much faster is parallel computation?

Basic maths suggests that your code should speed up in proportion to the number of cores you have. With my 4-core machine, I might expect a 60-second single-threaded loop to complete in about 15 seconds.

However, there are extra overheads in parallel processing. Dividing up the work, waiting for processes to finish towards the end, and working with the concurrent dictionary, all take up processor power that would otherwise be used in computing the problem itself.

In reality, you will still likely get a substantial improvement over single-threaded performance. With 4 cores, I am finding a performance improvement of around 2.5 to 3.5 times. So a task that would normally take 60 seconds now takes around 20 seconds.
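If you want to measure the speed-up on your own machine, a rough sketch is to time both versions with a Stopwatch; here Thread.Sleep stands in for a real workload such as the intersection check:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var items = Enumerable.Range(0, 16).ToList();

var sw = Stopwatch.StartNew();
foreach (var i in items) Thread.Sleep(50);        // single-threaded stand-in workload
sw.Stop();
Console.WriteLine("Serial:   {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
Parallel.ForEach(items, i => Thread.Sleep(50));   // same work, spread over the cores
sw.Stop();
Console.WriteLine("Parallel: {0} ms", sw.ElapsedMilliseconds);
```

On a 4-core machine the parallel figure should come out at very roughly a quarter of the serial one, minus the overheads discussed above.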

C# Multithreading Loop with Parallel.For or Parallel.ForEach

Loops are trivial and frequently used constructs, and sometimes a loop ends up killing the performance of the whole application. I want the loop to perform faster, but what can I do?

Starting with .NET Framework 4, we can run loops in parallel using the Parallel.For method instead of a regular for, or Parallel.ForEach instead of a regular foreach. And the most beautiful thing is that the .NET Framework automatically manages the threads that service the loop!
But don’t expect a performance boost every time you replace a for loop with Parallel.For, especially if your loop is relatively simple and short. And keep in mind that results will not come out in order, because of the parallelism.

Now let’s see the code. Here is our regular loop:

for (int i = 1; i < 1000; i++)
{
    var result = HeavyFoo(i);
    Console.WriteLine("Line {0}, Result : {1} ", i, result);
}

And here is how we write it with Parallel.For:

Parallel.For(1, 1000, (i) =>
{
    var result = HeavyFoo(i);
    Console.WriteLine("Line {0}, Result : {1} ", i, result);
});

Same story with Parallel.ForEach:

foreach (string s in stringList)
{
    var result = HeavyFoo(s);
    Console.WriteLine("Initial string: {0}, Result : {1} ", s, result);
}

Parallel.ForEach(stringList, s =>
{
    var result = HeavyFoo(s);
    Console.WriteLine("Initial string {0}, Result : {1} ", s, result);
});

Token Based Authentication using ASP.NET Web API 2, Owin, and Identity

The demo application and the back-end API are both hosted on Microsoft Azure; for learning purposes, feel free to integrate and play with the back-end API from your own front-end application. The API supports CORS and accepts HTTP calls from any origin. You can check the source code for this tutorial on GitHub.

AngularJS Authentication


Token Based Authentication

As I stated before, we’ll use a token-based approach to implement authentication between the front-end application and the back-end API. As we all know, the common and old way to implement authentication is the cookie-based approach, where a cookie is sent with each request from the client to the server, and on the server it is used to identify the authenticated user.

With the evolution of front-end frameworks and the huge change in how we build web applications nowadays, the preferred approach is to authenticate users with a signed token, which is sent to the server with each request. Some of the benefits of this approach are:

  • Scalability of Servers: The token sent to the server is self-contained and holds all the user information needed for authentication, so adding more servers to your web farm is an easy task – there is no dependency on shared session stores.
  • Loose Coupling: Your front-end application is not coupled to a specific authentication mechanism; the token is generated by the server, and your API is built in a way that understands this token and does the authentication.
  • Mobile Friendly: Cookies and browsers like each other, but storing cookies on native platforms (Android, iOS, Windows Phone) is not a trivial task; having a standard way to authenticate users will simplify our life if we decide to consume the back-end API from native applications.

What we’ll build in this tutorial?

The front-end SPA will be built using HTML5, AngularJS, and Twitter Bootstrap. The back-end server will be built using ASP.NET Web API 2 on top of Owin middleware, not directly on top of ASP.NET; the reason for doing so is that we’ll configure the server to issue OAuth bearer tokens using Owin middleware too, so setting up everything in the same pipeline is the better approach. In addition, we’ll use the ASP.NET Identity system, which is built on top of Owin middleware, to register new users and validate their credentials before generating tokens.

As I mentioned before, our back-end API should accept requests coming from any origin, not only our front-end, so we’ll enable CORS (Cross-Origin Resource Sharing) in Web API as well as for the OAuth bearer token provider.

Use cases which will be covered in this application:

  • Allow users to sign up (register) by providing a username and password, then store the credentials in a secure medium.
  • Prevent anonymous users from viewing secured data or secured pages (views).
  • Once the user has logged in successfully, the system should not ask for credentials or re-authentication for the next 24 hours, because the token we issue is valid for 24 hours.

So in this post we’ll cover step by step how to build the back-end API, and in the next post we’ll cover how to build and integrate the SPA with the API.

Enough theory – let’s get our hands dirty and start implementing the API!

Building the Back-End API

Step 1: Creating the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET Framework 4.5. You can follow along using Visual Studio 2012, but you need to install Web Tools 2013.1 for VS 2012 by visiting this link.

Now create an empty solution and name it “AngularJSAuthentication”, then add a new ASP.NET Web application named “AngularJSAuthentication.API”. The selected template for the project is shown in the image below. Notice that authentication is set to “No Authentication”, as we’ll add it manually.

Web API Project Template

Step 2: Installing the needed NuGet Packages:

Now we need to install the NuGet packages needed to set up our Owin server and configure ASP.NET Web API to be hosted within it, so open the NuGet Package Manager Console and type the below:
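The exact commands are not reproduced in this copy; based on the package names referred to in this step and the next paragraph, they would be along these lines:

```powershell
Install-Package Microsoft.AspNet.WebApi.Owin
Install-Package Microsoft.Owin.Host.SystemWeb
```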

The package “Microsoft.Owin.Host.SystemWeb” is used to enable our Owin server to run our API on IIS using the ASP.NET request pipeline, as eventually we’ll host this API on Microsoft Azure Websites, which uses IIS.

Step 3: Add Owin “Startup” Class

Right-click on your project, then add a new class named “Startup”. We’ll visit this class many times and modify it; for now it will contain the code below:
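The snippet itself is not reproduced in this copy; a minimal version, reconstructed from the description in the following paragraphs (the namespace name is an assumption based on the project name), might look like this:

```csharp
using Microsoft.Owin;
using Owin;
using System.Web.Http;

// Tells the Owin host which class to fire on start-up
[assembly: OwinStartup(typeof(AngularJSAuthentication.API.Startup))]

namespace AngularJSAuthentication.API
{
    public class Startup
    {
        // "app" is supplied by the host at run-time and composes the Owin pipeline
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();
            WebApiConfig.Register(config);   // configure API routes
            app.UseWebApi(config);           // wire Web API into the Owin pipeline
        }
    }
}
```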

What we’ve implemented above is simple: this class is fired once our server starts. Notice the “assembly” attribute, which states which class to fire on start-up. The “Configuration” method accepts a parameter of type “IAppBuilder”, which is supplied by the host at run-time. This “app” parameter is an interface used to compose the application for our Owin server.

The “HttpConfiguration” object is used to configure the API routes, so we’ll pass this object to the “Register” method in the “WebApiConfig” class.

Lastly, we’ll pass the “config” object to the extension method “UseWebApi”, which is responsible for wiring up ASP.NET Web API to our Owin server pipeline.

Usually the class “WebApiConfig” comes with the template we’ve selected; if it doesn’t exist, add it under the “App_Start” folder. Below is the code inside it:
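The code is not shown in this copy; a conventional version of this class (a sketch, following the standard Web API template) looks roughly like:

```csharp
using System.Web.Http;

namespace AngularJSAuthentication.API
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Attribute routes plus the default convention-based route
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
```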

Step 4: Delete Global.asax Class

There is no need to use this class or fire the Application_Start event once we’ve configured our “Startup” class, so feel free to delete it.

Step 5: Add the ASP.NET Identity System

After we’ve configured the Web API, it is time to add the NuGet packages needed to support registering and validating user credentials, so open the package manager console and add the NuGet packages below:
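The commands themselves are missing from this copy; judging from the package descriptions in the next paragraph, they would be along these lines:

```powershell
Install-Package Microsoft.AspNet.Identity.Owin
Install-Package Microsoft.AspNet.Identity.EntityFramework
```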

The first package adds support for ASP.NET Identity Owin, and the second adds support for using ASP.NET Identity with Entity Framework so we can save users to a SQL Server database.

Now we need to add a database context class which will be responsible for communicating with our database, so add a new class, name it “AuthContext”, and paste in the code snippet below:
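The snippet is not reproduced here; based on the description that follows, a minimal version would be:

```csharp
using Microsoft.AspNet.Identity.EntityFramework;

namespace AngularJSAuthentication.API
{
    // A special version of DbContext that maps the ASP.NET Identity tables
    public class AuthContext : IdentityDbContext<IdentityUser>
    {
        // "AuthContext" is the name of the connection string we'll add to web.config
        public AuthContext() : base("AuthContext")
        {
        }
    }
}
```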

As you can see, this class inherits from the “IdentityDbContext” class. You can think of it as a special version of the traditional “DbContext” class: it provides all of the Entity Framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. You can read more about this class on Scott Allen’s blog.

Now we want to add a “UserModel” which contains the properties that need to be sent when we register a user. This model is a POCO class with some data annotation attributes used to validate the registration request payload. So under the “Models” folder add a new class named “UserModel” and paste in the code below:
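The class is not shown in this copy; a sketch consistent with the description (field names and validation rules are assumptions) could be:

```csharp
using System.ComponentModel.DataAnnotations;

namespace AngularJSAuthentication.API.Models
{
    public class UserModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }
}
```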

Now we need to add a new connection string named “AuthContext” to our Web.config file, so open your web.config and add the section below:
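The section is missing from this copy; it would take roughly this shape (server and database names are placeholders you should adapt):

```xml
<connectionStrings>
  <add name="AuthContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=AngularJSAuth;Integrated Security=SSPI;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```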

Step 6: Add Repository class to support ASP.NET Identity System

Now we want to implement the two methods needed in our application: “RegisterUser” and “FindUser”. Add a new class named “AuthRepository” and paste in the code snippet below:
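The snippet is not reproduced here; a sketch built around the “UserManager” described in the next paragraph might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using AngularJSAuthentication.API.Models;

namespace AngularJSAuthentication.API
{
    public class AuthRepository : IDisposable
    {
        private readonly AuthContext _ctx;
        private readonly UserManager<IdentityUser> _userManager;

        public AuthRepository()
        {
            _ctx = new AuthContext();
            _userManager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(_ctx));
        }

        // Creates the user; UserManager hashes the password for us
        public async Task<IdentityResult> RegisterUser(UserModel userModel)
        {
            var user = new IdentityUser { UserName = userModel.UserName };
            return await _userManager.CreateAsync(user, userModel.Password);
        }

        // Returns the user if the username/password combination is valid, otherwise null
        public async Task<IdentityUser> FindUser(string userName, string password)
        {
            return await _userManager.FindAsync(userName, password);
        }

        public void Dispose()
        {
            _ctx.Dispose();
            _userManager.Dispose();
        }
    }
}
```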

What we’ve implemented above is the following: we depend on the “UserManager”, which provides the domain logic for working with user information. The “UserManager” knows when to hash a password, how and when to validate a user, and how to manage claims. You can read more about the ASP.NET Identity system.

Step 7: Add our “Account” Controller

Now it is time to add our first Web API controller, which will be used to register new users. Under the “Controllers” folder add an Empty Web API 2 Controller named “AccountController” and paste in the code below:
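The code is missing from this copy; a simplified sketch matching the endpoint described below (the original also used a “GetErrorResult” helper for validation failures, compressed here into inline checks) might be:

```csharp
using System.Threading.Tasks;
using System.Web.Http;
using AngularJSAuthentication.API.Models;

namespace AngularJSAuthentication.API.Controllers
{
    [RoutePrefix("api/Account")]
    public class AccountController : ApiController
    {
        private readonly AuthRepository _repo = new AuthRepository();

        // POST api/Account/Register
        [AllowAnonymous]
        [Route("Register")]
        public async Task<IHttpActionResult> Register(UserModel userModel)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState); // invalid payload -> HTTP 400
            }

            var result = await _repo.RegisterUser(userModel);
            if (result == null || !result.Succeeded)
            {
                return InternalServerError();
            }
            return Ok(); // HTTP 200
        }
    }
}
```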

By looking at the “Register” method you will notice that we’ve configured the endpoint for this method to be “/api/account/register”, so any user who wants to register in our system must issue an HTTP POST request to this URI, and the payload for this request will contain a JSON object as below:
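The JSON object itself is not shown in this copy; judging by the UserModel fields and the test account used later, it would look something like (values are examples):

```json
{
  "userName": "Taiseer",
  "password": "SuperPass",
  "confirmPassword": "SuperPass"
}
```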

Now you can run your application and issue an HTTP POST request to your local URI “http://localhost:port/api/account/register” (or try the published API endpoint). If all went fine you will receive HTTP status code 200, the database specified in the connection string will be created automatically, and the user will be inserted into the “dbo.AspNetUsers” table.

Note: It is very important to send this POST request over HTTPS so the sensitive information gets encrypted between the client and the server.

The “GetErrorResult” method is just a helper method which is used to validate the “UserModel” and return the correct HTTP status code if the input data is invalid.

Step 8: Add Secured Orders Controller

Now we want to add another controller to serve our orders. We’ll assume that this controller returns orders only for authenticated users; to keep things simple we’ll return static data. So add a new controller named “OrdersController” under the “Controllers” folder and paste in the code below:
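The code is missing from this copy; a sketch consistent with the description (the static data is invented for illustration) could be:

```csharp
using System.Collections.Generic;
using System.Web.Http;

namespace AngularJSAuthentication.API.Controllers
{
    [RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        // Only authenticated requests (with a valid bearer token) reach this action
        [Authorize]
        [Route("")]
        public IHttpActionResult Get()
        {
            // Static demo data; anonymous types keep the example short
            return Ok(new List<object>
            {
                new { OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new { OrderID = 10249, CustomerName = "Ahmad Hasan",   ShipperCity = "Dubai", IsShipped = false }
            });
        }
    }
}
```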

Notice how we added the “Authorize” attribute to the “Get” method, so if you try to issue an HTTP GET request to the endpoint “http://localhost:port/api/orders” you will receive HTTP status code 401 Unauthorized, because the request you’ve sent up to this moment doesn’t contain a valid authorization header.

Step 9: Add support for OAuth Bearer Tokens Generation

Up to this moment we haven’t configured our API to use the OAuth authentication workflow. To do so, open the package manager console and install the following NuGet package:
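The command is not shown in this copy; the Katana OAuth server package is installed like this:

```powershell
Install-Package Microsoft.Owin.Security.OAuth
```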

After you install this package, open the “Startup” file again and call a new method named “ConfigureOAuth” as the first line inside the “Configuration” method; the implementation of this method is below:
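The implementation is missing from this copy; a sketch of the method, reconstructed from the option descriptions in the bullet list below (this goes inside the Startup class), might be:

```csharp
using System;
using Microsoft.Owin;
using Microsoft.Owin.Security.OAuth;
using Owin;

public void ConfigureOAuth(IAppBuilder app)
{
    var OAuthServerOptions = new OAuthAuthorizationServerOptions
    {
        AllowInsecureHttp = true,                      // for local testing only; require HTTPS in production
        TokenEndpointPath = new PathString("/token"),  // token generation endpoint
        AccessTokenExpireTimeSpan = TimeSpan.FromHours(24),
        Provider = new SimpleAuthorizationServerProvider()
    };

    // Token generation endpoint plus bearer token authentication middleware
    app.UseOAuthAuthorizationServer(OAuthServerOptions);
    app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
}
```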

Here we’ve created a new instance of the class “OAuthAuthorizationServerOptions” and set its options as below:

  • The path for generating tokens will be “http://localhost:port/token”. We’ll see how to issue an HTTP POST request to generate a token in the next steps.
  • We’ve specified the token expiry to be 24 hours, so if the user tries to use the same token for authentication more than 24 hours after the issue time, the request will be rejected and HTTP status code 401 returned.
  • We’ve specified the implementation of how to validate the credentials of users asking for tokens in a custom class named “SimpleAuthorizationServerProvider”.

Then we pass these options to the extension method “UseOAuthAuthorizationServer”, which adds the authentication middleware to the pipeline.

Step 10: Implement the “SimpleAuthorizationServerProvider” class

Add a new folder named “Providers”, then add a new class named “SimpleAuthorizationServerProvider” and paste in the code snippet below:
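The snippet is not reproduced here; a sketch of the class, pieced together from the explanation in the following paragraphs, might look like this:

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Owin.Security.OAuth;

namespace AngularJSAuthentication.API.Providers
{
    public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            // Single-client scenario: always report the client as validated
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            // Allow CORS on the token endpoint (see the note below)
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

            using (var repo = new AuthRepository())
            {
                var user = await repo.FindUser(context.UserName, context.Password);
                if (user == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim("role", "user"));

            // The signed token is generated behind the scenes here
            context.Validated(identity);
        }
    }
}
```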

As you can see, this class inherits from the class “OAuthAuthorizationServerProvider”, and we’ve overridden two methods: “ValidateClientAuthentication” and “GrantResourceOwnerCredentials”. The first method is responsible for validating the “client”; in our case we have only one client, so we’ll always report that it validated successfully.

The second method, “GrantResourceOwnerCredentials”, is responsible for validating the username and password sent to the authorization server’s token endpoint, so we’ll use the “AuthRepository” class we created earlier and call the “FindUser” method to check whether the username and password are valid.

If the credentials are valid, we create a “ClaimsIdentity” and pass the authentication type to it – in our case “bearer token” – then we add two claims (“sub”, “role”), which will be included in the signed token. You can add more claims here, but the token size will increase for sure.

Now generating the token happens behind the scenes when we call “context.Validated(identity)”.

To allow CORS on the token middleware provider we need to add the header “Access-Control-Allow-Origin” to the Owin context; if you forget this, generating the token will fail when you call it from your browser. Note that this allows CORS for the token middleware provider only, not for ASP.NET Web API, which we’ll enable in the next step.

Step 11: Allow CORS for ASP.NET Web API

First of all we need to install the following NuGet package, so open the package manager console and type:
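The command is missing from this copy; the Owin CORS package is installed like this:

```powershell
Install-Package Microsoft.Owin.Cors
```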

Now open the “Startup” class again and add the highlighted line of code to the “Configuration” method, as below:
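The highlighted line is not visible in this copy; the call in question enables CORS for the whole Owin pipeline and would look like this inside “Configuration”, before Web API is wired up:

```csharp
app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
```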

Step 12: Testing the Back-end API

Assuming that you registered the username “Taiseer” with password “SuperPass” in the steps above, we’ll use the same username to generate a token. To test this out, open your favourite REST client in order to issue HTTP requests and generate a token for user “Taiseer”. I’ll be using PostMan.

Now we’ll issue a POST request to the token endpoint; the request will be as in the image below:

OAuth Token Request

Notice that the content type is “x-www-form-urlencoded”, so the payload body will be in the form (grant_type=password&username=”Taiseer”&password=”SuperPass”). If all is correct, you’ll notice that we’ve received a signed token in the response.

The “grant_type” indicates the type of grant being presented in exchange for an access token; in our case it is password.

Now we want to use this token to request the secured data, so we’ll issue a GET request to the orders endpoint and pass the bearer token in the Authorization header. For any secured endpoint we have to pass this bearer token along with each request to authenticate the user.

Note that we are not transferring the username/password, as in the case of Basic authentication.

The GET request will be as the image below:

Token Get Secure Resource

If all is correct we’ll receive HTTP status 200 along with the secured data in the response body; if you change any character of the signed token you will directly receive HTTP status code 401 Unauthorized.

Now our back-end API is ready to be consumed from any front end application or native mobile app.

taken from:

Don’t run production ASP.NET Applications with debug=”true” enabled

One of the things you want to avoid when deploying an ASP.NET application into production is to accidentally (or deliberately) leave the <compilation debug=”true”/> switch on within the application’s web.config file.
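For reference, the switch in question sits in the <system.web> section of web.config; in production it should read:

```xml
<configuration>
  <system.web>
    <!-- leave debug="false" (or omit the attribute entirely) in production -->
    <compilation debug="false" />
  </system.web>
</configuration>
```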


Doing so causes a number of non-optimal things to happen including:


1) The compilation of ASP.NET pages takes longer (since some batch optimizations are disabled)

2) Code can execute slower (since some additional debug paths are enabled)

3) Much more memory is used within the application at runtime

4) Scripts and images downloaded from the WebResources.axd handler are not cached


This last point is particularly important, since it means that all client-JavaScript libraries and static images that are deployed via WebResources.axd will be continually downloaded by clients on each page view request and not cached locally within the browser.  This can slow down the user experience quite a bit for things like Atlas, controls like TreeView/Menu/Validators, and any other third-party control or custom code that deploys client resources.  Note that the reason why these resources are not cached when debug is set to true is so that developers don’t have to continually flush their browser cache and restart it every time they make a change to a resource handler (our assumption is that when you have debug=true set you are in active development on your site).


When <compilation debug=”false”/> is set, the WebResource.axd handler will automatically set a long cache policy on resources retrieved via it – so that the resource is only downloaded once to the client and cached there forever (it will also be cached on any intermediate proxy servers). If you have Atlas installed for your application, it will also automatically compress the content from the WebResources.axd handler for you when <compilation debug=”false”/> is set – reducing the size of any client-script javascript library or static resource for you (and not requiring you to write any custom code or configure anything within IIS to get it).


What about binaries compiled with debug symbols?


One scenario that several people find very useful is to compile/pre-compile an application or associated class libraries with debug symbols so that more detailed stack trace and line error messages can be retrieved from it when errors occur.


The good news is that you can do this without having to have the <compilation debug=”true”/> switch enabled in production.  Specifically, you can use either a web deployment project or a web application project to pre-compile the code for your site with debug symbols, and then change the <compilation debug=”true”/> switch to false right before you deploy the application on the server.


The debug symbols and metadata in the compiled assemblies will increase the memory footprint of the application, but this can sometimes be an ok trade-off for more detailed error messages.


The <deployment retail=”true”/> Switch in Machine.config


If you are a server administrator and want to ensure that no one accidentally deploys an ASP.NET application in production with the <compilation debug=”true”/> switch enabled within the application’s web.config file, one trick you can use with ASP.NET V2.0 is to take advantage of the <deployment> section within your machine.config file.


Specifically, by setting this within your machine.config file:




<deployment retail="true"/>




You will disable the <compilation debug=”true”/> switch, disable the ability to output trace output in a page, and turn off the ability to show detailed error messages remotely.  Note that these last two items are security best practices you really want to follow (otherwise hackers can learn a lot more about the internals of your application than you should show them).


Setting this switch to true is probably a best practice that any company with formal production servers should follow to ensure that an application always runs with the best possible performance and no security information leakages.  There isn’t a ton of documentation on this switch – but you can learn a little more about it here.

ASP.NET: Session.SessionID changes between requests

Why does the property SessionID on the Session-object in an ASP.NET-page change between requests?

I have a page like this:

    SessionID: <%= SessionID %>

And the output keeps changing every time I hit F5, independent of browser.

This is the reason

When using cookie-based session state, ASP.NET does not allocate storage for session data until the Session object is used. As a result, a new session ID is generated for each page request until the session object is accessed. If your application requires a static session ID for the entire session, you can either implement the Session_Start method in the application’s Global.asax file and store data in the Session object to fix the session ID, or you can use code in another part of your application to explicitly store data in the Session object.

So basically, unless you access your session object on the back end, a new SessionID will be generated with each request.


This code must be added to the Global.asax file. It adds an entry to the Session object so the session is fixed until it expires.

protected void Session_Start(Object sender, EventArgs e)
{
    Session["init"] = 0;
}

Web API Content Negotiation

Inside Web API Content Negotiation

“Content negotiation” is often used to describe the process of inspecting the structure of an incoming HTTP request to figure out the formats in which the client wishes to receive responses. Technically, though, content negotiation is the process in which client and server determine the best possible representation format to use in their interactions. Inspecting the request typically means looking into a couple of HTTP headers such as Accept and Content-Type. Content-Type, in particular, is used on the server for processing POST and PUT requests and on the client for choosing the formatter for HTTP responses. Content-Type is not used for GET requests.

The internal machinery of content negotiation, however, is much more sophisticated. The aforementioned scenario is the most typical—because of default conventions and implementations—but it isn’t the only one possible.

The component that governs the negotiation process in Web API is the class called DefaultContentNegotiator. It implements a public interface (IContentNegotiator), so you can replace it entirely if needed. Internally, the default negotiator applies several distinct criteria in order to figure out the ideal format for the response.

The negotiator works with a list of registered media type formatters—the components that actually turn objects into a specific format. The negotiator goes through the list of formatters and stops at the first match. A formatter has a couple of ways to let the negotiator know it can serialize the response for the current request.

The first check occurs on the content of the MediaTypeMappings collection, which is empty by default in all predefined media type formatters. A media type mapping indicates a condition that, if verified, entitles the formatter to serialize the response for the ongoing request. There are a few predefined media type mappings. One looks at a particular parameter in the query string. For example, you can enable XML serialization by simply requiring that an xml=true expression is added to the query string used to invoke Web API. For this to happen, you need to have the following code in the constructor of  your custom XML media type formatter:

MediaTypeMappings.Add(new QueryStringMapping("xml", "true", "text/xml"));
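Putting that together, a custom XML formatter that opts in via a query-string mapping might look like this (the class name echoes the NewsXmlFormatter used later in the article, but the details are illustrative):

```csharp
using System.Net.Http.Formatting;

public class NewsXmlFormatter : XmlMediaTypeFormatter
{
    public NewsXmlFormatter()
    {
        // Serve XML whenever the request URL contains ?xml=true
        MediaTypeMappings.Add(new QueryStringMapping("xml", "true", "text/xml"));
    }
}
```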

In a similar way, you can have callers express their preferences by adding an extension to the URL or by adding a custom HTTP header:

MediaTypeMappings.Add(new UriPathExtensionMapping("xml", "text/xml"));
MediaTypeMappings.Add(new RequestHeaderMapping("xml", "true",
  StringComparison.InvariantCultureIgnoreCase, false,"text/xml"));

For URL path extension, it means the following URL will map to the XML formatter:


Note that for URL path extensions to work you need to have an ad hoc route such as:

  name: "Url extension",
  routeTemplate: "api/{controller}/{action}.{ext}/{id}",
  defaults: new { id = RouteParameter.Optional }

For custom HTTP headers, the constructor of the RequestHeaderMapping class accepts the name of the header, its expected value and a couple of extra parameters. One optional parameter indicates the desired string comparison mode, and the other is a Boolean that indicates if the comparison is on the entire string. If the negotiator can’t find a match on the formatter using the media type mapping information, it looks at standard HTTP headers such as Accept and Content-Type. If no match is found, it again goes through the list of registered formatters and checks whether the return type of the request can be serialized by one of the formatters.

To add a custom formatter, insert something like the following code in the startup of the application (for example, in the Application_Start method):

config.Formatters.Add(xmlIndex, new NewsXmlFormatter());

Customizing the Negotiation Process

Most of the time, media type mappings let you easily fulfill any special requirements for serialization. However, you can always replace the default content negotiator by writing a derived class and overriding the MatchRequestMediaType method:

protected override MediaTypeFormatterMatch MatchRequestMediaType(
  HttpRequestMessage request, MediaTypeFormatter formatter)

You can create a completely custom content negotiator with a new class that implements the IContentNegotiator interface. Once you have a handmade negotiator, you register it with the Web API runtime:

config.Services.Replace(typeof(IContentNegotiator),
  new YourOwnNegotiator());

The preceding code usually goes in global.asax or in one of those handy config handlers that Visual Studio creates for you in the ASP.NET MVC Web API project template.

Controlling Content Formatting from the Client

The most common scenario for content negotiation in Web API is when the Accept header is used. This approach makes content formatting completely transparent to your Web API code. The caller sets the Accept header appropriately (for example, to text/xml) and the Web API infrastructure handles it accordingly. The following code shows how to set the Accept header in a jQuery call to a Web API endpoint to get back some XML:

  url: "/api/news/all",
  type: "GET",
  headers: { Accept: "text/xml; charset=utf-8" }

In C# code, you set the Accept header like this:

var client = new HttpClient();
client.DefaultRequestHeaders.Add("Accept", "text/xml; charset=utf-8");

Any HTTP API in any programming environment lets you set HTTP headers. And if you foresee that you can have callers where this might be an issue, a best practice is to also add a media type mapping so the URL contains all the required information about content formatting.

Bear in mind that the response strictly depends on the structure of the HTTP request. Try requesting a Web API URL from the address bar of Internet Explorer 10 and Chrome. Don’t be surprised to see you get JSON in one case and XML in the other. The default Accept headers might be different in various browsers. In general, if the API will be publicly used by third parties, you should have a URL-based mechanism to select the output format.

Scenarios for Using Web API

Architecturally speaking, Web API is a big step forward. It’s becoming even more important with the recent Open Web Interface for .NET (OWIN) NuGet package (Microsoft.AspNet.Web­­Api.Owin) and Project Katana, which facilitate hosting the API in external apps through a standard set of interfaces. If you’re building solutions other than ASP.NET MVC applications, using Web API is a no-brainer. But what’s the point of using Web API within a Web solution based on ASP.NET MVC?

With plain ASP.NET MVC, you can easily build an HTTP façade without learning new things. You can negotiate content fairly easily with just a bit of code in some controller base class or in any method that needs it (or by creating a negotiated ActionResult). It’s as easy as having an extra parameter in the action method signature, checking it and then serializing the response to XML or JSON accordingly. This solution is practical as long as you limit yourself to using XML or JSON. But if you have more formats to take into account, you’ll probably want to use Web API.

As previously mentioned, Web API can be hosted outside of IIS—for example, in a Windows service. Clearly, if the API lives within an ASP.NET MVC application, you’re bound to IIS. The type of hosting therefore depends on the goals of the API layer you’re creating. If it’s meant to be consumed only by the surrounding ASP.NET MVC site, then you probably don’t need Web API. If your created API layer is really a “service” for exposing the API of some business context, then Web API used within ASP.NET MVC makes good sense.


I wonder if there is a way to avoid copying references to objects when you need to create a simple object that has an array of embedded objects. The situation is as follows: I have a server that accepts JSON, applies some logic, and then stores the object in the database. Let’s say my form is for saving teams. The server accepts a team as JSON; the team has an array of TeamMember objects, and my form has a simple field to enter a team member’s info and add it to the team’s teamMembers array. Now here is the problem: when I add a team member to the array and then type into the field to enter another member, the already-added member changes too! I know the reason: I put the same reference into the teamMembers array, so the same object is added several times. To avoid this I should create a new team member object, copy all the teamMember properties, and add it to the array.

       var newTeamMember; /*<--- copy teamMember */
       $; /*and add newTeamMember*/

Your question says you want to “avoid deep copy”, but I’m not sure that’s accurate. It sounds like you just want to use angular.copy, because you need to create a copy of the team member and add that to the array:

$scope.addTeamMember = function(teamMember) {
   var newTeamMember = angular.copy(teamMember);
   $scope.teamMembers.push(newTeamMember); // add the copy, not the original reference
};
For reference, here is the angular.copy example page from the AngularJS documentation:

<!doctype html>
<html lang="en">
<head>
 <meta charset="UTF-8">
 <title>Example - example-example46-production</title>
 <script src="//"></script>
</head>
<body ng-app="copyExample">
 <div ng-controller="ExampleController">
  <form novalidate class="simple-form">
   Name: <input type="text" ng-model="" /><br />
   E-mail: <input type="email" ng-model="" /><br />
   Gender: <input type="radio" ng-model="user.gender" value="male" />male
   <input type="radio" ng-model="user.gender" value="female" />female<br />
   <button ng-click="reset()">RESET</button>
   <button ng-click="update(user)">SAVE</button>
  </form>
  <pre>form = {{user | json}}</pre>
  <pre>master = {{master | json}}</pre>
 </div>

 <script>
  angular.module('copyExample', [])
   .controller('ExampleController', ['$scope', function($scope) {
    $scope.master = {};

    $scope.update = function(user) {
     // Example with 1 argument
     $scope.master = angular.copy(user);
    };

    $scope.reset = function() {
     // Example with 2 arguments
     angular.copy($scope.master, $scope.user);
    };
   }]);
 </script>
</body>
</html>


Protecting Your Cookies with HttpOnly

Most of the time, when you accept input from the user, the very first thing you do is pass it through an HTML encoder. So tricksy things like:

<script>alert('hello XSS!');</script>

are automagically converted into their harmless encoded equivalents:

&lt;script&gt;alert('hello XSS!');&lt;/script&gt;
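What such an encoder does can be sketched in a few lines of plain JavaScript. The `htmlEncode` helper here is hypothetical (real frameworks ship their own encoders, which also handle quotes and more):

```javascript
// Hypothetical encoder sketch; production encoders cover more
// characters (quotes, etc.) than this minimal version.
function htmlEncode(s) {
  return s
    .replace(/&/g, '&amp;')  // ampersand first, so we don't double-encode
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

console.log(htmlEncode("<script>alert('hello XSS!');</script>"));
// &lt;script&gt;alert('hello XSS!');&lt;/script&gt;
```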

It all starts with this bit of script added to a user’s profile page.

<img src=""<script type=text/javascript src="">" /><<img


Through clever construction, the malformed URL just manages to squeak past the sanitizer. The final rendered code, when viewed in the browser, loads and executes a script from that remote server: a short piece of JavaScript that reads the visitor’s cookies (document.cookie) and ships them off to the attacker.

That’s right — whoever loads this script-injected user profile page has just unwittingly transmitted their browser cookies to an evil remote server!

As we’ve already established, once someone has your browser cookies for a given website, they essentially have the keys to the kingdom for your identity there. If you don’t believe me, get the Add N Edit cookies extension for Firefox and try it yourself. Log into a website, copy the essential cookie values, then paste them into another browser running on another computer. That’s all it takes. It’s quite an eye opener.

If cookies are so precious, you might find yourself asking why browsers don’t do a better job of protecting their cookies. I know my friend was. Well, there is a way to protect cookies from most malicious JavaScript: HttpOnly cookies.

When you tag a cookie with the HttpOnly flag, it tells the browser that this particular cookie should only be accessed by the server. Any attempt to access the cookie from client script is strictly forbidden. Of course, this presumes you have:

  1. A modern web browser
  2. A browser that actually implements HttpOnly correctly

The good news is that most modern browsers do support the HttpOnly flag: Opera 9.5, Internet Explorer 7, and Firefox 3. I’m not sure if the latest versions of Safari do or not. It’s sort of ironic that the HttpOnly flag was pioneered by Microsoft in hoary old Internet Explorer 6 SP1, a browser that isn’t exactly known for its iron-clad security record.

Regardless, HttpOnly cookies are a great idea, and properly implemented, make huge classes of common XSS attacks much harder to pull off. Here’s what a cookie looks like with the HttpOnly flag set:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
Set-Cookie: ASP.NET_SessionId=ig2fac55; path=/; HttpOnly
X-AspNet-Version: 2.0.50727
Set-Cookie: user=t=bfabf0b1c1133a822; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Tue, 26 Aug 2008 10:51:08 GMT
Content-Length: 2838

This isn’t exactly news; Scott Hanselman wrote about HttpOnly a while ago. I’m not sure he understood the implications, as he was quick to dismiss it as “slowing down the average script kiddie for 15 seconds”. In his defense, this was way back in 2005. A dark, primitive time. Almost pre-YouTube.

HttpOnly cookies can in fact be remarkably effective. Here’s what we know:

  • HttpOnly restricts all read access to the cookie in IE7, Firefox 3, and Opera 9.5 (unsure about Safari)
  • HttpOnly removes cookie information from the response headers in getAllResponseHeaders() in IE7. It should do the same thing in Firefox, but it doesn’t, because there’s a bug.
  • XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.

The big security hole, as alluded to above, is that Firefox (and presumably Opera) allow access to the headers through XMLHttpObject. So you could make a trivial JavaScript call back to the local server, get the headers out of the string, and then post that back to an external domain. Not as easy as document.cookie, but hardly a feat of software engineering.

Even with those caveats, I believe HttpOnly cookies are a huge security win. If I — er, I mean, if my friend — had implemented HttpOnly cookies, it would have totally protected his users from the above exploit!

HttpOnly cookies don’t make you immune from XSS cookie theft, but they raise the bar considerably. It’s practically free, a “set it and forget it” setting that’s bound to become increasingly secure over time as more browsers follow the example of IE7 and implement client-side HttpOnly cookie security correctly. If you develop web applications, or you know anyone who develops web applications, make sure they know about HttpOnly cookies.

article taken from:


<httpCookies domain="String" 
             httpOnlyCookies="true|false" 
             requireSSL="true|false" />

The following default httpCookies element is not explicitly configured in the machine configuration file or in the root Web.config file, but it is the default configuration returned by an application in the .NET Framework version 2.0:

<httpCookies httpOnlyCookies="false" 
  domain="" />

Internet Explorer added support in Internet Explorer 6 SP1 for a cookie property called HttpOnlyCookies that can help mitigate cross-site scripting threats that result in stolen cookies. When a cookie that has HttpOnlyCookies set to true is received by a compliant browser, it is inaccessible to client-side script. For more information on possible attacks and how this cookie property can help mitigate them, see the Mitigating Cross-Site Scripting with HTTP-Only Cookies tutorial on MSDN.

advantages and disadvantages of using a content delivery network (CDN)

What are the advantages and disadvantages of using a content delivery network (CDN)?

  • The disadvantages may be that it costs money, and it adds a bit of complexity to your deployment procedures.
  • The main advantage is an increase in the speed with which the content is delivered to the users.

When to use a CDN?

  • It will be most effective when you have a popular public website with some type of static content (images, scripts, CSS, etc.).

Is a CDN a performance booster?

  • In general, yes. When a specific request is made by a user, the server closest to that user (in terms of the minimum number of nodes between the server and the user) is dynamically determined. This optimizes the speed with which the content is delivered to that user.

Why let Google host common scripts (such as jQuery) for you?


  • It increases the parallelism available.
    (Most browsers will only download 3 or 4 files at a time from any given site.)
  • It increases the chance that there will be a cache-hit.
    (As more sites follow this practice, more users already have the file ready.)
  • It ensures that the payload will be as small as possible.
    (Google can pre-compress the file in a wide array of formats (like GZIP or DEFLATE). This makes the time-to-download very small, because it is super compressed and it isn’t compressed on the fly.)
  • It reduces the amount of bandwidth used by your server.
    (Google is basically offering free bandwidth.)
  • It ensures that the user will get a geographically close response.
    (Google has servers all over the world, further decreasing the latency.)
  • (Optional) They will automatically keep your scripts up to date.
    (If you like to “fly by the seat of your pants,” you can always use the latest version of any script that they offer. These could fix security holes, but generally just break your stuff.)
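As a concrete sketch of the points above, a page can load jQuery from Google’s CDN and fall back to a local copy if the CDN copy fails to load. The CDN URL below is real; the local fallback path and version are hypothetical, chosen for illustration:

```
<!-- Load jQuery from Google's CDN. -->
<script src=""></script>
<!-- Hypothetical local fallback, used only if the CDN copy failed to load. -->
<script>window.jQuery || document.write('<script src="/scripts/jquery-1.12.4.min.js"><\/script>');</script>
```

The fallback line checks whether the CDN script actually defined window.jQuery; if not, it writes a script tag pointing at the server’s own copy, so the page still works when the CDN is unreachable.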