Sunday, 19 October 2014

5 Golden Rules for Great Web API Design

Ever found yourself wondering “what were they thinking?” when integrating a web service via its API? If not, you’ve been far luckier than I have.

Any software developer knows how easy it is to let a project devolve into spaghetti code, and web APIs are no less prone to resulting in a tangled web. But it doesn’t need to be that way. In truth, it’s possible to design great web APIs that people will actually enjoy using, and that you’ll enjoy creating as well. But how? The answer to that question is what this post is all about.

Perspective

Most of the time when you’re building solutions, you’re designing for end users who are not programmers, or who are generally not technically sophisticated. You’re giving them a graphical interface and, if you’ve been doing your job right, you’ve gleaned a pretty good idea from them of what they need the interface to do.

But API development is different. You’re designing an interface for programmers, probably without even knowing who they are. And whoever they are, they will have the technical sophistication (or at least will think they have the technical sophistication) to point out every little flaw in your software. Your users are likely to be as critical of your API as you would be of theirs, and will thoroughly enjoy critiquing it.

And therein lies part of the irony, by the way. If anyone should understand how to make a web API that’s easy-to-use, it’s you. After all, you’re a software engineer just like the users of your API, so you share their perspective. Don’t you?
Well, while you certainly understand their perspective, you don’t necessarily share their perspective. When you’re developing or enhancing your API, you have the perspective of an API designer whereas they have the perspective of an API user.

API designers typically focus on questions like “What does this service need to do?” or “What does this service need to provide?”, while API users are focused on “How can I use this API to do what I need?”, or more accurately, “How can I spend the bare minimum of effort to get what I need out of this API?”.

These different questions lead to two vastly different perspectives. As a result, the necessary prerequisite to designing a great API is to shift your perspective from that of the API designer to that of the API user. In other words, continually ask yourself the questions you would naturally ask if you were your own user. Rather than thinking about what your API can do, think about the different ways it may need or want to be used and then focus on making those tasks as easy as possible for your API’s users.

While this may sound easy and obvious, it’s astounding how infrequently APIs appear to be designed this way. Think about the APIs you’ve encountered in your career. How frequently do they appear to have been designed with this perspective in mind? Web API design can be challenging.
So with that said, let’s proceed and talk about the 5 Golden Rules for Designing a Great Web API, namely:
  1. Documentation
  2. Stability and Consistency
  3. Flexibility
  4. Security
  5. Ease of Adoption

Rule 1: Documentation

Documentation. Yes, I’m starting here.
Do you hate documentation? Well, I can empathize, but put on your “user perspective” hat and I’ll bet that the one thing you hate more than having to write documentation is having to try to use an undocumented API. I rest my case.

The bottom line is that, if you want anyone to use your API, documentation is essential. You’ve simply got to get this right. It’s the first thing users will see, so in some ways it’s like the gift wrap. Present well, and people are more likely to use your API and put up with any idiosyncrasies.
So how do we write good documentation?

The relatively easy part is documenting the API methods themselves; i.e., example requests and responses, along with descriptions of each of the elements in both. Fortunately, there are an increasing number of software tools that facilitate and simplify the task of generating documentation. Or you can write something yourself that introspects your API, endpoints, and functions, and generates the corresponding documentation for you.

But what separates great documentation from adequate documentation is the inclusion of usage examples and, ideally, tutorials. This is what helps the user understand your API and where to start. It orients them and helps them load your API into their brain.

For example, if the developers of Twilio were to list out every class, every method, and every possible response to their API, but didn’t bother to mention that you can send an SMS, track a call, or buy a phone number through their API, it would take a really long time for the API user to find that information and understand it cohesively. Can you imagine sorting through a giant tree of classes and methods without any insight into what they were used for, other than their name? Sounds terrible right? But that’s exactly what so many API providers do, thereby leaving their APIs opaque to anybody but themselves. The Rackspace CloudFiles developer and API guide is one such example; it’s difficult to get your bearings unless you already understand what they’re doing and what they’re providing.

So write concise tutorials that help get the developer up and running quickly, with at least a skeleton of what they’re trying to do, and then point them in the direction of the more detailed, fully-documented list of functionality so they can expand on what they have.
Once you’re done with your documentation, be sure to validate that it makes sense to people other than yourself. Send it out to other developers in your network, give them no instruction other than pointing them to the documentation, and ask them to follow a tutorial or build something really basic in about 15 minutes. If they can’t have a basic integration with your API in 15 minutes, you have more work to do.

For some noteworthy examples of excellent and detailed documentation, check out Twilio, Django, and MailChimp. None of these products are necessarily the best in their markets (although they are all good products), yet they do distinguish themselves by providing some of the best documentation within their markets, which has certainly facilitated their wide acceptance and market share.

Rule 2: Stability and Consistency

If you’ve ever used Facebook’s API, you know how often they deprecate and completely rewrite their APIs. No matter how much you respect their hacker culture, or their product, theirs is not a developer-friendly perspective. The reason they are still successful is that they have a billion users, not that their API is great.

But you probably don’t have the luxury of such a mammoth user base and market share, so you’re going to need a much less volatile API, keeping old versions running and supported for quite a long period of time. Maybe even years. So toward that end, here are some tips and tricks.
Let’s say, for example, that your API is accessible via the URL http://myapisite.com/api/widgets and provides its response in JSON format. While this may seem fine at first blush, what happens when you need to modify the format of the JSON response? Everyone that’s already integrated with you is going to break. Oops.

So do some planning ahead, and version your API from the outset, explicitly incorporating a version number into the URL (e.g., http://myapisite.com/api/widgets?version=1 or http://myapisite.com/api/widgets/v1) so that people can rely on version 1 working and can upgrade to any subsequent version when they’re ready to do so. If you need to phase out a prior version at some point, go ahead, but give plenty of notice and offer some sort of transition plan.
A good URL scheme will include major versions in the URL. Any change to the output format or supported data types should result in bumping up to a new major version. Generally, it’s acceptable to keep the same version if all you are doing is adding keys or nodes to your output, but to be on the safe side, any time the output changes, bump a version.
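To make this concrete, here is a minimal sketch of URL-based versioning. The post doesn’t prescribe a stack, so Node with Express is an assumption here, and the handlers and response shapes are placeholders.

var express = require('express');
var app = express();

// Version 1 keeps working for everyone who has already integrated.
app.get('/api/v1/widgets', function (req, res) {
  res.json({ widgets: [] });             // original response shape
});

// Version 2 is free to change the output format without breaking v1 clients.
app.get('/api/v2/widgets', function (req, res) {
  res.json({ data: { widgets: [] } });   // new response shape
});

app.listen(3000);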
In addition to being stable over time, APIs need to be internally consistent. I’ve seen many APIs that change parameter names or methods of POSTing data, depending on the endpoint that is being used. Instead, you should handle common parameters globally within your API and use inheritance or a shared architecture to reuse the same naming conventions and data handling consistently throughout your API.
Finally, you need to record and publish a changelog to show differences between versions of your API so that users know exactly how to upgrade.

Rule 3: Flexibility

Garbage in, garbage out (GIGO) is a well known mantra to most programmers. As applied to web API design, this guiding principle tends to dictate a fairly rigid approach to request validation. Sounds great, right? No mess, no problem.

Yet as with everything, there needs to be some balance. As it is not possible to anticipate every way that users will want to employ your service, and since not every client platform is consistent (i.e., not every platform has very good JSON support, a decent OAuth library, etc.), it’s good to have at least some degree of flexibility or tolerance with regard to your input and output constraints.
For example, many APIs will support a variety of output formats, like JSON, YAML, XML, et al., but will only support specifying the desired format in a single way. In the spirit of remaining flexible, you could allow the format to be specified in the URL itself (e.g., /api/v1/widgets.json), read and recognize an Accept: application/json HTTP header, support a querystring variable such as ?format=JSON, and so on.

And while we’re at it, why not allow for the format specified to be case-insensitive, so the user could specify ?format=json as well? That’s a classic example of a way to alleviate unnecessary frustration for the user of your API.
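As a rough illustration, here is one way the same hypothetical Express app could accept the format from either a case-insensitive querystring variable or the Accept header; the helper and the XML output are purely illustrative.

// Resolve the requested output format tolerantly.
function resolveFormat(req) {
  var fmt = (req.query.format || '').toLowerCase();   // ?format=json or ?format=JSON both work
  if (fmt) return fmt;
  if ((req.headers.accept || '').indexOf('application/xml') !== -1) return 'xml';
  return 'json';                                      // sensible default
}

app.get('/api/v1/widgets', function (req, res) {
  if (resolveFormat(req) === 'xml') {
    res.type('application/xml');
    res.send('<widgets><widget id="1">sprocket</widget></widgets>');
  } else {
    res.json([ { id: 1, name: 'sprocket' } ]);
  }
});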

Another example is allowing for different ways of inputting variables. So, just like you have a variety of output formats, allow for a variety of input formats as well (e.g., plain POST variables, JSON, XML, etc.). You should at least be supporting standard POST variables, and many modern applications support JSON as well, so those two are a good place to start.
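Continuing the same assumed Express setup, supporting both of those input formats is mostly a matter of registering the right body parsers; req.body then looks the same to your handler either way.

app.use(express.json());                          // Content-Type: application/json
app.use(express.urlencoded({ extended: true }));  // standard POST variables

app.post('/api/v1/widgets', function (req, res) {
  // Works identically whether the client sent JSON or form-encoded data.
  res.status(201).json({ name: req.body.name });
});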
The point here is that you shouldn’t assume that everyone shares your technical preferences. With a little research into how other APIs work, and through dialog with other developers, you can glean other valuable alternatives that are useful and include them in your API.

Rule 4: Security

Security is obviously one of the most important things to build into your web service, but so many developers make it ridiculously hard to use. As the API provider, you should be offering usable examples of how to authenticate and authorize when accessing your API. This should not be a difficult issue that an end user spends hours working on. Make it your goal that they either don’t have to write any code, or it takes them less than 5 minutes to write it.

For most APIs, I prefer a simple token-based authentication, where the token is a random hash assigned to the user and they can reset it at any point if it has been stolen. Allow the token to be passed in through POST or an HTTP header. For example, the user could (and should) send an SHA-1 token as a POST variable, or as a header in a format such as “Authorization: da39a3ee5e6b4b0d3255bfef95601890afd80709”.
Also, choose a secure token, not a short numeric identifier. Something irreversible is best. For example, it’s relatively simple to generate an SHA token during user creation and store it in the database. Then, you can simply query your database for any user matching that token. Alternatively, you could generate the token from a unique identifier and a salt value, something like SHA(User.ID + "abcd123"), and then query for any user that matches; e.g., where TokenFromPost = SHA(User.ID + "abcd123").
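Here is a minimal sketch of that token scheme in Node; crypto is real, but findUserByToken and the exact storage are hypothetical placeholders, and the salt is the throwaway example value from above.

var crypto = require('crypto');

// Irreversible token derived from the user ID and a salt, i.e. SHA(User.ID + "abcd123").
function generateToken(userId) {
  return crypto.createHash('sha1').update(userId + 'abcd123').digest('hex');
}

// Accept the token from a POST variable or the Authorization header.
function authenticate(req, res, next) {
  var token = (req.body && req.body.token) || req.headers.authorization;
  findUserByToken(token, function (err, user) {     // hypothetical database lookup
    if (err || !user) return res.status(401).json({ error: 'Invalid token' });
    req.user = user;
    next();
  });
}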

Another very good option is OAuth 2 + SSL. You should be using SSL anyway, but OAuth 2 is reasonably simple to implement on the server side, and libraries are available for many common programming languages.

If the API you have made is supposed to be accessible on a public website via JavaScript, you also need to validate requests against a per-account list of allowed URLs for the token. That way, nobody can inspect the calls to your API, steal the token from your user, and use it for themselves.
Here are some other important things to keep in mind:
  • Whitelisting Functionality. APIs generally allow you to do basic create, read, update, and delete operations on data. But you don’t want to allow these operations for every entity, so make sure each has a whitelist of allowable actions. Make sure, for example, that only authorized users can run commands like /user/delete/<id>. Similarly, all useful headers that are sent in the user’s request need to be validated against a whitelist as well. If you are allowing Content-type headers, verify that whatever the user sends in actually matches a whitelist of supported content types (see the sketch after this list). If it doesn’t, then send back an error message such as a 406 Not Acceptable response. Whitelisting is important because a lot of APIs are automatically generated, or use a blacklist instead, which means you have to be explicit about what you don’t want. However, the golden rule of security is to start with absolutely nothing, and only explicitly allow what you do want.
  • Protect yourself against Cross-Site Request Forgery (CSRF). If you are allowing session or cookie authentication, you need to make sure that you’re protecting yourself from CSRF attacks. The Open Web Application Security Project (OWASP) provides useful guidance on ways to preclude these vulnerabilities.
  • Validate access to resources. In every request, you need to verify that a user is in fact allowed access to the specific item they are referencing. So, if you have an endpoint to view a user’s credit card details (e.g., /account/card/view/152423), be sure that the ID “152423” is referencing a resource that the user really is authorized to access.
  • Validate all input. All input from a user needs to be securely parsed, preferably using a well-known library if you are using complicated input like XML or JSON. Don’t build your own parser, or you’re in for a world of hurt.
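The content-type check mentioned in the first bullet might look roughly like this as Express-style middleware; the allowed list is illustrative, and the 406 response follows the suggestion above.

var ALLOWED_CONTENT_TYPES = ['application/json', 'application/x-www-form-urlencoded'];

function checkContentType(req, res, next) {
  var contentType = (req.headers['content-type'] || '').split(';')[0];
  if (req.method !== 'GET' && ALLOWED_CONTENT_TYPES.indexOf(contentType) === -1) {
    return res.status(406).json({ error: 'Unsupported content type' });  // whitelist miss
  }
  next();
}

app.use(checkContentType);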

Rule 5: Ease Of Adoption

This is really the most important rule in the bunch, and builds on all the others. As I mentioned during the documentation rule, try this out with people that are new to your API. Make sure that they can get up and running with at least a basic implementation of your API, even if it’s just following a tutorial, within a few minutes. I think 15 minutes is a good goal.
Here are some specific recommendations to ease and facilitate adoption of your API:
  • Make sure people can actually use your API and that it works the first time, every time. Have new people try to implement your API occasionally to verify that it’s not confusing in some way that you’ve become immune to.
  • Keep it simple. Don’t do any fancy authentication. Don’t do some crazy custom URL scheme. Don’t reinvent SOAP, or JSON, or REST, or anything. Use all the tools you can that have already been implemented and are widely accepted, so that developers only have to learn your API, not your API + 10 obscure new technologies.
  • Provide language-specific libraries to interface with your service. There are some nice tools to automatically generate a library for you, such as Alpaca or Apache Thrift. Currently Alpaca supports Node, PHP, Python, and Ruby. Thrift supports C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and more.
  • Simplify any necessary signup. If you are not developing an open source API, or if there is a signup process of any sort, make sure that upon signup, a user is very quickly directed to a tutorial. And make the signup process completely automated without any need for human interaction on your part.
  • Provide excellent support. A big barrier to adoption is lack of support. How will you handle and respond to a bug report? What about unclear documentation? An unsophisticated user? Forums, bug trackers, and email support are fantastic starts, but do make sure that when someone posts a bug, you really address it. Nobody wants to see a ghost town forum or a giant list of bugs that haven’t been addressed.

Wrap-up

Web services and their APIs abound. Unfortunately, the vast majority are difficult to use. Reasons range from poor design, to lack of documentation, to volatility, to unresolved bugs, or, in some cases, all of the above.
Following the guidance in this post will help ensure that your web API is clean, well-documented, and easy-to-use. Such APIs are truly rare and are therefore that much more likely to be widely adopted and used.

The 10 Most Common Web Development Mistakes

Today we have thousands of digital and printed resources that provide step-by-step instructions about developing all kinds of different web applications. Development environments are “smart” enough to catch and fix many mistakes that early developers battled with regularly. There are even many different development platforms that easily turn simple static HTML pages into highly interactive applications.

All of these development patterns, practices, and platforms share common ground, and they are all prone to similar mistakes caused by the very nature of web applications.
The purpose of this article is to shed light on some of the common mistakes made in different stages of the web development process and to help you mitigate them. I have touched on a few general topics that are common to virtually all web developers such as validation, security, scalability, and SEO. You should of course not be bound by the specific cases I’ve described, as they are listed only to give you an idea of the potential problems you might encounter.


Common mistake #1: Incomplete input validation

Validating user input on client and server side is simply a must do! We are all aware of the sage advice “do not trust user input” but, nevertheless, mistakes stemming from validation happen all too often.
One of the most common consequences of this mistake is SQL injection, which appears in the OWASP Top 10 year after year.
Remember that most front-end development frameworks provide out-of-the-box validation rules that are incredibly simple to use. Additionally, most major back-end development platforms use simple annotations to ensure that submitted data adhere to expected rules. Implementing validation might be time consuming, but it should be part of your standard coding practice and never set aside.
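As a rough sketch of what that looks like server-side (the field names and the Express-style handler are invented for illustration), validation boils down to rejecting anything that doesn’t match the rules before it ever reaches your data layer:

function validateOrder(input) {
  var errors = [];
  if (!input.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email)) {
    errors.push('A valid email address is required.');
  }
  if (!Number.isInteger(input.quantity) || input.quantity < 1) {
    errors.push('Quantity must be a positive integer.');
  }
  return errors;
}

app.post('/orders', function (req, res) {
  var errors = validateOrder(req.body);
  if (errors.length) return res.status(400).json({ errors: errors });
  // ...and never build SQL by string concatenation; use parameterized queries.
});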

Common mistake #2: Authentication without proper Authorization


Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).
Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.
Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.
Let me demonstrate this mistake with an example:
Consider that your browser holds the currently logged-in user’s information in an object similar to the following:
{
    username:'elvis',
    role:'singer',
    token:'123456789'
}
When doing a password change, your application makes the POST:
POST /changepassword/:username/:newpassword
In your /changepassword method, you verify that the user is logged in and the token has not expired. Then you find the user profile based on the :username parameter, and you change your user’s password.
So, you validated that your user is properly logged in, and then you executed his request, thus changing his password. The process seems OK, right? Unfortunately, the answer is NO!
At this point it is important to verify that the user executing the action and the user whose password is changed are the same. Any information stored on the browser can be tampered with, and any advanced user could easily update username:'elvis' to username:'Administrator' without using anything else but built-in browser tools.
So in this case, we just took care of Authentication, making sure that the user provided security credentials. We can even add validation that the /changepassword method can only be executed by Authenticated users. However, this is still not enough to protect your users from malicious attempts.
You need to verify the actual requestor and the content of the request within your /changepassword method and implement proper Authorization, making sure that the user can change only their own data.
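A minimal sketch of that authorization check, using the same Express-style pseudo-handler (authenticate and updatePassword are hypothetical helpers):

app.post('/changepassword/:username/:newpassword', authenticate, function (req, res) {
  // Authentication has already established req.user from the token or session.
  // Authorization: the requestor may only change their own password.
  if (req.user.username !== req.params.username) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  updatePassword(req.user.id, req.params.newpassword, function (err) {
    if (err) return res.status(500).json({ error: 'Could not update password' });
    res.json({ ok: true });
  });
});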
Authentication and Authorization are two sides of the same coin. Never treat them separately.

Common mistake #3: Not ready to scale

In today’s world of high speed development, startup accelerators, and instant global reach of great ideas, having your MVP (Minimum Viable Product) out in the market as soon as possible is a common goal for many companies.
However, this constant time pressure is causing development teams to often overlook certain issues. Scaling is often one of those things teams take for granted. The MVP concept is great, but push it too far, and you’ll have serious problems. Unfortunately, selecting a scalable database and web server and separating all application layers on independent scalable servers is not enough. There are many details you need to think about if you wish to avoid rewriting significant parts of your application later.
For example, say that you choose to store uploaded profile pictures of your users directly on a web server. This is a perfectly valid solution–files are quickly accessible to the application, file handling methods are available in every development platform, and you can even serve these images as static content, which means minimum load on your application.
But what happens when your application grows, and you need to use two or more web servers behind a load balancer? Even though you nicely scaled your database storage, session state servers, and web servers, your application scalability fails because of a simple thing like profile images. Thus, you need to implement some kind of file synchronization service (that will have a delay and will cause temporary 404 errors) or another workaround to assure that files are spread across your web servers.

What you needed to do to avoid the problem in the first place was just use a shared file storage location, database, or any other remote storage solution. It would have probably cost a few extra hours of work to have it all implemented, but it would have been worth the trouble.

Common mistake #4: Wrong or missing SEO

The root cause of incorrect or missing SEO best practices on websites is misinformed “SEO specialists”. Many web developers believe that they know enough about SEO and that it is not especially complex, but that’s just not true. SEO mastery requires significant time spent researching best practices and the ever-changing rules about how Google, Bing, and Yahoo index the web. Unless you constantly experiment and have accurate tracking and analysis, you are not an SEO specialist, and you should not claim to be one.
Furthermore, SEO is too often postponed as some activity that is done at the end. This comes at a high price. SEO is not just related to setting good content, tags, keywords, meta-data, image alt tags, site map, etc. It also includes eliminating duplicate content, having crawlable site architecture, efficient load times, intelligent back linking, etc.
Like with scalability, you should think about SEO from the moment you start building your web application, or you might find that completing your SEO implementation project means rewriting your whole system.

Common mistake #5: Time or processor consuming actions in request handlers

One of the best examples of this mistake is sending email based on a user action. Too often developers think that making an SMTP call and sending a message directly from the user request handler is the solution.
Let’s say you created an online book store, and you expect to start with a few hundred orders daily. As part of your order intake process, you send confirmation emails each time a user posts an order. This will work without a problem at first, but what happens when you scale your system, and you suddenly get thousands of requests sending confirmation emails? You either get SMTP connection timeouts or exceeded quotas, or your application response time degrades significantly as it is now handling emails instead of users.
Any time or processor consuming action should be handled by an external process while you release your HTTP requests as soon as possible. In this case, you should have an external mailing service that is picking up orders and sending notifications.
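In sketch form, the handler hands the work off and returns immediately; the queue client and job names here are hypothetical, and the worker lives in a separate process:

app.post('/orders', function (req, res) {
  var order = saveOrder(req.body);              // hypothetical synchronous save
  emailQueue.enqueue('order-confirmation', {    // hand the slow work to a background worker
    orderId: order.id,
    recipient: order.customerEmail
  });
  res.status(201).json(order);                  // respond without waiting for SMTP
});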

Common mistake #6: Not optimizing bandwidth usage

Most development and testing takes place in a local network environment. So when you are downloading 5 background images, each 3MB or more, you might not identify an issue with a 1Gbit connection speed in your development environment. But when your users start loading a 15MB home page over 3G connections on their smartphones, you should prepare yourself for a list of complaints.
Optimizing your bandwidth usage could give you a great performance boost, and to gain this boost you probably only need a couple of tricks. There are a few things that many well-structured development teams do by default, including:
  1. Minification of all JavaScript
  2. Minification of all CSS
  3. Server-side HTTP compression (see the sketch after this list)
  4. Optimization of image size and resolution
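For item 3, server-side compression is usually a one-liner; for example, assuming an Express application, the compression middleware package does the job:

var compression = require('compression');
app.use(compression());   // gzip/deflate responses for clients that advertise support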

Common mistake #7: Not developing for different screen sizes

Responsive design has been a big topic in the past few years. Expansion of smartphones with different screen resolutions has brought many new ways of accessing online content. The number of website visits that come from smartphones and tablets grows every day, and this trend is accelerating.
In order to ensure seamless navigation and access to website content, you must enable users to access it from all types of devices.
There are numerous patterns and practices for building responsive web applications. Each development platform has its own tips and tricks, but there are some frameworks that are platform independent. The most popular is probably Twitter Bootstrap. It is an open-source and free HTML, CSS, and JavaScript framework that has been adopted by every major development platform. Just adhere to Bootstrap patterns and practices when building your application, and you will get a responsive web application with no trouble at all.

Common mistake #8: Cross browser incompatibility

The development process is, in most cases, under heavy time pressure. Every application needs to be released as soon as possible, and developers are often focused on delivering functionality over design. Even though most developers have Chrome, Firefox, and IE installed, they use only one of these 90% of the time. It is common practice to use one browser during development and only start testing in other browsers as the application nears completion. This is perfectly reasonable–assuming you have a lot of time to test and fix issues that show up at this stage.
However, there are some tricks that can save you significant time when your application reaches the cross-browser testing phase:
  1. You don’t need to test in all browsers during development; it is time consuming and ineffective. However, that does not mean that you cannot switch browsers frequently. Use a different browser every couple of days, and you will at least recognize major problems early in the development phase.
  2. Be careful of using statistics to justify not supporting a browser. There are many organizations that are slow in adopting new software or upgrading. Thousands of users working there might still need access to your application, and they cannot install the latest free browser due to internal security and business policies.
  3. Avoid browser specific code. In most cases there is an elegant solution that is cross-browser compatible.

Common mistake #9: Not planning for portability

Assumption is the mother of all problems! When it comes to portability, this saying is more true than ever. How many times have you seen hard coded file paths, database connection strings, or assumptions that a certain library will be available on the server? Assuming that the production environment will match your local development computer is simply wrong.
Ideal application setup should be maintenance-free:
  1. Make sure that your application can scale and run on a load-balanced multiple server environment.
  2. Allow simple and clear configuration–possibly in a single configuration file.
  3. Handle exceptions when web server configuration is not as expected.

Common mistake #10: RESTful anti patterns

RESTful APIs have taken their place in web development and are here to stay. Almost every web application has implemented some kind of REST service, whether for internal use or for integrating with an external system. But we still see broken RESTful patterns and services that do not adhere to expected practices.
Two of the most common mistakes made when writing a RESTful API are:
  1. Using wrong HTTP verbs. For example using GET for writing data. HTTP GET has been designed to be idempotent and safe, meaning that no matter how many times you call GET on the same resource, the response should always be the same and no change in application state should occur.
  2. Not sending correct HTTP status codes. The best example of this mistake is sending error messages with response code 200.
     HTTP 200 OK
     {
         message:'there was an error'
     }
    
You should only send HTTP 200 OK when the request has not generated an error. In the case of an error, you should send 400, 401, 500 or any other status code that is appropriate for the error that has occurred.
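A minimal sketch of doing this correctly (Express-style; findWidget is a hypothetical lookup):

app.get('/widgets/:id', function (req, res) {
  findWidget(req.params.id, function (err, widget) {
    if (err)     return res.status(500).json({ message: 'Internal error' });
    if (!widget) return res.status(404).json({ message: 'Widget not found' });
    res.status(200).json(widget);               // 200 only when nothing went wrong
  });
});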
A detailed overview of standard HTTP status codes can be found here.

Wrap up

Web development is an extremely broad term that can legitimately encompass development of a website, web service, or complex web application.
The main takeaway here is the reminder that you should always be careful about authentication and authorization, plan for scalability, and never hastily assume anything.

How do I “think in AngularJS” if I have a jQuery background

Suppose I'm familiar with developing client-side applications in jQuery, but now I'd like to start using AngularJS. Can you describe the paradigm shift that is necessary? Here are a few questions that might help you frame an answer:
  • How do I architect and design client-side web applications differently? What is the biggest difference?
  • What should I stop doing/using; what should I start doing/using instead?
  • Are there any server-side considerations/restrictions?
I'm not looking for a detailed comparison between jQuery and AngularJS.

1. Don't design your page, and then change it with DOM manipulations

In jQuery, you design a page, and then you make it dynamic. This is because jQuery was designed for augmentation and has grown incredibly from that simple premise.
But in AngularJS, you must start from the ground up with your architecture in mind. Instead of starting by thinking "I have this piece of the DOM and I want to make it do X", you have to start with what you want to accomplish, then go about designing your application, and then finally go about designing your view.

2. Don't augment jQuery with AngularJS

Similarly, don't start with the idea that jQuery does X, Y, and Z, so I'll just add AngularJS on top of that for models and controllers. This is really tempting when you're just starting out, which is why I always recommend that new AngularJS developers don't use jQuery at all, at least until they get used to doing things the "Angular Way".
I've seen many developers here and on the mailing list create these elaborate solutions with jQuery plugins of 150 or 200 lines of code that they then glue into AngularJS with a collection of callbacks and $applys that are confusing and convoluted; but they eventually get it working! The problem is that in most cases that jQuery plugin could be rewritten in AngularJS in a fraction of the code, where suddenly everything becomes comprehensible and straightforward.
The bottom line is this: when solutioning, first "think in AngularJS"; if you can't think of a solution, ask the community; if after all of that there is no easy solution, then feel free to reach for the jQuery. But don't let jQuery become a crutch or you'll never master AngularJS.

3. Always think in terms of architecture

First know that single-page applications are applications. They're not webpages. So we need to think like a server-side developer in addition to thinking like a client-side developer. We have to think about how to divide our application into individual, extensible, testable components.
So then how do you do that? How do you "think in AngularJS"? Here are some general principles, contrasted with jQuery.

The view is the "official record"

In jQuery, we programmatically change the view. We could have a dropdown menu defined as a ul like so:
<ul class="main-menu">
    <li class="active">
        <a href="#/home">Home</a>
    </li>
    <li>
        <a href="#/menu1">Menu 1</a>
        <ul>
            <li><a href="#/sm1">Submenu 1</a></li>
            <li><a href="#/sm2">Submenu 2</a></li>
            <li><a href="#/sm3">Submenu 3</a></li>
        </ul>
    </li>
    <li>
        <a href="#/home">Menu 2</a>
    </li>
</ul>
In jQuery, in our application logic, we would activate it with something like:
$('.main-menu').dropdownMenu();
When we just look at the view, it's not immediately obvious that there is any functionality here. For small applications, that's fine. But for non-trivial applications, things quickly get confusing and hard to maintain.
In AngularJS, though, the view is the official record of view-based functionality. Our ul declaration would look like this instead:
<ul class="main-menu" dropdown-menu>
    ...
</ul>
These two do the same thing, but in the AngularJS version anyone looking at the template knows what's supposed to happen. Whenever a new member of the development team comes on board, she can look at this and then know that there is a directive called dropdownMenu operating on it; she doesn't need to intuit the right answer or sift through any code. The view told us what was supposed to happen. Much cleaner.
Developers new to AngularJS often ask a question like: how do I find all links of a specific kind and add a directive onto them. The developer is always flabbergasted when we reply: you don't. But the reason you don't do that is that this is like half-jQuery, half-AngularJS, and no good. The problem here is that the developer is trying to "do jQuery" in the context of AngularJS. That's never going to work well. The view is the official record. Outside of a directive (more on this below), you never, ever, never change the DOM. And directives are applied in the view, so intent is clear.
Remember: don't design, and then mark up. You must architect, and then design.

Data binding

This is by far one of the most awesome features of AngularJS and cuts out a lot of the need to do the kinds of DOM manipulations I mentioned in the previous section. AngularJS will automatically update your view so you don't have to! In jQuery, we respond to events and then update content. Something like:
$.ajax({
  url: '/myEndpoint.json',
  success: function ( data, status ) {
    $('ul#log').append('<li>Data Received!</li>');
  }
});
For a view that looks like this:
<ul class="messages" id="log">
</ul>
Apart from mixing concerns, we also have the same problems of signifying intent that I mentioned before. But more importantly, we had to manually reference and update a DOM node. And if we want to delete a log entry, we have to code against the DOM for that too. How do we test the logic apart from the DOM? And what if we want to change the presentation?
This is a little messy and a trifle frail. But in AngularJS, we can do this:
$http.get( '/myEndpoint.json' ).then( function ( response ) {
    $scope.log.push( { msg: 'Data Received!' } );
});
And our view can look like this:
<ul class="messages">
    <li ng-repeat="entry in log">{{ entry.msg }}</li>
</ul>
But for that matter, our view could look like this:
<div class="messages">
    <div class="alert" ng-repeat="entry in log">
        {{ entry.msg }}
    </div>
</div>
And now instead of using an unordered list, we're using Bootstrap alert boxes. And we never had to change the controller code! But more importantly, no matter where or how the log gets updated, the view will change too. Automatically. Neat!
Though I didn't show it here, the data binding is two-way. So those log messages could also be editable in the view just by doing this: <input ng-model="entry.msg" />. And there was much rejoicing.

Distinct model layer

In jQuery, the DOM is kind of like the model. But in AngularJS, we have a separate model layer that we can manage in any way we want, completely independently from the view. This helps for the above data binding, maintains separation of concerns, and introduces far greater testability. Other answers mentioned this point, so I'll just leave it at that.

Separation of concerns

And all of the above tie into this over-arching theme: keep your concerns separate. Your view acts as the official record of what is supposed to happen (for the most part); your model represents your data; you have a service layer to perform reusable tasks; you do DOM manipulation and augment your view with directives; and you glue it all together with controllers. This was also mentioned in other answers, and the only thing I would add pertains to testability, which I discuss in another section below.

Dependency injection

To help us out with separation of concerns is dependency injection (DI). If you come from a server-side language (from Java to PHP) you're probably familiar with this concept already, but if you're a client-side guy coming from jQuery, this concept can seem anything from silly to superfluous to hipster. But it's not. :-)
From a broad perspective, DI means that you can declare components very freely and then from any other component, just ask for an instance of it and it will be granted. You don't have to know about loading order, or file locations, or anything like that. The power may not immediately be visible, but I'll provide just one (common) example: testing.
Let's say in our application, we require a service that implements server-side storage through a REST API and, depending on application state, local storage as well. When running tests on our controllers, we don't want to have to communicate with the server - we're testing the controller, after all. We can just add a mock service of the same name as our original component, and the injector will ensure that our controller gets the fake one automatically - our controller doesn't and needn't know the difference.
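In test code that swap might look roughly like this with angular-mocks; the controller and StorageService names are invented for the example.

beforeEach(module('myApp'));
beforeEach(module(function ($provide) {
  // Register a fake under the same name as the real service.
  $provide.value('StorageService', {
    save: function (item) { return { id: 42 }; }   // canned response, no server call
  });
}));

it('uses whatever StorageService is injected', inject(function ($controller, $rootScope) {
  var scope = $rootScope.$new();
  $controller('ItemsController', { $scope: scope });
  // The controller received the fake StorageService without knowing the difference.
}));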
Speaking of testing...

4. Test-driven development - always

This is really part of section 3 on architecture, but it's so important that I'm putting it as its own top-level section.
Out of all of the many jQuery plugins you've seen, used, or written, how many of them had an accompanying test suite? Not very many because jQuery isn't very amenable to that. But AngularJS is.
In jQuery, the only way to test is often to create the component independently with a sample/demo page against which our tests can perform DOM manipulation. So then we have to develop a component separately and then integrate it into our application. How inconvenient! So much of the time, when developing with jQuery, we opt for iterative instead of test-driven development. And who could blame us?
But because we have separation of concerns, we can do test-driven development iteratively in AngularJS! For example, let's say we want a super-simple directive to indicate in our menu what our current route is. We can declare what we want in the view of our application:
<a href="/hello" when-active>Hello</a>
Okay, now we can write a test for the non-existent when-active directive:
it( 'should add "active" when the route changes', inject(function() {
    var elm = $compile( '<a href="/hello" when-active>Hello</a>' )( $scope );

    $location.path('/not-matching');
    expect( elm.hasClass('active') ).toBeFalsy();

    $location.path( '/hello' );
    expect( elm.hasClass('active') ).toBeTruthy();
}));
And when we run our test, we can confirm that it fails. Only now should we create our directive:
.directive( 'whenActive', function ( $location ) {
    return {
        scope: true,
        link: function ( scope, element, attrs ) {
            scope.$on( '$routeChangeSuccess', function () {
                if ( $location.path() == element.attr( 'href' ) ) {
                    element.addClass( 'active' );
                }
                else {
                    element.removeClass( 'active' );
                }
            });
        }
    };
});
Our test now passes and our menu performs as requested. Our development is both iterative and test-driven. Wicked-cool.

5. Conceptually, directives are not packaged jQuery

You'll often hear "only do DOM manipulation in a directive". This is a necessity. Treat it with due deference!
But let's dive a little deeper...
Some directives just decorate what's already in the view (think ngClass) and therefore sometimes do DOM manipulation straight away and then are basically done. But if a directive is like a "widget" and has a template, it should also respect separation of concerns. That is, the template too should remain largely independent from its implementation in the link and controller functions.
AngularJS comes with an entire set of tools to make this very easy; with ngClass we can dynamically update the class; ngBind allows two-way data binding; ngShow and ngHide programmatically show or hide an element; and many more - including the ones we write ourselves. In other words, we can do all kinds of awesomeness without DOM manipulation. The less DOM manipulation, the easier directives are to test, the easier they are to style, the easier they are to change in the future, and the more re-usable and distributable they are.
I see lots of developers new to AngularJS using directives as the place to throw a bunch of jQuery. In other words, they think "since I can't do DOM manipulation in the controller, I'll take that code and put it in a directive". While that certainly is much better, it's often still wrong.
Think of the logger we programmed in section 3. Even if we put that in a directive, we still want to do it the "Angular Way". It still doesn't take any DOM manipulation! There are lots of times when DOM manipulation is necessary, but it's a lot rarer than you think! Before doing DOM manipulation anywhere in your application, ask yourself if you really need to. There might be a better way.
Here's a quick example that shows the pattern I see most frequently. We want a toggleable button. (Note: this example is a little contrived and a skosh verbose to represent more complicated cases that are solved in exactly the same way.)
.directive( 'myDirective', function () {
    return {
        template: '<a class="btn">Toggle me!</a>',
        link: function ( scope, element, attrs ) {
            var on = false;

            $(element).click( function () {
                on = !on;
                $(element).toggleClass('active', on);
            });
        }
    };
});
There are a few things wrong with this:
  1. First, jQuery was never necessary. There's nothing we did here that needed jQuery at all!
  2. Second, even if we already have jQuery on our page, there's no reason to use it here; we can simply use angular.element and our component will still work when dropped into a project that doesn't have jQuery.
  3. Third, even assuming jQuery was required for this directive to work, jqLite (angular.element) will always use jQuery if it was loaded! So we needn't use the $ - we can just use angular.element.
  4. Fourth, closely related to the third, is that jqLite elements needn't be wrapped in $ - the element that is passed to the link function would already be a jQuery element!
  5. And fifth, which we've mentioned in previous sections, why are we mixing template stuff into our logic?
This directive can be rewritten (even for very complicated cases!) much more simply like so:
.directive( 'myDirective', function () {
    return {
        scope: true,
        template: '<a class="btn" ng-class="{active: on}" ng-click="toggle()">Toggle me!</a>',
        link: function ( scope, element, attrs ) {
            scope.on = false;

            scope.toggle = function () {
                scope.on = !scope.on;
            };
        }
    };
});
Again, the template stuff is in the template, so you (or your users) can easily swap it out for one that meets any style necessary, and the logic never had to be touched. Reusability - boom!
And there are still all those other benefits, like testing - it's easy! No matter what's in the template, the directive's internal API is never touched, so refactoring is easy. You can change the template as much as you want without touching the directive. And no matter what you change, your tests still pass.
w00t!
So if directives aren't just collections of jQuery-like functions, what are they? Directives are actually extensions of HTML. If HTML doesn't do something you need it to do, you write a directive to do it for you, and then use it just as if it was part of HTML.
Put another way, if AngularJS doesn't do something out of the box, think how the team would accomplish it to fit right in with ngClick, ngClass, et al.

Summary

Don't even use jQuery. Don't even include it. It will hold you back. And when you come to a problem that you think you know how to solve in jQuery already, before you reach for the $, try to think about how to do it within the confines of AngularJS. If you don't know, ask! 19 times out of 20, the best way to do it doesn't need jQuery, and trying to solve it with jQuery results in more work for you.

AngularJS vs. jQuery


AngularJS and jQuery adopt very different ideologies. If you're coming from jQuery you may find some of the differences surprising. AngularJS may make you angry.
This is normal, you should push through. AngularJS is worth it.

First up, AngularJS doesn't replace jQuery

AngularJS and jQuery do different things. AngularJS gives you a set of tools to produce webapps. jQuery mainly gives you tools for modifying the DOM. If jQuery is present on your page, AngularJS will use it automatically. If it isn't, AngularJS ships with jQuery Lite, which is a cut down, but still perfectly usable version of jQuery.
Misko likes jQuery and encourages you to use it. You still need to manipulate the DOM and jQuery is your tool for this. However, you can get most of your work done using templates, and you should prefer templates where possible.
People encouraging you to drop jQuery should stop encouraging you to do that. You might like to lay off the jQuery for a while while you learn what AngularJS can do, but jQuery is not going away just yet.
That said, you shouldn't be sprinkling jQuery all over the place. The correct place for jQuery and other DOM manipulations in AngularJS is in directives. More on these later.

Unobtrusive JavaScript vs. Declarative Templates

jQuery is typically applied unobtrusively. Your JavaScript code is linked in the header, and this is the only place it is mentioned. The JavaScript code wraps round the DOM like a snail on a twig, making changes as required. Onclick attributes are very bad practice.
One of the first things you will notice about AngularJS is that custom attributes are everywhere. Your HTML will be littered with ng attributes, which are essentially onClick attributes on steroids. These are directives, and they are one of the main ways in which the template is hooked to the model.
When you first see this you might be tempted to write AngularJS off as old school intrusive JavaScript (like I did at first). In fact, AngularJS does not play by those rules. In AngularJS, your HTML5 is a template. It is compiled by AngularJS to produce your web page.
This is the first big difference. To jQuery, your web page is a DOM to be manipulated. To AngularJS, your HTML is code to be compiled. AngularJS reads in your whole web page and literally compiles it into a new web page using its built-in compiler.
Your template should be declarative; its meaning should be clear simply by reading it. We use custom attributes with meaningful names. We make up new HTML elements, again with meaningful names. A designer with minimal HTML knowledge and no coding skill can read your AngularJS template and understand what it is doing. He or she can make modifications. This is the AngularJS way.

The template is in the driving seat.

One of the first questions I asked myself when starting AngularJS and running through the tutorials was "Where is my code?". I've written no JavaScript, and yet I have all this behaviour. The answer is obvious: because AngularJS compiles the DOM, AngularJS is treating your HTML as code. For simple cases it's often sufficient to just write a template and let AngularJS compile it into an application for you.
Your template drives your application. It's treated as a DSL. You write AngularJS components, and AngularJS will take care of pulling them in and making them available at the right time based on the structure of your template. This is very different to a standard MVC pattern, where the template is just for output.
It's more similar to XSLT than Ruby on Rails for example.

Semantic HTML vs. Semantic Models

With jQuery your HTML page should contain semantic, meaningful content. If the JavaScript is turned off (by a user or search engine) your content remains accessible.
Because AngularJS treats your HTML page as a template, the template is not supposed to be semantic, as your content is typically stored in your model. AngularJS compiles your DOM with the model to produce a semantic web page.
In AngularJS, meaning lives in the model, the HTML is for display only.
At this point you likely have all sorts of questions concerning SEO and accessibility, and rightly so. There are open issues here. Most screen readers will now parse JavaScript. Search engines may also be able to index Ajaxed content. Nevertheless, you will want to make sure you are using pushstate URLs and you have a decent sitemap. See here for a discussion of the issue: http://stackoverflow.com/a/23245379/687677

Separation of concerns vs. MVC

Separation of concerns is a pattern that has grown up over many years of web development for a variety of reasons including SEO, accessibility and browser incompatibility. It looks like this:
  1. HTML - Semantic meaning. The HTML should stand alone.
  2. CSS - Styling, without the CSS the page is still readable.
  3. JavaScript - Behaviour, without the script the content remains.
Again, AngularJS does not play by these rules. Angular instead implements an MVC pattern.
  1. Model - your models contains your semantic data. Models are usually JSON objects.
  2. View - Your views are written in HTML. The view is usually not semantic because your data lives in the model.
  3. Controller - Your controller is a JavaScript function which hooks the view to the model. Depending on your app, you may or may not need to create a controller. You can have many controllers on a page.
They are not on opposite ends of the same scale, they are on completely different axes.

Plugins vs. Directives

Plugins extend jQuery. AngularJS Directives extend the capabilities of your browser.
In jQuery we define plugins by adding functions to the jQuery.prototype. We then hook these into the DOM by selecting elements and calling the plugin on the result. The idea is to extend the capabilities of jQuery.
For example, if you want a carousel on your page, you might define an unordered list of figures, perhaps wrapped in a nav element. You might then write some jQuery to find the list on the page and restyle it as a gallery with timeouts to do the sliding animation.
In AngularJS, we define directives. A directive is a function which returns a JSON object. This object tells AngularJS what DOM elements to look for, and what changes to make to them. Directives are hooked in to the template using either attributes or elements, which you invent. The idea is to extend the capabilities of HTML with new attributes and elements.
The AngularJS way is to extend the capabilities of native looking HTML. You should write HTML that looks like HTML, extended with custom attributes and elements.
If you want a carousel, just use a <carousel /> element, then define a directive to pull in a template, and make that sucker work.

Closure vs. $scope

jQuery plugins are created in a closure. Privacy is maintained within that closure. It's up to you to maintain your scope chain within that closure. You only really have access to the set of DOM nodes passed in to the plugin by jQuery, plus any local variables defined in the closure and any globals you have defined. This means that plugins are quite self contained. This is a good thing, but can get restrictive when creating a whole application. Trying to pass data between sections of a dynamic page becomes a chore.
AngularJS has $scope objects. These are special objects created and maintained by AngularJS in which you store your model. Certain directives will spawn a new $scope, which by default inherits from its wrapping $scope using JavaScript prototypical inheritance. The $scope object is accessible in the controller and the view.
This is the clever part. Because the structure of $scope inheritance roughly follows the structure of the DOM, elements have access to their own scope, and any containing scopes seamlessly, all the way up to the global $scope (which is not the same as the global scope).
This makes it much easier to pass data around, and to store data at an appropriate level. If a dropdown is unfolded, only the dropdown $scope needs to know about it. If the user updates their preferences, you might want to update the global $scope, and any nested scopes listening to the user preferences would automatically be alerted.
This might sound complicated, but in fact, once you relax into it, it's like flying. You don't need to create the $scope object; Angular instantiates and configures it for you, correctly and appropriately based on your template hierarchy. Angular then makes it available to your component using the magic of dependency injection (more on this later).

Manual DOM changes vs. Data Binding

In jQuery you make all your DOM changes by hand. You construct new DOM elements programmatically. If you have a JSON array and you want to put it into the DOM, you must write a function to generate the HTML and insert it.
In AngularJS you can do this too, but you are encouraged to make use of data binding. Change your model, and because the DOM is bound to it via a template your DOM will automatically update, no intervention required.
Because data binding is done from the template, using either an attribute or the curly brace syntax, it's super easy to do. There's little cognitive overhead associated with it so you'll find yourself doing it all the time.
<input ng-model="user.name" />
Binds the input element to $scope.user.name. Updating the input will update the value in your current scope, and vice-versa.
Likewise:
<p>
  {{user.name}}
</p>
will output the user name in a paragraph. It's a live binding, so if the $scope.user.name value is updated, the template will update too.

Ajax all of the time

In jQuery making an Ajax call is fairly simple, but it's still something you might think twice about. There's the added complexity to think about, and a fair chunk of script to maintain.
In AngularJS, Ajax is your default go-to solution and it happens all the time, almost without you noticing. You can include templates with ng-include. You can apply a template with the simplest custom directive. You can wrap an Ajax call in a service and create yourself a GitHub service, or a Flickr service, which you can access with astonishing ease.

Service Objects vs Helper Functions

In jQuery, if we want to accomplish a small non-DOM-related task such as pulling a feed from an API, we might write a little function to do that in our closure. That's a valid solution, but what if we want to access that feed often? What if we want to reuse that code in another application?
AngularJS gives us service objects.
Services are simple objects that contain functions and data. They are always singletons, meaning there can never be more than one of them. Say we want to access the Stack Overflow API; we might write a StackOverflowService that defines methods for doing so.
Let's say we have a shopping cart. We might define a ShoppingCartService which maintains our cart and contains methods for adding and removing items. Because the service is a singleton, and is shared by all other components, any object that needs to can write to the shopping cart and pull data from it. It's always the same cart.
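A minimal sketch of that cart service (the module and method names are made up for illustration) could look like this:
// Hypothetical shopping cart service; as a singleton, every component
// that injects it shares the same cart.
myApp.service('ShoppingCartService', function() {
  var items = [];
  this.add = function(item) { items.push(item); };
  this.remove = function(item) {
    var index = items.indexOf(item);
    if (index > -1) { items.splice(index, 1); }
  };
  this.getItems = function() { return items; };
});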
Service objects are self-contained AngularJS components which we can use and reuse as we see fit.

Dependency injection (DI)

DI is a massive deal in AngularJS. It means that AngularJS will automatically instantiate your objects for you and ensure they are available where you need them. Until you start to use this, it's hard to explain just what a massive time-saver it is. Nothing like AngularJS DI exists inside jQuery.
DI means that instead of writing your application and wiring it together, you instead define a library of components, each identified by a string.
Say I have a component called 'FlickrService' which defines methods for pulling JSON feeds from Flickr. Now, if I want to write a controller that can access Flickr, I just need to refer to the 'FlickrService' by name when I declare the controller. AngularJS will take care of instantiating the component and making it available to my controller.
For example, here I define a service:
myApp.service('FlickrService', function() {
  return {
    getFeed: function() {
      // do something here
    }
  };
});
Now when I want to use that service I just refer to it by name like this:
myApp.controller('myController', ['FlickrService', function(FlickrService) {
  FlickrService.getFeed(); // the injected service is ready to use
}]);
Angular will recognise that a FlickrService object is needed to instantiate the controller, and will provide one for us.
This makes wiring things together very easy, and pretty much eliminates any tendency towards spaghettification.

Modular service architecture

jQuery says very little about how you should organise your code. AngularJS has opinions.
AngularJS gives you modules into which you can place your code. If you're writing a script that talks to Flickr, for example, you might want to create a Flickr module to wrap all your Flickr-related functions in. Modules can include other modules (via DI). Your main application is usually a module, and this should include all the other modules your application will depend on.
You get simple code reuse: if you want to write another application based on Flickr, you can just include the Flickr module and voilà, you have access to all your Flickr-related functions in your new application.
Modules contain AngularJS components. When we include a module, all the components in that module become available to us as a simple list identified by their unique strings. We can then inject those components into each other using AngularJS's dependency injection mechanism.
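A rough sketch of that composition (the module names are made up and the feed URL is illustrative):
// A hypothetical 'flickr' module wrapping all Flickr-related components.
angular.module('flickr', [])
  .factory('FlickrService', ['$http', function($http) {
    return {
      getFeed: function() {
        // placeholder URL for illustration
        return $http.get('https://api.flickr.com/services/feeds/photos_public.gne');
      }
    };
  }]);

// The main application module lists 'flickr' as a dependency, so
// FlickrService can be injected anywhere in the app.
angular.module('myApp', ['flickr']);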

To sum up

AngularJS and jQuery are not enemies. It's possible to use jQuery within AngularJS very nicely. If you're using AngularJS well (templates, data-binding, $scope, directives, etc.) you will find you need a lot less jQuery than you might otherwise require.
Think less about unobtrusive JavaScript, and instead think in terms of HTML extension.

Sunday, 12 October 2014

ASP.NET Web API – Why is it so cool? | Differences between WCF, Web API, WCF REST and Web Service

The .NET Framework has a number of technologies that allow you to create HTTP services, such as Web Service, WCF and now Web API. There are a lot of articles on the internet describing which one you should use. Nowadays, you have a lot of choices for building HTTP services on the .NET Framework. In this article, I would like to share my opinion on Web Service, WCF and Web API.

Web Service

  1. It is based on SOAP and returns data in XML form.
  2. It supports only the HTTP protocol.
  3. It is not open source but can be consumed by any client that understands XML.
  4. It can be hosted only on IIS.

WCF

  1. It is also based on SOAP and returns data in XML form.
  2. It is the evolution of the ASMX web service and supports various protocols such as TCP, HTTP, HTTPS, Named Pipes and MSMQ.
  3. The main issue with WCF is its tedious and extensive configuration.
  4. It is not open source but can be consumed by any client that understands XML.
  5. It can be hosted within an application, on IIS or as a Windows service.

WCF Rest

  1. To use WCF as a WCF REST service you have to enable webHttpBinding.
  2. It supports the HTTP GET and POST verbs via the [WebGet] and [WebInvoke] attributes respectively.
  3. To enable other HTTP verbs you have to do some configuration in IIS to accept requests for that particular verb on .svc files.
  4. Passing data through parameters using [WebGet] needs configuration: the UriTemplate must be specified.
  5. It supports the XML, JSON and ATOM data formats.

Web API

  1. This is the new framework for building HTTP services in an easy and simple way.
  2. Web API is open source and an ideal platform for building RESTful services on the .NET Framework.
  3. Unlike the WCF REST service, it uses the full features of HTTP (like URIs, request/response headers, caching, versioning, various content formats).
  4. It also supports MVC features such as routing, controllers, action results, filters, model binders, IoC containers or dependency injection, and unit testing, which make it simpler and more robust.
  5. It can be hosted within an application or on IIS.
  6. It has a lightweight architecture and is good for devices with limited bandwidth, like smartphones.
  7. Responses are formatted by Web API's MediaTypeFormatters into JSON, XML or whatever other format you want to add as a MediaTypeFormatter.

When to choose WCF or Web API

  1. Choose WCF when you want to create a service that should support special scenarios such as one-way messaging, message queues, duplex communication, etc.
  2. Choose WCF when you want to create a service that can use fast transport channels when available, such as TCP, Named Pipes, or maybe even UDP (in WCF 4.5), and you also want to support HTTP when all other transport channels are unavailable.
  3. Choose Web API when you want to create resource-oriented services over HTTP that can use the full features of HTTP (like URIs, request/response headers, caching, versioning, various content formats).
  4. Choose Web API when you want to expose your service to a broad range of clients, including browsers, mobile phones, iPhones and tablets.

    ASP.NET Web API – Why is it so cool?
    That’s a great question.
    Too many of the answers to that question come down to subjective, personal preference. Worse, you’ll often hear that it’s what you should use simply because it’s the new thing Microsoft is pushing (you know, like WCF once was). If you’re already making good use of ASP.NET MVC’s controller actions as a makeshift API, just hearing that Web API is “better” might not be all that compelling.
    However, I think Web API objectively wins out over MVC controller APIs in a few key areas. In this post, I’ll cover why these three aspects of ASP.NET Web API make it a clear choice for my ASP.NET-based APIs going forward:
  1. Content negotiation
  2. Flexibility
  3. Separation of concerns


Content negotiation

Content negotiation is one of those features that can seem overwrought at first glance. The technical demos we’ve seen most often, returning content like images and vcards based on the Accept header, are neat, but not something you’re likely to need on a daily basis.
However, content negotiation is still a nice feature even if you only expect to return a single type of serialization. Why? Because it decouples your API code’s intent from the mechanics of serialization.
For example, which would you rather write and maintain?

ASP.NET MVC

public class TweetsController : Controller {
  // GET: /Tweets/
  [HttpGet]
  public ActionResult Index() {
    return Json(Twitter.GetTweets(), JsonRequestBehavior.AllowGet);
  }
}

ASP.NET Web API

public class TweetsController : ApiController {
  // GET: /Api/Tweets/
  public List<Tweet> Get() {
    return Twitter.GetTweets();
  }
}
The choice between those two options is an easy one for me. Not only is the return value of List<Tweet> more expressive (especially when you revisit this code in a year or two), but letting Web API handle the serialization means you don’t need to clutter the action up with the Json() code.
Of course, you never know when it might be helpful someday for that same endpoint to support returning its data as XML, CSV, MessagePack, or some future serialization format. Content negotiation makes adding that across an entire API relatively simple with Web API, but it would be cumbersome to update a plethora of MVC controller actions.

Flexibility

It’s clear that flexibility was a central goal in Web API (without resorting to the configuration tedium that burdened WCF). Content negotiation is a great example of that, but that’s only the beginning. Web API starts with sensible defaults, but then provides a GlobalConfiguration object that you can use to tweak a wide range of options.
A couple more concrete examples:

Down with XML!

One initially disconcerting thing about Web API is that hitting an endpoint in your browser will result in an XML response. That’s correct behavior because regular requests in browsers usually send an Accept header preferring text/html and application/xml, but not JSON.
If you want to bring Web API’s behavior more in line with the output of MVC’s Json() helper, you can remove XML support from your API entirely. Just add this to the Application_Start event and you’ll never see XML again:
// Normally, you'd probably be doing this in a setup method that
// accepts the configuration as a parameter and this line wouldn't
// be necessary, but you can do it right in Application_Start too.
var config = GlobalConfiguration.Configuration;
 
config.Formatters.Remove(config.Formatters.XmlFormatter);
Now, even if a client’s Accept header prefers XML over JSON, Web API won’t respond with XML. Personally, I’d prefer to see Web API prefer JSON over XML by default, but it’s great that Web API is flexible enough to allow these global changes with a minimum of configuration ceremony.

Tweaking your API’s JSON format

You’re generally stuck with the JSON that JavaScriptSerializer wants to generate when you use MVC’s Json() helper. Luckily, JSS does a pretty good job of creating sane JSON, but it also opts you into some unusual conventions like “MSAjax” date encoding instead of the more common ISO format.
Web API and Json.NET allow you to tweak the JSON that your API produces very easily. For example, Json.NET defaults to using ISO dates, but you can switch that to the Microsoft format for backwards compatibility across your entire API with a simple configuration setting:
// Split across lines purely for purposes of fitting within my tiny
// 492px code blocks. You could do this on one line if you wanted.
var js = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
 
js.SerializerSettings.DateFormatHandling =
    Newtonsoft.Json.DateFormatHandling.MicrosoftDateFormat;
There’s a wide range of options you can set this way, from “pretty printing” the JSON to support for automatic translation between camelCase and PascalCase. For more examples, see this article on the ASP.NET site.
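As a rough sketch along the same lines, the camelCase translation just mentioned can be switched on via Json.NET's contract resolver:
// Serialize PascalCase C# property names as camelCase JSON keys.
var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
json.SerializerSettings.ContractResolver =
    new Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver();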

Separation of concerns

All of the preceding examples highlight one of my favorite overall advantages of using Web API. It’s possible to quickly drop these high-impact changes in across an entire API largely because Web API helps separate your API from the part of your project that returns HTML.
MVC controller actions are optimized for building up an HTML document through the use of things like views, partial views, and HTML helpers. That whole pipeline doesn’t really make sense in the context of an API that returns entities, DTOs, or ViewModels, and I think it’s valuable to keep them separate.

So separate, it’s not tied to a website at all

In fact, you may eventually want to isolate the API from your website to the point that it doesn’t even make sense to host the API within the same project. With Web API, you can self-host the same API code in a Windows Service or even a console app.
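Here's a rough sketch of what that looks like in a console app, assuming the original Web API self-host package (Microsoft.AspNet.WebApi.SelfHost); the base address and route are placeholders:
// Minimal self-host sketch; controllers in the same assembly are picked
// up by the default routing below.
using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class Program {
    static void Main() {
        var config = new HttpSelfHostConfiguration("http://localhost:8080");

        // Same routing setup you'd use in an MVC-hosted Web API.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        using (var server = new HttpSelfHostServer(config)) {
            server.OpenAsync().Wait();
            Console.WriteLine("Web API self-host running. Press Enter to quit.");
            Console.ReadLine();
        }
    }
}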
You could do that with an ASP.NET MVC “API” by creating a separate project for the API, but then you’re still paying the performance penalty for your API requests to filter through MVC’s rendering pipeline. In simple load testing on my local machine, I’ve found that Web API endpoints hosted in console apps are nearly 50% faster than both ASP.NET controller actions and Web API endpoints hosted within MVC projects.
It’s great to have that low-cost, high-performance option as usage of your API grows. Starting with Web API means you’re not tied to ASP.NET MVC (or any web framework) in the future. It’s quick and easy to migrate an MVC-hosted Web API to a centralized project of its own as your needs organically grow.

Conclusion

These are the three main reasons why I’m using Web API controllers for client-side callbacks in my ASP.NET MVC projects going forward. Even though many of those “APIs” will never grow into more than endpoints for AJAX callbacks, using Web API for those endpoints is still easier and more flexible than other built-in alternatives.
What do you think? Are you using Web API yet? Do you plan to migrate from MVC controller actions to Web API in the future? Did I miss an aspect of the MVC-centric approach that makes it work better than Web API for you? (I can think of one, but I haven’t seen many projects actually doing this in the real world anymore.)