Wednesday 27 August 2014

Single Responsibility Principle

According to the single responsibility principle:
A class, a method, or an assembly should have one and only one reason to change.

Explanation:


  • If a class has more than one reason to change, say n reasons, we have to split its functionality into n classes.
  • Each class will then handle only one responsibility, and if in future we need to make a change, we make it in the class which handles that responsibility.
  • When we need to make a change in a class that has more responsibilities, the change might affect the other functionality of that class.
  • The Single Responsibility Principle represents a good way of identifying classes during the design phase of an application, and it reminds you to think of all the ways a class can evolve.
  • A good separation of responsibilities is possible only when the full picture of how the application should work is well understood.

Example

Suppose I have a project for customer management, using which users can
a. do CRUD (Create, Retrieve, Update, Delete) operations on customer data,
b. send mails to the customer on Create/Update and to the Administrator on Delete, and
c. log exceptions if there is a failure.
So, the first step is to create three layers in the CustomerManagementProject:
a. UI
b. BAL
c. DAL
Now, UI takes care of all UI related functionality and DAL takes care of all data related functionality.
The middle tier has the business rules to
a. Create/Update/Delete Customers
b. Send email (and define the email format)
c. Log Exceptions.
The ManageCustomerBAL class bundles all three of these responsibilities, as sketched below.
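A minimal sketch, with illustrative class and method names (these are assumptions, not the exact code from the project), of a BAL class that mixes all three responsibilities:

using System;

public class ManageCustomerBAL
{
    public void AddCustomer(Customer customer)
    {
        try
        {
            // responsibility 1: customer CRUD
            new CustomerDAL().Insert(customer);

            // responsibility 2: email formatting and sending
            SendEmail(customer.Email, "Your account has been created");
        }
        catch (Exception ex)
        {
            // responsibility 3: exception logging
            LogException(ex);
        }
    }

    private void SendEmail(string to, string body) { /* SMTP code here */ }
    private void LogException(Exception ex) { /* write to a log file */ }
}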
Now, if there is a change in the business rules for sending email or for logging exceptions, we have to touch the ManageCustomerBAL class.
So, this class is violating SRP.

How To Resolve:


1. Although the BAL has to do all three functions of managing customers, sending mails and logging errors, let us have three separate projects:
a. ManageCustomerBAL
b. Exceptions
c. Email

So, now my solution looks something like:
Even though we have moved exception handling and email to two different projects, we still need to create an object of the exception class to log exceptions.
For example, my BAL refers to the Exception project like this:

Add a reference
Create an instance of the class
Call the method
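In code, that direct dependency might look like the following rough sketch (FileException and its Log method are illustrative names):

try
{
    // business logic for managing customers
}
catch (Exception ex)
{
    // the BAL creates the concrete logger itself – a hard-coded dependency
    FileException logger = new FileException();
    logger.Log(ex);
}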
So, now the sequence diagram looks like:
Similarly, to log exceptions by email we have another class in the LogException namespace.
But suppose the requirement changes and I need to log the exception to the event viewer as well. What do I do?

You cannot simply come back and create an instance of yet another class in ManageCustomerBAL.
If you do, you are again violating SRP.

So, what should I do??

Now, we need to add an interface for logging. The exception classes and the BAL refer to the exception logging methods through this interface.
So, what we need to do is replace the FileException class with a generic interface and have the FileException class implement this interface.
Step 1: Create a project to contain the Interface.
Step 2: Add a method to the interface, along the lines of the sketch below.
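A minimal sketch (the interface and method names are assumptions):

using System;

public interface ILogException
{
    // every logger (file, email, event viewer, ...) implements this
    void LogException(Exception ex);
}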


Step 4: Implement this interface in all the exception classes.

Step 5: Remove the reference to the Exception project from the BAL and add a reference to the interface project.
Step 6: Add a reference to the interface in the BAL.





Now that we have added an interface, there is going to be no change in the BAL when we add a new class to the LogException namespace.
The problem is how do I create an object for the interface?
So, I should use a factory class.
Step 7: Add an ExceptionFactory project.
Step 8: Add references to the Exception and interface projects in the ExceptionFactory project.
The interface is what will help identify the classes properly.
And how do you identify the class?
Step 9: You have a key-value pair in the web.config file, something like:

Step 10: In the ExceptionFactory, add a method which reads this web.config setting and creates objects of the required classes, along the lines of the sketch below.
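A minimal sketch of such a factory, assuming a web.config appSettings key named "ExceptionLogger" and illustrative class names (FileException, EventViewerException):

using System.Configuration;

public static class ExceptionFactory
{
    // reads the configured logger name and returns the matching ILogException implementation
    public static ILogException GetLogger()
    {
        string loggerType = ConfigurationManager.AppSettings["ExceptionLogger"];
        switch (loggerType)
        {
            case "EventViewer":
                return new EventViewerException();
            case "File":
            default:
                return new FileException();
        }
    }
}

The BAL then calls ExceptionFactory.GetLogger().LogException(ex) and never needs to know which concrete logger is in use.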

Step 11: Refer to the factory class in the BAL.
So, whenever there is a change in requirements, you need to change only two places:
a. The ExceptionFactory
b. The LogException class.

Dependency Inversion Principle

(A) High level modules should not depend upon low level modules. Both should depend upon abstractions. (B) Abstractions should not depend upon details. Details should depend upon abstractions.

Explanation

Scenario 1
You work in an organization where you and your colleagues tend to travel a lot. Generally you travel by air and every time you need to catch a flight, you arrange for a pickup by a cab. You are aware of the airline agency who does the flight bookings, and the cab agency which arranges for the cab to drop you off at the airport. You know the phone numbers of the agencies, you are aware of the typical conversational activities to conduct the necessary bookings.
Thus your typical travel planning routine might look like the following :
Decide the destination, and desired arrival date and time
Call up the airline agency and convey the necessary information to obtain a flight booking.
Call up the cab agency, request for a cab to be able to catch a particular flight from say your residence (the cab agency in turn might need to communicate with the airline agency to obtain the flight departure schedule, the airport, compute the distance between your residence and the airport and compute the appropriate time at which to have the cab reach your residence)
Pickup the tickets, catch the cab and be on your way
Now if your company suddenly changed the preferred agencies and their contact mechanisms, you would be subject to the following relearning scenarios
The new agencies, and their new contact mechanisms (say the new agencies offer internet based services and the way to do the bookings is over the internet instead of over the phone)
The typical conversational sequence through which the necessary bookings get done (Data instead of voice).
It's not just you, but probably many of your colleagues would need to adjust themselves to the new scenario. This could lead to a substantial amount of time getting spent in the readjustment process.
Scenario 2
Now let's say the protocol is a little bit different. You have an administration department. Whenever you need to travel, the administration department's interactive telephony system (which in turn is hooked up to the agencies) simply calls you up. Over the phone you simply state the destination and desired arrival date and time by responding to a programmed set of questions. The flight reservations are made for you, the cab gets scheduled for the appropriate time, and the tickets get delivered to you.
Now if the preferred agencies were changed, the administration department would become aware of a change, would perhaps readjust its workflow to be able to communicate with the agencies. The interactive telephony system could be reprogrammed to communicate with the agencies over the internet. However you and your colleagues would have no relearning required. You still continue to follow exactly the same protocol as earlier (since the administration department did all the necessary adaptation in a manner that you do not need to do anything differently).
Dependency Injection?
In both the scenarios, you are the client and you are dependent upon the services provided by the agencies. However Scenario 2 has a few differences.
You don't need to know the contact numbers / contact points of the agencies – the administration department calls you when necessary.
You don't need to know the exact conversational sequence by which they conduct their activities (Voice / Data etc.) (though you are aware of a particular standardized conversational sequence with the administration department)
The services you are dependent upon are provided to you in a manner that you do not need to readjust should the service providers change.
That's dependency injection in "real life". This may not seem like a lot since you imagine the cost to yourself as a single person, but if you imagine a large organization the savings are likely to be substantial.
Dependency Injection in a Software Context
Software components (clients) are often part of a set of collaborating components which depend upon other components (services) to successfully complete their intended purpose. In many scenarios, they need to know "which" components to communicate with, "where" to locate them, and "how" to communicate with them. When the way such services can be accessed is changed, such changes can potentially require the source of a lot of clients to be changed.
One way of structuring the code is to let the clients embed the logic of locating and/or instantiating the services as a part of their usual logic. Another way to structure the code is to have the clients declare their dependency on services, and have some "external" piece of code assume the responsibility of locating and/or instantiating the services and simply supplying the relevant service references to the clients when needed. In the latter method, client code typically is not required to be changed when the way to locate an external dependency changes. This type of implementation is considered to be an implementation of Dependency Injection and the "external" piece of code referred to earlier is likely to be either hand coded or implemented using one of a variety of DI frameworks.
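A minimal sketch of the second approach in C# (constructor injection; the interface and class names are illustrative assumptions):

using System;

// the abstraction the client depends on
public interface IBookingService
{
    void BookFlight(string destination, DateTime arrival);
}

// one concrete service; an internet based one can be added later without touching the client
public class PhoneBookingService : IBookingService
{
    public void BookFlight(string destination, DateTime arrival) { /* call the agency */ }
}

// the client declares its dependency and never locates or constructs the service itself
public class TravelPlanner
{
    private readonly IBookingService _bookingService;

    public TravelPlanner(IBookingService bookingService)
    {
        _bookingService = bookingService;
    }

    public void PlanTrip(string destination, DateTime arrival)
    {
        _bookingService.BookFlight(destination, arrival);
    }
}

// the "external" piece of code (a composition root or a DI container) does the wiring
var planner = new TravelPlanner(new PhoneBookingService());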

Benefits

  • Dependency Injection enables us to write loosely coupled code.
  • Dependent objects give up control of managing their dependencies and instead let a Composition Root inject the dependencies into them.
  • DI + Repository + programming against interfaces help in separation of concerns and greatly improve the overall application maintainability.
  • No framework is strictly required; the wiring can be hand coded, although DI frameworks make it easier.

Practical Examples:

MEF, Rhino, Spring etc.

Conclusion:

Use the principle intelligently.

Interface segregation principle


Clients should not be forced to depend upon interfaces that they don't use.

Explanation:

When designing an application, the abstraction of modules plays a vital role. We should take care that a client is not forced to implement an interface it does not actually need.
Such an interface is called a fat interface or a polluted interface. Interface pollution is not a good design and might induce inappropriate behavior in the system.
So, instead of one large interface we should have many smaller interfaces with grouped behavior.


Example:

For example, you have a service class for working with Category, and the IManageCategory interface exposes three methods:
  • Add
  • Update
  • Delete
For deployment and security reasons, we have to divide this CategoryClass into two different classes:

  •  CreateCategory will implement the Add method and run in untrusted environments. 
  •  UpdateCategory, will implement the other two methods and will only run in secure verified and authenticated context. 
Now if I use the IManageCategory interface, I have to implement dummy Update and Delete methods in my CreateCategory class and a dummy Add method in my UpdateCategory class.
So this violates the design rule of interface segregation.

So, we have to divide IManageCategory into two interfaces, IAddCategory and IUpdateCategory, as sketched below.
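A minimal sketch of the split (the method signatures are assumptions):

public interface IAddCategory
{
    void Add(Category category);
}

public interface IUpdateCategory
{
    void Update(Category category);
    void Delete(int categoryId);
}

// runs in untrusted environments and only needs Add
public class CreateCategory : IAddCategory
{
    public void Add(Category category) { /* ... */ }
}

// runs only in a secure, verified and authenticated context
public class UpdateCategory : IUpdateCategory
{
    public void Update(Category category) { /* ... */ }
    public void Delete(int categoryId) { /* ... */ }
}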

Advantages of Using ISP:

  1. Better Understandability
  2. Better Maintainability
  3. High Cohesion
  4. Low Coupling

Limitations

ISP like any other principle should be used intelligently when necessary otherwise it will result in a code containing lots of interfaces containing one method. So the decision should be taken intelligently.

The Liskov Substitution Principle

This principle states that: “Any derived class should be substitutable for its base class.”

Explanation

We must make sure that the new derived classes just extend without replacing the functionality of old classes. Otherwise the new classes can produce undesired effects when they are used in existing program modules.
Liskov's Substitution Principle states that if a program module is using a base class, then the reference to the base class can be replaced with a derived class without affecting the functionality of the program module.

How to know its violated?

  • A subclass that does not keep all the externally observable behavior of its parent class
  • A subclass that modifies, rather than extends, the externally observable behavior of its parent class
  • A subclass that throws exceptions in an effort to hide certain behavior defined in its parent class
  • A subclass that overrides a virtual method defined in its parent class with an empty implementation in order to hide certain behavior

Example

Where it is violated:
Let us say you have a class Rectangle, and you create a child class Square that keeps its width and height equal. Suppose I now calculate the area of a Rectangle through a base-class reference. The application is never going to give me the correct output: it will always show the area of a square and not that of the rectangle, as the sketch below illustrates.
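A minimal sketch of the classic Rectangle/Square violation (the property names are illustrative):

using System;

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area() { return Width * Height; }
}

public class Square : Rectangle
{
    // keeping the sides equal silently changes the behavior a caller expects from Rectangle
    public override int Width
    {
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        set { base.Height = value; base.Width = value; }
    }
}

// client code written against Rectangle
Rectangle rect = new Square();
rect.Width = 5;
rect.Height = 10;
// expected 50, but a Square returns 100 – the subclass is not substitutable
Console.WriteLine(rect.Area());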
Where it is followed:
Let us consider a banking example. The "SavingsWithDrawform" class has a method which accepts an "Accounts" class object, as shown below.
public void WithDraw( Accounts objAcc)
{
//Code Implementation of Account objects
}
The principle of LSP states that it should be legal to pass a "SavingsAccount" object to the same method, as shown:
public void WithDraw(SavingsAccounts objAcc)
{
//Passing the derived type should be legal
}

Where to use this?

Ideally you should use it wherever you are going to use a sub-class.
But then,
In the words of Robert Martin, Agile Principles, Patterns and Practices in C# (P.149):
"A good engineer learns when compromise is more profitable than perfection. However, conformance to LSP should not be surrendered lightly. The guarantee that a subclass will always work where its base classes are used is a powerful way to manage complexity. Once it is forsaken, we must consider each subclass individually."

Open Close Principle

Open Close Principle


What does the principle say?


OCP, the Open Closed Principle: you should be able to extend a class's behavior without modifying it.
The Open-Closed Principle (OCP) states that software entities (classes, modules, methods, etc.) should be open for extension, but closed for modification.


We should strive to write code that doesn’t have to be changed every time the requirements change.


Importance



This is especially valuable in a production environment, where changes to source code may necessitate code reviews, unit tests, and other such procedures to qualify it for use in a product: code obeying the principle doesn't change when it is extended, and therefore needs no such effort.
It reduces development, regression testing and maintenance time.

How do you know it's violated?


You have lots of if-else blocks or switch-case statements, and enums are used to select behavior.
If you have not used polymorphism and inheritance in your code, there is a great chance that you are violating the Open-Closed Principle.

Example

Suppose I need to calculate the area of various shapes: rectangle, circle, square. There are various ways to do it, but if you use if...else / switch...case statements, the day you are asked to calculate the area of an ellipse you will have to change the code and retest.
So, what is suggested is to segregate the responsibility of calculating area into a separate abstract class,
say:
Inherit this in all the classes for which you will calculate area, as sketched below.
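A minimal sketch of the idea (the type names are illustrative):

using System;

public abstract class Shape
{
    // each shape knows how to calculate its own area
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area() { return Width * Height; }
}

public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area() { return Math.PI * Radius * Radius; }
}

// adding an Ellipse later means adding a new class, not modifying existing code
public class Ellipse : Shape
{
    public double A { get; set; }
    public double B { get; set; }
    public override double Area() { return Math.PI * A * B; }
}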

Where should I implement this?

You should implement OCP after giving it proper thought.
You should not anticipate changes in requirements ahead of time, as at least my psychic abilities haven’t surfaced yet and preparing for future changes can easily lead to overly complex designs. Instead, I would suggest that we focus on writing code that is well written enough so that it’s easy to change if the requirements change.

SOLID Design Principles

SOLID : Design Principles

Software design principles represent a set of guidelines that help us avoid having a bad design. The design principles are associated with Robert Martin, who gathered them in "Agile Software Development: Principles, Patterns, and Practices".
According to Robert Martin there are 3 important characteristics of a bad design that should be avoided:
  • Rigidity - It is hard to change because every change affects too many other parts of the system.
  • Fragility - When you make a change, unexpected parts of the system break.
  • Immobility - It is hard to reuse in another application because it cannot be disentangled from the current application.

The following 2 characteristics should be taken care of when designing any software:
  • High Cohesion - How focused are the responsibilities of the modules you are designing.
  • Low Coupling - The degree to which modules rely on other modules.
Principles Of Object Oriented Class Design (the "SOLID" principles)
S => SRP Single responsibility principle - the notion that an object should have only a single responsibility.

O => OCP Open/closed principle the notion that “software entities … should be open for extension, but closed for modification”. 

L => LSP Liskov substitution principle the notion that “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program”. 

I=>  ISP Interface segregation principle the notion that “many client specific interfaces are better than one general purpose interface.” 

D => DIP Dependency inversion principle the notion that one should “Depend upon Abstractions. Do not depend upon concretions.” Dependency injection is one method of following this principle.

I think that these form the foundation of Object Oriented design that allows concise, modular, and ultimately maintainable code. This means code that you (or others) can understand and modify easily and without causing unintended consequences for the function of the application in which the code resides.

Although the above design principles are good and helpful, they should be used intelligently; otherwise they may lead to problems.

The other principle that is of great importance is YAGNI (You ain't gonna need it).

According to YAGNI:
  • The time spent on an unneeded feature is taken from adding, testing or improving necessary functionality.
  • The new features must be debugged, documented, and supported.
  • Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now opens the possibility of conflicting with a necessary feature later.
  • Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work correctly, even if it eventually is needed.
  • It leads to code bloat; the software becomes larger and more complicated.
  • Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
  • Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards feature creep.



But then futuristic thinking is also a must while designing a software.
So, you should always take design decisions intelligently.

How to manage effective meetings?

I was recently listening to an interview with Jason Fried, founder of 37 signals and co-author of the book Rework, and it really got me thinking about meetings.  He pointed out that everyone hates meetings, from the lowest workers right up to the top managers, and yet we keep having them.  There are many reasons to hate meetings but here are some of the top ones that I run into.
  1. Interrupts my most productive hours
  2. Often meetings are not relevant to me or my job
  3. Meeting content is relevant but not important enough to warrant a meeting
  4. Meeting leader has no clear goal
  5. Meeting leader cannot manage participants who slow progress
  6. Too many people
So after listening to Jason, I’m convinced that the problem isn’t just about making time in meetings effective.  It is also about reducing the number of meetings.

How can one minimize the number of meetings?  Here are some suggestions.

  1. Use passive communication technologies (eg email, message boards, wikis) that allow team members to respond at their own convenience.
  2. Have scheduled time for not checking any of these passive messages.   For example, no email will be checked from 10:30 to 2:30.  Companies could even go so far as have the email servers not deliver messages during those hours.  But what about urgent messages?  Ok, it happens (which is why we have phones) but with email, everything is urgent…which seems to also mean nothing is urgent.  Having some dead hours will teach employees how to work on other tasks and schedule *urgent* tasks at times when people are able to effectively deal with them.
  3. Have a “no talking, no meetings” morning every week (or month, or whatever works for your company).
  4. Reduce the number of people required to attend meetings.   If too many people are involved in a decision, there can be too much debate and decisions are often worse  since no one really has to take responsibility.
  5. Delegate decisions. Bosses have lots of meetings so they can effectively use their time to make many decisions.  The problem is that while the boss's time may be more effective, everyone who is in the meeting is less effective.  If leaders can delegate decisions, fewer meetings will be required.
  6. Each time you call a meeting, consider if it is possible to resolve the meeting’s goal using some other method such as a short one-on-one chat (and a chat is definitely different from a scheduled meeting).
  7. Stand-up meetings. I'm ashamed to admit that I have not tried one of these yet but short stand-up meetings seem like a great way to eliminate the long sitting meetings that eat up everyone's time.
And how does this make meetings effective?
  1. When meetings are rare, everyone’s mindset changes a bit to understand that meetings are important.  Time in meetings becomes important.
  2. By reducing meeting participants, it becomes much easier to make decisions.

Burn Down Charts : Agile Task Tracking

Eventually I became frustrated with Gantt charts. Sure, they had served me reasonably well for a number of years but I had a few problems with them. First, I found them time consuming to maintain whenever changes to a task's timeline occurred. Second, the charts require tasks to be put in some sort of order at the beginning of an iteration which often isn't representative of when tasks will actually be performed. Third, Gantt chart project management software included so many extra features that just creating and maintaining basic charts seemed to require some sort of certification. I longed for something simpler.
The solution for me was to use burn down charts. It is just a graph that shows how much time is left in a project vs how much work is left to be done (as shown below). This article provides an overview on how to effectively create and manage burn down charts using nothing but a spreadsheet.



Figure 1: Burn down chart

If you have heard of burn down charts before, then likely it was in the context of agile software development. In this article, I will try to describe burn down charts in a manner such that they can be applied to a number of different types of projects, not just software projects.
One term to be familiar with is iteration. In a software project, an iteration refers to a set period of time where the various stages of the software development process are performed to provide some sort of release. Iterations are performed over and over, each time refining the product closer to a final release. However, in the context of this article, an iteration will just be a specific amount of time with an assigned set of tasks.

Reading Burn Down Charts

Burn down charts provide a method to track your progress on a daily basis. The axis on the left shows the remaining effort required to complete the iteration and the axis on the bottom contains the number of days until the iteration deadline. The remaining effort is determined by summing the time estimates for incomplete tasks.
In figure 1, the blue line shows the ideal scenario if your team performs exactly as predicted by your task estimates and the red line shows the actual performance. At day 0 (the first day of the iteration), the remaining effort is at its highest because nothing has been completed. At the end of the iteration (day 20), the sum should be 0 because there are no tasks left to be completed.
You want the red line (your team’s actual performance) to be close to the blue line. When it is above the blue line, then your team is behind schedule and when it is below the blue line, your team is ahead of schedule.


Figure 2: Chart showing areas above the blue line as being behind schedule and below the blue line as being ahead of schedule
If the actual remaining effort line is above the blue line for an extended period, then it means adjustments have to be made to the project. This could mean dropping a task, assigning additional resources, or working late, all of which can be unpleasant but because of the burn down chart, at least you can deal with it sooner rather than just before a deadline.
Not only are burn down charts intuitive to read, but they also require no adjustments when task scheduling changes, which makes them easy to maintain as well. Priority and task start/end dates are never referenced when generating the graph so within an iteration, task priority and start/end dates can be changed without affecting the burn down chart at all. This significantly reduces the amount of time spent on adjustments when compared to other progress tracking methods.

Creating a Burn Down Chart

Step 1: Track Tasks

The first step in iteration tracking is creating an issues log to manage tasks. If you have a separate issues logging software, it is probably suitable for a lot of the details. The required information for burn down chart generation is just the task id and the time estimate.
Table 1: Issues log
Notes on filling out the issues log:
  1. Adding tasks: For example, complete a requirement, or correct a bug, etc
  2. Estimating task time: This should be for an average team member (not you), and should have team consensus when possible. I generally estimate at an accuracy of 0.25 days so that simple tasks do not get excluded.
    Important: If tasks are longer than a few days, break them into sub tasks. As you will learn later in this article, partially completed tasks do not result in any updates to the burn down chart so performance resolution is proportional to task length.
  3. Prioritizing: Prioritize each task in groups of 10. The highest priority is 10, 20 is lower, 30 is lower still, etc. Incrementing by 10 seems like an odd choice at first but it is likely that some tasks have been missed or need to be split so having a second digit available is handy for task insertions.

Step 2: Track Iterations

Using the issues log, it is now possible to generate a burn down chart based on the template shown below. After the template is created, only the green cells need to be adjusted for new iterations.
 Figure 3: Burn down spreadsheet template

2.1 Iteration Setup

The goal of this section is to determine how many tasks you can fit into an iteration, and find equations for the number of man days used each day and the ideal remaining effort line.
Table 2: Iteration setup

  • Start Date – The date when work on the iteration starts.
  • End Date – The planned date for when the iteration should end.
  • # of Developers – If one developer is shared between 2 projects, you might want to include him/her as a fraction, for example 0.5 developers.
  • Efficiency Factor – A measure of your team's productivity and task estimate accuracy. Use 0.7 as a starting point; after the first iteration you will be able to obtain an updated value from the spreadsheet. It is calculated as (# of task days completed) / (# of man days used). Based on past performance, the efficiency factor adjusts the effective number of days available to work on a project so that your estimates become more in line with reality. This eliminates problems with consistent under- or over-estimates. It is possible to have an efficiency factor greater than 1, which means your time predictions are greater than how long it actually takes to perform a task; this does not require any special consideration.
  • Work Days – The number of work days between Start Date and End Date. For software development, 20 is a good starting point, which is one month (5 working days a week for 4 weeks).
  • Man Days – The total number of man days available during the iteration: (Work Days) * (# of Developers).
  • Effective Man Days – The amount of time that is available for actually working on tasks: (Efficiency Factor) * (Man Days).
  • m (Ideal Remaining Effort) – Slope for an ideal iteration (see burn down chart): -(# of task work days) / (Work Days).
  • b (Ideal Remaining Effort) – Intercept for an ideal iteration; this should equal Effective Man Days.
  • m (Man Days Used) – Slope for calculating the number of man days used per day; this will be used later for updating the efficiency factor after an iteration.
  • b (Man Days Used) – Always 0.
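As a quick illustration of these formulas, a small C# sketch (the numbers are only examples):

using System;

// example iteration setup
double workDays = 20;          // one month of work days
double developers = 2.5;       // one developer is shared, counted as 0.5
double efficiencyFactor = 0.7; // starting point before real data exists

double manDays = workDays * developers;                // 50
double effectiveManDays = efficiencyFactor * manDays;  // 35 days of task estimates fit this iteration

// ideal remaining effort line: starts at effectiveManDays and reaches 0 on the last day
double slope = -effectiveManDays / workDays;
for (int day = 0; day <= workDays; day++)
{
    double idealRemaining = effectiveManDays + slope * day;
    Console.WriteLine($"Day {day}: ideal remaining effort = {idealRemaining:F1}");
}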

2.2 Tasks in the Iteration


Table 3: Assigning tasks to an iteration
In the last step, we determined how many days we have available to work on tasks during an iteration (Effective Man Days). Next, tasks need to be assigned to the iteration. Simply add the Ids for high priority tasks to the Assigned Task Id column until the total time required to complete the iteration matches "Effective Man Days". If there is some time at the end but the higher priority tasks are too long to fill it, select a shorter, lower priority task. Do not try to squeeze an extra day into an iteration.

For the example project, there is an extra task added to the bottom in yellow to cover various tasks associated with closing an iteration. This task can be added to the issues log instead if that is more appropriate for your team.

2.3 Burn Down Chart

The final step is to create a table for generating the burn down chart, as shown below.


  • Work Date – The date that corresponds to a particular work day.
  • Work Days (x-axis) – Each day that can be worked; it should start at 0 and go all the way up to the Work Days value from 2.1.
  • Ideal Remaining Effort (y-axis, ideal) – The ideal amount of task time that should be remaining for a given work day.
  • Actual Remaining Effort (y-axis, reality) – Your team's actual performance based on the sum of all the incomplete task estimates: (Work Days) – (Total Tasks Completed).
  • Completed Tasks – John / Sue / … – One column for each individual to track how much actual effort they have produced. When a task from the issues log is completed, the developer simply puts the estimate in this column. Some notes:
    • Record the task estimate time, not the actual time worked
    • Only record completed tasks. Tasks that are 99% done are still not complete and do not get entered into this table.

Graphing the Ideal Tasks Remaining column and the Actual Tasks Remaining column against the Work Day column generates a burn down chart as shown below.
 
 Figure 4: Final burn down chart

Tools

Desktop Spreadsheet Software (Excel) – Excel works fine but unless it is hosted somewhere for every developer to edit, the project manager will be required to update all the completed tasks.
Google Docs Spreadsheet – A hosted solution such as a Google Docs spreadsheet (free) is generally the best choice. Google Docs does formulas and charts like Excel and it also allows team members to update their own progress. This is good because
  • Team members can see how they measure up compared to others in terms of productivity. Some people might see this as a disadvantage but I think it helps motivate.
  • Team members get used to updating the task log with new tasks so the project manager doesn’t have to always maintain it.
  • All members can see the current iteration status at any time.

Important Summary Points

  • Start with 0.7 as the Efficiency Factor.
  • One month is a good iteration timeline for many projects (20 work days).
  • Only record progress against completed tasks. If a task is 99% done, it is still not complete and cannot be used for a release.
  • If actual performance is significantly above the ideal iteration line in the burn down chart, investigate and correct the issue by dropping tasks, assigning additional resources, or working overtime.
  • Avoid task estimates that are longer than a few days. Break long tasks into shorter ones.
  • Measure priority in tens, as in, 10, 20, 30…
  • Use hosted solutions like Google Docs for tracking the project.
  • Never try to squeeze an extra day into an iteration
  • Use fractions for determining the number of developers if time is split across projects.
  • Burn down charts eliminate some overhead associated with other methods such as Gantt charts but are obviously not suitable if that extra overhead is required based on other project constraints

Template Download Links
Google Docs Template – Make Copy (IMPORTANT: Do not edit this document! Please just make a copy)
Google Docs Template Master (Original template, not available for copying)

C# Framework Core

  • Memory 
  • What is the difference between a primitive type and a reference type?
    • Primitive (value) types hold their data directly, while reference types hold a reference to an object allocated on the heap
  • What does the garbage collector do?
    • Handles memory de-allocation for objects
  • How is it implemented? A rough approximation is fine.
    • Basic algorithm
      • Model as a directed graph
      • Each object instance is a vertex
      • Each reference to an instance is an edge (So if one object references another, then there is a directed edge from one to the other)
      • Every so often, a depth first search occurs
        • All objects that are visited are considered “marked”
        • Therefore, unmarked objects are no longer referenced
      • Once the available memory is used, a “sweep” occurs where memory for all unmarked objects is released
      • The method described is called mark and sweep garbage collection.
  • What is generational garbage collection?
    • Some objects exist for longer times than other objects.  They do not need to be continuously checked in the mark and sweep algorithm described above
    • When a sweep occurs, marked objects get promoted to a different graph (a generation) that is not processed as frequently
      • The .NET garbage collector has 3 generations (0, 1 and 2)
  • What is the difference between Finalize and Dispose?
    • Finalize is a destructor for an object and gets called when the garbage collector destroys the object.  The programmer does not determine when finalize gets called.
    • IDisposable, on the other hand, allows programmers to determine when an object can be destroyed by calling Dispose.  This allows programmatic release of resources like database connections. Note that the memory still is not released until the garbage collector reclaims it.
  • Why would someone want to implement IDisposable?
    • One common reason is to ensure efficient use of limited resources like database connections
    • Another use would be to wrap processes that have specific code that needs to be called at a start and finish point (ie, a database transaction block)
  • How could someone use the using statement to implement IDisposable?
    • using(MyObject obj = new MyObject()){ … }
    • When execution reaches the closing }, MyObject's Dispose method is called (a fuller sketch appears at the end of this section)
  • What is a WeakReference?  Why would you use one?
    • A reference that doesn’t prevent garbage collection
    • An example use would be to maintain a cache
  • What is the difference between stack memory and heap memory?
    • Stack – contains memory for primitive types & pointers
    • Heap – used to allocate memory for objects that are created
  • What are the memory considerations when using recursion with many levels?
    • Each recursive call adds a frame to the stack, so very deep recursion can exhaust stack memory and cause a StackOverflowException; converting the algorithm to an iterative form avoids this

  • What is boxing and unboxing?
    • Boxing – the term used to describe the movement of a value from the stack to the heap (i.e. converting a value type to a reference type)
    • Unboxing – from heap to stack
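As a follow-up to the IDisposable and using-statement questions above, a minimal sketch (MyConnection is an illustrative type):

using System;

public class MyConnection : IDisposable
{
    // pretend this wraps a limited resource such as a database connection
    public void Open() { /* acquire the resource */ }

    public void Dispose()
    {
        // release the resource deterministically; the memory itself is still
        // reclaimed later by the garbage collector
    }
}

// Dispose is called automatically when execution leaves the using block,
// even if an exception is thrown inside it
using (MyConnection conn = new MyConnection())
{
    conn.Open();
    // work with the connection
}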

CLR

  • What is CLR?
    • Common Language Runtime – the runtime environment of .net that executes managed code and provides services such as JIT compilation, garbage collection and type safety
  • What is CLI?
    • Common Language Infrastructure – a specification that .net languages are built on.  The CLR is an implementation of the CLI
  • What is CIL?
    • Common Intermediate Language – .net languages such as C# get compiled to CIL bytecode.  This format is understood by the CLR
  • What is JIT?
    • Just In Time compiler - CIL is compiled at runtime as needed into native code
    • It executes code using a hybrid of the interpreted and ahead-of-time approaches.  Code is first interpreted and then the compiled commands are cached for later use.
  • How can JIT code be faster than ahead of time compiled code?
    • The JIT compiles code for the computer's specific hardware configuration, whereas ahead-of-time compiled code is built for a set of computers meeting a more general specification.  This means the JIT can take advantage of very specific hardware features.
  • Explain the path from C# source to native code
    • C# gets compiled to CIL
    • CIL is then executed by the CLR.  It does this using a JIT compiler which converts the CIL into native code

Assemblies

  • What is the difference between a service and a standard exe?
    • A Windows service runs in the background without a user interface, is started and stopped by the Service Control Manager, and can run even when no user is logged in; a standard exe is launched by a user and runs in that user's interactive session
  • What is the difference between an exe and a dll?
    • DLL is a library of useful code that can be used by other code when referenced
    • EXEs have this ability as well.  However, they also offer an entry point that is able to start executing commands
  • What is DLL hell?
    • DLL hell refers to a set of problems caused by DLL sharing in applications before .net.  Multiple programs may share one dll.  When a new program is installed, it may overwrite a previously installed shared DLL with a newer version.  This version may not be backwards compatible breaking programs that were previously running correctly.
  • What is an assembly qualified name?
    • It is a way to ensure that a type is associated with the proper assembly binary
    • TopNamespace.SubNameSpace.ContainingClass+NestedClass, MyAssembly, Version=1.3.0.0, Culture=neutral, PublicKeyToken=b17a5c561934e089
  • What is the GAC? What does the GAC do?
    • The GAC is the global assembly cache.
    • It stores assemblies that are meant to be shared but avoids the DLL hell mentioned above (by using assembly qualified names)
  • How do you add assemblies to the GAC?
    • The most common way is to use a windows installer
    • Using gacutil.exe is another method

Misc

  • What is reflection?
    • Reflection offers the ability to find information about types at runtime and use that information to dynamically create instances and call methods and properties
  • What are some examples of uses for reflection?
    • Custom serialization
    • Custom binding
    • ORM mapping
  • What is the difference between a.Equals(b) and a == b?
    • For value types, the expressions are the same
    • For reference types, a==b is true only when the objects have the same reference (pointer)
    • a.Equals(b), on the other hand, is true for reference types with different pointers provided they have the same value.  An example usage would be for a class that is mapped to a database table.  There could be two different instances of the class with an id of 5.  When using ==, the comparison would return false.  However, if Equals is appropriately overridden to compare the id, then a.Equals(b) should return true since they have the same id (see the sketch below)
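A minimal sketch of that behavior (the Customer class and its Equals override are illustrative):

using System;

public class Customer
{
    public int Id { get; set; }

    public override bool Equals(object obj)
    {
        // two customers are considered equal when their ids match
        return obj is Customer other && other.Id == Id;
    }

    public override int GetHashCode() { return Id; }
}

Customer a = new Customer { Id = 5 };
Customer b = new Customer { Id = 5 };

Console.WriteLine(a == b);        // False – different references
Console.WriteLine(a.Equals(b));   // True  – same id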

Primitive Type implementations

Knowing details about primitive types is essential when dealing with many activities like efficiency, boundaries, and memory usage, and is something senior developers should be able to at least approximate.  If you don't know this table, you will never think about it.  However, once you know it, you will be surprised at how often you take these additional details into consideration when developing code.
Type    | Size     | Range (signed)                                           | Range (unsigned)                | Order  | Precision
bool    | 8 bits   | 0 to 1 (false/true)                                      |                                 |        |
byte    | 8 bits   | -128 to 127 (sbyte)                                      | 0 to 255                        | 10^2   |
short   | 16 bits  | -32,768 to 32,767                                        | 0 to 65,535                     | 10^4   |
int     | 32 bits  | -2,147,483,648 to 2,147,483,647                          | 0 to 4,294,967,295              | 10^9   |
long    | 64 bits  | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 | 0 to 18,446,744,073,709,551,615 | 10^19  |
float   | 32 bits  | -3.402823e38 to 3.402823e38                              |                                 | 10^38  | 7 digits
double  | 64 bits  | -1.79769313486232e308 to 1.79769313486232e308            |                                 | 10^308 | 15-16 digits
decimal | 128 bits | ±1.0 × 10^-28 to ±7.9 × 10^28                            |                                 | 10^28  | 28-29 digits
char    | 16 bits  |                                                          | 0 to 65,535                     | 10^4   |


How do the MVP, MVC, and MVVM patterns relate? When are they appropriate?

Those who know me know that I have a passion for software architecture and after developing projects using Model-View-ViewModel (MVVM), Model-View-Presenter (MVP), and Model-View-Controller (MVC),  I finally feel qualified to talk about the differences between these architectures.  The goal of this article is to clearly explain the differences between these 3 architectures.

First, the let’s define common elements.  All 3 of the architectures are designed to separate the view from the model.

Model

  • Domain entities & functionality
  • Knows only about itself and not about views, controllers, etc.
  • For some projects, it is simply a database and a simple DAO
  • For some projects, it could be a database/file system, a set of entities, and a number of classes/libraries that provide additional logic to the entities (such as performing calculations, managing state, etc)
Implementation: Create classes that describe your domain and handle functionality.  You probably should end up with a set of  domain objects and a set of classes that manipulate those objects.

View

  • Code that handles the display
  • Note that view related code in the codebehind is allowed (see final notes at the bottom for details)
Implementation:  HTML, WPF, WindowsForms, views created programmatically – basically code that deals with display only.

Differences between Presenters, ViewModels and Controllers

This is the tricky part.  Some things that Controllers, Presenters, and ViewModels have in common are:
  • Thin layers
  • They communicate with the model and the view
The features of each.

Presenter (Example: WinForms)

  • 2 way communication with the view
  • View Communication: The view communicates with the presenter by directly calling functions on an instance of the presenter.  The presenter communicates with the view by talking to an interface implemented by the view.
  • There is a single presenter for each view
Implementation:
  • Every view’s codebehind implements some sort of IView interface.  This interface has functions like displayErrorMessage(message:String), showCustomers(customers:IList<Customer>), etc.  When a function like showCustomers is called in the view, the appropriate items passed are added to the display.  The presenter corresponding to the view has a reference to this interface which is passed via the constructor.
  • In the view’s codebehind, an instance of the presenter is referenced.  It may be instantiated in the code behind or somewhere else.  Events are forwarded to the presenter  through the codebehind.  The view never passes view related code (such as controls, control event objects, etc) to the presenter.
A code example is shown below.
//the view interface that the presenter interacts with
public interface IUserView
{
    void ShowUser(User user);
    ...
}
 
//the view code behind
public partial class UserForm : Form, IUserView
{
    UserPresenter _presenter;
    public UserForm()
    {
        _presenter = new UserPresenter(this);
        InitializeComponent();
    }
 
    private void SaveUser_Click(object sender, EventArgs e)
    {
        //get user from form elements
        User user = ...;
        _presenter.SaveUser(user);
    }
 
    ...
}
 
public class UserPresenter
{
    IUserView _view;
    public UserPresenter(IUserView view){
        _view = view;
    }
 
    public void SaveUser(User user)
    {
 ...
    }
    ...
}

ViewModel (Example: WPF, Knockoutjs)

  • 2 way communication with the view
  • The ViewModel represents the view.  This means that fields in a view model usually match up more closely with the view than with the model.
  • View Communication:  There is no IView reference in the ViewModel.  Instead, the view binds directly to the ViewModel.  Because of the binding, changes in the view are automatically reflected in the ViewModel and changes in the ViewModel are automatically reflected in the view.
  • There is a single ViewModel for each view
Implementation:
  • The view’s datacontext is set to the ViewModel.  The controls in the view are bound to various members of the ViewModel.
  • Exposed ViewModel properties implement some sort of observable mechanism that can be used to automatically update the view (with WPF this is INotifyPropertyChanged; with knockoutjs this is done with the functions ko.observable() and ko.observableArray()), as sketched below.
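A minimal WPF-style sketch of a ViewModel raising change notifications (the UserViewModel name and its property are illustrative):

using System.ComponentModel;

public class UserViewModel : INotifyPropertyChanged
{
    private string _userName;

    public event PropertyChangedEventHandler PropertyChanged;

    public string UserName
    {
        get { return _userName; }
        set
        {
            _userName = value;
            // the binding engine listens for this event and refreshes the bound control
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(UserName)));
        }
    }
}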

Controller (Example: ASP.NET MVC Website)

  • The controller determines which view is displayed
  • Events in the view trigger actions that the controller can use to modify the model or choose the next view.
  • There could be multiple views for each controller
  • View Communication:
    • The controller has a method that determines which view gets displayed
    • The view sends input events to the controller via a callback or registered handler.  In the case of a website, the view sends events to the controller via a url that gets routed to the appropriate controller and controller method.
    • The view receives updates directly from the model without having to go through the controller.
      • Note: In practice, I don’t think this particular feature of MVC is employed as often today as it was in the past.  Today, I think developers are opting for MVVM (or MVP) over MVC in most situations where this feature of MVC would have been used.  Websites are a situation where I think MVC is still a very practical solution.  However, the view is always disconnected from the server model and can only receive updates with a request that gets routed through the controller.  The view is not able to receive updates directly from the model.
Implementation (for web):
  • A class is required to interpret incoming requests and direct them to the appropriate controller.  This can be done by just parsing the url.  Asp.net MVC does it for you.
  • If required, the controller updates the model based on the request.
  • If required, the controller chooses the next view based on the request.  This means the controller needs to have access to some class that can be used to display the appropriate view.  Asp.net MVC provides a function to do this that is available in all controllers.  You just need to pass the appropriate view name and data model, as in the sketch below.
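A minimal ASP.NET MVC-style controller sketch (CustomerController, the repository, and the view name are illustrative):

public class CustomerController : Controller
{
    private readonly ICustomerRepository _repository;

    public CustomerController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    // a request routed to /Customer/Details/5 ends up here
    public ActionResult Details(int id)
    {
        Customer customer = _repository.GetById(id);

        // the controller chooses the view and supplies the model
        return View("Details", customer);
    }
}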
MVVM and MVP implementation seem pretty straightforward but MVC can be a little confusing.  The diagram below from Microsoft’s Smart Client Factory documentation does a great job at showing MVC communication.  Note that the controller chooses the view (ASP.NET MVC) which is not shown in this diagram.  MVVM interactions will look identical to MVP (replace Presenter with ViewModel).  The difference is that with MVP, those interactions are handled programmatically while with MVVM, they will be handled automatically by the data bindings.

General rules for when to use which?

MVP
  • Use in situations where binding via a datacontext is not possible.
  • Windows Forms is a perfect example of this.  In order to separate the view from the model, a presenter is needed.  Since the view cannot directly bind to the presenter, information must be passed to it via an interface (IView).
MVVM
  • Use in situations where binding via a datacontext is possible.  Why?  The various IView interfaces for each view are removed which means less code to maintain.
  • Some examples where MVVM is possible include WPF and javascript projects using Knockout.
MVC
  • Use in situations where the connection between the view and the rest of the program is not always available (and you can’t effectively employ MVVM or MVP).
  • This clearly describes the situation where a web API is separated from the data sent to the client browsers.  Microsoft’s ASP.NET MVC is a great tool for managing such situations and provides a very clear MVC framework.

Final notes

  • Don’t get stuck on semantics.  Many times, one of your systems will not be purely MVP or MVVM or MVC.  Don’t worry about it.  Your goal is not to make an MVP, MVVM, or MVC system.  Your goal is to separate the view, the model, and the logic that governs both of them. It doesn’t matter if your view binds to your ‘Presenter’, or if you have a pure Presenter mixed in with a bunch of ViewModels.  The goal of a maintainable project is still achieved.
  • Some evangelists will say that your ViewModels (and Presenters) must not make your model entities directly available for binding to the view.   There are definitely situations where this is a bad thing.  However, don’t avoid this for the sake of avoiding it.  Otherwise, you will have to constantly be copying data between your model and ViewModel.  Usually this is a pointless waste of time that results in much more code to maintain.
  • In line with the last point, if using WPF it makes sense to implement INotifyPropertyChanged in your model entities.  Yes, this does break POCO, but when considering that INotifyPropertyChanged adds a lot of functionality with very little maintenance overhead, it is an easy decision to make.
  • Don’t worry about “bending the rules” a bit so long as the main goal of a maintainable program is achieved
  • Views
    • When there is markup available for creating views (Xaml, HTML, etc), some evangelists may try to convince developers that views must be written entirely in markup with no code behind.  However, there are perfectly acceptable reasons to use the code behind in a view if it is dealing with view related logic.  In fact, it is the ideal way to keep view code out of your controllers, view models, and  presenters.  Examples of situations where you might use the code behind include:
      • Formatting a display field
      • Showing only certain details depending on state
      • Managing view animations
    • Examples of code that should not be in the view
      • Sending an entity to the database to be saved
      • Business logic
Happy coding.