
1. Difference between Product Owner and Scrum Master

The Product Owner interacts directly with the customer, creates user stories, organizes and prioritizes the product
backlog, and handles other user/customer-facing issues. The Scrum Master handles the process: overseeing
meetings (including estimation and planning), removing impediments, and monitoring the overall
health of the project, making adjustments as needed.
2. What is SLA in ASP.NET?
3. What is a label in TFS?

Labeling a version
When your project has reached a certain milestone (such as a build candidate) and you wish to identify a
version at a particular point in time, you can use the Label task to isolate a set of files. A label set does not
have to be comprised of files from a single directory location; you may select files or folders from across
versions or locations to add to the label.
To label a set of files complete the following tasks:
1. Locate a project directory or file in your Eclipse resource view. Right-click it and select Team >
Label...
2. In the Choose Item Version dialog, select a server folder or file you want to add to your label (you
can add more later). Select the version. In our example, we select Latest Version. For an
explanation of other version types and their associated dialogs, see the User
Guide section, Selecting a Version Type from the version drop-down menu.

3. The Apply Label dialog lists all the files in the selected project. The Item lists a path to the
changeset in the source control repository. The Version, in this instance, indicates you are
marking the latest version. When you retrieve this label and open the dialog the Version value is
replaced by the actual changeset number in the repository. (See image below.)

4. If you have additional directories that you want to add, select Add.
5. In the Choose Item Version dialog, make another selection. Since our project ships with documentation,
we add the documentation project, help.mediamill.com, to the label set. When you have finished
adding all the projects, click OK. TFS stores the label markers on the files.

Retrieving a label
You can retrieve a Label set using the Teamprise Find Label task. The Find Label task is also available
from the Get Specific task for developers who want to retrieve a label set to a workspace. To retrieve a
label from the Team menu complete the following:
1. Right-click a source-controlled file in your Eclipse resource view and click Find Label.
2. In the Find Label dialog, enter a project or user name and select Find. If there is more than one
Team project on your TFS server, the Project drop down menu displays these.

Editing a label
If you wish to modify a label you have retrieved, click Edit in the Find Label dialog. You can add files to or
remove files from the label set (though you cannot change the label name).
Note: The number in the Version column refers to the changeset number of the file on which the version
was labeled. A "latest" project in your repository may be comprised of any number of changesets. For
example, if you rarely modify a file, the changeset number rarely increases. But if you modify a file
and check it in often the number increases; TFS creates a new changeset with each check-in. You can
view a history of changesets on a single file using the Teamprise View History task.

IEnumerable is an interface that defines one method, GetEnumerator, which returns
an IEnumerator; this in turn allows read-only access to a collection. A collection
that implements IEnumerable can be used with a foreach statement.
Definition
IEnumerable:
    public IEnumerator GetEnumerator();
IEnumerator:
    public object Current { get; }
    public bool MoveNext();
    public void Reset();
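As an illustration, a minimal sketch of a collection that implements IEnumerable and can therefore be consumed by foreach (the NumberBag class is hypothetical, invented only for this example):

```csharp
using System;
using System.Collections;

// Hypothetical collection type used only for illustration.
public class NumberBag : IEnumerable
{
    private readonly int[] items = { 1, 2, 3 };

    // IEnumerable defines this single method; the returned
    // IEnumerator exposes Current, MoveNext() and Reset().
    public IEnumerator GetEnumerator()
    {
        return items.GetEnumerator();
    }
}

public class Program
{
    public static void Main()
    {
        var bag = new NumberBag();
        foreach (int n in bag)   // foreach works because of IEnumerable
        {
            Console.WriteLine(n);
        }
    }
}
```

The foreach statement compiles down to calls to GetEnumerator(), MoveNext() and Current, which is why implementing IEnumerable is all a collection needs.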

HTTP Methods: GET vs. POST


The two most used HTTP methods are: GET and POST.

What is HTTP?
The Hypertext Transfer Protocol (HTTP) is designed to enable communications between clients and
servers.
HTTP works as a request-response protocol between a client and server.
A web browser may be the client, and an application on a computer that hosts a web site may be the
server.
Example: A client (browser) submits an HTTP request to the server; then the server returns a
response to the client. The response contains status information about the request and may also
contain the requested content.

Two HTTP Request Methods: GET and POST


Two commonly used methods for a request-response between a client and server are GET and POST.

GET - Requests data from a specified resource
POST - Submits data to be processed to a specified resource

The GET Method


Note that the query string (name/value pairs) is sent in the URL of a GET request:

/test/demo_form.asp?name1=value1&name2=value2
Some other notes on GET requests:

GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests should be used only to retrieve data

The POST Method


Note that the query string (name/value pairs) is sent in the HTTP message body of a POST
request:

POST /test/demo_form.asp HTTP/1.1
Host: w3schools.com

name1=value1&name2=value2
Some other notes on POST requests:

POST requests are never cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length

Compare GET vs. POST

The following table compares the two HTTP methods, GET and POST.

BACK button/Reload:
  GET - Harmless
  POST - Data will be re-submitted (the browser should alert the user that the data are about to be re-submitted)

Bookmarked:
  GET - Can be bookmarked
  POST - Cannot be bookmarked

Cached:
  GET - Can be cached
  POST - Not cached

Encoding type:
  GET - application/x-www-form-urlencoded
  POST - application/x-www-form-urlencoded or multipart/form-data. Use multipart encoding for binary data

History:
  GET - Parameters remain in browser history
  POST - Parameters are not saved in browser history

Restrictions on data length:
  GET - Yes. When sending data, the GET method adds the data to the URL, and the length of a URL is limited (maximum URL length is 2048 characters)
  POST - No restrictions

Restrictions on data type:
  GET - Only ASCII characters allowed
  POST - No restrictions. Binary data is also allowed

Security:
  GET - Less secure compared to POST because data sent is part of the URL. Never use GET when sending passwords or other sensitive information!
  POST - A little safer than GET because the parameters are not stored in browser history or in web server logs

Visibility:
  GET - Data is visible to everyone in the URL
  POST - Data is not displayed in the URL
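As a sketch of the difference, the two request shapes can be built with the HttpClient types from System.Net.Http (the example.com URL is a placeholder); note where the name/value pairs end up in each case:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

public class Program
{
    public static void Main()
    {
        // GET: the name/value pairs are part of the URL itself.
        var get = new HttpRequestMessage(HttpMethod.Get,
            "http://example.com/test/demo_form.asp?name1=value1&name2=value2");
        Console.WriteLine(get.Method + " " + get.RequestUri);

        // POST: the same pairs go into the message body, encoded
        // as application/x-www-form-urlencoded.
        var post = new HttpRequestMessage(HttpMethod.Post,
            "http://example.com/test/demo_form.asp")
        {
            Content = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["name1"] = "value1",
                ["name2"] = "value2"
            })
        };
        Console.WriteLine(post.Method + " "
            + post.Content.Headers.ContentType.MediaType);
    }
}
```

Nothing is sent here; the sketch only constructs the two request objects so the difference in where the data travels is visible.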

Other HTTP Request Methods


The following table lists some other HTTP request methods:

HEAD - Same as GET but returns only HTTP headers and no document body
PUT - Uploads a representation of the specified URI
DELETE - Deletes the specified resource
OPTIONS - Returns the HTTP methods that the server supports
CONNECT - Converts the request connection to a transparent TCP/IP tunnel

Understanding Garbage Collection in .NET


Generations
A generational garbage collector collects the short-lived objects more frequently than the longer lived
ones. Short-lived objects are stored in the first generation, generation 0. The longer-lived objects are
pushed into the higher generations, 1 or 2. The garbage collector works more frequently in the lower
generations than in the higher ones.

When an object is first created, it is put into generation 0. When generation 0 fills up, the
garbage collector is invoked. The objects that survive garbage collection in the first generation
are promoted to the next higher generation, generation 1. The objects that survive garbage
collection in generation 1 are promoted to the next and highest generation, generation 2. This
algorithm works efficiently, as collecting only the lower generations is fast. Note that generation 2 is the
highest generation supported by the garbage collector.
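The promotion behavior described above can be observed with GC.GetGeneration; a minimal sketch (the exact generation numbers printed can vary with runtime and GC configuration, so they are noted only as typical values):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        object obj = new object();

        // A freshly allocated object starts out in generation 0.
        Console.WriteLine(GC.GetGeneration(obj));   // typically 0

        // Each collection the object survives promotes it one
        // generation, up to the maximum supported generation.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));   // typically 1
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));   // typically 2

        // Generation 2 is the highest generation.
        Console.WriteLine(GC.MaxGeneration);        // 2
    }
}
```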

Execution plan in SQL Server

Cookie abuse

Calling server-side code from JavaScript with the help of page methods

What is LLD and HLD?


High Level Design (HLD) is the overall system design, covering the system architecture and
database design. It describes the relation between the various modules and functions of the system.
Data flow, flow charts and data structures are covered under HLD.
Low Level Design (LLD) is like detailing the HLD. It defines the actual logic for each and every
component of the system. Class diagrams with all the methods and the relations between classes
come under LLD. Program specs are covered under LLD.

HLD
Based on the SRS, software analysts convert the requirements into a usable product. They design
an application that will help the programmers in coding. In the design process, the product is
broken into independent modules; each module is then taken in turn and broken down further
to arrive at the micro level. The HLD document will contain the following items at a macro level:
- list of modules and a brief description of each module
- brief functionality of each module
- interface relationships among modules
- dependencies between modules
- database tables identified along with key elements
- overall architecture diagrams along with technology details

LLD
HLD contains details at the macro level, so it cannot be given to programmers as a document for
coding. The system analysts therefore prepare a micro-level design document, called the LLD.
This document describes each and every module in an elaborate manner, so that the programmer
can code the program directly from it. There will be at least one document for each module, and
there may be more for a module. The LLD will contain:
- detailed functional logic of the module, in pseudo code
- database tables, with all elements, including their type and size
- all interface details with complete API references (both requests and responses)
- all dependency issues
- error message listings
- complete inputs and outputs for a module
(courtesy 'anonimas')

sri
Answered On : Apr 5th, 2006

High Level Design or System Design (HLD)


High level design gives the overall system design in terms of functional architecture and database
design. This is very useful for the developers to understand the flow of the system. In this phase the design
team, review team (testers) and customers play a major role. The entry criterion is the
requirements document, that is, the SRS. The exit criteria are the HLD, project standards, the functional
design documents, and the database design document.

Low Level Design (LLD)

During the detailed phase, the view of the application developed during the high level design is broken
down into modules and programs. Logic design is done for every program and then documented
as program specifications. For every program, a unit test plan is created.

The entry criterion for this is the HLD document. The exit criteria are the program specifications
and unit test plans (the LLD).

--------------------------------------------------------------------------------------------------------------------
People who have been involved in software projects will constantly hear the terms High Level
Design (HLD) and Low Level Design (LLD). So what are the differences between these two design stages,
and when is each used?
High Level Design (HLD) gives the overall system design in terms of functional architecture and
database design. It designs the overall architecture of the entire system, from the main module down to all
sub-modules, which is very useful for the developers to understand the flow of the system. In this phase the
design team, review team (testers) and customers play a major role. The entry criterion is the
requirements document (the SRS); the exit criteria are the HLD, project standards, the functional
design documents, and the database design document. Further, high level design gives an overview of
the development of the product; in other words, how the program is going to be divided into functions,
modules, subdivisions, and so on.
Low Level Design (LLD): during the detailed phase, the view of the application developed during the high
level design is broken down into modules and programs. Logic design is done for every program and then
documented as program specifications, and for every program a unit test plan is created. The entry criterion
for this is the HLD document; the exit criteria are the program specifications and unit test plans
(the LLD).
The Low Level Design document gives the design of the actual program code, based on
the High Level Design document. It defines the internal logic of the corresponding sub-modules;
designers prepare and map individual LLDs to every module. A well-developed Low Level Design document
makes the program very easy to develop: if proper analysis is done and the Low Level Design document is
prepared, then the code can be developed by developers directly from it with minimal effort spent on
debugging and testing.

-----------------------------------------------------------------------------------------------------------------

1. What is a Data Contract?

Data Contracts are used to describe the data types used by a service. Interoperability is possible
through this since it uses the metadata of the services in the background. Data Contracts can be used to
describe either parameters or return values.

Data contracts are used to define the data structure: messages that are simply a .NET type, let's say
a plain old CLR object, generate the XML for the data you want to pass.

Data Contracts describe the data types used by a service.

Data Contracts can be used to describe either parameters or return values.

Data Contracts are unnecessary if the service only uses simple types.

Data contracts enable interoperability through the XML Schema Definition (XSD) standard.

Example
A basic DataContract is defined:

[DataContract]
public class Shape { }

[DataContract(Name = "Circle")]
public class CircleType : Shape { }

[DataContract(Name = "Triangle")]
public class TriangleType : Shape { }

2. What is message contract?

Message contracts are preferred only when there is a need to "control" the layout of your message
(the SOAP message); for instance, adding specific headers/footer/etc to a message.

Message contracts describe the structure of SOAP messages sent to and from a service and enable
you to inspect and control most of the details in the SOAP header and body.

Whereas data contracts enable interoperability through the XML Schema Definition (XSD)
standard, message contracts enable you to interoperate with any system that communicates
through SOAP.

MessageContracts describe the structure of SOAP messages (since SOAP is context
oriented, passing on complete information about the object) sent to and from a service, and enable you to
inspect and control most of the details in the SOAP header and body.

1. Generic - MessageContract enables you to interoperate with any system that communicates
through SOAP.
2. Control - Using message contracts, we get complete control over SOAP messages sent to and from a
service by having access to the SOAP headers and bodies.
3. Object Context - This allows use of simple or complex types to define the exact content of the
SOAP message.

Example
Following is a simple message contract:

[MessageContract]
public class BankingDepositLog
{
    [MessageHeader] public int numRecords;
    [MessageHeader] public DepositRecord[] records;
    [MessageHeader] public int branchID;
}

Why we use MessageContract when DataContract already is there?


A very simple answer to the question is: when you need a higher level of control over the message,
such as sending a custom SOAP header, you use MessageContract instead of DataContract. But in
my opinion, most messaging needs can be catered for by DataContracts.
Sometimes complete control over the structure of a SOAP message is just as important as control
over its contents. This is especially true when interoperability is important or to specifically control
security issues at the level of the message or message part. In these cases, you can create a message
contract that enables you to use a type for a parameter or return value that serializes directly into
the precise SOAP message that you need.
To understand why it is useful to use MessageContracts, that is, to pass information in SOAP headers,
you will have to dive into the advantages of SOAP.

We can't mix Data and Message contracts

Most importantly, we can't mix Data and Message contracts, because message-based
programming and parameter-based programming cannot be mixed. You cannot specify a
DataContract as an input argument to an operation and have it return a MessageContract, or specify
a MessageContract as the input argument to an operation and have it return a DataContract. You
can mix typed and untyped messages, but not MessageContracts and DataContracts. Mixing
message and data contracts will cause a runtime error when you generate WSDL from the service.

Answer is

When we need a higher level of control over the message, such as sending a custom SOAP header,
we can use MessageContract instead of DataContract. But in general, most messaging
needs can be fulfilled by DataContracts.

Before debugging you will have to deploy your dll and pdb to the IIS directory. Next, in
Visual Studio click Debug --> Attach to Process. Make sure "Show processes from all users" and
"Show processes in all sessions" are checked; you should now see the w3wp process in your list of
available processes. Select w3wp and click Attach.

You should now be able to debug the WCF service. Please refer to the following blogs for more
debugging tips:
http://dhawalk.blogspot.com/2007/07/debugging-tips-in-c.html
http://anilsharmadhanbad.blogspot.com/2009/07/debugging-wcf-service-hosted-in-local.html

Introduction
What is the use of FaultContract?
In a WCF service, errors/exceptions can be passed to the client (the WCF service consumer)
by using a FaultContract.
How do you handle errors in ASP.NET? It's very simple: just add try & catch
blocks. But in a WCF service, if any unexpected error occurs (like SQL Server being
down, unavailability of data, or divide by zero), the error/exception details can be
passed to the client by using a Fault Contract.

False assumption
Most people think that we can't throw a FaultException in a catch block, and that it should only
be based on some condition (an if condition). But that is a false assumption. The main objective is for
any type of exception (predicted or unpredicted) to be passed from the service to the client (the WCF
service consumer).
Predicted: (divide by zero / channel exceptions / application exceptions, etc.)

public float Divide(float number, float divideBy)
{
    if (divideBy == 0)
    {
        myServiceData.Result = false;
        myServiceData.ErrorMessage = "Invalid Operation.";
        myServiceData.ErrorDetails = "Can not divide by 0.";
        throw new FaultException<ServiceData>(myServiceData);
    }
    return number / divideBy;
}

Unpredicted: (which I explain in this article, like connection failures / SQL Server down / transport
errors / business logic errors)

try
{
    SqlConnection con = new SqlConnection(StrConnectionString);
    con.Open();
    myServiceData.Result = true;
    // Your logic to retrieve data and return it. If any exception
    // occurs while opening the connection, or any other unexpected
    // exception occurs, it can be thrown to the client (WCF consumer)
    // by the catch blocks below.
}
catch (SqlException sqlEx)
{
    myServiceData.Result = false;
    myServiceData.ErrorMessage = "Connection can not open this time: " +
        "either the connection string is wrong or the server is down. Try later.";
    myServiceData.ErrorDetails = sqlEx.ToString();
    throw new FaultException<ServiceData>(myServiceData, sqlEx.ToString());
}
catch (Exception ex)
{
    myServiceData.Result = false;
    myServiceData.ErrorMessage = "Unforeseen error occurred. Please try later.";
    myServiceData.ErrorDetails = ex.ToString();
    throw new FaultException<ServiceData>(myServiceData, ex.ToString());
}

Using the Code

This is a very simple WCF service implementation written to help understand the usage of the
FaultContract. Here I am implementing the TestConnection() method in the WCF service.
This method tries to open a SQL Server connection; if any error occurs while opening the
connection, it throws the error details to the client by using the Fault Contract.
Here my solution contains 2 projects:
1. Service implementation in ASP.NET
2. Consuming the service in a console application
Note: This article was not written to test the connection string; it is to understand the usage of
FaultContract, so I took this basic example to explain it better.

1. Service Implementation
Create a WCF Service Application project and implement the service with the following code.
The TestConnection() method is added with the FaultContract attribute in the IService1 interface.
This means that service errors will be passed to the client with the type of the ServiceData class.
ServiceData is a DataContract class; the error and success message details are added to its
data members.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace FaultContractSampleWCF
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu
    // to change the interface name "IService1" in both code and
    // config file together.
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        [FaultContract(typeof(ServiceData))]
        ServiceData TestConnection(string strConnectionString);
    }

    // Use a data contract as illustrated in the sample below to add
    // composite types to service operations.
    [DataContract]
    public class ServiceData
    {
        [DataMember]
        public bool Result { get; set; }
        [DataMember]
        public string ErrorMessage { get; set; }
        [DataMember]
        public string ErrorDetails { get; set; }
    }
}

Here I am implementing the IService1 interface. This interface contains only one method,
TestConnection(), with one input parameter, StrConnectionString. The SQL Server connection details
are passed in this parameter from the client side.
Note: This example is only to understand the basic use of Fault Contract; this article does not
concentrate on service security. You can try it against your localhost SQL connection by
passing valid and invalid connection string details.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

using System.Data.SqlClient;
namespace FaultContractSampleWCF
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu
    // to change the class name "Service1" in code, svc and config
    // file together.
    public class Service1 : IService1
    {
        /// <summary>
        /// Implement the TestConnection method.
        /// </summary>
        /// <returns></returns>
        public ServiceData TestConnection(string StrConnectionString)
        {
            ServiceData myServiceData = new ServiceData();
            try
            {
                SqlConnection con = new SqlConnection(StrConnectionString);
                con.Open();
                myServiceData.Result = true;
                con.Close();
                return myServiceData;
            }
            catch (SqlException sqlEx)
            {
                myServiceData.Result = false;
                myServiceData.ErrorMessage = "Connection can not open this time: " +
                    "either the connection string is wrong or the server is down. Try later.";
                myServiceData.ErrorDetails = sqlEx.ToString();
                throw new FaultException<ServiceData>(myServiceData, sqlEx.ToString());
            }
            catch (Exception ex)
            {
                myServiceData.Result = false;
                myServiceData.ErrorMessage = "Unforeseen error occurred. Please try later.";
                myServiceData.ErrorDetails = ex.ToString();
                throw new FaultException<ServiceData>(myServiceData, ex.ToString());
            }
        }
    }
}

Web.config
Note: The following endpoint details are automatically added to web.config when you create a WCF
Service project.

<services>
<service name="FaultContractSampleWCF.Service1"
behaviorConfiguration="FaultContractSampleWCF.Service1Behavior">
<!-- Service Endpoints -->
<endpoint address="" binding="wsHttpBinding"
contract="FaultContractSampleWCF.IService1">
<identity>
<dns value="localhost"/>
</identity>
</endpoint>
<endpoint address="mex" binding="mexHttpBinding"
contract="IMetadataExchange"/>
</service>
</services>

2. Consuming the service in a console application

Create a new console project and add a reference to the above service. Here we create an
object of the Service1Client class and call the TestConnection() method, passing the
connection string. If the connection succeeds, it shows a "Connection Succeeded" message. If it is
unable to open the connection, control moves to the catch block and the appropriate error is displayed.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Client_FaultContractSampleWCF.MyServiceRef;
using System.ServiceModel;

namespace Client_FaultContractSampleWCF
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                Service1Client objServiceClient = new Service1Client();
                // Pass the connection string to the TestConnection method.
                ServiceData objSeviceData = objServiceClient.TestConnection(
                    @"integrated security=true;data source=localhost;initial catalog=master");
                if (objSeviceData.Result == true)
                    Console.WriteLine("Connection Succeeded");
                Console.ReadLine();
            }
            catch (FaultException<ServiceData> Fex)
            {
                Console.WriteLine("ErrorMessage::" + Fex.Detail.ErrorMessage
                    + Environment.NewLine);
                Console.WriteLine("ErrorDetails::" + Environment.NewLine
                    + Fex.Detail.ErrorDetails);
                Console.ReadLine();
            }
        }
    }
}

Summary:
I hope that this article is useful for understanding the usage of the FaultContract. This article focuses
on how to implement a basic WCF service and how to handle the errors. Please don't forget to
rate it if you like it.

Fault Contract
A service that we develop might raise errors in some cases. These errors should be reported to the client
in a proper manner. Basically, when we develop a managed application or service, we handle
exceptions using try-catch blocks, but such exception handling is technology specific.
To support interoperability, the client is also only interested in what went wrong, not in how and
where the error was caused.
By default, when we throw any exception from a service, it will not reach the client side. WCF
provides the option to handle and convey the error message from service to client using a SOAP
fault contract.
Suppose the service I consume is not working in the client application, and I want to know the real
cause of the problem. How can I know the error? For this we have the Fault Contract. A Fault
Contract provides the client a documented view of the errors that occurred in the service. This helps us
easily identify what error has occurred. Let us try to understand the concept using a sample
example.
Step 1: I have created a simple calculator service with an Add operation which will throw a general
exception, as shown below.

// Service interface
[ServiceContract()]
public interface ISimpleCalculator
{
    [OperationContract()]
    int Add(int num1, int num2);
}

// Service implementation
public class SimpleCalculator : ISimpleCalculator
{
    public int Add(int num1, int num2)
    {
        // Do something
        throw new Exception("Error while adding number");
    }
}

Step 2: On the client side, exceptions are handled using a try-catch block. Even though I have
captured the exception, when I run the application I get a message that the exception was not
handled properly.

try
{
    MyCalculatorServiceProxy.MyCalculatorServiceProxy proxy
        = new MyCalculatorServiceProxy.MyCalculatorServiceProxy();
    Console.WriteLine("Client is running at " + DateTime.Now.ToString());
    Console.WriteLine("Sum of two numbers... 5+5 =" + proxy.Add(5, 5));
    Console.ReadLine();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.ReadLine();
}

Step 3: Now if you want to send exception information from the service to the client, you have to use
FaultException, as shown below.

public int Add(int num1, int num2)
{
    // Do something
    throw new FaultException("Error while adding number");
}

Step 4: The output window on the client side is shown below.

Step 5: You can also create your own custom type and send the error information to the client
using a FaultContract. These are the steps to be followed to create the fault contract:

1. Define a type using the data contract and specify the fields you want to return.
2. Decorate the service operation with the FaultContract attribute and specify the type name.
3. Raise the exception from the service by creating an instance and assigning properties of
   the custom exception.

Step 6: Defining the type using Data Contract


[DataContract()]
public class CustomException
{
[DataMember()]
public string Title;
[DataMember()]
public string ExceptionMessage;
[DataMember()]
public string InnerException;
[DataMember()]
public string StackTrace;
}

Step 7: Decorate the service operation with the FaultContract


[ServiceContract()]
public interface ISimpleCalculator
{
[OperationContract()]
[FaultContract(typeof(CustomException))]
int Add(int num1, int num2);
}

Step 8: Raise the exception from the service

public int Add(int num1, int num2)
{
    // Do something
    CustomException ex = new CustomException();
    ex.Title = "Error Function: Add()";
    ex.ExceptionMessage = "Error occurred while doing the add function.";
    ex.InnerException = "Inner exception message from service";
    ex.StackTrace = "Stack trace message from service.";
    throw new FaultException<CustomException>(ex, "Reason: Testing the Fault contract");
}

Step 9: On client side, you can capture the service exception and process the information, as
shown below.
try
{
MyCalculatorServiceProxy.MyCalculatorServiceProxy proxy
= new MyCalculatorServiceProxy.MyCalculatorServiceProxy();
Console.WriteLine("Client is running at " + DateTime.Now.ToString());
Console.WriteLine("Sum of two numbers... 5+5 =" + proxy.Add(5, 5));
Console.ReadLine();
}
catch (FaultException<MyCalculatorService.CustomException> ex)
{
//Process the Exception
}

Release mode and Debug mode are as follows:

Debug Mode
Developers use debug mode for debugging the web application on a live/local server. Debug mode
allows developers to break the execution of the program (using interrupt 3) and step through the code.
Debug mode has the features below:
1. Less optimized code.
2. Some additional instructions are added to enable the developer to set a breakpoint on
   every source code line.
3. More memory is used by the source code at runtime.
4. Scripts & images downloaded by WebResource.axd are not cached.
5. The output is bigger and runs slower.

Release Mode
Developers use release mode for the final deployment of source code on a live server. Release mode
dlls contain optimized code and are intended for customers. Release mode has the features below:
1. More optimized code.
2. Some additional instructions are removed and the developer can't set a breakpoint on every
   source code line.
3. Less memory is used by the source code at runtime.
4. Scripts & images downloaded by WebResource.axd are cached.
5. The output is smaller and runs faster.
Note

There is no difference in the functionality of a debug dll and a release dll. Usually, when we compile
code in debug mode, we have a corresponding .pdb (program database) file. This .pdb file
contains information that enables the debugger to map the generated IL (intermediate language)
to source code line numbers. It also contains the names of local variables in the source code.

Set Compilation Mode in Visual Studio

We can set the compilation mode in Visual Studio as shown in the figure below.
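The Debug/Release distinction described above can also be observed from code via conditional compilation; a minimal sketch (the method and message names are invented for this example):

```csharp
using System;
using System.Diagnostics;

public class Program
{
    // Calls to a [Conditional("DEBUG")] method are removed entirely
    // by the compiler when the project is built in Release mode.
    [Conditional("DEBUG")]
    static void DebugOnlyLog(string message)
    {
        Console.WriteLine("DEBUG: " + message);
    }

    public static void Main()
    {
#if DEBUG
        Console.WriteLine("Compiled in Debug mode");
#else
        Console.WriteLine("Compiled in Release mode");
#endif
        DebugOnlyLog("extra diagnostics");   // emitted only in Debug builds
    }
}
```

Which branch runs depends on whether the DEBUG symbol is defined at compile time, which is exactly what the Debug/Release configuration switch in Visual Studio controls.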

HTTP header fields are components of the message header of requests and responses in the Hypertext
Transfer Protocol (HTTP). They define the operating parameters of an HTTP transaction.
VIEW
In SQL Server, a view represents a virtual table. A view stores no data itself; each time it is queried, the
database engine recreates the data using the SELECT statement in the view's definition.
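A minimal sketch (the table and column names below are hypothetical):

```sql
-- Define a view; no data is stored, only the SELECT definition
CREATE VIEW dbo.HighPaidEmployees AS
SELECT empid, sal
FROM dbo.emp
WHERE sal > 50000;

-- Querying the view re-executes the underlying SELECT
SELECT * FROM dbo.HighPaidEmployees;
```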
Delete duplicate rows from a table (keeping the row with the highest ID in each duplicate group):

Delete from table
where ID not in
(Select max(ID)
 from table
 group by duplicateCol1, duplicateCol2, ..., duplicateColN)

Can you show a sample of a Duplex Contract in WCF?

[ServiceContract(Namespace = "http://www.Microsoft.com",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IDuplexCallBack))]
public interface IService1
{
    [OperationContract(IsOneWay = true)]
    void getData();
}

public interface IDuplexCallBack
{
    [OperationContract(IsOneWay = true)]
    void filterData(DataSet Output);
}

In the above code, getData() is a method that the client calls on the service; it is implemented on the
server side.
filterData() is a method that the server calls on the client; it is implemented on the client side.
CallbackContract names the contract that the server uses to call back into the client, to raise an
event or to get some information from the client.

The EnableViewStateMac page directive detects whether the view state has been tampered with.

UPDATE emp
SET sal = ( CASE
WHEN e2.sal IS NULL THEN e1.sal
ELSE e2.sal
END )
FROM employee e1 INNER JOIN emp e2
ON e1.empid = e2.empid;
public class A {
    void test() {
        System.out.println("Test from A");
    }

    public class B {
        void test() {
            System.out.println("Test from B");
            A.this.test(); // call the enclosing instance's method
        }
    }

    public static void main(String[] args) {
        A a = new A();
        B b = a.new B();
        b.test();
    }
}
Calling abstract class methods:
public abstract class Abstr
{
    public void Describe()
    {
        //do something
    }
}
public class Concrete : Abstr
{
    /* Some other methods and properties... */
}
class Program
{
    public static void Main()
    {
        Abstr abstr = new Concrete();
        abstr.Describe();
        Console.ReadLine();
    }
}

How to estimate a big project in a few days: methods such as Function Point Analysis

Estimation Techniques : Function Point Analysis (FPA)


You can't control what you can't measure.
Software practitioners are frequently challenged to provide early and accurate software project
estimates. It speaks poorly of the software community that the issue of accurate estimating, early in
the life cycle, has not been adequately addressed and standardized.
The ability to accurately estimate the time/cost taken for a project to come to its successful
conclusion has been a serious problem for software engineers. The use of repeatable, clearly
defined and well understood software development process has in recent years shown itself to be
the most effective method of gaining useful historical data that can be used for statistical estimation.
In particular, the act of sampling more frequently, coupled with the loosening of constraints between
parts of a project, has allowed more accurate estimation and more rapid development times.
Popular methods for estimation in software engineering include:

Parametric Estimating
Wideband Delphi
Cocomo
SLIM
SEER-SEM Parametric Estimation of Effort, Schedule, Cost, Risk (based on Brooks Law)
Function Point Analysis
Proxy Based Estimation (PROBE) (from the Personal Software Process)
The Planning Game (from Extreme Programming)
Program Evaluation and Review Technique (PERT)
Analysis Effort method

NOTE: Brooks' law was stated by Fred Brooks in his 1975 book The Mythical Man-Month as
"Adding manpower to a late software project makes it later." Likewise, Brooks memorably
stated "The bearing of a child takes nine months, no matter how many women are assigned."
The value to be gained from utilizing a functional sizing technique, such as Function Points, is
primarily in the capability to accurately estimate a project early in the development process.
In the words of Wikipedia:
Function Point Analysis (FPA) is an ISO recognized method to measure the functional size of an
information system. The functional size reflects the amount of functionality that is relevant to and
recognized by the user in the business. It is independent of the technology used to implement the
system.

The unit of measurement is "function points". So, FPA expresses the functional size of an
information system in a number of function points (for example: the size of a system is 314 fp's).
The functional size may be used:

To budget application development or enhancement costs
To budget the annual maintenance costs of the application portfolio
To determine project productivity after completion of the project
To determine the software size for cost estimating

All software applications have numerous elementary (independent) processes that
move data. Transactions (or elementary processes) that bring data from outside the application
domain (or application boundary) to inside that application boundary are referred to as external
inputs. Transactions that take data from a resting position (normally on a
file) to outside the application boundary are referred to as either external
outputs or external inquiries. Data at rest that is maintained by the application in question is
classified as internal logical files. Data at rest that is maintained by another application
is classified as external interface files.
Types of Function Point Counts:
Development Project Function Point Count
Function Points can be counted at all phases of a development project, from requirements up to and
including implementation. This type of count is associated with new development work. Scope creep
can be tracked and monitored by understanding the functional size at all phases of a project.
Frequently, this type of count is called a baseline function point count.
Enhancement Project Function Point Count
It is common to enhance software after it has been placed into production. This type of function point
count tries to size enhancement projects. All production applications evolve over time. By tracking
enhancement size and associated costs a historical database for your organization can be built.
Additionally, it is important to understand how a Development project has changed over time.
Application Function Point Count
Application counts are done on existing production applications. This baseline count can be used
with overall application metrics like total maintenance hours. This metric can be used to track
maintenance hours per function point. This is an example of a normalized metric. It is not enough to
examine only maintenance, but one must examine the ratio of maintenance hours to size of the
application to get a true picture.
Productivity:
The definition of productivity is the output-input ratio within a time period with due consideration for
quality.
Productivity = outputs/inputs (within a time period, quality considered)

The formula indicates that productivity can be improved (1) by increasing outputs with the same
inputs, (2) by decreasing inputs while maintaining the same outputs, or (3) by increasing outputs and
decreasing inputs, changing the ratio favorably.
Software Productivity = Function Points / Inputs
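As a quick worked example (the numbers are invented), treating delivered function points as the output and effort hours as the input:

```java
public class ProductivityExample {
    public static void main(String[] args) {
        double functionPoints = 200;  // delivered size (hypothetical)
        double hours = 2500;          // effort spent (hypothetical)

        // Productivity = outputs / inputs
        double fpPerHour = functionPoints / hours;   // 0.08 FP per hour
        double hoursPerFp = hours / functionPoints;  // 12.5 hours per FP

        System.out.println(fpPerHour);
        System.out.println(hoursPerFp);
    }
}
```

Either ratio expresses the same productivity; function points per hour is convenient for comparing teams, hours per function point for pricing work.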
Effectiveness vs. Efficiency:
Productivity implies effectiveness and efficiency in individual and organizational performance.
Effectiveness is the achievement of objectives. Efficiency is the achievement of the ends with least
amount of resources.
Software productivity is defined as hours/function points or function points/hours. This is the average
cost to develop software or the unit cost of software. One thing to keep in mind is the unit cost of
software is not fixed with size. What industry data shows is the unit cost of software goes up with
size.
Average cost is the total cost of producing a particular quantity of output divided by that quantity, in
this case Total Cost / Function Points. Marginal cost is the change in total cost attributable to a
one-unit change in output.
There are a variety of reasons why marginal costs for software increase as size increases. The
following is a list of some of the reasons:

As size becomes larger, complexity increases.
As size becomes larger, a greater number of tasks need to be completed.
As size becomes larger, there are a greater number of staff members and they become more
difficult to manage.

Function Points are the output of the software development process. Function points are the unit of
software. It is very important to understand that Function Points remain constant regardless of who
develops the software or what language the software is developed in. Unit costs need to be
examined very closely. To calculate the average unit cost, the total cost is divided by the number of
units. On the other hand, to accurately estimate the cost of an application, each component's cost
needs to be estimated.

The high-level steps of a function point count are:
1. Determine the type of function point count.
2. Determine the application boundary.
3. Identify and rate transactional function types to determine their contribution to the unadjusted
function point count.
4. Identify and rate data function types to determine their contribution to the unadjusted function
point count.
5. Determine the value adjustment factor (VAF).
6. Calculate the adjusted function point count.

To complete a function point count, knowledge of the function point rules and application documentation
is needed. Access to an application expert can improve the quality of the count. Once the application
boundary has been established, FPA can be broken into three major parts:
1. FPA for transactional function types
2. FPA for data function types
3. FPA for GSCs
Because the rating of transactions depends on both the information contained in the transactions and the
number of files referenced, it is recommended that transactions are counted first. At the same time, a
tally should be kept of all FTRs (file types referenced) that the transactions reference. Every FTR
must have at least one transaction. Each transaction must be an elementary process. An
elementary process is the smallest unit of activity that is meaningful to the end user in the business.
It must be self-contained and leave the business in a consistent state.
Function Point calculation
The function point method was originally developed by Allan Albrecht. A function point is a rough
estimate of a unit of delivered functionality of a software project. Function points (FP) measure size
in terms of the amount of functionality in a system. Function points are computed by first calculating
an unadjusted function point count (UFC). Counts are made for the following categories:

Number of user inputs
Each user input that provides distinct application-oriented data to the software is counted.

Number of user outputs
Each user output that provides application-oriented information to the user is counted. In this
context "output" refers to reports, screens, error messages, etc. Individual data items within a
report are not counted separately.

Number of user inquiries
An inquiry is defined as an on-line input that results in the generation of some immediate
software response in the form of an on-line output. Each distinct inquiry is counted.

Number of files
Each logical master file is counted.

Number of external interfaces
All machine-readable interfaces that are used to transmit information to another system are
counted.
Once this data has been collected, a complexity rating is associated with each count according
to Table 1.

TABLE 1: Function point complexity weights.

Measurement parameter           Simple   Average   Complex
Number of user inputs              3        4         6
Number of user outputs             4        5         7
Number of user inquiries           3        4         6
Number of files                    7       10        15
Number of external interfaces      5        7        10

Each count is multiplied by its corresponding complexity weight and the results are summed to
provide the UFC. The adjusted function point count (FP) is calculated by multiplying the UFC by
a technical complexity factor (TCF) also referred to as Value Adjustment Factor (VAF).
Components of the TCF are listed in Table 2
Table 2. Components of the technical complexity factor.
F1  Reliable back-up and recovery
F2  Data communications
F3  Distributed functions
F4  Performance
F5  Heavily used configuration
F6  Online data entry
F7  Operational ease
F8  Online update
F9  Complex interface
F10 Complex processing
F11 Reusability
F12 Installation ease
F13 Multiple sites
F14 Facilitate change

Alternatively, the following questionnaire could be used:

1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or
operations?
8. Are the master files updated online?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use?

Each component is rated from 0 to 5, where 0 means the component has no influence on the
system and 5 means the component is essential (Pressman, 1997). The VAF can then be
calculated as:
VAF = 0.65 + (Sum of GSCs x 0.01) Where Sum of GSCs = SUM(Fi)
The factor varies from 0.65 (if each Fi is set to 0) to 1.35 (if each Fi is set to 5) (Fenton, 1997).
The final function point calculation is:
Final Adjusted FP = UFC x VAF
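Putting the pieces together, here is a sketch of a complete count (all counts and GSC ratings are invented; the weights come from Table 1):

```java
public class FunctionPointCount {
    public static void main(String[] args) {
        // Unadjusted function point count (UFC), using the Table 1 weights
        int ufc = 10 * 3    // 10 simple user inputs
                + 5 * 5     // 5 average user outputs
                + 4 * 3     // 4 simple user inquiries
                + 6 * 10    // 6 average files
                + 2 * 10;   // 2 complex external interfaces

        // Sum of the 14 GSC ratings, each 0-5 (so the sum ranges 0-70)
        int sumOfGscs = 30;
        double vaf = 0.65 + sumOfGscs * 0.01;  // value adjustment factor

        double adjustedFp = ufc * vaf;         // final adjusted FP
        System.out.println("UFC = " + ufc + ", VAF = " + vaf
                + ", AFP = " + adjustedFp);
    }
}
```

With these numbers, UFC = 147, VAF = 0.95, and the adjusted count is about 139.65 function points.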
Convert AFP into SLOC using an appropriate conversion factor.
The following calculations depend on the applicable scenario:
SLOC = AFP x 16 [NOTE: 16 is the assumed SLOC-per-function-point conversion factor]
EFFORT = EAF x A x (SLOC)^EX
EAF = CPLX x TOOL
A = 3.2 = Constant based on the development mode.
EX = 0.38 = Constant based on the development mode.
CPLX = 1.3 = Constant based on the development language.
TOOL = 1.1 = Constant based on the development tool.

TDEV = 2.5 x (EFFORT)^EX in months
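Continuing the sketch with the constants given above (the AFP value carried over is invented, and these constants are only illustrative calibration values):

```java
public class EffortEstimate {
    public static void main(String[] args) {
        double afp = 139.65;              // adjusted function points (hypothetical)
        double sloc = afp * 16;           // 16 SLOC per function point

        double cplx = 1.3, tool = 1.1, a = 3.2, ex = 0.38;
        double eaf = cplx * tool;                       // effort adjustment factor
        double effort = eaf * a * Math.pow(sloc, ex);   // person-months
        double tdev = 2.5 * Math.pow(effort, ex);       // schedule in months

        System.out.printf("SLOC=%.0f, EFFORT=%.1f pm, TDEV=%.1f months%n",
                sloc, effort, tdev);
    }
}
```

With these inputs the estimate works out to roughly 2,234 SLOC, on the order of 85 person-months of effort, and a schedule of about 13-14 months.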

Abbreviations
AFP : Adjusted Function Point
UFP : Unadjusted Function Point
GSC : General System Characteristic
FTR : File Types Referenced
FP : Function Point
ILF : Internal Logical File
EIF : External Interface File
EI : External Input
EO : External Output
EQ : External Inquiry
RET : Record Element Type
DET : Data Element Type
VAF : Value Adjustment Factor
LOC : Lines of Code
EAF : Effort Adjustment Factor
SLOC : Source Lines of Code
CPLX : Development/Technical Complexity Factor
TOOL : Development/Technical Tool Complexity Factor
TDEV : Development Time

Sticky sessions
A sticky session refers to a feature of many commercial load-balancing solutions for web farms that
routes all requests for a particular session to the same physical machine that serviced the first
request for that session. This is mainly used to ensure that an in-proc session is not lost as a result
of requests for that session being routed to different servers. Since requests for a user are always
routed to the machine that first served the request for that session, sticky sessions can cause
uneven load distribution across servers.
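One common way to avoid depending on sticky sessions in ASP.NET is to move session state out of process, so any server in the farm can serve any request. A minimal web.config sketch (the state-server host name below is hypothetical):

```xml
<system.web>
  <!-- Keep session state in the ASP.NET State Service rather than in-proc -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver:42424"
                timeout="20" />
</system.web>
```

SQLServer mode is the other common out-of-proc choice; both trade some per-request serialization cost for the ability to load-balance freely.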

Xmlhttprequest
http://www.codeproject.com/Articles/21157/Backbone-of-Ajax-XmlHttpRequest
http://www.codeproject.com/Articles/375248/Understanding-Callback-using-XMLHttpRequest

How to create your own programming language

Yes, you can create one: typically you define the grammar, write a lexer and a parser for it, and then
build an interpreter or a compiler (on .NET, for example, by emitting IL).
How to enable security in wshttpbinding
<bindings>
<wsHttpBinding>
<binding name="wsHttpEndpointBinding">
<security mode="Transport">
</security>
</binding>
</wsHttpBinding>
</bindings>
<services>
<service behaviorConfiguration="ServiceBehavior" name="Service">
<endpoint address="" binding="wsHttpBinding"
bindingConfiguration="wsHttpEndpointBinding"
name="wsHttpEndpoint" contract="IService">
<!--<identity>
<dns value="" />
</identity>-->
</endpoint>
<endpoint address="mex" binding="mexHttpBinding"
contract="IMetadataExchange" />
</service>
</services>

http://msdn.microsoft.com/en-us/library/ms731884(v=vs.110).aspx

Introduction
This article is useful for developers who are interested in implementing WCF webservice using
transport layer security and SSL configured on IIS6.0. Those who do not have a good idea about WCF
can read more about it here and here.

Using the code


You can go through the web.config file in the project folders which I have uploaded.

<system.serviceModel>
<services>
<service behaviorConfiguration="returnFaults" name="TestService.Service">
<endpoint binding="wsHttpBinding" bindingConfiguration=
"TransportSecurity" contract="TestService.IService"/>

<endpoint address="mex" binding="mexHttpsBinding"


name="MetadataBinding" contract="IMetadataExchange"/>
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior name="returnFaults">
<serviceDebug includeExceptionDetailInFaults="true"/>
<serviceMetadata httpsGetEnabled="true"/>
<serviceTimeouts/>
</behavior>
</serviceBehaviors>
</behaviors>
<bindings>
<wsHttpBinding>
<binding name="TransportSecurity">
<security mode="Transport">
<transport clientCredentialType="None"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
<diagnostics>
<messageLogging logEntireMessage="true"
maxMessagesToLog="300" logMessagesAtServiceLevel="true"
logMalformedMessages="true" logMessagesAtTransportLevel="true"/>
</diagnostics>
</system.serviceModel>
//Contract Description
[ServiceContract]
interface IService
{
[OperationContract]
string TestCall();
}
//Implementation
public class Service : IService
{
    public string TestCall()
    {
        return "You just called a WCF webservice on SSL (Transport Layer Security)";
    }
}
//Tracing and message logging
<system.diagnostics>
<sources>
<source name="System.ServiceModel"
switchValue="Information,ActivityTracing" propagateActivity="true">
<listeners>
<add name="xml"/>
</listeners>
</source>
<source name="System.ServiceModel.MessageLogging">
<listeners>
<add name="xml"/>
</listeners>

</source>
</sources>
<sharedListeners>
<add initializeData="C:\Service.svclog"
type="System.Diagnostics.XmlWriterTraceListener" name="xml"/>
</sharedListeners>
<trace autoflush="true"/>
</system.diagnostics>

In the above ServiceModel configuration, there are two endpoints:

1. One with contract TestService.IService: here the binding is configured for transport layer
security; see inside the <bindings> tag. So SSL has to be configured on IIS.
2. One with contract IMetadataExchange: this is also configured for HTTPS. The binding
is mexHttpsBinding, and in the service behaviors section httpsGetEnabled is used; here I
tried to secure even the metadata publishing through WSDL.
To configure this Web.config file you can use SvcConfigEditor.exe which is located in
C:\program files\microsoft sdks\windows\v6.0\bin\svcconfigeditor.exe
If you try to run the code from Visual Studio then you get an exception as shown below:
"Could not find a base address that matches scheme HTTPS for the endpoint with
binding WSHttpBinding. Registered base address schemes are [HTTP]."
So first configure the website on SSL. To get an idea on how to configure SSL, you can go
through this. Make sure that when you configure SSL, the certificate CN value is exactly
the same as the host name of the website. For example, if your webservice address
is http://www.example.com, then issue a certificate with CN = www.example.com.
Don't forget to add an entry to the hosts file c:\windows\system32\drivers\etc\hosts. If you want to
put this on localhost then just add the following line to the hosts file: 127.0.0.1 www.example.com.
Configure www.example.com as the host header value in the website properties on port 80. Once you
are done with SSL, you can access the webservice through the web browser
at https://www.example.com/service.svc. On this page you will have the HTTPS URL for the
WSDL.
I have even enabled tracing and message logging on the webservice. To view the service log, just
use svctraceviewer.exe and load the Service.svclog file in it. See the <system.diagnostics> tag above.
Note that I have not put any certificates in this sample. So if you want to run this sample,
generate a certificate, install it on IIS as per the instructions above, and run it through the browser. To
get an idea how to generate self-signed certificates for testing purposes, just go through this link.
To run this project you need IIS 6.0 on your machine. You can also do it on IIS 5.0, but it
needs to be configured to run WCF services.

Hope this article helps you get a good idea about WCF transport layer security and SSL. If you have
any question or comments please email me, I would really appreciate it. Thanks.

Ref keyword with reference types
One of my colleagues started a discussion regarding the ref keyword in C#, and I brought up a small
sample, which caused quite a bit of confusion: why is "ref" used for reference types? Here is the sample.
class Program
{
static void Main(string[] args)
{
int[] i = { 1, 2, 3, 4 };
//Will output "2"
Console.WriteLine("Before : " + i[1].ToString());
//Calling a method (*NOT* using the ref keyword)..so ideally it shouldn't matter what is changed inside
new Test().Modify(i);
//Should output the same "2", as we are not using the ref keyword here....
Console.WriteLine("After : " + i[1].ToString());
//But it outputs "5", which was modified internally in the array
Console.ReadLine();
}
}
class Test
{
public void Modify(int[] i)
{
if ((i != null) && i.Length > 2)
i[1] = 5;
}
}

In the above sample, even if I don't use the ref keyword, the value is modified for the reference type
(here it is int[]). Why is it so? Any good explanation is greatly appreciated.

I can see why some developers have never given any thought to the ref keyword in C# when used for
reference types in method signatures. I suspect they presume that, just like in VB6, ref is the default and
applying it makes no difference; well, in .NET it sure does! Although I think that I've only ever needed to
use it once.
Here is an example:


void Main()
{
var obj = new WorkPackage("Object 1");
Processor processor = new Processor();
processor.DoSomeWork(obj);
Console.WriteLine(obj.Name);
}
public class WorkPackage
{
public WorkPackage(string name)
{
this.Name = name;
}
public string Name { get; set; }
}
public class Processor
{
public void DoSomeWork(WorkPackage myObject)
{
myObject = new WorkPackage("Object 2");
}
}

Outputs: Object 1
Change the method signature to include the ref keyword


public void DoSomeWork(ref WorkPackage myObject)
{
    myObject = new WorkPackage("Object 2");
}

Outputs: Object 2

Anonymous methods code

delegate int MathOp(int a, int b);

static void Main()
{
    //statements
    MathOp op = delegate(int a, int b) { return a + b; };
    int result = op(13, 14);
    //statements
}

Anonymous methods allow you to define a code block anywhere a "delegate" object is acceptable.

An anonymous method uses the keyword "delegate" instead of a method name. This is followed by
the body of the method.

An anonymous method always starts with the keyword "delegate", which is followed by the
parameters to be used inside the method and the method body itself.

You can reduce code by avoiding explicit instantiation of the delegate and registration of the
delegate with a named method.

It increases the maintainability and readability of our code by keeping the caller of the method and
the method itself as close as possible to one another.
public MyForm()
{
    Button btnSayHello = new Button();
    btnSayHello.Text = "Hello";

    btnSayHello.Click += delegate
    {
        MessageBox.Show("Hello!! User");
    };

    Controls.Add(btnSayHello);
}
Note:

Anonymous methods are an easy, simplified way to assign handlers to events. They take less
effort than named delegate methods and sit closer to the events they are associated with.

An anonymous method can't access the out or ref parameters of an outer scope.

The anonymous method block cannot contain unsafe code.

Method attributes can't be applied to anonymous methods. Also, an anonymous method can be
added to the invocation list of a delegate, but it can only be removed from the invocation list if a
reference to the delegate has been saved.

http://www.c-sharpcorner.com/uploadfile/Ashush/anonymous-methods-in-C-Sharp/

http://www.c-sharpcorner.com/UploadFile/051e29/anonymous-method-in-C-Sharp/

Lambda expression code


http://www.c-sharpcorner.com/uploadfile/kalisk/lambda-expressions-in-C-Sharp-3-0/
1. What is a Lambda Expression?
A lambda expression is an anonymous function and it is mostly used to create delegates in LINQ.
Simply put, it's a method without a declaration, i.e., access modifier, return value declaration, and
name.
2. Why do we need lambda expressions? (Why would we need to write a method without a name?)
Convenience. It's a shorthand that allows you to write a method in the same place you are going to
use it. Especially useful in places where a method is being used only once, and the method definition
is short. It saves you the effort of declaring and writing a separate method to the containing class.
Benefits:
a. Reduced typing. No need to specify the name of the function, its return type, and its access
modifier.
b. When reading the code you don't need to look elsewhere for the method's definition.
Lambda expressions should be short. A complex definition makes the calling code difficult to read.
3. How do we define a lambda expression?
Lambda basic definition: Parameters => Executed code.

Simple example

n => n % 2 == 1

n is the input parameter


n % 2 == 1 is the expression
You can read n => n % 2 == 1 like: "input parameter named n goes to anonymous function which
returns true if the input is odd".
Same example (now execute the lambda):

List<int> numbers = new List<int>{ 11, 37, 52 };

List<int> oddNumbers = numbers.Where(n => n % 2 == 1).ToList();

//Now oddNumbers contains 11 and 37

That's all, now you know the basics of Lambda Expressions.

Impersonation
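Impersonation lets ASP.NET code run under a Windows identity other than the worker process account. A minimal web.config sketch (the account credentials shown are hypothetical):

```xml
<system.web>
  <!-- Run requests under the authenticated user's Windows identity -->
  <identity impersonate="true" />
  <!-- Or impersonate a fixed account (hypothetical credentials): -->
  <!-- <identity impersonate="true" userName="DOMAIN\AppUser" password="secret" /> -->
</system.web>
```

With impersonation enabled, resource access (files, databases using integrated security) is checked against the impersonated identity instead of the application pool identity.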
Difference between asp.net 3.5 and 4.0
http://www.asp.net/whitepapers/aspnet4
http://www.asp.net/whitepapers/aspnet4/breaking-changes

Asynchronous Communication in a WCF Service

Introduction
This article shows a step by step implementation of asynchronous communication in WCF.

Background
Asynchronous pattern is a very well known and common Design Pattern in systems that are built on
top of executions.
Like many other technologies, this pattern has an implementation in Windows Communication
Foundation.
The .NET Framework provides two Design Patterns for asynchronous operations:

Asynchronous operations that use IAsyncResult objects.
Asynchronous operations that use events.

This article uses the IAsyncResult implementation for asynchronous communication.

The need for async communication

For long-running executions in an application, there is a probability that the current thread cannot keep
executing because it may block the user interface. For example, when a Windows application starts
a long-running process, the window may freeze and even appear to crash. One solution is
to move this execution to another thread and let it continue there.
The asynchronous pattern comes into play to solve this issue, and Microsoft has a built-in mechanism for
this pattern in the .NET Framework. Microsoft's implementation of this pattern consists of these
pieces:

Two methods for an asynchronous operation (a Begin/End pair).
An object which implements the IAsyncResult interface.
A callback delegate.

This way, the execution of a method splits into two steps. In the first step, you create the background
process and start it, and in the second step, you listen for changes in the process and wait until it
finishes.

Using the code


Here is a short description of the classes used in the sample project:

AsyncResult

public class AsyncResult : IAsyncResult, IDisposable
{
    AsyncCallback callback;
    object state;
    ManualResetEvent manualResetEvent;
    public AsyncResult(AsyncCallback callback, object state)
    {
        this.callback = callback;
        this.state = state;
        this.manualResetEvent = new ManualResetEvent(false);
    }
    public object AsyncState { get { return state; } }
    public WaitHandle AsyncWaitHandle { get { return manualResetEvent; } }
    public ManualResetEvent AsyncWait { get { return manualResetEvent; } }
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get { return manualResetEvent.WaitOne(0, false); } }
    //Signal completion and invoke the callback, if any
    public void Complete()
    {
        manualResetEvent.Set();
        if (callback != null) callback(this);
    }
    public void Dispose() { manualResetEvent.Close(); }
}

AsyncResult is derived from IAsyncResult.

AsyncResult has some members, like an AsyncCallback, a state object, and
a ManualResetEvent which handles the waiting for the asynchronous operation. There is
an IsCompleted property which returns a boolean value specifying whether the operation has
completed; a simple WaitOne(0, false) call on the event always returns the
appropriate value for any asynchronous operation. The last point is about the Complete() method.
This calls the Set() method of the ManualResetEvent to show that the event is signaled and any
other awaiting thread can continue. Then I pass the current object to the callback, if the callback is
not null.

AddAsyncResult

public class AddAsyncResult : AsyncResult
{
public readonly int number1 = 0;
public readonly int number2 = 0;
private int result;
public AddDataContract AddContract { get; set; }
public Exception Exception { get; set; }
public int Result
{
get { return result; }
set { result = value; }
}
public AddAsyncResult(int num1, int num2, AsyncCallback callback, object state)
: base(callback, state)
{
this.number1 = num1;
this.number2 = num2;
}
public AddAsyncResult(AddDataContract input, AsyncCallback callback, object state)
: base(callback, state)
{
this.AddContract = input;
}
}

AddAsyncResult is derived from AsyncResult; it also holds the input and output
datacontracts/entities. We need to develop this class as per requirements.

WCF Service implementation


IAddService

[ServiceContract()]
public interface IAddService
{
    [OperationContract(AsyncPattern = true)]
    [FaultContract(typeof(ErrorInfo))]
    IAsyncResult BeginAddDC(AddDataContract input,
        AsyncCallback callback, object state);

    //[FaultContract(typeof(ErrorInfo))]
    AddDataContract EndAddDC(IAsyncResult ar);
}

This service has the BeginAddDC method which is declared as an asynchronous method
using AsyncPattern=true.

AddService

public IAsyncResult BeginAddDC(AddDataContract input, AsyncCallback callback, object state)
{
AddAsyncResult asyncResult = null;
try
{
//throw new Exception("error introduced here in BeginAddDC.");
asyncResult = new AddAsyncResult(input, callback, state);
//Queues a method for execution. The method executes
//when a thread pool thread becomes available.
ThreadPool.QueueUserWorkItem(new WaitCallback(CallbackDC), asyncResult);
}
catch (Exception ex)
{
ErrorInfo err = new ErrorInfo(ex.Message, "BeginAddDC fails");
throw new FaultException<ErrorInfo>(err, "reason goes here.");
}
return asyncResult;
}
public AddDataContract EndAddDC(IAsyncResult ar)
{
AddDataContract result = null;
try
{
//throw new Exception("error introduced here in EndAddDC.");
if (ar != null)
{
using (AddAsyncResult asyncResult = ar as AddAsyncResult)
{
if (asyncResult == null)
throw new ArgumentNullException("IAsyncResult parameter is null.");
if (asyncResult.Exception != null)
throw asyncResult.Exception;
asyncResult.AsyncWait.WaitOne();
result = asyncResult.AddContract;
}
}
}
catch (Exception ex)
{
ErrorInfo err = new ErrorInfo(ex.Message, "EndAddDC fails");
throw new FaultException<ErrorInfo>(err, "reason goes here.");

}
return result;
}
private void CallbackDC(object state)
{
    AddAsyncResult asyncResult = null;
    try
    {
        asyncResult = state as AddAsyncResult;
        //throw new Exception("error introduced here in CallbackDC.");
        asyncResult.AddContract = InternalAdd(asyncResult.AddContract);
    }
    catch (Exception ex)
    {
        // Store the exception so EndAddDC can rethrow it; a FaultException
        // thrown from this worker thread cannot propagate to the client.
        asyncResult.Exception = ex;
    }
    finally
    {
        asyncResult.Complete();
    }
}
private AddDataContract InternalAdd(AddDataContract input)
{
    // Simulate a long-running operation.
    Thread.Sleep(TimeSpan.FromSeconds(20));
    // Number1/Number2/Result are assumed members of AddDataContract
    // (only Result appears elsewhere in the article).
    input.Result = input.Number1 + input.Number2;
    return input;
}

BeginAddDC queues the CallbackDC method for execution once a thread from the thread pool
becomes available, and then returns immediately to the client. CallbackDC performs the
actual processing of the client request on a separate thread and, after finishing its
processing, signals a ManualResetEvent.
EndAddDC waits for that signal, gets the actual result from the IAsyncResult, and returns it.
Client
The client simply creates a proxy for the service and calls the BeginAddDC method of the service.
Along with the business-related parameters, it also passes a callback method (the point where
control will land once the WCF service execution finishes) and a state object.

IAsyncResult res = service.BeginAddDC(input, new AsyncCallback(AddCallbackDC), service);

Client callback method



static void AddCallbackDC(IAsyncResult ar)
{
    try
    {
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine("in AddCallbackDC");
        IAddService res = ar.AsyncState as IAddService;
        if (res != null)
        {
            Console.WriteLine("Result returned from WCF service");
            Console.WriteLine(res.EndAddDC(ar).Result.ToString());
        }
    }
    catch (Exception ex)
    {
        // Display any FaultException raised by the service.
        Console.WriteLine(ex.Message);
    }
    finally
    {
        if (addProxy != null)
        {
            addProxy.CloseProxy();
            Console.WriteLine("Proxy closed.");
        }
        Console.ResetColor();
    }
}

This callback method extracts the service object from IAsyncResult.AsyncState and
calls EndAddDC of the WCF Service to get the actual result on the client side.

Output
When the sample runs, after the user provides two numbers, the client creates a proxy and
calls its BeginAddDC method, which returns immediately; the client continues its execution
(it starts printing numbers) until it receives a response from the WCF service. After receiving
the response, it displays it and continues with the main execution.

Handling FaultException
If any exception arises in the Begin/End method of the WCF service, it can be raised from
there as a FaultException.


catch (Exception ex)
{
    ErrorInfo err = new ErrorInfo(ex.Message, "BeginAddDC fails");
    throw new FaultException<ErrorInfo>(err, "reason goes here.");
}

But what if the exception occurs in the WCF service in the method that runs on a separate
thread (in this case, CallbackDC)?

private void CallbackDC(object state)
{
    AddAsyncResult asyncResult = null;
    try
    {
        asyncResult = state as AddAsyncResult;
        //throw new Exception("error introduced here in CallbackDC.");
        asyncResult.AddContract = InternalAdd(asyncResult.AddContract);
    }
    catch (Exception ex)
    {
        // Store the exception so EndAddDC can rethrow it; a FaultException
        // thrown from this worker thread cannot propagate to the client.
        asyncResult.Exception = ex;
    }
    finally
    {
        asyncResult.Complete();
    }
}

To take care of such exceptions, we can add an Exception property to our AsyncResult and
populate it if an exception occurs in the callback method in the service. The relevant End
method can then check this Exception property, and the FaultException can be raised from
the End method.

public AddDataContract EndAddDC(IAsyncResult ar)
{
    AddDataContract result = null;
    try
    {
        //throw new Exception("error introduced here in EndAddDC.");
        if (ar != null)
        {
            using (AddAsyncResult asyncResult = ar as AddAsyncResult)
            {
                if (asyncResult == null)
                    throw new ArgumentException("IAsyncResult parameter is not an AddAsyncResult.");
                // Wait for the worker thread to finish before inspecting
                // the result or any exception it captured.
                asyncResult.AsyncWait.WaitOne();
                if (asyncResult.Exception != null)
                    throw asyncResult.Exception;
                result = asyncResult.AddContract;
            }
        }
    }
    catch (Exception ex)
    {
        ErrorInfo err = new ErrorInfo(ex.Message, "EndAddDC fails");
        throw new FaultException<ErrorInfo>(err, "reason goes here.");
    }
    return result;
}

At the client side, you can catch all these exceptions in the client's callback method.

Points to remember
1. Declare two related methods: one named BeginMethodName and the other EndMethodName.
2. Add [OperationContract(AsyncPattern = true)] to the Begin method.
3. Do not apply the OperationContract attribute to the corresponding End method.
4. In the service implementation, the actual processing should go in a method that runs on a
separate thread. ThreadPool.QueueUserWorkItem is one way to ensure this.
5. On the client side, call the Begin method, but do not close the proxy immediately afterwards,
because the End method needs the same channel to get the result. Close it in the client's
callback after calling the End method.

Delete duplicate records from a table

Without a primary key:

WITH CTE (Col1, Col2, DuplicateCount)
AS
(
    SELECT Col1, Col2,
        ROW_NUMBER() OVER (PARTITION BY Col1, Col2 ORDER BY Col1) AS DuplicateCount
    FROM DuplicateRecordTable
)
DELETE
FROM CTE
WHERE DuplicateCount > 1

With a primary key:

DELETE
FROM MyTable
WHERE ID NOT IN
(
    SELECT MAX(ID)
    FROM MyTable
    GROUP BY DuplicateColumn1, DuplicateColumn2, DuplicateColumn3
)

Maintain GridView paging using a stored procedure

CREATE PROCEDURE GetCustomersPageWise
    @PageIndex INT = 1,
    @PageSize INT = 10,
    @RecordCount INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ROW_NUMBER() OVER
    (
        ORDER BY [CustomerID] ASC
    ) AS RowNumber,
        [CustomerID],
        [CompanyName],
        [ContactName]
    INTO #Results
    FROM [Customers]

    SELECT @RecordCount = COUNT(*)
    FROM #Results

    SELECT * FROM #Results
    WHERE RowNumber BETWEEN (@PageIndex - 1) * @PageSize + 1
        AND (((@PageIndex - 1) * @PageSize + 1) + @PageSize) - 1

    DROP TABLE #Results
END

WSDL vs MEX, knockout or tie? Why use MEX?

10 August 2011

When teaching WCF I am always asked about the difference between getting the
service's metadata by using the WSDL's HTTP GET URL, and getting the metadata by calling
the MEX endpoint.
To answer that question we first need to understand the different parts of the
configuration that affect metadata creation.
The ServiceMetadata behavior
This behavior controls whether metadata is created for the service. When this behavior is
used, the service is scanned, and metadata is created for the service's contracts (a list of
operations and types exposed by the service).
If the behavior is not used, no metadata will be created for the service, and you will not
be able to create MEX endpoints.
The ServiceMetadata's httpGetEnabled flag
This flag defines whether the metadata will be accessible by an HTTP GET request. If this
attribute is set to true, then a default URL will be created for the metadata (usually the
service's address with the suffix ?wsdl). The URL will lead you to a WSDL file
containing the description of the service operations, but without the description of the
data contracts. Those descriptions are accessible through different URLs, usually the service's
URL with the suffix ?xsd=xsdN; the list of these URLs is pointed out from the WSDL file.
If you do not set this attribute to true, you will not be able to access the metadata using
HTTP GET requests. If you prefer using HTTPS for the GET requests, you can use
the httpsGetEnabled attribute instead of httpGetEnabled.
There are several other settings for the GET options; you can read more about them
on MSDN.
The MEX endpoint
MEX endpoints are special endpoints that allow clients to receive the service's metadata
by using SOAP messages instead of HTTP GET requests. You can create MEX endpoints
that can be accessed through HTTP, HTTPS, TCP, and even named pipes.
The response that you receive when calling a MEX endpoint's GetMetadata
operation will include the content of the WSDL and all the XSD files that are linked to it.
So what exactly is the difference between MEX and WSDL?

There is no difference!
MEX and WSDL both output the same thing: a Web Service Description Language
(WSDL) document. MEX does it by accepting a SOAP message over some transport
(HTTP, TCP, named pipes) and returning one message with all the parts, while the WSDL
URLs use HTTP GET requests and require sending several requests to get all the parts.
Don't believe me?
The following diff diagram was produced by comparing the output of a MEX call to the
aggregated results gathered by calling all the WSDL-related URLs using HTTP GET:

As you can see, only a few sections differ between the files (marked in red and yellow),
while most of the content is identical. Let's look at one of these parts:

The red lines come from the MEX result, and the yellow lines from the WSDL file. The
difference exists because, when using WSDL files, the rest of the XSD files are linked by using
the <xsd:import> tag with the schemaLocation attribute. Since the MEX response
includes all the XSD content in it, the import tag doesn't include the location attribute.
As for the other differing sections: they differ because the MEX response wraps
each WSDL/XSD part with an XML element that does not appear in the aggregated files.
These elements are part of the MEX standard declared by the W3C. Surprise, surprise:
MEX is a W3C standard, not a Microsoft proprietary spec.
So they are the same; still, which one to use?
In most cases there is no need for a MEX endpoint; using WSDLs with HTTP GET is usually
enough.
The Add Service Reference option in Visual Studio 2010 works for both options. The
same goes for the svcutil command-line tool.
So when would I use MEX?
1. If you want to make as few calls as possible to your service in order to get its metadata (one call
instead of several).
2. If you don't want to use HTTP to get the metadata, but prefer using TCP or named pipes (not so
common).
3. If you want people to ask you why you declared a MEX endpoint.

So is it a knockout or a tie? I say tie.


Default access modifiers
Top-level (non-nested) types, whether class, struct, interface, enum, or delegate, default
to internal; nested types default to private.
An enum's members are always public.
A class has a default modifier of internal. It can declare members (methods, etc.) with the
following access modifiers:

public
internal
private
protected
protected internal
An interface has a default modifier of internal; its members are implicitly public.
A struct has a default modifier of internal, and it can declare its members (methods, etc.)
with the following access modifiers:
public
internal
private
Methods, fields, and properties have a default access modifier of private if no modifier
is specified.
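The defaults above can be seen in a small sketch (the type and member names here are illustrative only):

```csharp
// Top-level types default to internal:
class Widget              // same as: internal class Widget
{
    int count;            // members default to private
    void Reset() { }      // private as well

    class Inner { }       // nested types default to private
}

interface IShape
{
    double Area();        // interface members are implicitly public
}

enum Color { Red, Green } // enum members are always public
```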

SQL SERVER Better Performance

LEFT JOIN or NOT IN?
First of all, answer this question: which T-SQL construct performs better when writing a query,
LEFT JOIN or NOT IN? The answer is: it depends! It all depends on what kind of data there is,
what kind of query it is, and so on. In that case, just for fun, guess one option: LEFT JOIN or
NOT IN. The following two queries demonstrate the two clauses.

USE AdventureWorks;
GO
SELECT ProductID
FROM Production.Product
WHERE ProductID
NOT IN (
SELECT ProductID
FROM Production.WorkOrder);
GO
SELECT p.ProductID
FROM Production.Product p
LEFT JOIN Production.WorkOrder w ON p.ProductID = w.ProductID
WHERE w.ProductID IS NULL;
GO
******************************************************************************
In C#, Action and Func are extremely useful tools for reducing duplication in code and decreasing coupling.
It is a shame that many developers shy away from them because they don't really understand them.
Adding Action and Func to your toolbox is a very important step in improving your C# code.
It's not really that hard to understand what they do and how to use them; it just takes a little patience.

A simple way of thinking about Action<>


Most of us are pretty familiar with finding sections of repeated code, pulling that code out into a method and making that method
take parameters to represent the differences.
Here is a small example, which should look pretty familiar:

public void SteamGreenBeans()
{
    var greenBeans = new GreenBeans();
    Clean(greenBeans);
    Steam(greenBeans, Minutes.Is(10));
    Serve(greenBeans);
}
public void SteamCorn()
{
    var corn = new Corn();
    Clean(corn);
    Steam(corn, Minutes.Is(15));
    Serve(corn);
}
public void SteamSpinach()
{
    var spinach = new Spinach();
    Clean(spinach);
    Steam(spinach, Minutes.Is(8));
    Serve(spinach);
}
Each one of these methods does pretty much the same thing. The only difference is the type of
vegetable and the time to steam it.
It is a simple and common refactoring to turn that code into:

public void SteamGreenBeans()
{
    SteamVegetable(new GreenBeans(), 10);
}
public void SteamCorn()
{
    SteamVegetable(new Corn(), 15);
}
public void SteamSpinach()
{
    SteamVegetable(new Spinach(), 8);
}
public void SteamVegetable(Vegetable vegetable, int timeInMinutes)
{
    Clean(vegetable);
    Steam(vegetable, Minutes.Is(timeInMinutes));
    Serve(vegetable);
}
Much better; now we aren't repeating the actions in three different methods.
Now let's imagine we want to do something more than steam. We need to be able to fry or bake
the vegetables as well. How can we do that?
Probably we will have to add some new methods, so we end up with something like this:

public void SteamVegetable(Vegetable vegetable, int timeInMinutes)
{
    Clean(vegetable);
    Steam(vegetable, Minutes.Is(timeInMinutes));
    Serve(vegetable);
}
public void FryVegetable(Vegetable vegetable, int timeInMinutes)
{
    Clean(vegetable);
    Fry(vegetable, Minutes.Is(timeInMinutes));
    Serve(vegetable);
}
public void BakeVegetable(Vegetable vegetable, int timeInMinutes)
{
    Clean(vegetable);
    Bake(vegetable, Minutes.Is(timeInMinutes));
    Serve(vegetable);
}
Hmm, lots of duplication again. No problem. Let's just do what we did to the first set of methods
and make a CookVegetable method. Since we always clean, then cook, then serve, we should be
able to just pass in the method of cooking we will use.
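The excerpt stops before showing the result, but the refactoring it describes might be sketched as follows (the CookVegetable name and the Action-based signature are assumptions based on the preceding examples, as is the Minutes type, not the original author's code):

```csharp
// One generic method replaces SteamVegetable/FryVegetable/BakeVegetable.
// The cooking step itself is passed in as an Action delegate.
public void CookVegetable(Vegetable vegetable, int timeInMinutes,
                          Action<Vegetable, Minutes> cookingAction)
{
    Clean(vegetable);
    cookingAction(vegetable, Minutes.Is(timeInMinutes));
    Serve(vegetable);
}

// The existing Steam/Fry/Bake methods now become arguments:
public void SteamCorn() { CookVegetable(new Corn(), 15, Steam); }
public void FrySpinach() { CookVegetable(new Spinach(), 8, Fry); }
```

Because the cooking action is just a delegate parameter, adding a new cooking style no longer requires a new wrapper method.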

The ToList() method iterates over each element of the provided collection, adds them to a new
List<T> instance, and returns that instance. For example:

//using linq
list = Students.Where(s => s.Name == "ABC").ToList();

//traditional way
foreach (var student in Students)
{
    if (student.Name == "ABC")
        list.Add(student);
}
Deferred Query Execution
To understand deferred query execution, let's take the following example, which declares
some Employees and then queries all employees with Age > 28:

OUTPUT: Jack, Rahul

Looking at the query shown above, it might appear that the query is executed at the point
where it is defined. However, that's not true. The query is actually executed when
the query variable is iterated over, not when the query variable is created. This is
called deferred execution.
Now how do we prove that the query was not executed when the query variable was
created? It's simple: just create another Employee instance after the query variable is
created.

Notice we are creating a new Employee instance after the query variable is created. Had
the query been executed when the query variable was created, the results would be the
same as the ones we got earlier, i.e., only two employees would meet the criteria of Age >
28. However, the output is not the same:
OUTPUT: Jack, Rahul, Bill
What just happened is that the execution of the query was deferred until the query variable
was iterated over in a foreach loop. This allows you to execute a query as frequently as you
want, for example fetching the latest information from a database that is being updated
frequently by other applications. You will always get the latest information from the
database in this case.
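Since the original code appears only as screenshots that are not included here, a minimal sketch of the scenario described above might look like this (the Employee type and the sample data are assumptions):

```csharp
var employees = new List<Employee>
{
    new Employee { Name = "Jack",  Age = 30 },
    new Employee { Name = "Rahul", Age = 29 },
    new Employee { Name = "Anil",  Age = 25 }
};

// The query variable is created here, but nothing executes yet.
var query = employees.Where(e => e.Age > 28);

// An employee added after the query variable was created...
employees.Add(new Employee { Name = "Bill", Age = 40 });

// ...still shows up, because the query runs only when iterated:
foreach (var e in query)
    Console.WriteLine(e.Name);   // Jack, Rahul, Bill
```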
Immediate Query Execution
You can also force a query to execute immediately, which is useful for caching query results.
Let us say we want to display a count of the number of employees that match a criterion.

In the query shown above, in order to count the elements that match the condition, the
query must be executed, and this is done automatically when Count() is called. So adding a
new employee instance after the query variable declaration does not have any effect here,
as the query has already executed. The output will be 2, instead of 3.
The basic difference between deferred and immediate execution is that deferred
execution of queries produces a sequence of values, whereas immediate execution of queries
returns a singleton value and is executed immediately. Examples are Count(),
Average(), Max(), etc.
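A minimal sketch of the Count() scenario described above (the Employee type and data are assumed, since the original screenshots are missing):

```csharp
// Count() forces the query to execute immediately.
int count = employees.Where(e => e.Age > 28).Count();

// Adding an employee afterwards has no effect on the cached value:
employees.Add(new Employee { Name = "Bill", Age = 40 });

Console.WriteLine(count);   // 2, not 3
```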
Note: To force immediate execution of a query that does not produce a singleton value, you
can call the ToList(), ToDictionary(), or ToArray() method on a query or query variable.
These are called conversion operators; they allow you to make a copy/snapshot of the
result and access it as many times as you want, without the need to re-execute the query.
Hopefully novice developers will now know the basic difference between Deferred and
Immediate Execution of queries.
I hope you liked the article and I thank you for viewing it.

Understanding the Dynamic Keyword in C# 4

The dynamic keyword brings exciting new features to C# 4. Find out how it
works and why it simplifies a lot of your coding tasks, including some handy
COM interop possibilities.

By Alexandra Rusina
02/01/2011

The dynamic keyword and the Dynamic Language Runtime (DLR) are major new features in C# 4 and the Microsoft .NET
Framework 4. These features generated a lot of interest when announced -- along with a lot of questions. There were a
number of answers as well, but they're now spread throughout documentation and on various technical blogs and articles.
So people continue asking the same questions again and again on forums and at conferences.
This article provides a general overview of the new dynamic features in C# 4 and also delves into some more in-depth
information about how they work with other language and framework features, such as reflection or implicitly typed variables.
Given there's a lot of information available already, I'll sometimes reuse classic examples with links to the original sources.
I'll also provide plenty of links for further reading.
What Is Dynamic?
Programming languages are sometimes divided into statically typed and dynamically typed languages. C# and Java are
often considered examples of statically typed languages, while Python, Ruby and JavaScript are examples of dynamically
typed languages.
Generally speaking, dynamic languages don't perform compile-time type checks and identify the type of objects at run time
only. This approach has its pros and cons: Often the code is much faster and easier to write, but at the same time you don't
get compiler errors and have to use unit testing and other techniques to ensure the correct behavior of your application.
Originally, C# was created as a purely static language, but with C# 4, dynamic elements have been added to improve
interoperability with dynamic languages and frameworks. The C# team considered several design options, but finally settled
on adding a new keyword to support these features: dynamic.
The dynamic keyword acts as a static type declaration in the C# type system. This way C# got the dynamic features and at
the same time remained a statically typed language. Why and how this decision was made is explained in the presentation
"Dynamic Binding in C# 4" by Mads Torgersen at PDC09. Among other things, it was decided that dynamic objects should
be first-class citizens of the C# language, so there's no option to switch dynamic features on or off, and nothing similar to the
Option Strict On/Off in Visual Basic was added to C#.
When you use the dynamic keyword you tell the compiler to turn off compile-time checking. There are plenty of examples on
the Web and in the MSDN documentation on how to use this keyword. A common example looks like this:
dynamic d = "test";
Console.WriteLine(d.GetType());
// Prints "System.String".
d = 100;
Console.WriteLine(d.GetType());
// Prints "System.Int32".
As you can see, it's possible to assign objects of different types to a variable declared as dynamic. The code compiles and
the type of object is identified at run time. However, this code compiles as well, but throws an exception at run time:
dynamic d = "test";
// The following line throws an exception at run time.
d++;
The reason is the same: The compiler doesn't know the runtime type of the object and therefore can't tell you that the
increment operation is not supported in this case.

Absence of compile-time type checking leads to the absence of IntelliSense as well. Because the C# compiler doesn't know
the type of the object, it can't enumerate its properties and methods. This problem might be solved with additional type
inference, as is done in the IronPython tools for Visual Studio, but for now C# doesn't provide it.
However, in many scenarios that might benefit from the dynamic features, IntelliSense wasn't available anyway because the
code used string literals. This issue is discussed in more detail later in this article.
Dynamic, Object or Var?
So what's the real difference between dynamic, object and var, and when should you use them? Here are short definitions of
each keyword and some examples.
The object keyword represents the System.Object type, which is the root type in the C# class hierarchy. This keyword is
often used when there's no way to identify the object type at compile time, which often happens in various interoperability
scenarios.
You need to use explicit casts to convert a variable declared as object to a specific type:
object objExample = 10;
Console.WriteLine(objExample.GetType());
This obviously prints System.Int32. However, the static type is System.Object, so you need an explicit cast here:
objExample = (int)objExample + 10;
You can assign values of different types because they all inherit from System.Object:
objExample = "test";
The var keyword, since C# 3.0, is used for implicitly typed local variables and for anonymous types. This keyword is often
used with LINQ. When a variable is declared by using the var keyword, the variable's type is inferred from the initialization
string at compile time. The type of the variable can't be changed at run time. If the compiler can't infer the type, it produces a
compilation error:
var varExample = 10;
Console.WriteLine(varExample.GetType());
This prints System.Int32, and it's the same as the static type.
In the following example, no cast is required because varExample's static type is System.Int32:
varExample = varExample + 10;
This line doesn't compile because you can only assign integers to varExample:
varExample = "test";
The dynamic keyword, introduced in C# 4, makes certain scenarios that traditionally relied on the object keyword easier to
write and maintain. In fact, the dynamic type uses the System.Object type under the hood, but unlike object it doesn't require
explicit cast operations at compile time, because it identifies the type at run time only:

dynamic dynamicExample = 10;


Console.WriteLine(dynamicExample.GetType());
This prints System.Int32.
In the following line, no cast is required, because the type is identified at run time only:
dynamicExample = dynamicExample + 10;
You can assign values of different types to dynamicExample:
dynamicExample = "test";
There's a detailed blog post about differences between the object and dynamic keywords on the C# FAQ blog.
What sometimes causes confusion is that all of these keywords can be used together -- they're not mutually exclusive. For
example, let's take a look at this code:
dynamic dynamicObject = new Object();
var anotherObject = dynamicObject;
What's the type of anotherObject? The answer is: dynamic. Remember that dynamic is in fact a static type in the C# type
system, so the compiler infers this type for the anotherObject. It's important to understand that the var keyword is just an
instruction for the compiler to infer the type from the variable's initialization expression; var is not a type.
The Dynamic Language Runtime
When you hear the term "dynamic" in regard to the C# language, it usually refers to one of two concepts: the dynamic
keyword in C# 4 or the DLR. Although these two concepts are related, it's important to understand the difference as well.
The DLR serves two main goals. First, it enables interoperation between dynamic languages and the .NET Framework.
Second, it brings dynamic behavior to C# and Visual Basic.
The DLR was created based on lessons learned while building IronPython, which was the first dynamic language
implemented on the .NET Framework. While working on IronPython, the team found out that they could reuse their
implementation for more than one language, so they created a common underlying platform for .NET dynamic languages.
Like IronPython, the DLR became an open source project and its source code is now available at dlr.codeplex.com.
Later the DLR was also included in the .NET Framework 4 to support dynamic features in C# and Visual Basic. If you only
need the dynamic keyword in C# 4, you can simply use the .NET Framework and in most cases it will handle all interactions
with the DLR on its own. But if you want to implement or port a new dynamic language to .NET, you may benefit from the
extra helper classes in the open source project, which has more features and services for language implementers.
Using Dynamic in a Statically Typed Language
It's not expected that everybody should use dynamic whenever possible instead of the static type declarations. Compile-time
checking is a powerful instrument and the more benefits you can get from it, the better. And once again, dynamic objects in
C# do not support IntelliSense, which might have a certain impact on overall productivity.

At the same time, there are scenarios that were hard to implement in C# prior to the dynamic keyword and DLR. In most
cases they used System.Object type and explicit casting and couldn't get much benefit from compile-time checking and
IntelliSense anyway. Here are some examples.
The most notorious scenario is when you have to use the object keyword for interoperability with other languages or
frameworks. Usually you have to rely on reflection to get the type of the object and to access its properties and methods.
The syntax is sometimes hard to read and consequently the code is hard to maintain. Using dynamic here might be much
easier and more convenient than reflection.
Anders Hejlsberg gave a great example at PDC08 that looks like this:
object calc = GetCalculator();
Type calcType = calc.GetType();
object res = calcType.InvokeMember(
"Add", BindingFlags.InvokeMethod,
null, new object[] { 10, 20 });
int sum = Convert.ToInt32(res);
The function returns a calculator, but the system doesn't know the exact type of this calculator object at compile time. The
only thing the code relies on is that this object should have the Add method. Note that you don't get IntelliSense for this
method because you supply its name as a string literal.
With the dynamic keyword, this code looks as simple as this one:
dynamic calc = GetCalculator();
int sum = calc.Add(10, 20);

Action<T> and Func<T> both take zero or more parameters; only Func<T> returns a value,
while Action<T> doesn't.
As for Predicate<T>, I have no idea.
Therefore, I came up with the following questions:
1. What does Predicate<T> do? (Examples
welcomed!)
2. If Action<T> returns nothing, wouldn't it be
simpler to just use void instead? (Or any other
type if it's Func<T> we're talking about.)
I'd like you to avoid LINQ/List examples in your answers.
I've seen those already, but they just make it more confusing, as the code that got me
'interested' in these delegates has nothing to do with them (I think!).
Therefore, I'd like to use code I'm familiar with to get my answer.
Here it is:
Here it is:
public class RelayCommand : ICommand
{
    readonly Action<object> _execute;
    readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute)
        : this(execute, null)
    {
    }

    public RelayCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null)
            throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    [DebuggerStepThrough]
    public bool CanExecute(object parameters)
    {
        return _canExecute == null ? true : _canExecute(parameters);
    }

    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }

    public void Execute(object parameters)
    {
        _execute(parameters);
    }
}
Note:
I took out the comments to avoid a super-long block of code.
The full code can be found HERE.
Any help is appreciated! Thanks! :)

Comment: A Predicate must return a boolean. See
stackoverflow.com/questions/1710301/what-is-a-predicate-in

Answer
Predicate<T> is a delegate that takes a T and returns a bool.
It's completely equivalent to Func<T, bool>.
The difference is that Predicate<T> was added in .NET 2.0, whereas all of
the Func<*> delegates were added in .NET 3.5 (except the ones with more than 8 parameters,
which were added in .NET 4.0).
The LINQ-like methods in List<T> (FindAll(), TrueForAll(), etc.) take Predicate<T>s.
To answer your second question, void cannot be used as a generic type argument.
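Since the question asked for examples, here is a small sketch of Predicate<T> in use (the data is made up):

```csharp
// A Predicate<int> takes an int and returns a bool.
Predicate<int> isEven = n => n % 2 == 0;

Console.WriteLine(isEven(4));   // True
Console.WriteLine(isEven(7));   // False

// List<T>.FindAll takes a Predicate<T>:
var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };
List<int> evens = numbers.FindAll(isEven);   // 2, 4, 6
```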

Difference between var and dynamic in C#

By aspnet-i, 17 Jun 2013

Introduction
The type keyword var was introduced in C# 3.0 (.NET 3.5 with Visual Studio 2008), and
the type dynamic was introduced in C# 4.0 (.NET 4.0 with Visual Studio 2010). Let us see the
difference between these two.

Background
Variables declared with var are implicitly but statically typed. Variables declared with dynamic are
dynamically typed. The dynamic capability was added to the CLR in order to support dynamic
languages like Ruby and Python.
This means that dynamic declarations are resolved at run time, while var declarations are resolved
at compile time.

Table of differences

var
- Introduced in C# 3.0.
- Statically typed: the type of the declared variable is decided by the compiler at compile time.
- Must be initialized at the time of declaration, e.g. var str = "I am a string"; looking at the
value assigned to the variable str, the compiler treats str as a string.
- Errors are caught at compile time, since the compiler knows the type, and the methods and
properties of the type, at compile time itself.
- Visual Studio shows IntelliSense, since the type of the assigned variable is known to the compiler.
- var obj1; throws a compile error, since the variable is not initialized; the compiler needs the
variable to be initialized so that it can infer a type from the value.
- var obj1 = 1; compiles, but a later obj1 = "I am a string"; throws an error, since the compiler
has already decided that the type of obj1 is System.Int32 when the value 1 was assigned to it;
assigning a string value to it now violates type safety.

dynamic
- Introduced in C# 4.0.
- Dynamically typed: the type of the declared variable is resolved at run time.
- No need to initialize at the time of declaration, e.g. dynamic str; then str = "I am a string";
works and compiles, and str = 2; also works and compiles.
- Errors are caught at run time, since the type, and the methods and properties of the type, only
become known at run time.
- IntelliSense is not available, since the type and its related methods and properties can be
known at run time only.
- dynamic obj1; compiles.
- dynamic obj1 = 1; compiles and runs, and a later obj1 = "I am a string"; also compiles and runs,
since the runtime treats obj1 as System.Int32 while it holds 1, and then as a string when the
value "I am a string" is assigned. This code works fine.
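The behavior in the table can be demonstrated with a short program (a minimal sketch; the class name is illustrative):

```csharp
using System;

class VarVsDynamicDemo
{
    static void Main()
    {
        var s = "I am a string";        // compiler infers System.String at compile time
        // s = 2;                       // would not compile: s is already a string

        dynamic d = 1;                  // resolved at run time: currently an Int32
        Console.WriteLine(d.GetType()); // System.Int32
        d = "I am a string";            // fine: the type is re-resolved on assignment
        Console.WriteLine(d.GetType()); // System.String
    }
}
```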

In this post we are going to learn about Func delegates in C#. As per MSDN, the following is the
definition:

Encapsulates a method that has one parameter and returns a value of the type specified by the
TResult parameter.

Func can handle multiple arguments. The Func delegate is a parameterized type: it takes any
valid C# type as a type parameter, you can have multiple input parameters, and the return type
is specified as the last type parameter.
The following are some example signatures:
Func<T, TResult>
Func<T1, T2, TResult>

Now let's take a string concatenation example. I am going to create two Func delegates that
concatenate two strings and three strings. The following is the code for that.
using System;
using System.Collections.Generic;

namespace FuncExample
{
    class Program
    {
        static void Main(string[] args)
        {
            Func<string, string, string> concatTwo = (x, y) => string.Format("{0} {1}", x, y);
            Func<string, string, string, string> concatThree = (x, y, z) => string.Format("{0} {1} {2}", x, y, z);

            Console.WriteLine(concatTwo("Hello", "Jalpesh"));
            Console.WriteLine(concatThree("Hello", "Jalpesh", "Vadgama"));
            Console.ReadLine();
        }
    }
}

Action delegate

.NET 2.0 introduced one generic delegate, Action, which takes a single parameter and returns
nothing. Its declaration is:

public delegate void Action<T1>(T1 t1) // takes 1 parameter and returns nothing

This is a very elegant way to use the delegate.

And C# 3.0 introduced 4 more delegates, which are as follows:

1. public delegate void Action() // takes no parameter and returns nothing

2. public delegate void Action<T1,T2>(T1 t1, T2 t2) // takes 2 parameters and returns nothing

3. public delegate void Action<T1,T2,T3>(T1 t1, T2 t2, T3 t3) // takes 3 parameters and returns nothing

4. public delegate void Action<T1,T2,T3,T4>(T1 t1, T2 t2, T3 t3, T4 t4) // takes 4 parameters and returns nothing
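The declarations above can be exercised with a short sketch (the delegate names and strings are illustrative):

```csharp
using System;

class ActionDemo
{
    static void Main()
    {
        // Action with one parameter: performs a side effect, returns nothing
        Action<string> greet = name => Console.WriteLine("Hello, " + name);
        greet("Jalpesh"); // prints "Hello, Jalpesh"

        // Action with two parameters: again no return value, only an effect
        Action<int, int> printSum = (a, b) => Console.WriteLine(a + b);
        printSum(2, 3); // prints 5
    }
}
```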

when to use abstract class and when to use interface in c#

If you anticipate creating multiple versions of your component, create an abstract class. Abstract
classes provide a simple and easy way to version your components. By updating the base class, all
inheriting classes are automatically updated with the change. Interfaces, on the other hand,
cannot be changed once created. If a new version of an interface is required, you must create a
whole new interface.
If the functionality you are creating will be useful across a wide range of disparate objects, use an
interface. Abstract classes should be used primarily for objects that are closely related, whereas
interfaces are best suited for providing common functionality to unrelated classes.
If you are designing small, concise bits of functionality, use interfaces. If you are designing large
functional units, use an abstract class.
If you want to provide common, implemented functionality among all implementations of your
component, use an abstract class. Abstract classes allow you to partially implement your class,
whereas interfaces contain no implementation for any members.
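The guidance above can be sketched in code (the types here are illustrative, not from the text): the abstract class carries shared, partially implemented behavior for closely related classes, while the interface grants a small capability to an otherwise unrelated class.

```csharp
using System;

// Closely related components share an abstract base with partial implementation.
abstract class Repository
{
    public void Connect() => Console.WriteLine("connected"); // implemented once for all subclasses
    public abstract void Save(string item);                  // varies per subclass
}

class FileRepository : Repository
{
    public override void Save(string item) => Console.WriteLine("saved to file: " + item);
}

// An unrelated class can still share a capability through an interface.
interface IPrintable
{
    string Describe();
}

class Invoice : IPrintable
{
    public string Describe() => "Invoice";
}
```

Updating Repository automatically updates FileRepository and every other subclass, whereas changing IPrintable would require a new interface.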

Use of factory pattern

- You can avoid creating duplicate objects (if your objects are immutable): the factory can
return the same object for the same set of parameters.
- You can create and return any subtype of the type that the factory is designed to create.
- You can replace implementations without changing client code (calling code).
- You can return the same object every time (in other words, a singleton, if the only way to get
the object is through the factory).
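A minimal sketch of these points (all names here are illustrative): the factory caches instances per key, decides which subtype to return, and callers only ever see the interface.

```csharp
using System;
using System.Collections.Generic;

interface IShape { string Name { get; } }
class Circle : IShape { public string Name => "circle"; }
class Square : IShape { public string Name => "square"; }

static class ShapeFactory
{
    // Cache lets the factory hand back the same (immutable) object per key.
    private static readonly Dictionary<string, IShape> cache = new Dictionary<string, IShape>();

    public static IShape Create(string kind)
    {
        if (!cache.TryGetValue(kind, out var shape))
        {
            // The factory chooses the subtype; callers depend only on IShape.
            shape = kind == "circle" ? (IShape)new Circle() : new Square();
            cache[kind] = shape;
        }
        return shape;
    }
}
```

Calling ShapeFactory.Create("circle") twice returns the same instance, and swapping Circle for another IShape implementation requires no change in the calling code.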

SQL Server 2005 vs SQL Server 2008

1. 2005: XML datatype is introduced. 2008: XML datatype is used.
2. 2005: Cannot encrypt the entire database. 2008: Can encrypt the entire database (introduced in 2008).
3. 2005: Datetime is used for both date and time. 2008: Separate Date and Time datatypes are introduced.
4. 2005: No table datatype is included. 2008: Table datatype is introduced.
5. 2005: SSIS is introduced. 2008: SSIS is available in this version.
6. 2005: CMS is not available. 2008: Central Management Server (CMS) is introduced.
7. 2005: PBM is not available. 2008: Policy-Based Management (PBM) is introduced.

The SQL Server Central Management System (SQLCMS) uses multiple features from SQL Server 2008 and
other products for three purposes:
1. Store SQL Server system information
2. Manage SQL Server from one location
3. Report system state, configuration and performance

Introduction to Policy-Based Management in SQL Server 2008
New to SQL Server 2008 is Policy-Based Management. This new
technology allows for defining polices to ensure your database
guidelines are met. In this article, SQL Server consultant Tim
Chapman gives an overview of this new technology.
Policy-Based Management in SQL Server 2008 allows the database administrator to define
policies that tie to database instances and objects. These policies allow the Database
Administrator (DBA) to specify rules for which objects and their properties are created, or
modified. An example of this would be to create a database-level policy that disallows the
AutoShrink property to be enabled for a database. Another example would be a policy that
ensures the name of all table triggers created on a database table begins with tr_.
As with any new SQL Server technology (or Microsoft technology in general), there is a new
object naming nomenclature associated with Policy-Based Management. Below is a listing of
some of the new base objects.

Policy
A Policy is a set of conditions specified on the facets of a target. In other words, a Policy is
basically a set of rules specified for properties of database or server objects.

Target
A Target is an object that is managed by Policy-Based Management. Includes objects such as
the database instance, a database, table, stored procedure, trigger, or index.

Facet
A Facet is a property of an object (target) that can be involved in Policy Based
Management. An example of a Facet is the name of a Trigger or the AutoShrink property of a
database.

Condition
A Condition is the criteria that can be specified for a Target's Facets. For example, you can set a
condition for a Facet that specifies that all stored procedure names in the 'Banking' schema
begin with 'bnk_'.

You can also assign a policy to a category. This allows you to manage a set of policies assigned
to the same category. A policy belongs to only one category.

Policy Evaluation Modes


A Policy can be evaluated in a number of different ways:

On demand - The policy is evaluated only when directly run by the administrator.
On change: prevent - DDL triggers are used to prevent policy violations.
On change: log only - Event notifications are used to check a policy when a change is
made.
On schedule - A SQL Agent job is used to periodically check policies for violations.

Advantages of Policy Based Management


Policy-Based Management gives you much more control over your database procedures as a
DBA. You as a DBA have the ability to enforce your paper policies at the database level. Paper
policies are great for defining database standards and guidelines; however, it takes time and
effort to enforce these. To enforce them strictly, you would have to go over your database with a
fine-toothed comb. With Policy-Based Management, you can define your policies and rest assured
that they will be enforced.

SQL SERVER 2008: Introduction to Table-Valued Parameters with Example
Table-Valued Parameters is a new feature introduced in SQL SERVER 2008. In earlier versions of SQL
SERVER it is not possible to pass a table variable in stored procedure as a parameter, but now in SQL
SERVER 2008 we can use Table-Valued Parameter to send multiple rows of data to a stored procedure or a
function without creating a temporary table or passing so many parameters.
Table-valued parameters are declared using user-defined table types. To use a table-valued parameter we
need to follow the steps shown below:
1. Create a table type and define the table structure.
2. Declare a stored procedure that has a parameter of table type.
3. Declare a table type variable and reference the table type.
4. Use an INSERT statement to populate the variable.
5. We can now pass the variable to the procedure.
For example,
Let's create a Department table and pass the table variable to insert data using a procedure. In our example we
will create the Department table, and afterward we will query it and see that all the content of the table-valued
parameter is inserted into it.
Department:
CREATE TABLE Department
(
DepartmentID INT PRIMARY KEY,
DepartmentName VARCHAR(30)
)
GO

1. Create a TABLE TYPE and define the table structure:


CREATE TYPE DeptType AS TABLE
(
DeptId INT, DeptName VARCHAR(30)
);
GO

2. Declare a STORED PROCEDURE that has a parameter of table type:


CREATE PROCEDURE InsertDepartment
@InsertDept_TVP DeptType READONLY
AS
INSERT INTO Department(DepartmentID,DepartmentName)
SELECT * FROM @InsertDept_TVP;
GO

Important points to remember :


- Table-valued parameters must be passed as READONLY parameters to SQL routines. You cannot perform
DML operations like UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.
- You cannot use a table-valued parameter as target of a SELECT INTO or INSERT EXEC statement. A
table-valued parameter can be in the FROM clause of SELECT INTO or in the INSERT EXEC string or
stored-procedure.

3. Declare a table type variable and reference the table type.


DECLARE @DepartmentTVP AS DeptType;

4. Use an INSERT statement to populate the variable.


INSERT INTO @DepartmentTVP(DeptId,DeptName)
VALUES (1,'Accounts'),
(2,'Purchase'),
(3,'Software'),

(4,'Stores'),
(5,'Marketing');

5. We can now pass the variable to the procedure and Execute.


EXEC InsertDepartment @DepartmentTVP;
GO

Let's see if the data is inserted in the Department table.

Conclusion:
Table-Valued Parameters is a new parameter type in SQL SERVER 2008 that provides an efficient way of
passing a table type variable, rather than using a temporary table or passing many parameters. It helps in using
complex business logic in a single routine, and it reduces round trips to the server, improving
performance.
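From client code, a table-valued parameter is typically passed as a DataTable through a structured ADO.NET parameter. A hedged sketch, reusing the DeptType and InsertDepartment names from the example above (the client class name is illustrative):

```csharp
using System;
using System.Data;

class TvpClient
{
    // Builds a DataTable whose shape matches the DeptType table type above.
    public static DataTable BuildDepartments()
    {
        var table = new DataTable();
        table.Columns.Add("DeptId", typeof(int));
        table.Columns.Add("DeptName", typeof(string));
        table.Rows.Add(1, "Accounts");
        table.Rows.Add(2, "Purchase");
        return table;
    }
}

// With an ADO.NET provider (e.g. System.Data.SqlClient), the table would be
// attached to a command roughly like this:
//
//   var cmd = new SqlCommand("InsertDepartment", connection);
//   cmd.CommandType = CommandType.StoredProcedure;
//   var p = cmd.Parameters.AddWithValue("@InsertDept_TVP", TvpClient.BuildDepartments());
//   p.SqlDbType = SqlDbType.Structured;   // marks the parameter as a TVP
//   p.TypeName  = "dbo.DeptType";         // the user-defined table type
//   cmd.ExecuteNonQuery();
```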

List vs Ilist

Return type of linq

Ienumerable vs Iqueryable

IEnumerable VS IList
IList
1. IList exists in the System.Collections namespace.
2. IList is used to access an element at a specific position/index in a list.
3. Like IEnumerable, IList is best for querying data from in-memory collections like List, Array, etc.
4. IList is useful when you want to add or remove items from the list.
5. IList can find the number of elements in the collection without iterating the collection.
6. IList does not support deferred execution; it represents an already-materialized collection.
7. IList doesn't support further filtering.

IEnumerable
1. IEnumerable exists in the System.Collections namespace.
2. IEnumerable can move forward only over a collection; it can't move backward or between the
items.
3. IEnumerable is best for querying data from in-memory collections like List, Array, etc.
4. IEnumerable doesn't support adding or removing items from the list.
5. Using IEnumerable we can find the number of elements in the collection only after iterating the collection.
6. IEnumerable supports deferred execution.
7. IEnumerable supports further filtering.
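A small sketch of the practical difference (names are illustrative): IList gives count, index access and mutation; an IEnumerable query from LINQ is deferred, so it observes later changes to the source.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ListVsEnumerableDemo
{
    static void Main()
    {
        IList<int> list = new List<int> { 1, 2, 3 };
        Console.WriteLine(list.Count);      // count known without iterating
        list.Add(4);                        // IList allows add/remove
        Console.WriteLine(list[0]);         // and index access

        IEnumerable<int> evens = list.Where(n => n % 2 == 0); // deferred: not run yet
        list.Add(6);                        // affects the later enumeration
        Console.WriteLine(evens.Count());   // evaluated now over 1,2,3,4,6 -> 3
    }
}
```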

Dependency Injection in .net

Introduction: What is Dependency Injection?


This stuff is tricky--I'm not going to lie. First of all, I would recommend reading up on the
basics. Until you start to really see the power behind defining interfaces and working with
abstractions, dependency injection will be unnecessary for you.
Dependency Injection strives to decouple classes from dependencies. For instance:
Collapse | Copy Code

public interface IComplaintHearer


{
void RegisterComplaint(string message);
}
public class Manager : IComplaintHearer
{
public Manager() { }
public void RegisterComplaint(string message)
{
//do something with message
}
}
public class Employee
{
//completely dependent upon this exact class
private Manager _itsManager;
public Employee() { }
public void Complain(string complaint)
{
_itsManager.RegisterComplaint(complaint);
}
}

What's the issue? What if we want the employee to report to someone other than its immediate
boss? Or, what if we want the employee to complain to a co-worker? We would have to completely
change the class. Right now this code completely violates the open closed principle (among others)
we discussed when reviewing the S.O.L.I.D. principles. So let's use the dependency inversion principle
and make this class dependent upon an abstraction:
Collapse | Copy Code

public interface IComplaintHearer


{
void RegisterComplaint(string message);
}
public class Manager : IComplaintHearer
{
public Manager() { }
public void RegisterComplaint(string message)

{
//do something with message
}
}
public class Employee:IComplaintHearer
{
//completely dependent upon this exact class
private IComplaintHearer _complaintHearer;
public Employee(IComplaintHearer hearer)
{
_complaintHearer=hearer;
}
public void RegisterComplaint(string message)
{
//do something with the message
}
public void Complain(string complaint)
{
_complaintHearer.RegisterComplaint(complaint);
}
}

And we would need to run this code like so:


Collapse | Copy Code

//we could pass a manager, or another employee if we wanted to


Employee myEmp = new Employee(new Manager());

Injecting the Class, Instead of Providing It


Imagine a scenario where we had tons of classes that implemented
the IComplaintHearer interface. We'd have to recompile the code every time we want to change
who the employee complains to. This is where dependency injection steps in and allows you to:
1. Specify a class's dependency at run-time
2. Dynamically use classes in another assembly
3. Make changes without recompilation
Let's take a look at an example:
Collapse | Copy Code

public interface IComplaintHearer


{
void RegisterComplaint(string message);
}
public class Manager : IComplaintHearer
{
public Manager() { }
public void RegisterComplaint(string message)

{
//do something with message
}
}
public class Employee:IComplaintHearer
{
//completely dependent upon this exact class
private IComplaintHearer _complaintHearer;
//use the IComplaintHearer subclass that the Dependency Injection Framework
//(StructureMap in this case) tells us to
// this depends upon the xml that defines what to use (below)
public Employee() : this(ObjectFactory.GetInstance<IComplaintHearer>()) {}
public Employee(IComplaintHearer hearer)
{
_complaintHearer=hearer;
}
public void RegisterComplaint(string message)
{
//do something with the message
}
public void Complain(string complaint)
{
_complaintHearer.RegisterComplaint(complaint);
}
}
Collapse | Copy Code

<StructureMap>
  <DefaultInstance PluggedType="Example.Manager, Example"
                   PluginType="Example.IComplaintHearer, Example" />
</StructureMap>

What just happened? By using the StructureMap Framework for .NET, we just specified
the IComplaintHearer that our employee class will default to--the manager. In the XML above,
we mapped the expected type/assembly to the default type/assembly. Further, we could set up
defaults for all of our classes that we could change later without ever having to change any code. In
some ways, I feel like this is moving a problem from one environment to another, but in other ways, I
think it is a great architectural tool.
The dependency injection pattern, also known as Inversion of Control, is one of the most popular design
paradigms today. It facilitates the design and implementation of loosely coupled, reusable, and testable
objects in your software designs by removing dependencies that often inhibit reuse. Dependency injection
can help you design your applications so that the architecture links the components rather than the
components linking themselves.
This article presents an overview of the dependency injection pattern, the advantages of using
dependency injection in your designs, the different types of dependency injection, and the pros and cons
of each of these types, with code examples where appropriate.
Object Dependency Explained
When an object needs another object to operate properly, we say that the former is dependent on the
latter. This behavior is transitive in nature. Consider three objects, namely, A, B, and C. If object A is
coupled to object B, and B is in turn coupled to C, then object A is effectively coupled to object C;
it is dependent on C. I've used the terms coupling and dependency interchangeably in this article.
Objects can be coupled in two ways: tight coupling and loose coupling. When an object is loosely coupled
with another object, you can change the coupling with ease; when the coupling is tight, the objects are not
independently reusable and hence are difficult to use effectively in unit test scenarios.
Here's an example of tight coupling. Consider two classes, C1 and C2, where C1 is tightly coupled with
C2 and requires it to operate. In this case C1 is dependent on C2, as shown below:

public class C2
{
//Some code
}
public class C1
{
C2 bObject = new C2();
//Some code
}

The tight coupling between the two classes shown above occurs because C1 (which is dependent on C2)
creates and contains an instance of the class C2. It's "tight" because you can eliminate or change the
dependency only by modifying the container class (C1). This is where dependency injection fits in.
What is Dependency Injection?
Dependency injection eliminates tight coupling between objects to make both the objects and applications
that use them more flexible, reusable, and easier to test. It facilitates the creation of loosely coupled
objects and their dependencies. The basic idea behind Dependency Injection is that you should isolate
the implementation of an object from the construction of objects on which it depends. Dependency
Injection is a form of the Inversion of Control Pattern where a factory object carries the responsibility for
object creation and linking. The factory object ensures loose coupling between the objects and promotes
seamless testability.
Advantages and Disadvantages of Dependency Injection
The primary advantages of dependency injection are:

Loose coupling
Centralized configuration
Easily testable

Code becomes more testable because it abstracts and isolates class dependencies.
However, the primary drawback of dependency injection is that wiring instances together can become a
nightmare if there are too many instances and many dependencies that need to be addressed.

Types of Dependency Injection


There are three common forms of dependency injection:

1. Constructor Injection
2. Setter Injection
3. Interface-based injection
Constructor injection uses parameters to inject dependencies. In setter injection, you use setter methods
to inject the object's dependencies. Finally, in interface-based injection, you design an interface to inject
dependencies. The following section shows how to implement each of these dependency injection forms
and discusses the pros and cons of each.
Implementing Constructor Injection
I'll begin this discussion by implementing the first type of dependency injection mentioned in the
preceding section: constructor injection. Consider a design with two layers; a BusinessFacade layer and
the BusinessLogic layer. The BusinessFacade layer of the application depends on the BusinessLogic
layer to operate properly. All the business logic classes implement an IBusinessLogic interface.
With constructor injection, you'd create an instance of the BusinessFacade class using its argument or
parameterized constructor and pass the required BusinessLogic type to inject the dependency. The
following code snippet illustrates the concept, showing the BusinessLogic and BusinessFacade classes.

interface IBusinessLogic
{
//Some code
}

class ProductBL : IBusinessLogic


{
//Some code
}

class CustomerBL : IBusinessLogic


{
//Some code
}

public class BusinessFacade


{
private IBusinessLogic businessLogic;
public BusinessFacade(IBusinessLogic businessLogic)
{
this.businessLogic = businessLogic;
}
}

You'd instantiate the BusinessLogic classes (ProductBL or CustomerBL) as shown below:

IBusinessLogic productBL = new ProductBL();

Then you can pass the appropriate type to the BusinessFacade class when you instantiate it:
BusinessFacade businessFacade = new BusinessFacade(productBL);

Note that you can pass an instance of either BusinessLogic class to the BusinessFacade class constructor.
The constructor does not accept a concrete object; instead, it accepts any class that implements the
IBusinessLogic interface.
Even though it is flexible and promotes loose coupling, the major drawback of constructor injection is that
once the class is instantiated, you can no longer change the object's dependency. Further, because you
can't inherit constructors, any derived class must call a base class constructor to apply the dependencies
properly. Fortunately, you can overcome this drawback using the setter injection technique.
Implementing Setter Injection
Setter injection uses properties to inject the dependencies, which lets you create and use resources as
late as possible. It's more flexible than constructor injection because you can use it to change the
dependency of one object on another without having to create a new instance of the class or making any
changes to its constructor. Further, the setters can have meaningful, self-descriptive names that simplify
understanding and using them. Here's an example that adds a property to the BusinessFacade class
which you can use to inject the dependency.
The following is now our BusinessFacade class with the said property.

public class BusinessFacade


{
private IBusinessLogic businessLogic;

public IBusinessLogic BusinessLogic


{
get
{
return businessLogic;
}

set
{
businessLogic = value;
}
}
}

The following code snippet illustrates how to implement setter injection using the BusinessFacade class
shown above.

IBusinessLogic productBL = new ProductBL();


BusinessFacade businessFacade = new BusinessFacade();
businessFacade.BusinessLogic = productBL;

The preceding code snippet uses the BusinessLogic property of the BusinessFacade class to set its
dependency on the BusinessLogic type. The primary advantage of this design is that you can change the
dependency between the BusinessFacade and the instance of BusinessLogic even after instantiating the
BusinessFacade class.
Even though setter injection is a good choice, its primary drawback is that an object with setters cannot
be immutableand it can be difficult to identify which dependencies are needed, and when. You should
normally choose constructor injection over setter injection unless you need to change the dependency
after instantiating an object instance, or cannot change constructors and recompile.
Implementing Interface Injection
You accomplish the last type of dependency injection technique, interface injection, by using a common
interface that other classes need to implement to inject dependencies. The following code shows an

example in which the classes use the IBusinessLogic interface as a base contract to inject an instance of
any of the business logic classes (ProductBL or CustomerBL) into the BusinessFacade class. Both the
business logic classes ProductBL and CustomerBL implement the IBusinessLogic interface:

interface IBusinessLogic
{
//Some code
}

class ProductBL : IBusinessLogic


{
//Some code
}

class CustomerBL : IBusinessLogic


{
//Some code
}

class BusinessFacade : IBusinessFacade


{
private IBusinessLogic businessLogic;
public void SetBLObject(IBusinessLogic businessLogic)
{
this.businessLogic = businessLogic;
}
}

In the code snippet above, the SetBLObject method of the BusinessFacade class accepts a parameter
of type IBusinessLogic. The following code shows how you'd call the SetBLObject() method to inject a
dependency for either type of BusinessLogic class:

IBusinessLogic businessLogic = new ProductBL();


BusinessFacade businessFacade = new BusinessFacade();
businessFacade.SetBLObject(businessLogic);

Or:

IBusinessLogic businessLogic = new CustomerBL();


BusinessFacade businessFacade = new BusinessFacade();
businessFacade.SetBLObject(businessLogic);

All three forms of dependency injection discussed in this article passed a reference to a BusinessLogic
type rather than an instance of the type by using interfaces. According to Jeremy Weiskotten, a senior
software engineer for Kronos:

"Coding to well-defined interfaces, particularly when using the dependency injection pattern, is the key to
achieving loose coupling. By coupling an object to an interface instead of a specific implementation, you
have the ability to use any implementation with minimal change and risk."
Dependency Injection can reduce the coupling between software components and it promises to become
the paradigm of choice for designing loosely coupled, maintainable and testable objects. It can be used to
abstract the dependencies of an object outside of it and make such objects loosely coupled with each
other.

Singleton pattern

Singleton
The following implementation of the Singleton design pattern follows the solution presented in Design
Patterns: Elements of Reusable Object-Oriented Software [Gamma95] but modifies it to take advantage of
language features available in C#, such as properties:

using System;

public class Singleton


{
private static Singleton instance;
private Singleton() {}
public static Singleton Instance
{
get
{
if (instance == null)
{
instance = new Singleton();
}
return instance;
}
}
}
This implementation has two main advantages:

Because the instance is created inside the Instance property method, the class can exercise
additional functionality (for example, instantiating a subclass), even though it may introduce
unwelcome dependencies.
The instantiation is not performed until an object asks for an instance; this approach is referred
to as lazy instantiation. Lazy instantiation avoids instantiating unnecessary singletons when the
application starts.
The main disadvantage of this implementation, however, is that it is not safe for multithreaded
environments. If separate threads of execution enter the Instance property method at the same time,
more than one instance of the Singleton object may be created. Each thread could execute the following
statement and decide that a new instance has to be created:
if (instance == null)
Various approaches solve this problem. One approach is to use an idiom referred to as Double-Check
Locking [Lea99]. However, C# in combination with the common language runtime provides a static
initialization approach, which circumvents these issues without requiring the developer to explicitly code
for thread safety.

Static Initialization
One of the reasons Design Patterns [Gamma95] avoided static initialization is because the C++
specification left some ambiguity around the initialization order of static variables. Fortunately, the .NET
Framework resolves this ambiguity through its handling of variable initialization:

public sealed class Singleton


{
private static readonly Singleton instance = new Singleton();

private Singleton(){}
public static Singleton Instance
{
get
{
return instance;
}
}
}
In this strategy, the instance is created the first time any member of the class is referenced. The common
language runtime takes care of the variable initialization. The class is marked sealed to prevent derivation,
which could add instances. For a discussion of the pros and cons of marking a class sealed, see [Sells03].
In addition, the variable is marked readonly, which means that it can be assigned only during static
initialization (which is shown here) or in a class constructor.
This implementation is similar to the preceding example, except that it relies on the common language
runtime to initialize the variable. It still addresses the two basic problems that the Singleton pattern is
trying to solve: global access and instantiation control. The public static property provides a global access
point to the instance. Also, because the constructor is private, the Singleton class cannot be instantiated
outside of the class itself; therefore, the variable refers to the only instance that can exist in the system.
Because the Singleton instance is referenced by a private static member variable, the instantiation does
not occur until the class is first referenced by a call to the Instance property. This solution therefore
implements a form of the lazy instantiation property, as in the Design Patterns form of Singleton.
The only potential downside of this approach is that you have less control over the mechanics of the
instantiation. In the Design Patterns form, you were able to use a nondefault constructor or perform other
tasks before the instantiation. Because the .NET Framework performs the initialization in this solution, you
do not have these options. In most cases, static initialization is the preferred approach for implementing
a Singleton in .NET.

Multithreaded Singleton
Static initialization is suitable for most situations. When your application must delay the instantiation, use
a non-default constructor or perform other tasks before the instantiation, and work in a multithreaded
environment, you need a different solution. Cases do exist, however, in which you cannot rely on the
common language runtime to ensure thread safety, as in the Static Initialization example. In such cases,
you must use specific language capabilities to ensure that only one instance of the object is created in the
presence of multiple threads. One of the more common solutions is to use the Double-Check
Locking [Lea99] idiom to keep separate threads from creating new instances of the singleton at the same
time.
Note: The common language runtime resolves issues related to using Double-Check Locking that are
common in other environments. For more information about these issues, see "The 'Double-Checked
Locking Is Broken' Declaration," on the University of Maryland, Department of Computer Science Web site,
at http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html.
The following implementation allows only a single thread to enter the critical area, which the lock block
identifies, when no instance of Singleton has yet been created:

using System;

public sealed class Singleton


{
private static volatile Singleton instance;
private static object syncRoot = new Object();
private Singleton() {}
public static Singleton Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
instance = new Singleton();
}
}
return instance;
}
}
}
This approach ensures that only one instance is created and only when the instance is needed. Also, the
variable is declared to be volatile to ensure that assignment to the instance variable completes before the
instance variable can be accessed. Lastly, this approach uses a syncRoot instance to lock on, rather than
locking on the type itself, to avoid deadlocks.
This double-check locking approach solves the thread concurrency problems while avoiding an exclusive
lock in every call to the Instance property method. It also allows you to delay instantiation until the object
is first accessed. In practice, an application rarely requires this type of implementation. In most cases, the
static initialization approach is sufficient.

Resulting Context
Implementing Singleton in C# results in the following benefits and liabilities:

Benefits
The static initialization approach is possible because the .NET Framework explicitly defines
how and when static variable initialization occurs.
The Double-Check Locking idiom described earlier in "Multithreaded Singleton" is
implemented correctly in the common language runtime.

Liabilities
If your multithreaded application requires explicit initialization, you have to take precautions to avoid
threading issues.

Simple Singleton Pattern in C#


By Shashank Bisen, 6 Jul 2011
Often, a system only needs to create one instance of a class, and that instance will be accessed
throughout the program. Examples would include objects needed for logging, communication,
database access, etc.
So, if a system only needs one instance of a class, and that instance needs to be accessible in many
different parts of the system, you can control both instantiation and access by making that class a
singleton.
A Singleton is the combination of two essential properties:
Ensure a class only has one instance.
Provide a global point of access to it.
You can find many articles on the Singleton pattern, but this article covers the basics of the
pattern, especially in a multithreaded environment.

As stated above, a singleton is a class that can be instantiated once, and only once.
To achieve this we need to keep the following things in our mind.
1. Create a public Class (name SingleTonSample).
public class SingleTonSample
{}

2. Define its constructor as private.



private SingleTonSample()
{}

3. Create a private static instance of the class (name singleTonObject).



private volatile static SingleTonSample singleTonObject;

4. Now write a static method (name InstanceCreation) which will be used to create an instance of this
class and return it to the calling method.

// Note: lockingObject must be declared as a static field of the class;
// C# does not allow field declarations inside a method body.
private static object lockingObject = new object();

public static SingleTonSample InstanceCreation()
{
if(singleTonObject == null)
{
lock (lockingObject)
{
if(singleTonObject == null)
{
singleTonObject = new SingleTonSample();
}
}
}
return singleTonObject;
}

Now we need to analyze this method in depth. We created a static field named
lockingObject; its role is to allow only one thread at a time to execute the code nested within the
lock block. Once a thread enters the lock, other threads must wait until the lock is
released, so even if multiple threads call this method simultaneously, only one can create the
object at a time. Further, a new instance is created only if the static instance of the class is null.
Hence only one thread can ever create an instance of this class: once an instance has been
created, the check for singleTonObject being null is always false, and every subsequent caller
receives the same instance.

5. Create a public method in this class, for example I am creating a method to display message
(name DisplayMessage), you can perform your actual task over here.

public void DisplayMessage()
{
Console.WriteLine("My First SingleTon Program");
}

6. Now we will create another class (name Program).



class Program
{}

7. Create an entry point in the above class with a method named Main.

static void Main(string[] args)
{
SingleTonSample singleton = SingleTonSample.InstanceCreation();
singleton.DisplayMessage();
Console.ReadLine();
}

Now we need to analyse this method in depth. We create the instance singleton of the
class SingleTonSample by calling the static method SingleTonSample.InstanceCreation(), which creates
the object on first use; calling singleton.DisplayMessage() then prints
"My First SingleTon Program".
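Putting the steps above together (with lockingObject moved out to a static field, since C# does not allow field declarations inside a method), the complete class sketch looks like this:

```csharp
using System;

public class SingleTonSample
{
    // volatile ensures the assignment completes before other threads read it.
    private volatile static SingleTonSample singleTonObject;

    // One lock object per class, shared by all threads.
    private static object lockingObject = new object();

    // Private constructor prevents direct instantiation.
    private SingleTonSample() {}

    public static SingleTonSample InstanceCreation()
    {
        if (singleTonObject == null)
        {
            lock (lockingObject)
            {
                // Second check: another thread may have created the
                // instance while we were waiting for the lock.
                if (singleTonObject == null)
                {
                    singleTonObject = new SingleTonSample();
                }
            }
        }
        return singleTonObject;
    }

    public void DisplayMessage()
    {
        Console.WriteLine("My First SingleTon Program");
    }
}

class Program
{
    static void Main(string[] args)
    {
        SingleTonSample singleton = SingleTonSample.InstanceCreation();
        singleton.DisplayMessage();
    }
}
```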

Soap tags

Skeleton SOAP Message


<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Header>
...
</soap:Header>
<soap:Body>
...
<soap:Fault>
...
</soap:Fault>
</soap:Body>
</soap:Envelope>
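As an illustration, a hypothetical request filling in the skeleton might look like this (the m namespace, GetPrice operation, and Item element are invented for the example; the Header and Fault elements are optional and omitted here):

```xml
<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
  <soap:Body>
    <m:GetPrice xmlns:m="http://www.example.org/prices">
      <m:Item>Apples</m:Item>
    </m:GetPrice>
  </soap:Body>
</soap:Envelope>
```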
The volatile keyword indicates that a field might be modified by multiple threads that are executing at
the same time. Fields that are declared volatile are not subject to compiler optimizations that assume
access by a single thread. This ensures that the most up-to-date value is present in the field at all times.
The volatile modifier is usually used for a field that is accessed by multiple threads without using
the lock statement to serialize access.
The volatile keyword can be applied to fields of these types:
Reference types.
Pointer types (in an unsafe context). Note that although the pointer itself can be volatile, the
object that it points to cannot. In other words, you cannot declare a "pointer to volatile."
Types such as sbyte, byte, short, ushort, int, uint, char, float, and bool.
An enum type with one of the following base types: byte, sbyte, short, ushort, int, or uint.
Generic type parameters known to be reference types.
IntPtr and UIntPtr.
The volatile keyword can only be applied to fields of a class or struct. Local variables cannot be
declared volatile.
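A minimal sketch of volatile in use (the class and member names are illustrative): one thread polls a bool flag while another thread sets it, and volatile guarantees the polling thread observes the updated value rather than a stale cached read.

```csharp
using System;
using System.Threading;

public class Worker
{
    // volatile: reads and writes are not cached per-thread, so the
    // polling loop in DoWork is guaranteed to see the update below.
    private volatile bool shouldStop;

    public void RequestStop()
    {
        shouldStop = true;
    }

    public void DoWork()
    {
        while (!shouldStop)
        {
            Thread.Sleep(10); // simulate work between flag checks
        }
        Console.WriteLine("Worker stopped.");
    }
}

public static class VolatileDemo
{
    public static void Main()
    {
        var worker = new Worker();
        var t = new Thread(worker.DoWork);
        t.Start();
        Thread.Sleep(50);      // let the worker run briefly
        worker.RequestStop();  // write is visible to the worker thread
        t.Join();
    }
}
```

Without volatile, the JIT compiler could legally hoist the shouldStop read out of the loop, and DoWork might never terminate.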
