tag:blogger.com,1999:blog-6615972017966725562023-11-15T06:35:16.831-08:00Interview Questions on .Netkalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.comBlogger59125tag:blogger.com,1999:blog-661597201796672556.post-38432996760752782172010-04-18T22:15:00.001-07:002010-04-18T22:15:34.485-07:00Difference between Rollup and CubeYou can use the CUBE and ROLLUP operators to generate summary information in a query. A CUBE operator generates a result set that shows the aggregates for all combinations of values in the selected columns. A ROLLUP operator generates a result set showing the aggregates for a hierarchy of values in the selected columns. Both the CUBE and ROLLUP operators return data in relational form.<br /><br />The CUBE operator generates a multidimensional cube result set. A multidimensional cube is an expansion of fact data, or the data that records individual events.<br />This expansion is based on columns that the user wants to analyze. These columns are called dimensions. A cube is a result set that contains a cross tabulation of all the possible combinations of the dimensions.<br /><br />The CUBE operator is specified in the GROUP BY clause of a SELECT statement. The select list contains the dimension columns and aggregate function expressions. The GROUP BY specifies the dimension columns by using the WITH CUBE keywords.<br />The result set contains all possible combinations of the values in the dimension columns, together with the aggregate values from the underlying rows that match that combination of dimension values.<br /><br />The ROLLUP operator is useful in generating reports that contain aggregate values. The ROLLUP operator generates a result set that is similar to the result set generated by the CUBE operator.<br /><br />However, the difference between the CUBE and ROLLUP operator is that the CUBE generates a result set that shows the aggregates for all combinations of values in the selected columns. 
By contrast, the ROLLUP operator returns a more restricted result set.<br />The ROLLUP operator generates a result set that shows the aggregates for a hierarchy of values in the selected columns, rolling up from the most detailed level to a grand total. The ROLLUP operator is also useful for computing cumulative aggregates, such as running sums, in a table.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com2tag:blogger.com,1999:blog-661597201796672556.post-19956386988481900512010-03-25T22:50:00.001-07:002010-03-25T22:50:56.654-07:00Difference Between ASP.NET Server Controls,HTML Server Controls and HTML Intrinsic ControlsASP.NET Server Controls<br />Advantages:<br /><br />1. ASP.NET Server Controls can detect the target browser's capabilities and render themselves accordingly, so you do not have to write separate code for pages that might be viewed by both HTML 3.2 and HTML 4.0 browsers.<br />2. They include a newer set of controls, such as the Calendar control, that can be used in the same manner as any HTML control (no ActiveX control is needed for this, which would otherwise raise browser-compatibility issues).<br />3. Processing is done on the server side. Built-in functionality, such as the validation controls, checks input values, so you do not need to pick a client scripting language that might be incompatible with some browsers.<br />4. ASP.NET Server Controls have an object model different from traditional HTML and provide a set of properties and methods that can change the appearance and behavior of the controls.<br />5. ASP.NET Server Controls offer a higher level of abstraction. The output of an ASP.NET server control can be the result of many HTML tags combining to produce that control and its events.<br /><br /> <br />Disadvantages:<br /><br />1. The rendered markup is generated by the web server controls themselves, so you do not have much direct control over it.<br />2. Migration of an ASP application to ASP.NET is difficult.
It is practically equivalent to rewriting the application.<br />HTML Server Controls<br />Advantages:<br /><br />1. The HTML Server Controls follow the HTML-centric object model, a model similar to plain HTML.<br />2. The controls can be made to interact with client-side scripting. Processing can be done at the client as well as the server, depending on your code. <br />3. Migration of an ASP project, though not trivial, can be done by giving each intrinsic HTML control a runat="server" attribute to make it an HTML server control.<br /> <br />Disadvantages:<br />1. The HTML Server Controls have no mechanism for identifying the capabilities of the client browser accessing the current page.<br />2. An HTML server control offers the same level of abstraction as its corresponding HTML tag, that is, no additional abstraction.<br />3. You would need to code for browser compatibility yourself.<br />HTML Intrinsic Controls<br />Advantages:<br />1. Model similar to HTML.<br />2. The controls can be made to interact with client-side scripting.<br /> <br />Disadvantages:<br />1. You would need to code for browser compatibility.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-8518499900867827812010-03-02T01:49:00.000-08:002010-03-02T01:50:36.581-08:00What is Join and its types in SQL ServerJoin: <br /><br />Joins in SQL Server allow the retrieval of data records from two or more tables that have some relation between them. Logical operators can also be used to narrow down the number of records and get the desired output from SQL join queries.<br /><br />Types:<br /><br />1. Inner Join<br />2. Outer Join <br />o Left Outer Join<br />o Right Outer Join<br />o Full Outer Join<br />3. Cross Join<br /><br />I) Inner Join: Inner Join is the default join type in SQL Server. It uses logical operators such as =, <, > to match the records in two tables.
Inner Join includes equi joins and natural joins.<br />Natural join query example: <br />SELECT C.*, P.PRODUCTID, P.PRODUCTNAME FROM CATEGORIES C <br />INNER JOIN<br />PRODUCTS P ON P.CATEGORYID = C.CATEGORYID <br />This query returns all the columns of the Categories table and only ProductID and ProductName from the Products table, so the join column appears only once. <br />Equi Join: An equi join returns all the columns from both tables and filters the records satisfying the matching condition specified in the join's ON clause. <br /> <br />SQL Inner Equi Join Example: <br /> <br />USE NORTHWIND <br />SELECT * FROM CATEGORIES C INNER JOIN<br />PRODUCTS P ON P.CATEGORYID = C.CATEGORYID <br /> <br /><br />The result will display the following columns: <br />CategoryID, CategoryName, Description, Picture, ProductID, ProductName, SupplierID, CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued <br /> <br />The equi join query above displays CategoryID twice in each row because both tables have a CategoryID column. You can convert the result into a natural join by eliminating the identical and unnecessary columns. <br />Difference between natural join and equi join:<br />A natural join is basically a form of equi join in which one of the duplicate join columns is projected out, i.e. it avoids repetition of the join column.<br /><br />II) Outer Join: Outer Join has 3 sub-categories: left, right and full. Outer Join uses these category names as keywords that can be specified in the FROM clause. <br />o Left Outer Join: Left Outer Join returns all the rows from the table specified first in the Left Outer Join clause. If a row in the left table has no matching record in the right table, the result contains null values for the right table's columns in that row. <br /><br /><br />o Right Outer Join: Right Outer Join is exactly the reverse of Left Outer Join.
It returns all the rows from the right table and null values for the rows having no match in the left table. <br /><br /><br /><br />o Full Outer Join: Full Outer Join returns all the rows from both the left and right tables. If a match is missing from the left table, it returns null values for the left table's columns, and if a match is missing from the right table, it returns null values for the right table's columns. <br /><br /><br />III) Cross Join: Cross Join works as a Cartesian product of the rows of both tables. It combines each row of the left table with every row of the right table.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-26694828201868811062010-03-01T21:36:00.000-08:002010-03-01T21:37:56.561-08:00what is diffgram in asp.netDiffGram: <br /><br />The DiffGram is one of the two XML formats that you can use to render DataSet object contents to XML. A good use is writing database data to an XML file to be sent to a Web Service. <br />ADO.NET introduced the DataSet class to support disconnected, distributed data-access scenarios. With a DataSet, the data retrieved from the database is cached in memory, along with the constraints and relationships among the tables. When an ADO.NET DataSet is serialized as XML (for example, when returning a DataSet from an ASP.NET XML Web service method), the XML format used for DataSet serialization is known as DiffGram. Like UpdateGrams, DiffGrams also contain the tags that specify the original and new state of the data.
SQLXML and the .NET managed classes can be used to execute DiffGrams to perform database updates; however, there are several things that are supported by UpdateGrams and not by DiffGrams (the ability to pass parameters being one example).<br /><br /><br />DiffGrams and DataSet<br /><br />There are occasions when you want to compare the original data with the current data to get the changes made to the original data. A common example is saving data in Web Forms applications. When working with Web-based data-driven applications, you read data using a DataSet, make some changes to the data, and send the data back to the database to save the final result. Sending the entire DataSet can be costly, especially when there are thousands of records in it. In this scenario, the best practice is to find the updated rows of a DataSet and send only those rows back to the database instead of the entire DataSet. This is where DiffGrams are useful.<br /><br />Note: Do you remember the GetChanges method of DataSet? This method returns the rows that have been modified in the current version in the form of a DataSet. This is how a DataSet knows the modified rows.<br /><br />A DiffGram is an XML format that is used to identify the current and original versions of data elements. Since the DataSet uses an XML format to store and transfer data, it also uses DiffGrams to keep track of the original data and the current data. When a DataSet is written as a DiffGram, the DiffGram stores not only the original and current data but also row versions, error information, and row order.<br /><br />DiffGram XML Format<br /><br />The XML format for a DiffGram has three parts: data instance, diffgram before, and diffgram errors. The <DataInstance> tag represents the data instance part of a DiffGram, which represents the current data. The diffgram before part is represented by the <diffgr:before> tag, which represents the original version of the data.
The <diffgr:errors> tag represents the diffgram errors part, which stores the errors and related information. The DiffGram itself is represented by the <diffgr:diffgram> tag. The XML listed in Listing 1 represents the skeleton of a DiffGram.<br /><br /><?xml version="1.0"?><br /><diffgr:diffgram <br />xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"<br />xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"<br />xmlns:xsd="http://www.w3.org/2001/XMLSchema"><br /><DataInstance><br /></DataInstance><br /><diffgr:before><br /></diffgr:before><br /><diffgr:errors><br /></diffgr:errors><br /></diffgr:diffgram> <br /><br />Listing 1. A DiffGram format<br /><br />The <diffgr:before> section stores only the changed rows, and the <diffgr:errors> section stores only the rows that had errors. Each row in a DiffGram is identified with an id, and the three sections are linked through this id. For example, if the id of a row is "Id1" and the row has been modified and had errors, the <DataInstance> block contains its current version, the <diffgr:before> block contains its original version, and the <diffgr:errors> block contains its error, all under the same id.<br /><br />Besides the three sections discussed above, a DiffGram uses other elements. These are described in Table 1.<br /><br />Table 1 describes the DiffGram elements that are defined in the DiffGram namespace urn:schemas-microsoft-com:xml-diffgram-v1.<br /><br />Element Description <br />id<br /> DiffGram id. Normally in the format [TableName][RowIdentifier]. For example: <Customers diffgr:id="Customers1">. <br />parentId<br /> Parent row of the current row. Normally in the format [TableName][RowIdentifier]. For example: <Orders diffgr:parentId="Customers1">. <br />hasChanges Identifies a row in the <DataInstance> block as modified. The hasChanges attribute can have one of three values: inserted, modified, or descent. The value inserted means an added row, modified means a modified row, and descent means that children of a parent row have been modified. <br />hasErrors<br /> Identifies a row in the <DataInstance> block with a RowError.
The error element is placed in the <diffgr:errors> block.<br /> <br />Error<br /> Contains the text of the RowError for a particular element in the <diffgr:errors> block. <br /><br /><br />There are two more elements that DataSet-generated DiffGrams can have: RowOrder and Hidden. RowOrder is the row order of the original data and identifies the index of a row in a particular DataTable. Hidden identifies a column as having its ColumnMapping property set to MappingType.Hidden. <br /><br />Now let's see an example of DiffGrams. The code below reads data from the Employees table and writes it to an XML document in DiffGram format. <br /><br />Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=c:\Northwind.mdb"<br />Dim sql As String = "SELECT EmployeeID, FirstName, LastName, Title FROM Employees"<br />Dim conn As OleDbConnection = Nothing<br />Dim ds As DataSet = Nothing<br />' Create and open connection<br />conn = New OleDbConnection(connectionString)<br />If conn.State <> ConnectionState.Open Then<br />conn.Open()<br />End If<br />' Create a data adapter<br />Dim adapter As New OleDbDataAdapter(sql, conn)<br />' Create and fill a DataSet<br />ds = New DataSet("TempDtSet")<br />adapter.Fill(ds, "DtSet")<br />' Write XML in DiffGram format<br />ds.WriteXml("DiffGramFile.xml", XmlWriteMode.DiffGram)<br />' Close connection<br />If conn.State = ConnectionState.Open Then<br />conn.Close()<br />End Ifkalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-6393456885896329502010-03-01T21:01:00.000-08:002010-03-01T21:02:29.653-08:00Bubble Events in ASP.netEvent bubbling means that events raised by child controls are handled by a parent control.
Example: Consider a DataGrid as a parent control that contains several child controls, for instance a column of link buttons. Each link button has a Click event. Instead of writing an event routine for each link button, you write one routine for the parent, which handles the Click events of all the child link buttons.<br /><br />protected override bool OnBubbleEvent(object source, EventArgs e) {<br /> if (e is CommandEventArgs) {<br /> // Adds information about an Item to the <br /> // CommandEvent.<br /> TemplatedListCommandEventArgs args =<br /> new TemplatedListCommandEventArgs(this, source, (CommandEventArgs)e);<br /> RaiseBubbleEvent(this, args);<br /> return true;<br /> }<br /> return false;<br /> }<br /><br />Refer: http://msdn.microsoft.com/en-us/library/aa719644(VS.71).aspxkalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-70809823579022981112010-02-25T23:09:00.000-08:002010-02-25T23:10:41.299-08:00Difference between outer join and inner joinInner join:<br /><br /><br />We use this when we compare two columns from two different tables. Based on equality or non-equality, we retrieve the matching rows.<br />e.g.<br /><br />SELECT emp.empid, orders.orderid <br />FROM emp INNER JOIN orders <br />ON emp.empid = orders.empid<br /><br />This example gives all the rows from the emp and orders tables where the empids in both tables are the same. (The table is named orders here because ORDER is a reserved word in SQL.)<br /><br /><br /><br /><br /><br />Outer Join:<br /><br />There are three types of outer joins, namely:<br />Left Outer Join---retrieves all the rows from the first table irrespective of a column match.<br />Right Outer Join---retrieves all the rows from the second table irrespective of a column match.<br />Full Outer Join---retrieves all the rows from both tables irrespective of a column match.<br /><br />E.g.<br /><br />Suppose we have two tables named stud1 and stud2 with the following data (the outputs below assume a query that selects stud1.id and stud2.Name, joined on id):<br /><br />Stud1: id Name<br />1 xxx<br />2 yyy<br />3 zzz<br />4 www<br /><br />Stud2: id Name<br />1 aaa<br />2 bbb<br />4 ccc<br />6 ddd<br /><br />When we use Left Outer Join we get the output as:<br />1 aaa<br />2 bbb<br />3 <Null><br />4 ccc<br /><br /><br />When we use Right Outer Join we get the output as:<br />1 aaa<br />2 bbb<br />4 ccc<br /><Null> ddd<br /><br /><br />When we use Full Outer Join we get the output as:<br />1 aaa<br />2 bbb<br />3 <Null><br />4 ccc<br /><Null> dddkalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-21835608004569502292010-02-25T01:01:00.000-08:002010-02-25T01:03:25.588-08:00JavaScript Object Notation (JSON)JavaScript Object Notation (JSON)<br />To allow for a more efficient transfer of data and classes between web applications and web services, ASP.NET AJAX supports the JavaScript Object Notation (JSON) format. It is lighter weight than XML (Extensible Markup Language)/SOAP (Simple Object Access Protocol), and delivers a more consistent experience because of the implementation differences of XML/SOAP across the various browsers.<br />JSON is a text-based data-interchange format that represents data as a set of ordered name/value pairs.
As an example, take a look at the following class definition, which stores a person's name and age (the fields are declared public so that they can be serialized):<br /><br />public class MyDetails<br />{<br /> public string FirstName;<br /> public string LastName;<br /> public int Age;<br /><br />}<br /><br />A two-element array of this object is represented as follows:<br />{ "MyDetails" : [ { "FirstName" : "Landon", "LastName" : "Donovan", "Age" : 22 },<br />{ "FirstName" : "John", "LastName" : "Grieb", "Age" : 46 }<br />]<br />}kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-39081919195326513992010-02-22T02:37:00.000-08:002010-02-22T02:39:55.173-08:00What is Script Manager in Ajax+ASP.netThe ScriptManager control is the parent control that needs to be present on every page where you use ASP.NET AJAX controls. The ScriptManager control manages client script for AJAX-enabled ASP.NET pages. This control enables client script to use the type system extensions and supports features such as partial-page rendering and web service calls.<br /><br />Can we use multiple ScriptManager controls on the same web page?<br /><br />No. It is not possible to use multiple ScriptManager controls in a web page.
In practice such a requirement never arises, because a single ScriptManager control is enough to handle the objects of a web page; for content pages and user controls that need to register additional scripts or services, ASP.NET AJAX provides the ScriptManagerProxy control instead.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-19307264216948951052010-02-22T02:02:00.001-08:002010-02-22T02:02:49.600-08:00Difference between Session object and Application object in asp.NetSession variables are used to store user-specific information, whereas application variables cannot store user-specific information.<br /><br />The default lifetime of a session variable is 20 minutes, and we can change it based on the requirement.<br /><br />Application variables are accessible until the application ends.<br />Session state allows information to be stored on one page and accessed on another, and it supports any type of object, including your own custom data types.<br /><br />Application state allows you to store global objects that can be accessed by any client.<br /><br />What session and application state have in common is that both support the same types of objects, retain information on the server, and use the same dictionary-based syntax.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com2tag:blogger.com,1999:blog-661597201796672556.post-43726230836898950002010-02-09T03:39:00.000-08:002010-02-09T03:40:53.918-08:00What is Windows Activation Service (WAS)Windows Activation Service (WAS), introduced with Windows Vista, is the new process activation mechanism that ships with IIS 7.0. WAS builds on the existing IIS 6.0 process and hosting models, but is much more powerful because it provides support for other protocols besides HTTP, such as TCP and Named Pipes.
By hosting the Windows Communication Foundation (WCF) services in WAS, one can take advantage of WAS features such as process recycling, rapid failover protection, and the common configuration system, all of which were previously available only to HTTP-based applications.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-62300991883270417132010-02-09T01:19:00.000-08:002010-02-09T01:20:32.920-08:00Attributes in .NetAttributes are a mechanism for adding metadata, such as compiler instructions and other data about your data, methods, and classes, to the program itself. Attributes are inserted into the metadata and are visible through ILDasm and other metadata-reading tools.<br /><br />Reflection is the process by which a program can read its own metadata. A program is said to reflect on itself, extracting metadata from its assembly and using that metadata either to inform the user or to modify its own behavior.<br /><br />An attribute is an object that represents data you want to associate with an element in your program. The element to which you attach an attribute is referred to as the target of that attribute.<br /><br />Using Attributes<br />Attributes can be placed on most any declaration (though a specific attribute might restrict the types of declarations on which it is valid). Syntactically, an attribute is specified by placing the name of the attribute, enclosed in square brackets, in front of the declaration of the entity to which it applies. For example, a class with the attribute DllImport is declared like this:<br />[DllImport] public class MyDllimportClass { ... }<br /><br />Many attributes have parameters, which can be either positional (unnamed) or named. <br />Any positional parameters must be specified in a certain order and cannot be omitted; named parameters are optional and can be specified in any order. Positional parameters are specified first. 
For example, these three attributes are equivalent:<br />[DllImport("user32.dll", SetLastError=false, ExactSpelling=false)]<br />[DllImport("user32.dll", ExactSpelling=false, SetLastError=false)]<br />[DllImport("user32.dll")]<br /><br />The first parameter, the DLL name, is positional and always comes first; the others are named. In this case, both named parameters default to false, so they can be omitted (refer to the individual attribute's documentation for information on default parameter values).<br /><br />More than one attribute can be placed on a declaration, either separately or within the same set of brackets:<br /><br />bool AMethod([In][Out]ref double x);<br />bool AMethod([Out][In]ref double x);<br />bool AMethod([In,Out]ref double x);<br /><br />Creating Custom Attributes<br /><br />You can create your own custom attributes by defining an attribute class, a class that derives directly or indirectly from System.Attribute (which makes identifying attribute definitions in metadata fast and easy). Suppose you want to tag classes and structs with the name of the programmer who wrote the class or struct. You might define a custom Author attribute class:<br /><br />using System;<br />[AttributeUsage(AttributeTargets.Class|AttributeTargets.Struct)]<br />public class Author : Attribute<br />{<br />public Author(string name) { this.name = name; version = 1.0; }<br />public double version;<br />string name;<br />}<br /><br /><br />The class name is the attribute's name, Author. It is derived from System.Attribute, so it is a custom attribute class. The constructor's parameters are the custom attribute's positional parameters (in this case, name), and any public read-write fields or properties are named parameters (in this case, version is the only named parameter). 
Note the use of the AttributeUsage attribute to make the Author attribute valid only on class and struct declarations.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-32312783635270038852010-02-04T22:33:00.000-08:002010-02-04T22:46:47.672-08:00Difference between Web Services of ASP.net and Web Services of WCFThe development of a web service with ASP.NET relies on defining data and on the XmlSerializer to transform data to or from a service.<br /><br />Key issues with using XmlSerializer to serialize .NET types to XML: <br /><br />Only public fields or properties of .NET types can be translated into XML. <br />Only classes that implement the IEnumerable interface can be serialized as collections. <br />Classes that implement the IDictionary interface, such as Hashtable, cannot be serialized. <br />WCF uses the DataContractAttribute and DataMemberAttribute to translate .NET Framework types into XML.<br /><br />[DataContract] <br />public class Item <br />{ <br /> [DataMember] <br /> public string ItemID; <br /> [DataMember] <br /> public decimal ItemQuantity; <br /> [DataMember] <br /> public decimal ItemPrice;<br /><br />}<br /><br />The DataContractAttribute can be applied to a class or a structure. The DataMemberAttribute can be applied to a field or a property, and these fields or properties can be either public or private.<br /><br /><br />Important differences between DataContractSerializer and XmlSerializer:<br /><br />A practical benefit of the design of the DataContractSerializer is better performance over XML serialization. <br />XmlSerializer does not indicate which fields or properties of the type are serialized into XML, whereas DataContractSerializer explicitly shows, through the [DataMember] attribute, which fields or properties are serialized. <br />The DataContractSerializer can translate a Hashtable into XML.
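The opt-in behavior described above (only members explicitly marked with [DataMember] are serialized, whether public or private) can be sketched outside .NET as well. Below is a minimal Python illustration; the names _data_members and to_xml are invented for this sketch and are not part of .NET or any Python library:

```python
# Sketch of the DataContractSerializer opt-in model (illustrative only:
# these helper names are not part of .NET or any Python library).

class Item:
    # _data_members plays the role of the [DataMember] attribute:
    # only the listed members are serialized.
    _data_members = ("item_id", "item_quantity", "item_price")

    def __init__(self, item_id, item_quantity, item_price):
        self.item_id = item_id
        self.item_quantity = item_quantity
        self.item_price = item_price
        self.cache = "not serialized"  # not opted in, so never emitted

def to_xml(obj):
    """Serialize only the opted-in members, in declaration order."""
    tag = type(obj).__name__
    body = "".join(
        "<%s>%s</%s>" % (name, getattr(obj, name), name)
        for name in obj._data_members
    )
    return "<%s>%s</%s>" % (tag, body, tag)

print(to_xml(Item("A100", 3, 9.5)))
# <Item><item_id>A100</item_id><item_quantity>3</item_quantity><item_price>9.5</item_price></Item>
```

Members not listed in _data_members, such as cache above, never appear in the output, mirroring how DataContractSerializer ignores members that lack [DataMember].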
<br />Developing a Service<br /><br />To develop a service using ASP.NET, we must add the WebService attribute to the class and the WebMethodAttribute to any of the class methods.<br /><br />Example <br /><br />[WebService] <br />public class Service : System.Web.Services.WebService <br />{ <br /> [WebMethod] <br /> public string Test(string strMsg) <br /> { <br /> return strMsg; <br /> } <br />} <br /><br />To develop a service in WCF, we write the following code:<br /><br />[ServiceContract] <br />public interface ITest <br />{ <br /> [OperationContract] <br /> string ShowMessage(string strMsg); <br />} <br />public class Service : ITest <br />{ <br /> public string ShowMessage(string strMsg) <br /> { <br /> return strMsg; <br /> } <br />}<br /><br />The ServiceContractAttribute specifies that an interface defines a WCF service contract; the OperationContractAttribute indicates which of the methods of the interface define the operations of the service contract.<br /><br />A class that implements the service contract is referred to as a service type in WCF.<br /><br />Hosting the Service<br /><br />ASP.NET web services are compiled into a class library assembly, and a service file with the extension .asmx contains the code for the service. The service file is copied into the root of the ASP.NET application and the assembly is copied to the bin directory. The application is accessible using the URL of the service file.<br /><br />A WCF service can be hosted within IIS or the Windows Activation Service (WAS):<br /><br />Compile the service type into a class library. <br />Copy the service file with the extension .svc into a virtual directory, and the assembly into the bin subdirectory of the virtual directory. <br />Copy the web.config file into the virtual directory. <br />Client Development<br /><br />Clients for ASP.NET Web services are generated using the command-line tool WSDL.EXE.
<br /><br />WCF uses the ServiceModel Metadata Utility tool (svcutil.exe) to generate the client for the service.<br /><br />Message Representation<br /><br />The header of the SOAP message can be customized in an ASP.NET Web service.<br /><br />WCF provides the attributes MessageContractAttribute, MessageHeaderAttribute and MessageBodyMemberAttribute to describe the structure of the SOAP message.<br /><br />Service Description<br /><br />Issuing an HTTP GET request with the ?WSDL query string causes ASP.NET to generate WSDL describing the service. It returns the WSDL as the response to the request.<br /><br />The generated WSDL can be customized by deriving a class from ServiceDescriptionFormatExtension.<br /><br />Issuing a request with the ?WSDL query string against the .svc file likewise generates the WSDL. The WSDL generated by WCF can be customized using the ServiceMetadataBehavior class.<br /><br />Exception Handling<br /><br />In ASP.NET Web services, unhandled exceptions are returned to the client as SOAP faults.<br /><br />In WCF services, unhandled exceptions are not returned to clients as SOAP faults by default. A configuration setting is provided to have unhandled exceptions returned to clients for the purpose of debugging.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-89717361949937635742009-12-23T04:38:00.000-08:002009-12-23T04:39:35.539-08:00Job Interview And Resume Tips for DevelopersExcellent article:<br /><br />http://eggheadcafe.com/tutorials/aspnet/c3b9b000-f682-4673-8c83-8feb077fb0be/job-interview-and-resume.aspxkalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com2tag:blogger.com,1999:blog-661597201796672556.post-19556628057644366132009-11-05T02:51:00.000-08:002009-11-05T02:52:03.567-08:00XML Interview QuestionsWhat are the different kinds of parsers used in XML?
<br />There are 2 parsers:<br />1) DOM (Document Object Model): This parser interprets the complete XML document and loads it into an in-memory tree. Microsoft's parsers have traditionally concentrated on DOM.<br />2) SAX (Simple API for XML): This parser interprets the XML document based on event occurrences only; it does not interpret the complete document at a time. Sun Microsystems' parsers have traditionally concentrated on SAX.<br /><br />What is XPath?<br />XPath is used to navigate through elements and attributes in an XML document.<br /><br />What is the difference between XML and HTML?<br />1) XML is not a replacement for HTML.<br />2) XML and HTML were designed with different goals.<br />3) XML was designed to describe data and to focus on what data is.<br />4) HTML was designed to display data and to focus on how data looks.<br />5) HTML is about displaying information, XML is about describing information.<br />XML<br />User-definable tags<br />Content driven<br />End tags required for well-formed documents<br />Quotes required around attribute values<br />Slash required in empty tags <br />HTML<br />Defined set of tags designed for web display<br />Format driven<br />End tags not required<br />Quotes not required<br />Slash not required<br /><br />What is XML and Binary Serialization?<br />XML serialization serializes an object into an XML file. This file is human readable and can be shared with other applications.<br /><br />Binary serialization is more efficient, but the serialized file is in binary format. It may not make any sense for a human being to open this file and understand what it contains. It is a stream of bytes.<br /><br />What is XSL?<br />XSLT - a language for transforming XML documents.<br />XSLT is used to transform an XML document into another XML document, or another type of document that is recognized by a browser, like HTML and XHTML.
Normally XSLT does this by transforming each XML element into an (X)HTML element.<br />XPath - a language for navigating in XML documents.<br />XSL-FO - a language for formatting XML documents.<br /><br />What is DTD and Schema in XML?<br /><br />A DTD is:<br /><br />The XML Document Type Declaration contains or points to markup declarations that provide a grammar for a class of documents. This grammar is known as a document type definition, or DTD.<br /><br />The DTD can point to an external subset containing markup declarations, or can contain the markup declarations directly in an internal subset, or can even do both.<br /><br />A Schema is:<br /><br />XML Schemas express shared vocabularies and allow machines to carry out rules made by people. They provide a means for defining the structure, content and semantics of XML documents.<br /><br />In summary, schemas are a richer and more powerful way of describing information than what is possible with DTDs.<br /><br />What is XML?<br />XML is the Extensible Markup Language. It improves the functionality of the Web by letting you identify your information in a more accurate, flexible, and adaptable way. It is extensible because it is not a fixed format like HTML; it is defined in SGML, the international standard meta-language for text document markup (ISO 8879).<br /><br />What is a markup language?<br />A markup language is a set of words and symbols for describing the identity of pieces of a document (for example ‘this is a paragraph’, ‘this is a heading’, ‘this is a list’, ‘this is the caption of this figure’, etc). Programs can use this with a style sheet to create output for screen, print, audio, video, Braille, etc.<br /><br />Some markup languages (eg those used in word processors) only describe appearances (’this is italics’, ‘this is bold’), but this method can only be used for display, and is not normally re-usable for anything else.<br /><br />Where should I use XML? 
<br />Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML.<br />Despite early attempts, browsers never allowed other SGML, only HTML (although there were plugins), and they allowed it (even encouraged it) to be corrupted or broken, which held development back for over a decade by making it impossible to program for it reliably. XML fixes that by making it compulsory to stick to the rules, and by making the rules much simpler than SGML.<br />But XML is not just for Web pages: in fact it’s very rarely used for Web pages on its own because browsers still don’t provide reliable support for formatting and transforming it. Common uses for XML include:<br />1. Information identification: because you can define your own markup, you can define meaningful names for all your information items.<br />2. Information storage: because XML is portable and non-proprietary, it can be used to store textual information across any platform. Because it is backed by an international standard, it will remain accessible and processable as a data format.<br />3. Information structure: XML can be used to store and identify any kind of (hierarchical) information structure, especially for long, deep, or complex document sets or data sources, making it ideal for an information-management back-end to serving the Web. This is its most common Web application, with a transformation system to serve it as HTML until such time as browsers are able to handle XML consistently.<br />4. Publishing: the original goal of XML, as defined at the start of this section. Combining the three previous topics (identity, storage, structure) means it is possible to get all the benefits of robust document management and control (with XML) and publish to the Web (as HTML) as well as to paper (as PDF) and to other formats (eg Braille, audio) from a single source document by using the appropriate stylesheets.<br />5. Messaging and data transfer: XML is also very heavily used for enclosing or encapsulating information in order to pass it between different computing systems which would otherwise be unable to communicate. By providing a lingua franca for data identity and structure, it provides a common envelope for inter-process communication (messaging).<br />6. Web services: building on all of these, as well as its use in browsers, machine-processable data can be exchanged between consenting systems, where before it was only comprehensible by humans (HTML). Weather services, e-commerce sites, blog newsfeeds, AJAX sites, and thousands of other data-exchange services use XML for data management and transmission, and the web browser for display and interaction.<br />Why is XML such an important development?<br />It removes two constraints which were holding back Web developments:<br />1. dependence on a single, inflexible document type (HTML) which was being much abused for tasks it was never designed for;<br />2. the complexity of full SGML, whose syntax allows many powerful but hard-to-program options.<br />XML allows the flexible development of user-defined document types. It provides a robust, non-proprietary, persistent, and verifiable file format for the storage and transmission of text and data both on and off the Web; and it removes the more complex options of SGML, making it easier to program for.<br />What is SGML? 
<br />SGML is the Standard Generalized Markup Language (ISO 8879:1986), the international standard for defining descriptions of the structure of different types of electronic document. There is an SGML FAQ from David Megginson at http://math.albany.edu:8800/hm/sgml/cts-faq.html; and Robin Cover’s SGML Web pages are at http://www.oasis-open.org/cover/general.html. For a little light relief, try Joe English’s ‘Not the SGML FAQ’ at http://www.flightlab.com/~joe/sgml/faq-not.txt.<br /><br />SGML is very large, powerful, and complex. It has been in heavy industrial and commercial use for nearly two decades, and there is a significant body of expertise and software to go with it.<br />XML is a lightweight cut-down version of SGML which keeps enough of its functionality to make it useful but removes all the optional features which made SGML too complex to program for in a Web environment.<br />Aren’t XML, SGML, and HTML all the same thing?<br />Not quite; SGML is the mother tongue, and has been used for describing thousands of different document types in many fields of human activity, from transcriptions of ancient Irish manuscripts to the technical documentation for stealth bombers, and from patients’ clinical records to musical notation. SGML is very large and complex, however, and probably overkill for most common office desktop applications.<br />XML is an abbreviated version of SGML, to make it easier to use over the Web, easier for you to define your own document types, and easier for programmers to write programs to handle them. It omits all the complex and less-used options of SGML in return for the benefits of being easier to write applications for, easier to understand, and more suited to delivery and interoperability over the Web. 
But it is still SGML, and XML files may still be processed in the same way as any other SGML file (see the question on XML software).<br />HTML is just one of many SGML or XML applications, and the one most frequently used on the Web.<br />Technical readers may find it more useful to think of XML as being SGML-- rather than HTML++.<br />Give a few examples of types of applications that can benefit from using XML.<br />There are literally thousands of applications that can benefit from XML technologies. The point of this question is not to have the candidate rattle off a laundry list of projects that they have worked on, but, rather, to allow the candidate to explain the rationale for choosing XML by citing a few real-world examples. For instance, one appropriate answer is that XML allows content management systems to store documents independently of their format, which thereby reduces data redundancy. Another answer relates to B2B exchanges or supply chain management systems. In these instances, XML provides a mechanism for multiple companies to exchange data according to an agreed-upon set of rules. 
A third common response involves wireless applications that require WML to render data on handheld devices.<br />What is DOM and how does it relate to XML?<br />The Document Object Model (DOM) is an interface specification maintained by the W3C DOM Workgroup that defines an application-independent mechanism to access, parse, or update XML data. In simple terms, it is a hierarchical model that allows developers to manipulate XML documents easily. Any developer who has worked extensively with XML should be able to discuss the concept and use of DOM objects freely. Additionally, it is not unreasonable to expect advanced candidates to thoroughly understand its internal workings and be able to explain how DOM differs from an event-based interface like SAX.<br />What is SOAP and how does it relate to XML?<br />The Simple Object Access Protocol (SOAP) uses XML to define a protocol for the exchange of information in distributed computing environments. SOAP consists of three components: an envelope, a set of encoding rules, and a convention for representing remote procedure calls. Unless experience with SOAP is a direct requirement for the open position, knowing the specifics of the protocol, or how it can be used in conjunction with HTTP, is not as important as identifying it as a natural application of XML.<br />Why not just carry on extending HTML?<br />HTML was already overburdened with dozens of interesting but incompatible inventions from different manufacturers, because it provides only one way of describing your information. 
<br />XML allows groups of people or organizations to create their own customized markup applications for exchanging information in their domain (music, chemistry, electronics, hill-walking, finance, surfing, petroleum geology, linguistics, cooking, knitting, stellar cartography, history, engineering, rabbit-keeping, mathematics, genealogy, etc).<br />HTML is now well beyond the limit of its usefulness as a way of describing information, and while it will continue to play an important role for the content it currently represents, many new applications require a more robust and flexible infrastructure.<br />Why should I use XML?<br />Here are a few reasons for using XML (in no particular order). Not all of these will apply to your own requirements, and you may have additional reasons not mentioned here (if so, please let the editor of the FAQ know!).<br />* XML can be used to describe and identify information accurately and unambiguously, in a way that computers can be programmed to ‘understand’ (well, at least manipulate as if they could understand).<br />* XML allows documents which are all the same type to be created consistently and without structural errors, because it provides a standardized way of describing, controlling, or allowing/disallowing particular types of document structure. [Note that this has absolutely nothing whatever to do with formatting, appearance, or the actual text content of your documents, only the structure of them.]<br />* XML provides a robust and durable format for information storage and transmission. Robust because it is based on a proven standard, and can thus be tested and verified; durable because it uses plain-text file formats which will outlast proprietary binary ones.<br />* XML provides a common syntax for messaging systems for the exchange of information between applications. 
Previously, each messaging<br />system had its own format and all were different, which made inter-system<br />messaging unnecessarily messy, complex, and expensive. If everyone<br />uses the same syntax it makes writing these systems much faster<br />and more reliable.<br />* XML is free. Not just free of charge (free as in beer) but free<br />of legal encumbrances (free as in speech). It doesn’t belong to<br />anyone, so it can’t be hijacked or pirated. And you don’t have to<br />pay a fee to use it (you can of course choose to use commercial<br />software to deal with it, for lots of good reasons, but you don’t<br />pay for XML itself).<br />* XML information can be manipulated programmatically (under machine<br />control), so XML documents can be pieced together from disparate<br />sources, or taken apart and re-used in different ways. They can<br />be converted into almost any other format with no loss of information.<br />* XML lets you separate form from content. Your XML file contains<br />your document information (text, data) and identifies its structure:<br />your formatting and other processing needs are identified separately<br />in a style sheet or processing system. The two are combined at output<br />time to apply the required formatting to the text or data identified<br />by its structure (location, position, rank, order, or whatever). <br />How would you build a search engine for large volumes<br />of XML data? <br />The way candidates answer this question may provide insight into<br />their view of XML data. For those who view XML primarily as a way<br />to denote structure for text files, a common answer is to build<br />a full-text search and handle the data similarly to the way Internet<br />portals handle HTML pages. Others consider XML as a standard way<br />of transferring structured data between disparate systems. 
These candidates often describe some scheme of importing XML into a relational or object database and relying on the database’s engine for searching. Lastly, candidates who have worked with vendors specializing in this area often say that the best way to handle this situation is to use a third-party software package optimized for XML data. <br /><br />Does XML replace HTML? <br />No. XML itself does not replace HTML. Instead, it provides an alternative which allows you to define your own set of markup elements. HTML is expected to remain in common use for some time to come, and the current version of HTML is in XML syntax. XML is designed to make the writing of DTDs much simpler than with full SGML.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com2tag:blogger.com,1999:blog-661597201796672556.post-43113541898752173832009-11-04T04:16:00.000-08:002009-11-04T04:17:52.070-08:00Web Services Interview Questions1) What is a Web service?
<br />Many people and companies have debated the exact definition of Web services. At a minimum, however, a Web service is any piece of software that makes itself available over the Internet and uses a standardized XML messaging system.
<br />XML is used to encode all communications to a Web service. For example, a client invokes a Web service by sending an XML message, then waits for a corresponding XML response. Because all communication is in XML, Web services are not tied to any one operating system or programming language--Java can talk with Perl; Windows applications can talk with Unix applications.
<br />Beyond this basic definition, a Web service may also have two additional (and desirable) properties:
<br />First, a Web service can have a public interface, defined in a common XML grammar. The interface describes all the methods available to clients and specifies the signature for each method. Currently, interface definition is accomplished via the Web Service Description Language (WSDL). (See FAQ number 7.)
<br />Second, if you create a Web service, there should be some relatively simple mechanism for you to publish this fact. Likewise, there should be some simple mechanism for interested parties to locate the service and locate its public interface. The most prominent directory of Web services is currently available via UDDI, or Universal Description, Discovery, and Integration. (See FAQ number 8.)
<br />Web services currently run a wide gamut from news syndication and stock-market data to weather reports and package-tracking systems. For a quick look at the range of Web services currently available, check out the XMethods directory of Web services.
<br />2) What is new about Web services?
<br /> People have been using Remote Procedure Calls (RPC) for some time now, and they long ago discovered how to send such calls over HTTP.
<br />So, what is really new about Web services? The answer is XML.
<br />XML lies at the core of Web services, and provides a common language for describing Remote Procedure Calls, Web services, and Web service directories.
<br />Prior to XML, one could share data among different applications, but XML makes this so much easier to do. In the same vein, one can share services and code without Web services, but XML makes it easier to do these as well.
<br />By standardizing on XML, different applications can more easily talk to one another, and this makes software a whole lot more interesting.
<br />3) I keep reading about Web services, but I have never actually seen one. Can you show me a real Web service in action?
<br /> If you want a more intuitive feel for Web services, try out the IBM Web Services Browser, available on the IBM Alphaworks site. The browser provides a series of Web services demonstrations. Behind the scenes, it ties together SOAP, WSDL, and UDDI to provide a simple plug-and-play interface for finding and invoking Web services. For example, you can find a stock-quote service, a traffic-report service, and a weather service. Each service is independent, and you can stack services like building blocks. You can, therefore, create a single page that displays multiple services--where the end result looks like a stripped-down version of my.yahoo or my.excite.
<br />4) What is the Web service protocol stack?
<br />
<br /> The Web service protocol stack is an evolving set of protocols used to define, discover, and implement Web services. The core protocol stack consists of four layers:
<br />Service Transport: This layer is responsible for transporting messages between applications. Currently, this includes HTTP, SMTP, FTP, and newer protocols, such as Blocks Extensible Exchange Protocol (BEEP).
<br />XML Messaging: This layer is responsible for encoding messages in a common XML format so that messages can be understood at either end. Currently, this includes XML-RPC and SOAP.
<br />Service Description: This layer is responsible for describing the public interface to a specific Web service. Currently, service description is handled via the WSDL.
<br />Service Discovery: This layer is responsible for centralizing services into a common registry, and providing easy publish/find functionality. Currently, service discovery is handled via the UDDI.
<br />Beyond the essentials of XML-RPC, SOAP, WSDL, and UDDI, the Web service protocol stack includes a whole zoo of newer, evolving protocols. These include WSFL (Web Services Flow Language), SOAP-DSIG (SOAP Security Extensions: Digital Signature), and USML (UDDI Search Markup Language). For an overview of these protocols, check out Pavel Kulchenko's article, Web Services Acronyms, Demystified, on XML.com.
<br />Fortunately, you do not need to understand the full protocol stack to get started with Web services. Assuming you already know the basics of HTTP, it is best to start at the XML Messaging layer and work your way up.
<br />5) What is XML-RPC?
<br /> XML-RPC is a protocol that uses XML messages to perform Remote Procedure Calls. Requests are encoded in XML and sent via HTTP POST; XML responses are embedded in the body of the HTTP response.
<br />More succinctly, XML-RPC = HTTP + XML + Remote Procedure Calls.
<br />Because XML-RPC is platform independent, diverse applications can communicate with one another. For example, a Java client can speak XML-RPC to a Perl server.
<br />To get a quick sense of XML-RPC, here is a sample XML-RPC request to a weather service (with the HTTP Headers omitted):
<br /><?xml version="1.0" encoding="ISO-8859-1"?>
<br /><methodCall>
<br /><methodName>weather.getWeather</methodName>
<br /><params>
<br /><param><value>10016</value></param>
<br /></params>
<br /></methodCall>
<br />The request consists of a single <methodCall> element, which specifies the method name (weather.getWeather) and any method parameters (the zip code).
<br />
<br />Here is a sample XML-RPC response from the weather service:
<br />
<br /><?xml version="1.0" encoding="ISO-8859-1"?>
<br /><methodResponse>
<br /><params>
<br /><param>
<br /><value><int>65</int></value>
<br /></param>
<br /></params>
<br /></methodResponse>
<br />The response consists of a single <methodResponse> element, which specifies the return value (the current temperature). In this case, the return value is specified as an integer.
<br />In many ways, XML-RPC is much simpler than SOAP, and therefore represents the easiest way to get started with Web services.
<br />The official XML-RPC specification is available at XML-RPC.com. Dozens of XML-RPC implementations are available in Perl, Python, Java, and Ruby. See the XML-RPC home page for a complete list of implementations.
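The request/response exchange above can be reproduced end to end with a few lines of code. The sketch below uses Python's standard xmlrpc modules purely for illustration (XML-RPC is language-neutral); the in-process server and its fixed 65-degree reply are hypothetical stand-ins for the real weather service.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Hypothetical stand-in for the weather service described above:
# always reports 65 degrees, mirroring the sample response.
def get_weather(zipcode):
    return 65

# Bind to an ephemeral port so the sketch runs anywhere.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_weather, "weather.getWeather")
port = server.server_address[1]

# Serve in the background so the client call can run in-process.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The proxy serializes this call into the same <methodCall> XML shown
# above and sends it via HTTP POST to the server's /RPC2 endpoint.
client = ServerProxy(f"http://127.0.0.1:{port}/RPC2")
temperature = client.weather.getWeather("10016")
print(temperature)  # prints 65

server.shutdown()
```

Note that the proxy object hides the XML entirely: the call reads like a local method call, which is exactly the point of RPC.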
<br />6) What is SOAP?
<br /> SOAP is an XML-based protocol for exchanging information between computers. Although SOAP can be used in a variety of messaging systems and can be delivered via a variety of transport protocols, the main focus of SOAP is Remote Procedure Calls (RPC) transported via HTTP. Like XML-RPC, SOAP is platform independent, and therefore enables diverse applications to communicate with one another.
<br />
<br />To get a quick sense of SOAP, here is a sample SOAP request to a weather service (with the HTTP Headers omitted):
<br />
<br /><?xml version='1.0' encoding='UTF-8'?>
<br /><SOAP-ENV:Envelope
<br />xmlns:SOAP-ENV="http://www.w3.org/2001/09/soap-envelope"
<br />xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
<br />xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<br /><SOAP-ENV:Body>
<br /><ns1:getWeather
<br />xmlns:ns1="urn:examples:weatherservice"
<br />SOAP-ENV:encodingStyle="http://www.w3.org/2001/09/soap-encoding">
<br /><zipcode xsi:type="xsd:string">10016</zipcode>
<br /></ns1:getWeather>
<br /></SOAP-ENV:Body>
<br /></SOAP-ENV:Envelope>
<br />As you can see, the request is slightly more complicated than XML-RPC and makes use of both XML namespaces and XML Schemas. Much like XML-RPC, however, the body of the request specifies both a method name (getWeather), and a list of parameters (zipcode).
<br />
<br />Here is a sample SOAP response from the weather service:
<br />
<br /><?xml version='1.0' encoding='UTF-8'?>
<br /><SOAP-ENV:Envelope
<br />xmlns:SOAP-ENV="http://www.w3.org/2001/09/soap-envelope"
<br />xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
<br />xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<br /><SOAP-ENV:Body>
<br /><ns1:getWeatherResponse
<br />xmlns:ns1="urn:examples:weatherservice"
<br />SOAP-ENV:encodingStyle="http://www.w3.org/2001/09/soap-encoding">
<br /><return xsi:type="xsd:int">65</return>
<br /></ns1:getWeatherResponse>
<br /></SOAP-ENV:Body>
<br /></SOAP-ENV:Envelope>
<br />
<br />The response indicates a single integer return value (the current temperature).
<br />The World Wide Web Consortium (W3C) is in the process of creating a SOAP standard. The latest working draft is designated as SOAP 1.2, and the specification is now broken into two parts. Part 1 describes the SOAP messaging framework and envelope specification. Part 2 describes the SOAP encoding rules, the SOAP-RPC convention, and HTTP binding details.
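Because the envelope is ordinary XML, any XML API can take it apart. As a rough sketch (using Python's standard xml.etree.ElementTree for illustration, not a SOAP toolkit), here is the sample response above being parsed down to its return value:

```python
import xml.etree.ElementTree as ET

# The sample SOAP response above, as bytes (the encoding declaration
# means ElementTree wants bytes rather than a str).
response = b"""<?xml version='1.0' encoding='UTF-8'?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://www.w3.org/2001/09/soap-envelope"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <ns1:getWeatherResponse
        xmlns:ns1="urn:examples:weatherservice"
        SOAP-ENV:encodingStyle="http://www.w3.org/2001/09/soap-encoding">
      <return xsi:type="xsd:int">65</return>
    </ns1:getWeatherResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

# Map short prefixes to the namespace URIs used in the envelope.
ns = {
    "env": "http://www.w3.org/2001/09/soap-envelope",
    "ws": "urn:examples:weatherservice",
}

# Walk Envelope -> Body -> getWeatherResponse -> return.
envelope = ET.fromstring(response)
body = envelope.find("env:Body", ns)
result = body.find("ws:getWeatherResponse", ns)
temperature = int(result.find("return").text)  # <return> is unqualified
print(temperature)  # prints 65
```

Real toolkits (for example .NET's or Java's SOAP stacks) hide this walk behind generated proxy classes, but the nesting of Envelope, Body, and method-response elements is exactly what they are traversing.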
<br />7) What is WSDL?
<br />
<br /> The Web Services Description Language (WSDL) currently represents the service description layer within the Web service protocol stack.
<br />In a nutshell, WSDL is an XML grammar for specifying a public interface for a Web service. This public interface can include the following:
<br />
<br />Information on all publicly available functions.
<br />Data type information for all XML messages.
<br />Binding information about the specific transport protocol to be used.
<br />Address information for locating the specified service.
<br />
<br />WSDL is not necessarily tied to a specific XML messaging system, but it does include built-in extensions for describing SOAP services.
<br />
<br />Below is a sample WSDL file. This file describes the public interface for the weather service used in the SOAP example above. Obviously, there are many details to understanding the example. For now, just consider two points.
<br />First, the <message> elements specify the individual XML messages that are transferred between computers. In this case, we have a getWeatherRequest and a getWeatherResponse. Second, the <service> element specifies that the service is available via SOAP and is available at a specific URL.
<br />
<br /><?xml version="1.0" encoding="UTF-8"?>
<br /><definitions name="WeatherService"
<br />targetNamespace="http://www.ecerami.com/wsdl/WeatherService.wsdl"
<br />xmlns="http://schemas.xmlsoap.org/wsdl/"
<br />xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
<br />xmlns:tns="http://www.ecerami.com/wsdl/WeatherService.wsdl"
<br />xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<br /><message name="getWeatherRequest">
<br /><part name="zipcode" type="xsd:string"/>
<br /></message>
<br /><message name="getWeatherResponse">
<br /><part name="temperature" type="xsd:int"/>
<br /></message>
<br />
<br /><portType name="Weather_PortType">
<br /><operation name="getWeather">
<br /><input message="tns:getWeatherRequest"/>
<br /><output message="tns:getWeatherResponse"/>
<br /></operation>
<br /></portType>
<br />
<br /><binding name="Weather_Binding" type="tns:Weather_PortType">
<br /><soap:binding style="rpc"
<br />transport="http://schemas.xmlsoap.org/soap/http"/>
<br /><operation name="getWeather">
<br /><soap:operation soapAction=""/>
<br /><input>
<br /><soap:body
<br />encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
<br />namespace="urn:examples:weatherservice"
<br />use="encoded"/>
<br /></input>
<br /><output>
<br /><soap:body
<br />encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
<br />namespace="urn:examples:weatherservice"
<br />use="encoded"/>
<br /></output>
<br /></operation>
<br /></binding>
<br />
<br /><service name="Weather_Service">
<br /><documentation>WSDL File for Weather Service</documentation>
<br /><port binding="tns:Weather_Binding" name="Weather_Port">
<br /><soap:address
<br />location="http://localhost:8080/soap/servlet/rpcrouter"/>
<br /></port>
<br /></service>
<br /></definitions>
<br />Using WSDL, a client can locate a Web service, and invoke any of the publicly available functions. With WSDL-aware tools, this process can be entirely automated, enabling applications to easily integrate new services with little or no manual code. For example, check out the GLUE platform from the Mind Electric.
<br />WSDL has been submitted to the W3C, but it currently has no official status within the W3C. See this W3C page for the latest draft.
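As an illustration of what WSDL-aware tools are doing under the hood (a sketch with a generic XML parser, not a real WSDL toolkit), the operations and the SOAP endpoint can be read straight out of a trimmed copy of the sample file above:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the sample WSDL above (binding details omitted).
wsdl = b"""<?xml version="1.0" encoding="UTF-8"?>
<definitions name="WeatherService"
    targetNamespace="http://www.ecerami.com/wsdl/WeatherService.wsdl"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://www.ecerami.com/wsdl/WeatherService.wsdl"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <message name="getWeatherRequest">
    <part name="zipcode" type="xsd:string"/>
  </message>
  <message name="getWeatherResponse">
    <part name="temperature" type="xsd:int"/>
  </message>
  <portType name="Weather_PortType">
    <operation name="getWeather">
      <input message="tns:getWeatherRequest"/>
      <output message="tns:getWeatherResponse"/>
    </operation>
  </portType>
  <service name="Weather_Service">
    <port binding="tns:Weather_Binding" name="Weather_Port">
      <soap:address location="http://localhost:8080/soap/servlet/rpcrouter"/>
    </port>
  </service>
</definitions>"""

ns = {
    "wsdl": "http://schemas.xmlsoap.org/wsdl/",
    "soap": "http://schemas.xmlsoap.org/wsdl/soap/",
}

root = ET.fromstring(wsdl)
# Each <operation> under a portType is one callable function of the service.
operations = [op.get("name")
              for op in root.findall(".//wsdl:portType/wsdl:operation", ns)]
# The <soap:address> element carries the endpoint URL to invoke.
endpoint = root.find(".//soap:address", ns).get("location")
print(operations)  # prints ['getWeather']
print(endpoint)    # prints http://localhost:8080/soap/servlet/rpcrouter
```

A code generator goes one step further: from the same elements it emits a proxy class with one method per operation, pre-wired to the endpoint.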
<br />8) What is UDDI?
<br /> UDDI (Universal Description, Discovery, and Integration) currently represents the discovery layer within the Web services protocol stack.
<br />UDDI was originally created by Microsoft, IBM, and Ariba, and represents a technical specification for publishing and finding businesses and Web services.
<br />At its core, UDDI consists of two parts.
<br />First, UDDI is a technical specification for building a distributed directory of businesses and Web services. Data is stored within a specific XML format, and the UDDI specification includes API details for searching existing data and publishing new data.
<br />Second, the UDDI Business Registry is a fully operational implementation of the UDDI specification. Launched in May 2001 by Microsoft and IBM, the UDDI registry now enables anyone to search existing UDDI data. It also enables any company to register themselves and their services.
<br />The data captured within UDDI is divided into three main categories:
<br />White Pages: This includes general information about a specific company. For example, business name, business description, and address.
<br />Yellow Pages: This includes general classification data for either the company or the service offered. For example, this data may include industry, product, or geographic codes based on standard taxonomies.
<br />Green Pages: This includes technical information about a Web service. Generally, this includes a pointer to an external specification, and an address for invoking the Web service.
<br />You can view the Microsoft UDDI site, or the IBM UDDI site. The complete UDDI specification is available at uddi.org.
<br />Beta versions of UDDI Version 2 are available at:
<br />Hewlett Packard
<br />IBM
<br />Microsoft
<br />SAP
<br />9) How do I get started with Web Services?
<br /> The easiest way to get started with Web services is to learn XML-RPC. Check out the XML-RPC specification or read my book, Web Services Essentials. O'Reilly has also recently released a book on Programming Web Services with XML-RPC by Simon St.Laurent, Joe Johnston, and Edd Dumbill.
<br />Once you have learned the basics of XML-RPC, move onto SOAP, WSDL, and UDDI. These topics are also covered in Web Services Essentials. For a comprehensive treatment of SOAP, check out O'Reilly's Programming Web Services with SOAP, by Doug Tidwell, James Snell, and Pavel Kulchenko.
<br />10) Does the W3C support any Web service standards?
<br /> The World Wide Web Consortium (W3C) is actively pursuing standardization of Web service protocols. In September 2000, the W3C established an XML Protocol Activity. The goal of the group is to establish a formal standard for SOAP. A draft version of SOAP 1.2 is currently under review, and progressing through the official W3C recommendation process.
<br />On January 25, 2002, the W3C also announced the formation of a Web Service Activity. This new activity will include the current SOAP work as well as two new groups. The first new group is the Web Services Description Working Group, which will take up work on WSDL. The second new group is the Web Services Architecture Working Group, which will attempt to create a cohesive framework for Web service protocols.
<br />
<br />kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-38223506383259533472009-10-28T02:56:00.001-07:002009-10-28T02:56:44.746-07:00Codd RulesRule 1 : The information Rule.<br />"All information in a relational data base is represented explicitly at the logical level and in exactly one way - by values in tables."<br /><br />Everything within the database exists in tables and is accessed via table access routines.<br /><br />Rule 2 : Guaranteed access Rule.<br />"Each and every datum (atomic value) in a relational data base is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name."<br /><br />To access any data item you specify the column and the table in which it exists; there is no reading of characters 10 to 20 of a 255-byte string.<br /><br />Rule 3 : Systematic treatment of null values.<br />"Null values (distinct from the empty character string or a string of blank characters and distinct from zero or any other number) are supported in fully relational DBMS for representing missing information and inapplicable information in a systematic way, independent of data type."<br /><br />If data does not exist or does not apply, then a value of NULL is applied; this is understood by the RDBMS as meaning non-applicable data.<br /><br />Rule 4 : Dynamic on-line catalog based on the relational model.<br />"The data base description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to the regular data."<br /><br />The data dictionary is held within the RDBMS, thus there is no need for off-line volumes to tell you the structure of the database.<br /><br />Rule 5 : Comprehensive data sub-language Rule.<br />"A relational system may support several languages and various modes of terminal use (for example, the 
fill-in-the-blanks mode). However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all the following items: <br /><br />Data Definition <br />View Definition <br />Data Manipulation (Interactive and by program) <br />Integrity Constraints <br />Authorization <br /><br />Every RDBMS should provide a language to allow the user to query the contents of the RDBMS and also manipulate the contents of the RDBMS.<br /><br />Rule 6 : View updating Rule.<br />"All views that are theoretically updatable are also updatable by the system."<br /><br />If a view is theoretically updatable, the user must be able to insert, update and delete through it just as if it were a base table.<br /><br />Rule 7 : High-level insert, update and delete.<br />"The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update and deletion of data."<br /><br />Insert, update and delete must operate on whole sets of rows, so that a table or view can be treated as a single operand, not just for retrieval.<br /><br />Rule 8 : Physical data independence.<br />"Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods."<br /><br />The user should not be aware of where or upon which media data-files are stored.<br /><br />Rule 9 : Logical data independence.<br />"Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit un-impairment are made to the base tables."<br /><br />User programs and the user should not be aware of any changes to the structure of the tables (such as the addition of extra columns).<br /><br />Rule 10 : Integrity independence.<br />"Integrity constraints specific to a particular relational data base must be definable in the relational data sub-language and storable in the
catalog, not in the application programs."<br /><br />If a column only accepts certain values, then it is the RDBMS which enforces these constraints and not the user program. This means that an invalid value can never be entered into the column, whereas if the constraints were enforced by programs there would always be a chance that a buggy program might allow incorrect values into the system.<br /><br />Rule 11 : Distribution independence.<br />"A relational DBMS has distribution independence."<br /><br />The RDBMS may spread across more than one system and across several networks; however, to the end-user the tables should appear no different to those that are local.<br /><br />Rule 12 : Non-subversion Rule.<br />"If a relational system has a low-level (single-record-at-a-time) language, that low level cannot be used to subvert or bypass the integrity Rules and constraints expressed in the higher level relational language (multiple-records-at-a-time)."<br /><br />The RDBMS should prevent users from accessing the data without going through its own data-access functions.<br />In Rule 5 Codd stated that an RDBMS required a query language; however, he did not explicitly state that SQL should be that language, just that there should be one, and many of the initial products had their own tools: Oracle had UFI (User Friendly Interface) and Ingres had QUEL. IBM's System R prototype used a language called SEQUEL (Structured English QUEry Language), which supplied the core of what became SQL and is why SQL is often pronounced "sequel".<br />Even when the vendors eventually all started offering SQL, the flavours were radically different and contained wildly varying syntax. 
This situation was somewhat resolved in the late 1980s when ANSI brought out its first definition of the SQL syntax.<br />This has since been revised, and all vendors now offer a standard core SQL; however, ANSI SQL is somewhat limited, so all RDBMS providers offer extensions to SQL which may differ from vendor to vendor.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-31284971352602689992009-10-20T23:54:00.000-07:002009-10-20T23:55:12.018-07:00ADO.Net Interview Questions1. What is ADO.Net? <br /> <br />ActiveX Data Object (ADO).NET is the primary relational data access model for Microsoft .NET-based applications. ADO.NET provides consistent data access from database management systems (DBMSs) such as SQL Server and Oracle. ADO.NET is designed to meet the requirements of the web application model: disconnected data architecture, integration with XML, common data representation, combining data from multiple data sources, and optimized interaction with the database. <br /> <br />2. Explain the ADO.NET Architecture? <br /> <br />The ADO.NET architecture includes three data providers for implementing connectivity with databases: the SQL Server .NET Data Provider, the OLEDB .NET Data Provider, and the ODBC .NET Data Provider. You can access data through a data provider in two ways: using a DataReader or using a DataAdapter.<br /><br />The design goals of ADO.NET were to:<br /><br />Leverage current ADO knowledge <br />Support the N-Tier programming model <br />Provide support for XML <br /> <br />In distributed applications, the concept of working with disconnected data has become very common. A disconnected model means that once you have retrieved the data that you need, the connection to the data source is dropped and you work with the data locally. The reason why this model has become so popular is that it frees up precious database server resources, which leads to highly scalable applications. 
The ADO.NET solution for disconnected data is the DataSet object. <br /><br />Data access in ADO.NET relies on two components:<br /><br />DataSet <br />Data Provider <br />DataSet<br /> <br /><br />The ADO.NET DataSet is explicitly designed for data access independent of any data source. As a result, it can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet contains a collection of one or more DataTable objects made up of rows and columns of data, as well as primary key, foreign key, constraint, and relation information about the data in those DataTable objects.<br />The DataSet is a disconnected, in-memory representation of data. It can be considered a local copy of the relevant portions of the database. The DataSet is held in memory, and the data in it can be manipulated and updated independently of the database. When work with the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database or a Microsoft Access database.<br /><br /><br /><br />Data Provider <br /><br />The Data Provider is responsible for providing and maintaining the connection to the database. A DataProvider is a set of related components that work together to provide data in an efficient, performance-driven manner. The .NET Framework currently comes with two DataProviders: the SQL Data Provider, which is designed only to work with Microsoft's SQL Server 7.0 or later, and the OleDb DataProvider, which allows us to connect to other types of databases like Access and Oracle. 
Each DataProvider consists of the following component classes:<br /><br />The Connection object, which provides a connection to the database<br />The Command object, which is used to execute a command<br />The DataReader object, which provides a forward-only, read-only, connected recordset<br />The DataAdapter object, which populates a disconnected DataSet with data and performs updates<br /><br />Data access with ADO.NET can be summarized as follows:<br />A Connection object establishes the connection for the application with the database. The Command object provides direct execution of the command to the database. If the command returns more than a single value, the Command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill the DataSet object. The database can be updated using the Command object or the DataAdapter. <br /><br /><br />Component classes that make up the Data Providers<br /><br /> The Connection Object<br /> <br /><br />The Connection object creates the connection to the database. The .NET Framework provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.<br /><br /> <br /><br /><br /><br /><br /><br />The Command Object<br /> <br /><br />The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly. 
Command objects provide three methods that are used to execute commands on the database:<br />ExecuteNonQuery: Executes commands that have no return values, such as INSERT, UPDATE or DELETE <br />ExecuteScalar: Returns a single value from a database query <br />ExecuteReader: Returns a result set by way of a DataReader object<br /><br /><br /><br />The DataReader Object<br /> <br /><br />The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when you do not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead in terms of system performance but requires the exclusive use of an open Connection object for the lifetime of the DataReader.<br /><br /> <br /><br />The DataAdapter Object<br /> <br /><br />The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter is used to fill a DataTable or DataSet with data from the database using its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling the Update method. 
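The provider classes described above are typically used together. The following is a minimal end-to-end sketch of both the connected (DataReader) and disconnected (DataAdapter/DataSet) models; the connection string and the Customers table and column names are hypothetical placeholders, and it assumes the classic System.Data.SqlClient provider:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ProviderDemo
{
    static void Main()
    {
        // Hypothetical connection string and table -- adjust for your server.
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true;";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Connected, forward-only access via a DataReader.
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
            }

            // Disconnected access: the adapter fills a DataSet, which is then
            // modified locally and pushed back with Update().
            SqlDataAdapter adapter =
                new SqlDataAdapter("SELECT * FROM Customers", conn);
            SqlCommandBuilder builder =
                new SqlCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");

            // Assumes at least one row was returned.
            ds.Tables["Customers"].Rows[0]["CompanyName"] = "New Name";
            adapter.Update(ds, "Customers"); // writes the change back
        }
    }
}
```

Note how the connection is only needed while Fill and Update run; between those calls the DataSet is worked on entirely in memory.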
The DataAdapter provides four properties that represent database commands:<br /><br /> <br /><br />SelectCommand <br />InsertCommand <br />DeleteCommand <br />UpdateCommand <br />When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.<br /> <br /> <br />3. What are the advantages and drawbacks of using ADO.NET? <br /> <br />Pros<br /><br />ADO.NET is rich with plenty of features that are bound to impress even the most skeptical of programmers. If this weren’t the case, Microsoft wouldn’t even be able to get anyone to use the Beta. What we’ve done here is come up with a short list of some of the more outstanding benefits to using the ADO.NET architecture and the System.Data namespace. <br /><br />* Performance – there is no doubt that ADO.NET is extremely fast. The actual figures vary depending on who performed the test and which benchmark was being used, but ADO.NET performs much, much faster at the same tasks than its predecessor, ADO. Some of the reasons why ADO.NET is faster than ADO are discussed in the ADO versus ADO.NET section later in this chapter. <br /><br />* Optimized SQL Provider – in addition to performing well under general circumstances, ADO.NET includes a SQL Server Data Provider that is highly optimized for interaction with SQL Server. It uses SQL Server’s own TDS (Tabular Data Stream) format for exchanging information. Without question, your SQL Server 7 and above data access operations will run blazingly fast utilizing this optimized Data Provider. <br /><br />* XML Support (and Reliance) – everything you do in ADO.NET at some point will boil down to the use of XML. In fact, many of the classes in ADO.NET, such as the DataSet, are so intertwined with XML that they simply cannot exist or function without utilizing the technology. 
You’ll see later when we compare and contrast the “old” and the “new” why the reliance on XML for internal storage provides many, many advantages, both to the framework and to the programmer utilizing the class library. <br /><br />* Disconnected Operation Model – the core ADO.NET class, the DataSet, operates in an entirely disconnected fashion. This may be new to some programmers, but it is a remarkably efficient and scalable architecture. Because the disconnected model allows for the DataSet class to be unaware of the origin of its data, an unlimited number of supported data sources can be plugged into code without any hassle in the future. <br /><br />* Rich Object Model – the entire ADO.NET architecture is built on a hierarchy of class inheritance and interface implementation. Once you start looking for things you need within this namespace, you’ll find that the logical inheritance of features and base class support makes the entire system extremely easy to use, and very customizable to suit your own needs. It is just another example of how everything in the .NET framework is pushing toward a trend of strong application design and strong OOP implementations. <br /><br /><br />Cons<br /><br />Hard as it may be to believe, there are a couple of drawbacks or disadvantages to using the ADO.NET architecture. I’m sure others can find many more faults than we list here, but we decided to stick with a short list of some of the more obvious and important shortcomings of the technology. <br /><br />* Managed-Only Access – for a few obvious reasons, and some far more technical, you cannot utilize the ADO.NET architecture from anything but managed code. This means that there is no COM interoperability allowed for ADO.NET. Therefore, in order to take advantage of the advanced SQL Server Data Provider and any other feature like DataSets, XML internal data storage, etc, your code must be running under the CLR. 
<br /><br />* Only Three Managed Data Providers (so far) – unfortunately, if you need to access any data that requires a driver that cannot be used through either an OLEDB provider or the SQL Server Data Provider, then you may be out of luck. However, the good news is that the OLEDB provider for ODBC is available for download from Microsoft. At that point the down-side becomes one of performance, in which you are invoking multiple layers of abstraction as well as crossing the COM InterOp gap, incurring some initial overhead as well. <br /><br />* Learning Curve – despite the misleading name, ADO.NET is not simply a new version of ADO, nor should it even be considered a direct successor. ADO.NET should be thought of more as the data access class library for use with the .NET framework. The difficulty in learning to use ADO.NET to its fullest is that a lot of it does seem familiar. It is this that causes some common pitfalls. Programmers need to learn that even though some syntax may appear the same, there is actually a considerable amount of difference in the internal workings of many classes. For example (this will be discussed in far more detail later), an ADO.NET DataSet is nothing at all like a disconnected ADO RecordSet. Some may consider a learning curve a drawback, but I consider learning curves more like scheduling issues. There’s a learning curve in learning anything new; it’s just up to you to schedule that curve into your time so that you can learn the new technology at a pace that fits your schedule. <br /> <br /> <br />4. Explain what a diffgram is and its usage ? <br /> <br />A DiffGram is an XML format that is used to identify current and original versions of data elements. The DataSet uses the DiffGram format to load and persist its contents, and to serialize its contents for transport across a network connection. 
When a DataSet is written as a DiffGram, it populates the DiffGram with all the necessary information to accurately recreate the contents, though not the schema, of the DataSet, including column values from both the Original and Current row versions, row error information, and row order.<br />When sending and retrieving a DataSet from an XML Web service, the DiffGram format is implicitly used. Additionally, when loading the contents of a DataSet from XML using the ReadXml method, or when writing the contents of a DataSet in XML using the WriteXml method, you can select that the contents be read or written as a DiffGram. <br />The DiffGram format is divided into three sections: the current data, the original (or "before") data, and an errors section, as shown in the following example.<br /><br /><br /><br /><br /> <?xml version="1.0"?><br /><diffgr:diffgram <br /> xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"<br /> xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"<br /> xmlns:xsd="http://www.w3.org/2001/XMLSchema"><br /> <DataInstance><br /> </DataInstance><br /> <diffgr:before><br /> </diffgr:before><br /> <diffgr:errors><br /> </diffgr:errors><br /></diffgr:diffgram> <br /><br /><br />The DiffGram format consists of the following blocks of data: <br /><DataInstance> <br />The name of this element, DataInstance, is used for explanation purposes in this documentation. A DataInstance element represents a DataSet or a row of a DataTable. Instead of DataInstance, the element would contain the name of the DataSet or DataTable. This block of the DiffGram format contains the current data, whether it has been modified or not. An element, or row, that has been modified is identified with the diffgr:hasChanges annotation. <br /><diffgr:before> <br />This block of the DiffGram format contains the original version of a row. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation. 
<br /><diffgr:errors> <br />This block of the DiffGram format contains error information for a particular row in the DataInstance block. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.<br /><br /> <br /><br />Attribute<br /> Description<br /> <br />diffgr:hasChanges <br /> The row has been modified (see related row in <diffgr:before>) or inserted.<br /> <br />diffgr:hasErrors <br /> The row has an error (see related row in <diffgr:errors>).<br /> <br />diffgr:id <br /> Identifies the ID used to couple rows across sections: TableName+RowIdentifier.<br /> <br />diffgr:parentId <br /> Identifies the ID used to identify the parent of the current row.<br /> <br />diffgr:error <br /> Contains the error text for the row in <diffgr:errors>.<br /> <br />msdata:rowOrder <br /> Tracks the ordinal position of the row in the DataSet.<br /> <br />msdata:hidden <br /> Identifies columns marked as hidden (msdata:hiddenColumn=…)<br /> <br /> <br /> <br />5. Can you edit data in the Repeater control? <br /> <br />No. <br /><br />6. Which method do you invoke on the DataAdapter control to load your generated dataset with data? <br /> <br />You have to use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data. <br /> <br />7. What are the different IsolationLevels? <br /> <br />Isolation Level<br /> Description<br /> <br />ReadCommitted <br /> The default for SQL Server. This level ensures that data written by one transaction will only be accessible in a second transaction after the first transaction commits.<br /> <br />ReadUncommitted <br /> This permits your transaction to read data within the database, even data that has not yet been committed by another transaction. 
For example, if two users were accessing the same database, and the first inserted some data without concluding their transaction (by means of a Commit or Rollback), then the second user with their isolation level set to ReadUncommitted could read the data. <br /> <br />RepeatableRead <br /> This level, which extends the ReadCommitted level, ensures that if the same statement is issued within the transaction, regardless of other potential updates made to the database, the same data will always be returned. This level does require extra locks to be held on the data, which could adversely affect performance. This level guarantees that, for each row in the initial query, no changes can be made to that data. It does, however, permit "phantom" rows to show up — these are completely new rows that another transaction might have inserted while your transaction was running. <br /> <br />Serializable <br /> This is the most "exclusive" transaction level, which in effect serializes access to data within the database. With this isolation level, phantom rows can never show up, so a SQL statement issued within a serializable transaction will always retrieve the same data. The negative performance impact of a Serializable transaction should not be underestimated — if you don't absolutely need to use this level of isolation, stay away from it. <br /> <br /> <br /> <br />8. How can XML files be read and written using a DataSet? <br /> <br />The DataSet exposes methods such as ReadXml and WriteXml to read and write XML. <br /> <br />9. What are the different row versions available? <br /> <br />DataRow Version Value<br /> Description<br /> <br />Current <br /> The value existing at present within the column. If no edit has occurred, this will be the same as the original value. 
If an edit (or edits) has occurred, the value will be the last valid value entered.<br /> <br />Default <br /> The default value (in other words, any default set up for the column).<br /> <br />Original <br /> The value of the column when originally selected from the database. If the DataRow's AcceptChanges method is called, this value will update to the Current value. <br /> <br />Proposed <br /> When changes are in progress for a row, it is possible to retrieve this modified value. If you call BeginEdit() on the row and make changes, each column will have a proposed value until either EndEdit() or CancelEdit() is called. <br /> <br /> <br /> <br />10. Explain the ACID properties. <br /> <br />The ACID model is one of the oldest and most important concepts of database theory. It sets forward four goals that every database management system must strive to achieve: atomicity, consistency, isolation and durability. No database that fails to meet any of these four goals can be considered reliable. <br /><br />Let’s take a moment to examine each one of these characteristics in detail: <br /><br /> <br /><br />Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.<br /><br /><br />Consistency states that only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules. On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules. 
<br /><br /><br />Isolation requires that multiple transactions occurring at the same time not impact each other’s execution. For example, if Joe issues a transaction against a database at the same time that Mary issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform Joe’s entire transaction before executing Mary’s or vice-versa. This prevents Joe’s transaction from reading intermediate data produced as a side effect of part of Mary’s transaction that will not eventually be committed to the database. Note that the isolation property does not ensure which transaction will execute first, merely that they will not interfere with each other.<br /><br /><br />Durability ensures that any transaction committed to the database will not be lost. Durability is ensured through the use of database backups and transaction logs that facilitate the restoration of committed transactions in spite of any subsequent software or hardware failures. <br /> <br /> <br />11. Differences Between ADO and ADO.NET <br /> <br />ADO.NET is an evolution of ADO. The following table lists several data access features and how each feature differs between ADO and ADO.NET.<br /><br /> <br /><br />Feature <br /> ADO <br /> ADO.NET<br /> <br />Memory-resident data representation<br /> Uses the Recordset object, which holds single rows of data, much like a database table <br /> Uses the DataSet object, which can contain one or more tables represented by DataTable objects <br /> <br />Relationships between multiple tables <br /> Requires the JOIN query to assemble data from multiple database tables in a single result table. 
Also offers hierarchical recordsets, but they are hard to use <br /> Supports the DataRelation object to associate rows in one DataTable object with rows in another DataTable object<br /> <br />Data navigation <br /> Traverses rows in a Recordset sequentially, by using the .MoveNext method <br /> The DataSet uses a navigation paradigm for nonsequential access to rows in a table. Accessing the data is more like accessing data in a collection or array. This is possible because of the Rows collection of the DataTable; it allows you to access rows by index. Follows relationships to navigate from rows in one table to corresponding rows in another table<br /> <br />Disconnected access <br /> Provided by the Recordset but it has to be explicitly coded for. The default for a Recordset object is to be connected via the ActiveConnection property. You communicate to a database with calls to an OLE DB provider <br /> Communicates to a database with standardized calls to the DataAdapter object, which communicates to an OLE DB data provider, or directly to a SQL Server data provider<br /> <br />Programmability <br /> All Recordset field data types are COM Variant data types, and usually correspond to field names in a database table <br /> Uses the strongly typed programming characteristic of XML. Data is self-describing because names for code items correspond to the business problem solved by the code. Data in DataSet and DataReader objects can be strongly typed, thus making code easier to read and to write <br /> <br />Sharing disconnected data between tiers or components <br /> Uses COM marshaling to transmit a disconnected record set. This supports only those data types defined by the COM standard. Requires type conversions, which demand system resources <br /> Transmits a DataSet as XML. 
The XML format places no restrictions on data types and requires no type conversions<br /> <br />Transmitting data through firewalls <br /> Problematic, because firewalls are typically configured to prevent system-level requests such as COM marshaling <br /> Supported, because ADO.NET DataSet objects use XML, which can pass through firewalls<br /> <br />Scalability <br /> Since the defaults in ADO are to use connected Recordset objects, database locks and active database connections held for long durations contend for limited database resources <br /> Disconnected access to database data without retaining database locks or active database connections for lengthy periods limits contention for limited database resources<br /> <br /> <br /> <br />12. What are the different types of commands available with the DataAdapter? <br /> <br />The SqlDataAdapter has four command objects: <br /><br /> <br /><br />SelectCommand <br />InsertCommand <br />DeleteCommand <br />UpdateCommand <br /> <br /> <br />13. What is a Dataset? <br /> <br />A major component of ADO.NET is the DataSet object, which you can think of as being similar to an in-memory relational database. DataSet objects contain DataTable objects, relationships, and constraints, allowing them to replicate an entire data source, or selected parts of it, in a disconnected fashion.<br />A DataSet object is always disconnected from the source whose data it contains, and as a consequence it doesn't care where the data comes from — it can be used to manipulate data from a traditional database or an XML document, or anything in between. In order to connect a DataSet to a data source, you need to use a data adapter as an intermediary between the DataSet and the .NET data provider.<br /><br />Datasets are the result of bringing together ADO and XML. A DataSet contains one or more tables of data, known as DataTables; these can be treated separately, or can have relationships defined between them. 
Indeed these relationships give you ADO data SHAPING without needing to master the SHAPE language, which many people are not comfortable with.<br />The DataSet is a disconnected, in-memory cache of the database. The DataSet object model looks like this:<br />Dataset<br /> DataTableCollection<br /> DataTable<br /> DataView<br /> DataRowCollection<br /> DataRow<br /> DataColumnCollection<br /> DataColumn<br /> ChildRelations<br /> ParentRelations<br /> Constraints<br /> PrimaryKey<br />DataRelationCollection<br />Let’s take a look at each of these:<br />DataTableCollection: Since a DataSet is an in-memory database, it has this collection, which holds data from multiple tables in a single DataSet object.<br />DataTable: Within the DataTableCollection, we have DataTable objects, which represent the individual tables of the DataSet. <br />DataView: Just as we have views in a database, we can have DataViews. We can use these DataViews to sort and filter data.<br />DataRowCollection: To represent the rows in each table, a DataTable has a DataRowCollection.<br />DataRow: Each row of the DataRowCollection is represented by a DataRow.<br />DataColumnCollection: To represent the columns in each table, a DataTable has a DataColumnCollection.<br />DataColumn: Each column of the DataColumnCollection is represented by a DataColumn.<br />PrimaryKey: The DataSet defines a primary key for the table, and primary-key validation takes place without going to the database.<br />Constraints: We can define various constraints on the tables, and can use DataSet.Tables(0).EnforceConstraints. The constraints are then enforced whenever we enter data into the DataTable.<br />DataRelationCollection: As we can have more than one table in the DataSet, we can also define relationships between these tables using this collection and maintain a parent-child relationship.<br /> <br /> <br />14. 
How will you set the DataRelation between two columns? <br /> <br />ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the DataRelation object's ParentKeyConstraint. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected. <br /> <br />15. Which method do you invoke on the DataAdapter control to load your generated dataset with data? <br /> <br />Use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data. <br /><br />16. How do you handle data concurrency in .NET ? <br /> <br />In general, there are three common ways to manage concurrency in a database: <br /><br /> <br /><br />Pessimistic concurrency control: A row is unavailable to users from the time the record is fetched until it is updated in the database. <br />Optimistic concurrency control: A row is unavailable to other users only while the data is actually being updated. The update examines the row in the database and determines whether any changes have been made. Attempting to update a record that has already been changed results in a concurrency violation. <br />"Last in wins": A row is unavailable to other users only while the data is actually being updated. However, no effort is made to compare updates against the original record; the record is simply written out, potentially overwriting any changes made by other users since you last refreshed the records. 
<br />Pessimistic Concurrency<br /><br />Pessimistic concurrency is typically used for two reasons. First, in some situations there is high contention for the same records. The cost of placing locks on the data is less than the cost of rolling back changes when concurrency conflicts occur.<br />Pessimistic concurrency is also useful in situations where it is detrimental for the record to change during the course of a transaction. A good example is an inventory application. Consider a company representative checking inventory for a potential customer. You typically want to lock the record until an order is generated, which would generally flag the item with a status of ordered and remove it from available inventory. If no order is generated, the lock would be released so that other users checking inventory get an accurate count of available inventory.<br />However, pessimistic concurrency control is not possible in a disconnected architecture. Connections are open only long enough to read the data or to update it, so locks cannot be sustained for long periods. Moreover, an application that holds onto locks for long periods is not scalable.<br /><br /><br />Optimistic Concurrency<br /><br /><br />In optimistic concurrency, locks are set and held only while the database is being accessed. The locks prevent other users from attempting to update records at the same instant. The data is always available except for the exact moment that an update is taking place.<br />When an update is attempted, the original version of a changed row is compared against the existing row in the database. If the two are different, the update fails with a concurrency error. It is up to you at that point to reconcile the two rows, using business logic that you create.<br /><br /><br />Last in Wins<br /><br /><br />With "last in wins," no check of the original data is made and the update is simply written to the database.
It is understood that the following scenario can occur: <br /><br /> <br /><br />User A fetches a record from the database. <br />User B fetches the same record from the database, modifies it, and writes the updated record back to the database. <br />User A modifies the 'old' record and writes it back to the database. <br />In the above scenario, the changes User B made were never seen by User A. Be sure that this situation is acceptable if you plan to use the "last in wins" approach to concurrency control.<br /><br /><br />Concurrency Control in ADO.NET and Visual Studio<br /><br /><br />ADO.NET and Visual Studio use optimistic concurrency, because the data architecture is based on disconnected data. Therefore, you need to add business logic to resolve issues with optimistic concurrency.<br />If you choose to use optimistic concurrency, there are two general ways to determine if changes have occurred: the version approach (true version numbers or date-time stamps) and the saving-all-values approach.<br /><br /><br />The Version Number Approach<br /><br /><br />In the version number approach, the record to be updated must have a column that contains a date-time stamp or version number. The date-time stamp or version number is saved on the client when the record is read. This value is then made part of the update.<br />One way to handle concurrency is to update only if the value in the WHERE clause matches the value in the record. The SQL representation of this approach is:<br /><br /> <br /><br /> UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE DateTimeStamp = @origDateTimeStamp <br /><br />Alternatively, the comparison can be made using the version number:<br /><br /><br /> UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE RowVersion = @origRowVersionValue <br /><br /><br />If the date-time stamps or version numbers match, the record in the data store has not changed and can be safely updated with the new values from the dataset.
An error is returned if they don't match. You can write code to implement this form of concurrency checking in Visual Studio. You will also have to write code to respond to any update conflicts. To keep the date-time stamp or version number accurate, you need to set up a trigger on the table to update it when a change to a row occurs.<br /><br /><br />The Saving-All-Values Approach<br /><br /><br />An alternative to using a date-time stamp or version number is to get copies of all the fields when the record is read. The DataSet object in ADO.NET maintains two versions of each modified record: an original version (the version originally read from the data source) and a modified version, representing the user updates. When attempting to write the record back to the data source, the original values in the data row are compared against the record in the data source. If they match, it means that the database record has not changed since it was read. In that case, the changed values from the dataset are successfully written to the database.<br />Each data adapter has four commands (DELETE, INSERT, SELECT, and UPDATE), and each command has its own parameters collection. Each command has parameters for both the original values and the current (or modified) values.<br />The following example shows the command text for a dataset command that updates a typical Customers table.
The command is specified for dynamic SQL and optimistic concurrency.<br /><br /> <br /><br /> UPDATE Customers SET CustomerID = @currCustomerID, CompanyName = @currCompanyName, ContactName = @currContactName, ContactTitle = @currContactTitle, Address = @currAddress, City = @currCity, PostalCode = @currPostalCode, Phone = @currPhone, Fax = @currFax WHERE (CustomerID = @origCustomerID) AND (Address = @origAddress OR @origAddress IS NULL AND Address IS NULL) AND (City = @origCity OR @origCity IS NULL AND City IS NULL) AND (CompanyName = @origCompanyName OR @origCompanyName IS NULL AND CompanyName IS NULL) AND (ContactName = @origContactName OR @origContactName IS NULL AND ContactName IS NULL) AND (ContactTitle = @origContactTitle OR @origContactTitle IS NULL AND ContactTitle IS NULL) AND (Fax = @origFax OR @origFax IS NULL AND Fax IS NULL) AND (Phone = @origPhone OR @origPhone IS NULL AND Phone IS NULL) AND (PostalCode = @origPostalCode OR @origPostalCode IS NULL AND PostalCode IS NULL); SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address, City, PostalCode, Phone, Fax FROM Customers WHERE (CustomerID = @currCustomerID) <br /><br />Note that the nine SET statement parameters represent the current values that will be written to the database, whereas the nine WHERE statement parameters represent the original values that are used to locate the original record.<br />The first nine parameters in the SET statement correspond to the first nine parameters in the parameters collection. These parameters would have their SourceVersion property set to Current. <br />The next nine parameters in the WHERE statement are used for optimistic concurrency. These placeholders would correspond to the next nine parameters in the parameters collection, and each of these parameters would have their SourceVersion property set to Original.<br />The SELECT statement is used to refresh the dataset after the update has occurred.
It is generated when you set the Refresh the DataSet option in the Advanced SQL Generation Options dialog box.<br /> <br /> <br />17. What are relation objects in a dataset, and how and where are they used? <br /> <br />In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table. Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.<br /><br />The following code example creates a DataRelation using two DataTable objects in a DataSet. Each DataTable contains a column named CustID, which serves as a link between the two DataTable objects. The example adds a single DataRelation to the Relations collection of the DataSet. The first argument in the example specifies the name of the DataRelation being created. The second argument sets the parent DataColumn and the third argument sets the child DataColumn.<br /><br /> custDS.Relations.Add("CustOrders",<br />custDS.Tables["Customers"].Columns["CustID"],<br />custDS.Tables["Orders"].Columns["CustID"]); <br /> <br /> <br />18. Difference between the OLE DB provider and SqlClient? <br /> <br />The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast. It's faster than the Oracle provider, and faster than accessing the database via the OLE DB layer, because it uses SQL Server's native protocol (which automatically gives you better performance), and it was written with lots of help from the SQL Server team. <br /> <br />19. What are the different namespaces used in a project to connect to the database? What data providers are available in .NET to connect to databases?
<br /> <br />Following are the different namespaces:<br /><br /> <br /><br />System.Data.OleDb - classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results. <br />System.Data.SqlClient - classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later. <br />System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space. <br />System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.<br /> <br /> <br />20. What is a DataReader? <br /> <br />You can use the ADO.NET DataReader to retrieve a read-only, forward-only stream of data from a database. Using the DataReader can increase application performance and reduce system overhead because only one row at a time is ever in memory.<br />After creating an instance of the Command object, you create a DataReader by calling Command.ExecuteReader to retrieve rows from a data source, as shown in the following example.<br /><br /> <br /><br />SqlDataReader myReader = myCommand.ExecuteReader();<br /><br /> <br /><br />You use the Read method of the DataReader object to obtain a row from the results of the query.<br /><br /><br />while (myReader.Read())<br />Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));<br />myReader.Close();<br /> <br /><br />21. What is a DataSet? <br /> <br />The DataSet is a memory-resident representation of data that provides a consistent relational programming model regardless of the data source.
It can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet represents a complete set of data including related tables, constraints, and relationships among the tables. The methods and objects in a DataSet are consistent with those in the relational database model. The DataSet can also persist and reload its contents as XML and its schema as XML Schema definition language (XSD) schema. <br /> <br />22. What is a DataAdapter? <br /> <br />The DataAdapter serves as a bridge between a DataSet and a data source for retrieving and saving data. The DataAdapter provides this bridge by mapping Fill, which changes the data in the DataSet to match the data in the data source, and Update, which changes the data in the data source to match the data in the DataSet. If you are connecting to a Microsoft SQL Server database, you can increase overall performance by using the SqlDataAdapter along with its associated SqlCommand and SqlConnection. For other OLE DB-supported databases, use the DataAdapter with its associated OleDbCommand and OleDbConnection objects. <br /> <br />23. Which method do you invoke on the DataAdapter control to load your generated dataset with data? <br /> <br />The Fill() method is used to load the generated DataSet with data. <br /> <br />24. Explain the different methods and properties of the DataReader which you have used in your project? <br /> <br />Following are the methods and properties:<br /><br />Read<br />GetString<br />GetInt32<br />while (myReader.Read())<br />Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));<br />myReader.Close();<br /> <br /> <br />25. What happens when we issue the Dataset.ReadXml command? <br /> <br />It reads the XML schema and data into the DataSet. <br /> <br />26. What method checks if a DataReader is closed or opened? <br /> <br />The IsClosed property indicates whether a DataReader is closed or open. <br /> <br />27.
What is the method to get XML and schema from a DataSet? <br /> <br />GetXml() and GetXmlSchema(). <br /> <br />28. Differences between DataSet.Clone and DataSet.Copy? <br /> <br />The difference is as follows:<br /><br />Clone - Copies the structure of the DataSet, including all DataTable schemas, relations, and constraints. Does not copy any data. <br />Copy - Copies both the structure and data for this DataSet.<br /> <br /> <br />29. What are the differences between the Recordset and the DataSet objects? <br /> <br />Tables represented by the object: The ADO Recordset object represents only one table at a given time, while the DataSet object in ADO.NET can represent any number of tables, keys, constraints and relations, which makes it very much like an RDBMS. <br />Navigation: Navigating the Recordset object depends on the cursor used to create the object, with limited functionality in moving back and forth, while the DataSet represents data in “collections” that can be accessed through indexers in a random-access fashion. <br />Connection Model: The Recordset is designed to work as a “connected object” with a server-side cursor in mind, while the DataSet is designed to work as a disconnected object containing a hierarchy of data in XML format. <br />Database Updates: Updating a database through the use of a Recordset object is direct, since it is tied to the database. On the other hand, the DataSet, as an independent data store, must use a database-specific DataAdapter object to post updates to the database. <br /> <br /> <br />30. Which ADO.NET objects fall under the connected database model and the disconnected database model? <br /> <br />The DataReader object falls under the connected model, and the DataSet, DataTable, and DataAdapter objects fall under the disconnected database model. <br /> <br /> <br />31. How to use the ImportRow method? <br /> <br />The ImportRow method of DataTable copies a row into a DataTable with all of the properties and data of the row.
It actually calls the NewRow method on the destination DataTable with the current table schema and sets the DataRowState to Added. <br /><br /> <br />DataTable dt = new DataTable(); // fill the table before you use it <br />DataTable copyto = dt.Clone(); // same schema as dt, but no data <br /><br />foreach (DataRow dr in dt.Rows) <br />{ <br />copyto.ImportRow(dr); <br />} <br /> <br /> <br />32. What are the pros and cons of using the DataReader object? <br /> <br />The DataReader object is a forward-only resultset and is faster to traverse than its counterpart, the DataTable. However, it holds an active connection to the database until all records are retrieved from it or it is closed explicitly. This can be a problem when the resultset holds a large number of records and the application has many concurrent users. <br /> <br />33. What are the different execute methods of the ADO.NET command object? <br /> <br />ExecuteScalar returns a single value from the first row and first column of the resultset obtained from the execution of the SQL query.<br />ExecuteNonQuery executes a DML SQL query such as insert, delete or update and returns the number of rows affected by the action.<br />ExecuteReader returns a DataReader object, which is a forward-only resultset.<br />ExecuteXmlReader is available for SQL Server 2000 or later. Upon execution it builds an XmlReader object from a standard SQL query.<br /> <br /> <br />34. What is the difference between a data reader and a data adapter?
<br /> <br />A DataReader is a forward-only, read-only cursor. If you access data through a DataReader, you can display it on a web form or control, but you cannot implement paging over the records (because the cursor is forward-only).<br /><br />A DataReader is the best fit for simply displaying data (where there is no need to work on the data).<br /><br />A DataAdapter not only connects to the database (through a Command object) but also provides four command types (InsertCommand, UpdateCommand, DeleteCommand, SelectCommand). It supports the disconnected architecture of .NET, so we can populate the records into a DataSet. A DataAdapter is the best fit for working on data.<br /> <br /> <br />35. Difference between SqlCommand and SqlCommandBuilder? <br /> <br />SqlCommand is used to retrieve or update data in the database.<br /><br />You can use SELECT, INSERT, UPDATE, and DELETE commands with SqlCommand, and SqlCommand will execute these commands against the database.<br /><br />SqlCommandBuilder automatically generates the INSERT, UPDATE, and DELETE commands for a DataAdapter based on its SELECT command.<br /> <br /> <br />36. Can you edit data in the Repeater control? <br /> <br />No. <br /> <br />37. What are the different row versions available? <br /> <br />There are four types of row versions.<br />Current:<br />The current values for the row. This row version does not exist for rows with a RowState of Deleted.<br />Default:<br />The default version for the current DataRowState. For a DataRowState value of Added, Modified or Unchanged, the default version is Current. For a DataRowState of Deleted, the version is Original. For a DataRowState value of Detached, the version is Proposed.<br />Original:<br />The row contains its original values.<br />Proposed:<br />The proposed values for the row. This row version exists during an edit operation on a row, or for a row that is not part of a DataRowCollection.<br /> <br /> <br />38. Explain DataSet.AcceptChanges and DataAdapter.Update methods?
<br /> <br />A DataSet maintains the RowState of each row within a table. When a DataSet is loaded, the row version of every row is Unchanged. Whenever a particular row in a DataTable is modified, the DataSet changes that row's version to Modified, Added, or Deleted, based on the action performed on it.<br /><br />AcceptChanges() changes the row version back to Unchanged.<br /><br />Update() writes any changes made to the DataSet back to the database. This method checks the row version of each row within a table: if it finds a row in the Added state, that row is inserted; if the state is Modified, the row is updated; if Deleted, a delete statement is executed.<br /><br />However, if AcceptChanges() is called before Update(), nothing is written to the database, since every row's state has become Unchanged.<br /> <br /> <br />39. How will you set the DataRelation between two columns? <br /> <br />ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce two kinds of constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the DataRelation object's ParentKeyConstraint. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected. <br /> <br />40. What connections does Microsoft SQL Server support? <br /> <br />Windows Authentication (via Active Directory) and SQL Server authentication (via Microsoft SQL Server usernames and passwords). <br /> <br /> <br /> <br /> 41. Which one is trusted and which one is untrusted?
<br /> <br /> Windows Authentication is trusted because the username and password are checked against Active Directory; SQL Server authentication is untrusted, since SQL Server is the only verifier participating in the transaction. <br /> <br /> 42. What is connection pooling? <br /> <br /> The connection represents an open and unique link to a data source. In a distributed system, this often involves a network connection. Depending on the underlying data source, the programming interface of the various connection objects may differ quite a bit. A connection object is specific to a particular type of data source, such as SQL Server or Oracle. Connection objects can't be used interchangeably across different data sources, but all share a common set of methods and properties grouped in the IDbConnection interface.<br /> In ADO.NET, connection objects are implemented within data providers as sealed classes (that is, they are not further inheritable). This means that the behavior of a connection class can never be modified or overridden, just configured through properties and attributes. In ADO.NET, all connection classes support connection pooling, although each class may implement it differently. Connection pooling is implicit, meaning that you don't need to enable it because the provider manages this automatically.<br /> ADO.NET pools connections with the same configuration (connection string). It can maintain more than one pool (actually, one for each configuration). An interesting note: connection pooling is utilized by default unless otherwise specified. If you close and dispose of all connections, then there will be no pool (since there are no available connections).<br /> While leaving database connections continuously open can be troublesome, it can be advantageous for applications that are in constant communication with a database by negating the need to re-open connections.
Some database administrators may frown on the practice, since multiple connections (not all of which may be useful) to the database are open. Using connection pooling depends upon available server resources and application requirements (i.e., whether it is really needed).<br /> <br /> Using connection pooling<br /> <br /> Connection pooling is enabled by default. You may override the default behavior with the pooling setting in the connection string. The following SQL Server connection string does not utilize connection pooling:<br /> Data Source=TestServer;Initial Catalog=Northwind;<br /> User ID=Chester;Password=Tester;Pooling=False;<br /> You can use the same approach with other .NET data providers. You may enable pooling by setting Pooling to True (or by eliminating the Pooling variable to use the default behavior). In addition, the default maximum size of the connection pool is 100, but you may override this as well with connection string variables. You may use the following variables to control the minimum and maximum size of the pool as well as transaction support:<br /> <br /> <br /> • Max Pool Size: The maximum number of connections allowed in the pool. The default value is 100. <br /> • Min Pool Size: The minimum number of connections allowed in the pool. The default value is zero. <br /> • Enlist: Signals whether the pooler automatically enlists the connection in the creation thread's current transaction context. The default value is true. <br /> <br /> <br /> The following SQL Server connection string uses connection pooling with a minimum size of 5 and a maximum size of 50:<br /> <br /> <br /> Data Source=TestServer;Initial Catalog=Northwind;<br /> User ID=Chester;Password=Tester;Max Pool Size=50;<br /> Min Pool Size=5;Pooling=True;<br /> <br /> <br /> 43. What are the two fundamental objects in ADO.NET? <br /> <br /> The DataReader and the DataSet are the two fundamental objects in ADO.NET. <br /> <br /> 44. What is the use of the connection object?
<br /> <br /> Connection objects are used to connect to a data source and are associated with Command objects.<br /> <br /> An OleDbConnection object is used with an OLE DB provider. <br /> A SqlConnection object uses Tabular Data Stream (TDS) with MS SQL Server. <br /> <br /> <br /> 45. What are the various objects in a DataSet? <br /> <br /> A DataSet has a collection of DataTable objects within its Tables collection. Each DataTable object contains a collection of DataRow objects and a collection of DataColumn objects. There are also collections for the primary keys, constraints, and default values used in the table (the constraints collection), and for the parent and child relationships between the tables. Finally, there is a DefaultView object for each table. This is used to create a DataView object based on the table, so that the data can be searched, filtered or otherwise manipulated while being displayed. <br /> <br /> <br />46. How can we force the connection object to close after my DataReader is closed? <br /> <br />The Command object's ExecuteReader method takes a parameter called CommandBehavior, with which we can specify that the connection should be closed automatically when the DataReader is closed.<br />pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection)<br /> <br /> <br />47. How can we get only the schema using a DataReader? <br /> <br />pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.SchemaOnly) <br /> <br />48. Explain how to use stored procedures with ADO.NET? <br /> <br />Using Stored Procedures with a Command<br /><br /><br />Stored procedures offer many advantages in data-driven applications. Using stored procedures, database operations can be encapsulated in a single command, optimized for best performance, and enhanced with additional security.
While a stored procedure can be called by simply passing the stored procedure name followed by parameter arguments as an SQL statement, using the Parameters collection of the ADO.NET Command object enables you to more explicitly define stored procedure parameters as well as to access output parameters and return values.<br />To call a stored procedure, set the CommandType of the Command object to StoredProcedure. Once the CommandType is set to StoredProcedure, you can use the Parameters collection to define parameters, as in the following example.<br />Note The OdbcCommand requires that you supply the full ODBC CALL syntax when calling a stored procedure.<br /><br /><br />SqlClient<br />[Visual Basic]<br />Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;" & _<br /> "Initial Catalog=northwind")<br /><br />Dim salesCMD As SqlCommand = New SqlCommand("SalesByCategory", nwindConn)<br />salesCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim myParm As SqlParameter = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15)<br />myParm.Value = "Beverages"<br /><br />nwindConn.Open()<br /><br />Dim myReader As SqlDataReader = salesCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))<br /><br />Do While myReader.Read()<br /> Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))<br />Loop<br /><br />myReader.Close()<br />nwindConn.Close()<br /><br /><br />[C#]<br /><br /><br />SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");<br /><br />SqlCommand salesCMD = new SqlCommand("SalesByCategory", nwindConn);<br />salesCMD.CommandType = CommandType.StoredProcedure;<br /><br />SqlParameter myParm = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15);<br />myParm.Value = "Beverages";<br /><br />nwindConn.Open();<br /><br />SqlDataReader myReader = 
salesCMD.ExecuteReader();<br /><br />Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1));<br /><br />while (myReader.Read())<br />{<br /> Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));<br />}<br /><br />myReader.Close();<br />nwindConn.Close();<br /><br /><br />OleDb<br />[Visual Basic]<br />Dim nwindConn As OleDbConnection = New OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" & _<br /> "Initial Catalog=northwind")<br /><br />Dim salesCMD As OleDbCommand = New OleDbCommand("SalesByCategory", nwindConn)<br />salesCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim myParm As OleDbParameter = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15)<br />myParm.Value = "Beverages"<br /><br />nwindConn.Open()<br /><br />Dim myReader As OleDbDataReader = salesCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))<br /><br />Do While myReader.Read()<br /> Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))<br />Loop<br /><br />myReader.Close()<br />nwindConn.Close()<br /><br /><br />[C#]<br /><br /><br />OleDbConnection nwindConn = new OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" +<br /> "Initial Catalog=northwind");<br /><br />OleDbCommand salesCMD = new OleDbCommand("SalesByCategory", nwindConn);<br />salesCMD.CommandType = CommandType.StoredProcedure;<br /><br />OleDbParameter myParm = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15);<br />myParm.Value = "Beverages";<br /><br />nwindConn.Open();<br /><br />OleDbDataReader myReader = salesCMD.ExecuteReader();<br /><br />Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));<br /><br />while (myReader.Read())<br />{<br /> Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));<br />}<br /><br />myReader.Close();<br />nwindConn.Close();<br /><br 
/><br />Odbc<br />[Visual Basic]<br /><br /><br />Dim nwindConn As OdbcConnection = New OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" & _<br /> "Database=northwind")<br />nwindConn.Open()<br /><br />Dim salesCMD As OdbcCommand = New OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn)<br />salesCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim myParm As OdbcParameter = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15)<br />myParm.Value = "Beverages"<br /><br />Dim myReader As OdbcDataReader = salesCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))<br /><br />Do While myReader.Read()<br /> Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))<br />Loop<br /><br />myReader.Close()<br />nwindConn.Close()<br />[C#]<br />OdbcConnection nwindConn = new OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" +<br /> "Database=northwind");<br />nwindConn.Open();<br /><br />OdbcCommand salesCMD = new OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn);<br />salesCMD.CommandType = CommandType.StoredProcedure;<br /><br />OdbcParameter myParm = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15);<br />myParm.Value = "Beverages";<br /><br />OdbcDataReader myReader = salesCMD.ExecuteReader();<br /><br />Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));<br /><br />while (myReader.Read())<br />{<br /> Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));<br />}<br /><br />myReader.Close();<br />nwindConn.Close();<br /><br /><br />A Parameter object can be created using the Parameter constructor, or by calling the Add method of the Parameters collection of a Command. Parameters.Add will take as input either constructor arguments or an existing Parameter object. 
When setting the Value of a Parameter to a null reference, use DBNull.Value.<br />For parameters other than Input parameters, you must set the ParameterDirection property to specify whether the parameter type is InputOutput, Output, or ReturnValue. The following example shows the difference between creating Input, Output, and ReturnValue parameters.<br /><br /><br />[Visual Basic]<br /><br /><br />Dim sampleCMD As SqlCommand = New SqlCommand("SampleProc", nwindConn)<br />sampleCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim sampParm As SqlParameter = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int)<br />sampParm.Direction = ParameterDirection.ReturnValue<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12)<br />sampParm.Value = "Sample Value"<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28)<br />sampParm.Direction = ParameterDirection.Output<br /><br />nwindConn.Open()<br /><br />Dim sampReader As SqlDataReader = sampleCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))<br /><br />Do While sampReader.Read()<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))<br />Loop<br /><br />sampReader.Close()<br />nwindConn.Close()<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)<br /><br /><br />[C#]<br /><br /><br />SqlCommand sampleCMD = new SqlCommand("SampleProc", nwindConn);<br />sampleCMD.CommandType = CommandType.StoredProcedure;<br /><br />SqlParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int);<br />sampParm.Direction = ParameterDirection.ReturnValue;<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12);<br />sampParm.Value = "Sample Value";<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", 
SqlDbType.NVarChar, 28);<br />sampParm.Direction = ParameterDirection.Output;<br /><br />nwindConn.Open();<br /><br />SqlDataReader sampReader = sampleCMD.ExecuteReader();<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));<br /><br />while (sampReader.Read())<br />{<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));<br />}<br /><br />sampReader.Close();<br />nwindConn.Close();<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);<br />OleDb<br /><br /><br />[Visual Basic]<br /><br /><br />Dim sampleCMD As OleDbCommand = New OleDbCommand("SampleProc", nwindConn)<br />sampleCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim sampParm As OleDbParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer)<br />sampParm.Direction = ParameterDirection.ReturnValue<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12)<br />sampParm.Value = "Sample Value"<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28)<br />sampParm.Direction = ParameterDirection.Output<br /><br />nwindConn.Open()<br /><br />Dim sampReader As OleDbDataReader = sampleCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))<br /><br />Do While sampReader.Read()<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))<br />Loop<br /><br />sampReader.Close()<br />nwindConn.Close()<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)<br /><br /><br />[C#]<br /><br /><br />OleDbCommand sampleCMD = new OleDbCommand("SampleProc", nwindConn);<br />sampleCMD.CommandType = CommandType.StoredProcedure;<br /><br />OleDbParameter sampParm 
= sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer);<br />sampParm.Direction = ParameterDirection.ReturnValue;<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12);<br />sampParm.Value = "Sample Value";<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28);<br />sampParm.Direction = ParameterDirection.Output;<br /><br />nwindConn.Open();<br /><br />OleDbDataReader sampReader = sampleCMD.ExecuteReader();<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));<br /><br />while (sampReader.Read())<br />{<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));<br />}<br /><br />sampReader.Close();<br />nwindConn.Close();<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);<br /><br /><br />Odbc<br />[Visual Basic]<br />Dim sampleCMD As OdbcCommand = New OdbcCommand("{ ? = CALL SampleProc(?, ?) 
}", nwindConn)<br />sampleCMD.CommandType = CommandType.StoredProcedure<br /><br />Dim sampParm As OdbcParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int)<br />sampParm.Direction = ParameterDirection.ReturnValue<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12)<br />sampParm.Value = "Sample Value"<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28)<br />sampParm.Direction = ParameterDirection.Output<br /><br />nwindConn.Open()<br /><br />Dim sampReader As OdbcDataReader = sampleCMD.ExecuteReader()<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))<br /><br />Do While sampReader.Read()<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))<br />Loop<br /><br />sampReader.Close()<br />nwindConn.Close()<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)<br /><br /><br />[C#]<br /><br /><br />OdbcCommand sampleCMD = new OdbcCommand("{ ? = CALL SampleProc(?, ?) 
}", nwindConn);<br />sampleCMD.CommandType = CommandType.StoredProcedure;<br /><br />OdbcParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int);<br />sampParm.Direction = ParameterDirection.ReturnValue;<br /><br />sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12);<br />sampParm.Value = "Sample Value";<br /><br />sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28);<br />sampParm.Direction = ParameterDirection.Output;<br /><br />nwindConn.Open();<br /><br />OdbcDataReader sampReader = sampleCMD.ExecuteReader();<br /><br />Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));<br /><br />while (sampReader.Read())<br />{<br /> Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));<br />}<br /><br />sampReader.Close();<br />nwindConn.Close();<br /><br />Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);<br />Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);<br /><br /><br />Using Parameters with a SqlCommand<br /><br /><br />When using parameters with a SqlCommand, the names of the parameters added to the Parameters collection must match the names of the parameter markers in your stored procedure. The .NET Framework Data Provider for SQL Server treats parameters in the stored procedure as named parameters and searches for the matching parameter markers.<br />The .NET Framework Data Provider for SQL Server does not support the question mark (?) placeholder for passing parameters to an SQL statement or a stored procedure. 
In this case, you must use named parameters, as in the following example.<br />SELECT * FROM Customers WHERE CustomerID = @CustomerID<br />Using Parameters with an OleDbCommand or OdbcCommand<br />When using parameters with an OleDbCommand or OdbcCommand, the order of the parameters added to the Parameters collection must match the order of the parameters defined in your stored procedure. The .NET Framework Data Provider for OLE DB and .NET Framework Data Provider for ODBC treat parameters in a stored procedure as placeholders and apply parameter values in order. In addition, return value parameters must be the first parameters added to the Parameters collection.<br />The .NET Framework Data Provider for OLE DB and .NET Framework Data Provider for ODBC do not support named parameters for passing parameters to an SQL statement or a stored procedure. In this case, you must use the question mark (?) placeholder, as in the following example.<br />SELECT * FROM Customers WHERE CustomerID = ?<br />As a result, the order in which Parameter objects are added to the Parameters collection must directly correspond to the position of the question mark placeholder for the parameter.<br />Deriving Parameter Information<br />Parameters can also be derived from a stored procedure using the CommandBuilder class. Both the SqlCommandBuilder and OleDbCommandBuilder classes provide a static method, DeriveParameters, which will automatically populate the Parameters collection of a Command object with parameter information from a stored procedure. Note that DeriveParameters will overwrite any existing parameter information for the Command.<br />Deriving parameter information does require an added trip to the data source for the information. 
If parameter information is known at design-time, you can improve the performance of your application by setting the parameters explicitly.<br />The following code example shows how to populate the Parameters collection of a Command object using CommandBuilder.DeriveParameters.<br /><br /><br />[Visual Basic]<br /><br /><br />Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;")<br />Dim salesCMD As SqlCommand = New SqlCommand("Sales By Year", nwindConn)<br />salesCMD.CommandType = CommandType.StoredProcedure<br /><br />nwindConn.Open()<br />SqlCommandBuilder.DeriveParameters(salesCMD)<br />nwindConn.Close()<br /><br /><br />[C#]<br /><br /><br />SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;");<br />SqlCommand salesCMD = new SqlCommand("Sales By Year", nwindConn);<br />salesCMD.CommandType = CommandType.StoredProcedure;<br /><br />nwindConn.Open();<br />SqlCommandBuilder.DeriveParameters(salesCMD);<br />nwindConn.Close();<br /> <br /> <br />49. How can we fine-tune the command object when we are expecting a single row or a single value? <br /> <br />The CommandBehavior enumeration provides two values, SingleResult and SingleRow. If you are expecting only a single result set, pass "CommandBehavior.SingleResult" to ExecuteReader and the query is optimized accordingly; if you are expecting a single row, pass "CommandBehavior.SingleRow" and the query is optimized for returning a single row. <br /> <br />50. How can you obtain data as XML from SQL Server? 
<br /> <br />[Visual Basic]<br /><br /><br />Dim custCMD As SqlCommand = New SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn)<br /> Dim myXR As System.Xml.XmlReader = custCMD.ExecuteXmlReader()<br /><br /><br />[C#]<br /><br /><br />SqlCommand custCMD = new SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn);<br /> System.Xml.XmlReader myXR = custCMD.ExecuteXmlReader();<br /> <br /> <br /> 51. How to add Existing Constraints to a DataSet? <br /> <br /> The Fill method of the DataAdapter fills a DataSet only with table columns and rows from a data source; though constraints are commonly set by the data source, the Fill method does not add this schema information to the DataSet by default. To populate a DataSet with existing primary key constraint information from a data source, you can either call the FillSchema method of the DataAdapter, or set the MissingSchemaAction property of the DataAdapter to AddWithKey before calling Fill. This will ensure that primary key constraints in the DataSet reflect those at the data source. Foreign key constraint information is not included and will need to be created explicitly.<br /> <br /> Adding schema information to a DataSet before filling it with data ensures that primary key constraints are included with the DataTable objects in the DataSet. As a result, when additional calls to Fill the DataSet are made, the primary key column information is used to match new rows from the data source with current rows in each DataTable, and current data in the tables is overwritten with data from the data source. Without the schema information, the new rows from the data source are appended to the DataSet, resulting in duplicate rows.<br /> <br /> Using FillSchema or setting the MissingSchemaAction to AddWithKey requires extra processing at the data source to determine primary key column information. This additional processing can hinder performance. 
If you know the primary key information at design-time, it is recommended that you specify the primary key column or columns explicitly in order to achieve optimal performance.<br /> <br /> <br /> <br /> [Visual Basic]<br /> Dim custDS As DataSet = New DataSet()<br /> <br /> custDA.FillSchema(custDS, SchemaType.Source, "Customers")<br /> custDA.Fill(custDS, "Customers")<br /> <br /> <br /> [C#]<br /> DataSet custDS = new DataSet();<br /> <br /> custDA.FillSchema(custDS, SchemaType.Source, "Customers");<br /> custDA.Fill(custDS, "Customers");<br /> <br /> [Visual Basic]<br /> Dim custDS As DataSet = New DataSet()<br /> <br /> custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey<br /> custDA.Fill(custDS, "Customers")<br /> <br /> <br /> [C#]<br /> DataSet custDS = new DataSet();<br /> <br /> custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey;<br /> custDA.Fill(custDS, "Customers");<br /> <br /> <br /> 52. How to add a relation between tables? <br /> <br /> In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table.<br /> Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.<br /> <br /> <br /> <br /> [Visual Basic]<br /> custDS.Relations.Add("CustOrders", _<br /> custDS.Tables("Customers").Columns("CustID"), _<br /> custDS.Tables("Orders").Columns("CustID"))<br /> <br /> [C#]<br /> custDS.Relations.Add("CustOrders",<br /> custDS.Tables["Customers"].Columns["CustID"],<br /> custDS.Tables["Orders"].Columns["CustID"]);<br /> <br /> <br /> 53. How to get the data changes in dataset? 
<br /> <br /> GetChanges : Gets a copy of the DataSet containing all changes made to it since it was last loaded, or since AcceptChanges was called.<br /> <br /> <br /> <br /> [Visual Basic] <br /> Private Sub UpdateDataSet(ByVal myDataSet As DataSet)<br /> ' Check for changes with the HasChanges method first.<br /> If Not myDataSet.HasChanges(DataRowState.Modified) Then Exit Sub<br /> ' Create temporary DataSet variable.<br /> Dim xDataSet As DataSet<br /> ' GetChanges for modified rows only.<br /> xDataSet = myDataSet.GetChanges(DataRowState.Modified)<br /> ' Check the DataSet for errors.<br /> If xDataSet.HasErrors Then<br /> ' Insert code to resolve errors.<br /> End If<br /> ' After fixing errors, update the data source with the DataAdapter <br /> ' used to create the DataSet.<br /> myOleDbDataAdapter.Update(xDataSet)<br /> End Sub<br /> <br /> <br /> <br /> [C#] <br /> private void UpdateDataSet(DataSet myDataSet){<br /> // Check for changes with the HasChanges method first.<br /> if(!myDataSet.HasChanges(DataRowState.Modified)) return;<br /> // Create temporary DataSet variable.<br /> DataSet xDataSet;<br /> // GetChanges for modified rows only.<br /> xDataSet = myDataSet.GetChanges(DataRowState.Modified);<br /> // Check the DataSet for errors.<br /> if (xDataSet.HasErrors)<br /> {<br /> // Insert code to resolve errors.<br /> }<br /> // After fixing errors, update the data source with the DataAdapter<br /> // used to create the DataSet.<br /> myOleDbDataAdapter.Update(xDataSet);<br /> }<br /> <br /> <br /> 54. What are the various methods provided by the dataset object to generate XML? <br /> <br /> ReadXml : Reads an XML document into the DataSet. <br /> GetXml : Returns a string containing the XML representation of the data in the DataSet. <br /> WriteXml : Writes the XML data to a file or other stream. <br /> <br /> <br /> 55. What is a DataView and what's the use of a DataView? <br /> <br /> Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation. 
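As a quick in-memory sketch of that definition (the table, columns, and rows below are invented for illustration), a DataView can filter and sort a DataTable without touching any data source:

```csharp
using System;
using System.Data;

class DataViewDemo
{
    static void Main()
    {
        // Build a small in-memory table; names and values are made up.
        DataTable orders = new DataTable("Orders");
        orders.Columns.Add("Product", typeof(string));
        orders.Columns.Add("Qty", typeof(int));
        orders.Rows.Add("Chai", 10);
        orders.Rows.Add("Tofu", 2);
        orders.Rows.Add("Ikura", 7);

        // A DataView is a filtered, sorted window over the same rows.
        DataView view = new DataView(orders);
        view.RowFilter = "Qty >= 5";
        view.Sort = "Qty DESC";

        Console.WriteLine(view.Count);          // 2   (Tofu is filtered out)
        Console.WriteLine(view[0]["Product"]);  // Chai
        Console.WriteLine(view[1]["Product"]);  // Ikura

        // Find searches on the current Sort key (Qty here).
        Console.WriteLine(view.Find(7));        // 1
    }
}
```

Note that Find works against the column(s) named in the Sort property, so a Sort must be set before calling it.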
A major function of the DataView is to allow data binding on both Windows Forms and Web Forms.<br /> <br /> A DataView has four main methods:-<br /> Find<br /> Takes an array of values and returns the index of the matching row.<br /> FindRows<br /> Also takes an array of values, but returns an array of "DataRowView" objects for all matching rows.<br /> If we want to manipulate the data of a "DataTable" object, create a "DataView" of it (the "DefaultView" property of the "DataTable" returns a ready-made "DataView" object) and use the following methods:-<br /> AddNew<br /> Adds a new row to the "DataView" object.<br /> Delete<br /> Deletes the specified row from the "DataView" object.<br /> <br /> Additionally, a DataView can be customized to present a subset of data from the DataTable. This capability allows you to have two controls bound to the same DataTable, but showing different versions of the data. For example, one control may be bound to a DataView showing all of the rows in the table, while a second may be configured to display only the rows that have been deleted from the DataTable. The DataTable also has a DefaultView property which returns the default DataView for the table. For example, if you wish to create a custom view on the table, set the RowFilter on the DataView returned by the DefaultView.<br /> <br /> To create a filtered and sorted view of data, set the RowFilter and Sort properties. Then use the Item property to return a single DataRowView.<br /> <br /> You can also add and delete from the set of rows using the AddNew and Delete methods. When you use those methods, the RowStateFilter property can be set to specify that only deleted rows or new rows be displayed by the DataView.<br /> <br /> <br /> <br />56. What is CommandBuilder? 
<br /> <br />What the CommandBuilder can do is relieve you of the responsibility of writing your own action queries by automatically constructing the SQL code, ADO.NET Command objects, and their associated Parameters collections given a SelectCommand.<br /><br /> <br /><br />The CommandBuilder expects you to provide a viable, executable, and simple SelectCommand associated with a DataAdapter. It also expects a viable Connection. That's because the CommandBuilder opens the Connection associated with the DataAdapter and makes a round trip to the server each and every time it's asked to construct the action queries. It closes the Connection when it's done. <br /><br /> <br /><br />Dim cn As SqlConnection<br />Dim da As SqlDataAdapter<br />Dim cb As SqlCommandBuilder<br />cn = New SqlConnection("data source=demoserver…")<br />da = New SqlDataAdapter("SELECT Au_ID, au_lname, City FROM authors", cn)<br />cb = New SqlCommandBuilder(da)<br /> <br /> <br />57. What's the difference between optimistic locking and pessimistic locking? <br /> <br />In pessimistic locking, when a user wants to update data, the record is locked, and until the lock is released no one else can update it. Other users can only view the data while a pessimistic lock is held.<br /><br /><br />In optimistic locking, multiple users can open the same record for updating, which maximizes concurrency. The record is locked only while the update itself is being performed. This is the preferred way of locking in practice: nowadays browser-based applications are very common, and pessimistic locking is not a practical solution for them. <br /><br /><br />The basic difference between Optimistic and Pessimistic locking is the time at which the lock on a row or page occurs. A Pessimistic lock is enforced when the row is being edited while an Optimistic lock occurs at the time the row is being updated. 
Obviously the time between an Edit and Update can be very short, but Pessimistic locking will allow the database provider to prevent a user from overwriting changes to a row made by another user while the first user was editing it. There is no provision for this under Optimistic locking, and the last user to perform the update wins.<br /> <br /> <br />58. How to implement pessimistic locking? <br /> <br />The basic steps for pessimistic locking are as follows:<br /><br />Create a transaction with an IsolationLevel of RepeatableRead. <br />Set the DataAdapter’s SelectCommand property to use the transaction you created. <br />Make the changes to the data. <br />Set the DataAdapter’s Insert, Update, and Delete command properties to use the transaction you created. <br />Call the DataAdapter’s Update method. <br />Commit the transaction. <br /> <br /> <br />59. How to use transactions in ADO.net? <br /> <br />Transactions are a feature offered by most enterprise-class databases for making sure data integrity is maintained when data is modified. A transaction at its most basic level consists of two required steps: Begin, and then either Commit or Rollback. The Begin call defines the start of the transaction boundary, and the call to either Commit or Rollback defines the end of it. Within the transaction boundary, all of the statements executed are considered to be part of a unit for accomplishing the given task, and must succeed or fail as one. Commit (as the name suggests) commits the data modifications if everything was successful, and Rollback undoes the data modifications if an error occurs. All of the .NET data providers provide similar classes and methods to accomplish these operations.<br /><br /> <br /><br />The ADO.NET data providers offer transaction functionality through the Connection, Command, and Transaction classes. 
A typical transaction would follow a process similar to this: <br /><br /> <br /><br />Open the transaction using Connection.BeginTransaction(). <br />Enlist statements or stored procedure calls in the transaction by setting the Command.Transaction property of the Command objects associated with them. <br />Depending on the provider, optionally use Transaction.Save() or Transaction.Begin() to create a savepoint or a nested transaction to enable a partial rollback. <br />Commit or roll back the transaction using Transaction.Commit() or Transaction.Rollback(). <br /> <br /><br />using System;<br />using System.Drawing;<br />using System.Collections;<br />using System.ComponentModel;<br />using System.Windows.Forms;<br />using System.Data;<br />using System.Data.SqlClient;<br />using System.Data.SqlTypes;<br /><br />…public void SPTransaction(int partID, int numberMoved, int siteID)<br />{<br /> // Create and open the connection.<br /> SqlConnection conn = new SqlConnection();<br /> string connString = "Server=SqlInstance;Database=Test;"<br /> + "Integrated Security=SSPI";<br /> conn.ConnectionString = connString;<br /> conn.Open();<br /><br /> // Create the commands and related parameters.<br /> // cmdDebit debits inventory from the WarehouseInventory <br /> // table by calling the DebitWarehouseInventory <br /> // stored procedure.<br /> SqlCommand cmdDebit = <br /> new SqlCommand("DebitWarehouseInventory", conn);<br /> cmdDebit.CommandType = CommandType.StoredProcedure;<br /> cmdDebit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");<br /> cmdDebit.Parameters["@PartID"].Direction = <br /> ParameterDirection.Input;<br /> cmdDebit.Parameters.Add("@Debit", SqlDbType.Int, 0, "Quantity");<br /> cmdDebit.Parameters["@Debit"].Direction = <br />ParameterDirection.Input;<br /><br /> // cmdCredit adds inventory to the SiteInventory <br /> // table by calling the CreditSiteInventory <br /> // stored procedure.<br /> SqlCommand cmdCredit = <br />new 
SqlCommand("CreditSiteInventory", conn);<br /> cmdCredit.CommandType = CommandType.StoredProcedure;<br /> cmdCredit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");<br /> cmdCredit.Parameters["@PartID"].Direction = ParameterDirection.Input;<br /> cmdCredit.Parameters.Add("@Credit", SqlDbType.Int, 0, "Quantity");<br /> cmdCredit.Parameters["@Credit"].Direction = ParameterDirection.Input;<br /> cmdCredit.Parameters.Add("@SiteID", SqlDbType.Int, 0, "SiteID");<br /> cmdCredit.Parameters["@SiteID"].Direction = ParameterDirection.Input;<br /><br /> // Begin the transaction and enlist the commands.<br /> SqlTransaction tran = conn.BeginTransaction();<br /> cmdDebit.Transaction = tran;<br /> cmdCredit.Transaction = tran;<br /><br /> try<br /> {<br /> // Execute the commands.<br /> cmdDebit.Parameters["@PartID"].Value = partID;<br /> cmdDebit.Parameters["@Debit"].Value = numberMoved;<br /> cmdDebit.ExecuteNonQuery();<br /><br /> cmdCredit.Parameters["@PartID"].Value = partID;<br /> cmdCredit.Parameters["@Credit"].Value = numberMoved;<br /> cmdCredit.Parameters["@SiteID"].Value = siteID;<br /> cmdCredit.ExecuteNonQuery();<br /><br /> // Commit the transaction.<br /> tran.Commit();<br /> }<br /> catch(SqlException ex)<br /> {<br /> // Roll back the transaction.<br /> tran.Rollback();<br /><br /> // Additional error handling if needed.<br /> }<br /> finally<br /> {<br /> // Close the connection.<br /> conn.Close();<br /> }<br />}<br /> <br /> <br />60. What's the difference between DataSet.Clone and DataSet.Copy? <br /> <br />The Clone method of the DataSet class copies only the schema of a DataSet object. It returns a new DataSet object that has the same schema as the existing DataSet object, including all DataTable schemas, relations, and constraints. It does not copy any data from the existing DataSet object into the new DataSet. <br /><br />The Copy method of the DataSet class copies both the structure and data of a DataSet object. 
It returns a new DataSet object having the same structure (including all DataTable schemas, relations, and constraints) and data as the existing DataSet object.<br /> <br /> <br /> <br />61. Difference between OLEDB Provider and SqlClient ? <br /> <br />The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast. It's faster than the Oracle provider, and faster than accessing a database via the OleDb layer. It's faster because it accesses the native library (which automatically gives you better performance), and it was written with lots of help from the SQL Server team. <br /> <br />62. What are the different namespaces used in the project to connect to the database? What data providers are available in .NET to connect to a database? <br /> <br />System.Data.OleDb – classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results. <br />System.Data.SqlClient – classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later. <br />System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space. <br />System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space. <br /> <br /> <br />63. How to check if a datareader is closed or opened? 
<br /> <br />Use the IsClosed property of the DataReader; it returns true if the DataReader is closed and false while it is open.<br /><br />.Net Framework Interview Questions (kalit, 2009-10-20)<br /><br />1. When was .NET announced? <br /><br />Bill Gates delivered a keynote at Forum 2000, held June 22, 2000, outlining the .NET 'vision'. The July 2000 PDC had a number of sessions on .NET technology, and delegates were given CDs containing a pre-release version of the .NET framework/SDK and Visual Studio.NET. <br /> <br />2. When was the first version of .NET released?<br /><br />The final version of the 1.0 SDK and runtime was made publicly available around 6pm PST on 15-Jan-2002. At the same time, the final version of Visual Studio.NET was made available to MSDN subscribers. <br /> <br />3. What platforms does the .NET Framework run on?<br /><br />Version 1.0 of the runtime runs on Windows 98, Windows Me, Windows NT 4.0, Windows 2000, and Windows XP; some features, such as ASP.NET, require Windows 2000 or Windows XP. The .NET Compact Framework targets Windows CE devices, and open-source projects such as Mono provide implementations of parts of the framework on other platforms, such as Linux. <br /> <br />4. Explain .NET Framework architecture?<br /><br />The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:<br />• To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely. <br />• To provide a code-execution environment that minimizes software deployment and versioning conflicts. <br />• To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party. <br />• To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments. 
<br />• To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications. <br />• To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code. <br />The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services. <br /><br />The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. <br /><br />For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic. 
<br /><br />Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage. <br /><br />The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture. <br /><br />.NET Framework in context<br /><br />The following sections describe the main components and features of the .NET Framework in greater detail.<br />Features of the Common Language Runtime<br /><br />The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.<br /><br />With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.<br /><br />The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.<br /><br />The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing.
The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.<br /><br />In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.<br /><br />The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.<br /><br />While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.<br /><br />The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. 
Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.<br /><br />Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.<br /><br />.NET Framework Class Library<br /><br />The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.<br /><br />For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.<br /><br />As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services: <br /><br />• Console applications. <br />• Windows GUI applications (Windows Forms). <br />• ASP.NET applications. <br />• XML Web services. <br />• Windows services. 
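As a rough illustration of those common tasks, here is a minimal C# sketch (not from the original article; the class and file names are arbitrary examples) that uses BCL types for string management, data collection, and file access:

```csharp
// Minimal sketch of common BCL tasks: string management, data
// collection, and file access. Names here are illustrative only.
using System;
using System.Collections.Generic;
using System.IO;

class BclDemo
{
    static void Main()
    {
        // Data collection: a strongly typed list from System.Collections.Generic.
        List<string> languages = new List<string> { "C#", "VB.NET", "F#" };

        // String management: join the items into one comma-separated string.
        string joined = string.Join(", ", languages);
        Console.WriteLine(joined); // C#, VB.NET, F#

        // File access: write the string to a temp file and read it back.
        string path = Path.Combine(Path.GetTempPath(), "bcl-demo.txt");
        File.WriteAllText(path, joined);
        Console.WriteLine(File.ReadAllText(path) == joined); // True
    }
}
```

Because these are ordinary class-library calls, the same code works from any .NET language; only the syntax differs.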
<br />For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.<br />Client Application Development<br /><br />Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers.<br /><br />Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.<br /><br />In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications.<br /><br />The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs.<br /><br />For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. 
In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.<br /><br />Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.<br /><br />Server Application Development<br /><br />Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server.<br /><br />The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.<br /><br /><br /> <br />ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. 
Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.<br /><br />XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet.<br /><br />If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms pages in any language that supports the .NET Framework. In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application.<br /><br />The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL (the Web Services Description Language).
The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions.<br /><br />For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework.<br /><br />If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development.<br /><br />Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS.<br /><br /> <br />5. What are the mobile devices supported by the .NET platform? <br /><br />The Microsoft .NET Compact Framework is designed to run on mobile devices such as mobile phones, Personal Digital Assistants (PDAs), and embedded devices. The easiest way to develop and test a Smart Device Application is to use an emulator.<br />These devices are divided into two main divisions: <br /><br />1. Those that are directly supported by .NET (Pocket PCs, i-Mode phones, and WAP devices) <br />2. Those that are not (Palm OS and J2ME-powered devices). <br /><br />6. What is the CLR?
<br /><br />Common Language Runtime (CLR) is a run-time environment that manages the execution of .NET code and provides services like memory management, debugging, security, etc. The CLR is also known as the Virtual Execution System (VES). The CLR is a multi-language execution environment. There are currently over 15 compilers being built by Microsoft and other companies that produce code that will execute in the CLR.<br /><br />The Common Language Runtime (CLR) provides a solid foundation for developers to build various types of applications. Whether you're writing an ASP.NET application, a Windows Forms application, a Web Service, a mobile code application, a distributed application, or an application that combines several of these application models, the CLR provides the following benefits for application developers:<br /><br />• Vastly simplified development <br />• Seamless integration of code written in various languages <br />• Evidence-based security with code identity <br />• Assembly-based deployment that eliminates DLL Hell <br />• Side-by-side versioning of reusable components <br />• Code reuse through implementation inheritance <br />• Automatic object lifetime management <br />• Self-describing objects <br /> <br />All languages have a runtime, and it is the responsibility of the runtime to take care of executing the program's code. For example, VC++ has MSVCRT.DLL, VB6 has MSVBVM60.DLL, Java has the Java Virtual Machine, etc. Similarly, .NET has the CLR. <br />Following are the responsibilities of the CLR:<br /><br />• Garbage Collection :- The CLR automatically manages memory, thus eliminating memory leaks. When objects are no longer referenced, the GC automatically releases their memory, providing efficient memory management. <br />• Code Access Security :- CAS grants rights to a program depending on the security configuration of the machine.
For example, a program may have rights to edit or create a new file, but the security configuration of the machine does not allow the program to delete a file. CAS ensures that the code runs within the machine's security configuration. <br />• Code Verification :- This ensures proper code execution and type safety while the code runs. It prevents code from performing illegal operations such as accessing invalid memory locations. <br />• IL (Intermediate Language)-to-native translators and optimizers :- The CLR uses the JIT compiler to compile IL code to machine code and then executes it. The CLR also determines, depending on the platform, the optimal way of running the IL code. <br /> <br />7. What is BCL? <br /><br />The Base Class Library (BCL) is a library of types and functionalities available to all languages using the .NET Framework. In order to make the programmer's job easier, .NET includes the BCL to encapsulate a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation. It is much larger in scope than the standard libraries of most other languages, including C++, and is comparable in scope to the standard libraries of Java. The BCL is sometimes incorrectly referred to as the Framework Class Library (FCL), which is a superset including the Microsoft namespaces. <br /><br />The Base Class Library (BCL) provides the fundamental building blocks for any application you develop, be it an ASP.NET application, a Windows Forms application, or a Web Service. The BCL generally serves as your main point of interaction with the runtime. BCL classes include:<br /><br />• System <br />• System.CodeDom <br />• System.Collections <br />• System.Diagnostics <br />• System.Globalization <br />• System.IO <br />• System.Resources <br />• System.Text <br />• System.Text.RegularExpressions <br /><br />8. What is the CLS? <br /><br />Common Language Specification.
This is a subset of the CTS which all .NET languages are expected to support. The idea is that any program which uses CLS-compliant types can interoperate with any .NET program written in any language. In theory this allows very tight interop between different .NET languages - for example, allowing a C# class to inherit from a VB class. <br /><br />9. What is the CTS? <br /><br />Common Type System. This is the range of types that the .NET runtime understands, and therefore that .NET applications can use. However, note that not all .NET languages will support all the types in the CTS. The CTS is a superset of the CLS. <br /> <br />10. What is IL? (MSIL, CIL) <br /><br />Intermediate Language. Also known as MSIL (Microsoft Intermediate Language) or CIL (Common Intermediate Language). All .NET source code (of any language) is compiled to MSIL. When compiling the source code to managed code, the compiler translates the source into Microsoft intermediate language (MSIL). This is a CPU-independent set of instructions that can efficiently be converted to native code. MSIL is produced as the output of a number of compilers and is the input to a just-in-time (JIT) compiler. The Common Language Runtime includes a JIT compiler for the conversion of MSIL to native code. <br /><br />Before Microsoft intermediate language (MSIL) can be executed, it must be converted by the .NET Framework just-in-time (JIT) compiler to native code. This is CPU-specific code that runs on the same computer architecture as the JIT compiler. Rather than using time and memory to convert all of the MSIL in a portable executable (PE) file to native code, the JIT compiler converts the MSIL as needed while executing, then caches the resulting native code so it is accessible for subsequent calls.<br /><br /> <br />11.
What is the MSIL Assembler (Ilasm.exe)? <br /><br />The MSIL Assembler generates a portable executable (PE) file from MSIL assembly language. You can run the resulting executable, which contains MSIL and the required metadata, to determine whether the MSIL performs as expected. <br /> <br />12. What is the MSIL Disassembler (Ildasm.exe)?<br /><br />The MSIL Disassembler is a companion tool to the MSIL Assembler (Ilasm.exe). Ildasm.exe takes a portable executable (PE) file that contains Microsoft intermediate language (MSIL) code and creates a text file suitable as input to Ilasm.exe.<br /> <br />13. Can I look at the IL for an assembly?<br /><br />Yes. MS supply a tool called Ildasm that can be used to view the metadata and IL for an assembly. <br /> <br />14. Can source code be reverse-engineered from IL?<br /><br />Yes, it is often relatively straightforward to regenerate high-level source from IL. Lutz Roeder's Reflector does a very good job of turning IL into C# or VB.NET. <br /> <br />15. How can I stop my code being reverse-engineered from IL?<br /><br />You can buy an IL obfuscation tool. These tools work by 'optimising' the IL in such a way that reverse-engineering becomes much more difficult. Of course, if you are writing web services then reverse-engineering is not a problem, as clients do not have access to your IL. <br /><br />16. Can I write IL programs directly?<br /><br />Yes: <br /> .assembly extern mscorlib {}<br /> .assembly MyAssembly {}<br /> .class MyApp {<br /> .method static void Main() {<br /> .entrypoint<br /> ldstr "Hello, IL!"<br /> call void [mscorlib]System.Console::WriteLine(string)<br /> ret<br /> }<br /> }<br /><br />Just put this into a file called hello.il, and then run ilasm hello.il. An exe assembly will be generated. <br /> <br />17. Can I do things in IL that I can't do in C#?<br /><br />Yes. A couple of simple examples are that you can throw exceptions that are not derived from System.Exception, and you can have non-zero-based arrays. <br /> <br />18.
What is JIT?<br /><br />Just-In-Time compiler: it converts the code that you write in .NET into machine language that a computer can understand. There are two types of JITs: one is memory-optimized and the other is performance-optimized.<br /><br />JIT (Just-In-Time) is a compiler which converts MSIL code to native code (i.e., CPU-specific code that runs on the same computer architecture).<br /><br />Because the common language runtime supplies a JIT compiler for each supported CPU architecture, developers can write a set of MSIL that can be JIT-compiled and run on computers with different architectures. However, your managed code will run only on a specific operating system if it calls platform-specific native APIs, or a platform-specific class library.<br /><br />JIT compilation takes into account the fact that some code might never get called during execution. Rather than using time and memory to convert all the MSIL in a portable executable (PE) file to native code, it converts the MSIL as needed during execution and stores the resulting native code so that it is accessible for subsequent calls. The loader creates and attaches a stub to each of a type's methods when the type is loaded. On the initial call to the method, the stub passes control to the JIT compiler, which converts the MSIL for that method into native code and modifies the stub to direct execution to the location of the native code. Subsequent calls of the JIT-compiled method proceed directly to the native code that was previously generated, reducing the time it takes to JIT-compile and run the code.<br /> <br />19. What is Managed Code?<br /><br />Managed code is code that has its execution managed by the .NET Framework Common Language Runtime. It refers to a contract of cooperation between natively executing code and the runtime. This contract specifies that at any point of execution, the runtime may stop an executing CPU and retrieve information specific to the current CPU instruction address.
Information that must be query-able generally pertains to runtime state, such as register or stack memory contents. <br /><br />The necessary information is encoded in an Intermediate Language (IL) and associated metadata, or symbolic information that describes all of the entry points and the constructs exposed in the IL (e.g., methods, properties) and their characteristics. The Common Language Infrastructure (CLI) Standard (of which the CLR is the primary commercial implementation) describes how the information is to be encoded, and programming languages that target the runtime emit the correct encoding.<br /><br />Managed code runs in the Common Language Runtime. The runtime offers a wide variety of services to your running code. In the usual course of events, it first loads and verifies the assembly to make sure the IL is okay. Then, just in time, as methods are called, the runtime arranges for them to be compiled to machine code suitable for the machine the assembly is running on, and caches this machine code to be used the next time the method is called. (This is called Just In Time, or JIT compiling, or often just jitting.) <br /><br />As the assembly runs, the runtime continues to provide services such as security, memory management, threading, and the like. The application is managed by the runtime.<br /> <br />20. What is Unmanaged Code?<br /><br />Unmanaged code is what you used to write before Visual Studio .NET 2002 was released, with tools such as Visual Basic 6 and Visual C++ 6. It compiled directly to machine code that ran on the machine where you compiled it, and on other machines as long as they had the same chip, or nearly the same. It didn't get services such as security or memory management from an invisible runtime; it got them from the operating system. And importantly, it got them from the operating system explicitly, by asking for them, usually by calling an API provided in the Windows SDK.
More recent unmanaged applications got operating system services through COM calls.<br /><br />Unlike the other Microsoft languages in Visual Studio, Visual C++ can create unmanaged applications. When you create a project and select an application type whose name starts with MFC, ATL, or Win32, you're creating an unmanaged application.<br /><br />This can lead to some confusion: When you create a "Managed C++" application, the build product is an assembly of IL with an .exe extension. When you create an MFC application, the build product is a Windows executable file of native code, also with an .exe extension. The internal layout of the two files is utterly different. You can use the Intermediate Language Disassembler, ildasm, to look inside an assembly and see the metadata and IL. Try pointing ildasm at an unmanaged exe and you'll be told it has no valid CLR (Common Language Runtime) header and can't be disassembled. Same extension, completely different files.<br /><br />21. What is portable executable (PE)?<br /><br />The file format defining the structure that all executable files (EXE) and Dynamic Link Libraries (DLL) must use to allow them to be loaded and executed by Windows. PE is derived from the Microsoft Common Object File Format (COFF). The EXE and DLL files created using the .NET Framework obey the PE/COFF formats and also add an additional header and data sections to the files that are only used by the CLR. <br /> <br />22. What is an Assembly?<br /><br />An assembly consists of one or more files (DLLs, EXEs, HTML files, etc.), and represents a group of resources, type definitions, and implementations of those types. An assembly may also contain references to other assemblies. These resources, types and references are described in a block of data called a manifest.
The manifest is part of the assembly, thus making the assembly self-describing.<br /><br />An assembly is completely self-describing. An assembly contains metadata, which is used by the CLR for everything from type checking and security to actually invoking the component's methods. Because all of this information is in the assembly itself, the assembly is independent of the registry. This is a basic advantage compared to COM, where the version was stored in the registry.<br /><br />In the Microsoft .NET framework an assembly is a partially compiled code library for use in deployment, versioning and security. In the Microsoft Windows implementation of .NET, an assembly is a PE (portable executable) file. There are two types: process assemblies (EXE) and library assemblies (DLL). A process assembly represents a process which will use classes defined in library assemblies. In version 1.1 of the CLR, classes can only be exported from library assemblies; in version 2.0 this restriction is relaxed. The compiler has a switch to determine if the assembly is a process or library and sets a flag in the PE file. .NET does not use the extension to determine if the file is a process or library. This means that a library may have either .dll or .exe as its extension.<br /><br />The code in an assembly is compiled into MSIL, which is then compiled into machine language at runtime by the CLR.<br /><br />An assembly can consist of one or more files. Code files are called modules. An assembly can contain more than one code module, and since it is possible to use different languages to create code modules, it is technically possible to use several different languages to create an assembly. In practice this rarely happens, principally because Visual Studio only allows developers to create assemblies that consist of a single code module.<br /> <br />23. What is the GAC?
<br /><br />Each computer where the common language runtime is installed has a machine-wide code cache called the global assembly cache. The global assembly cache stores assemblies specifically designated to be shared by several applications on the computer. You should share assemblies by installing them into the global assembly cache only when you need to.<br /><br />There are three ways to add an assembly to the GAC: <br /><br />• Install it with Windows Installer 2.0 <br />• Use the Gacutil.exe tool <br />• Drag and drop the assembly into the cache with Windows Explorer <br /><br />For example, to install a strongly named assembly into the GAC:<br />- Create a strong name using the sn.exe tool, e.g. sn -k keyPair.snk<br />- In AssemblyInfo.cs, add the generated file name, e.g. [assembly: AssemblyKeyFile("keyPair.snk")]<br />- Recompile the project, then install it into the GAC either by dragging and dropping it into the assembly folder (C:\WINDOWS\assembly or C:\WINNT\assembly, handled by the shfusion.dll shell extension) or by running gacutil -i abc.dll<br /><br /> <br />24. What are the different types of Assembly?<br /><br />Private assemblies : <br />When a developer compiles code, the compiler will put the name of every library assembly it uses in the compiled assembly's .NET metadata. When the CLR executes the code in the assembly it will use this metadata to locate the assembly using a technology called Fusion. If the called assembly does not have a strong name, then Fusion will only use the short name (the PE file name) to locate the library. In effect this means that the assembly can only exist in the application folder, or in a subfolder, and hence it is called a private assembly because it can only be used by a specific application.
Versioning is switched off for assemblies that do not have strong names, and so this means that it is possible for a different version of an assembly to be loaded than the one that was used to create the calling assembly.<br /><br />The compiler will store the complete name (including version) of a strongly named assembly in the metadata of the calling assembly. When the called assembly is loaded, Fusion will ensure that only an assembly with the exact name, including the version, is loaded. Fusion is configurable, and so you can provide an application configuration file to tell Fusion to use a specific version of a library when another version is requested.<br /><br />Shared assemblies : <br /><br />Shared assemblies are stored in the GAC. This is a system-wide cache and all applications on the machine can use any assembly in the cache. To the casual user it appears that the GAC is a single folder; however, it is actually implemented using nested FAT32 or NTFS folders, which means that there can be multiple versions (or cultures) of the same assembly.<br /><br /> 25. What is Fusion?<br /><br />Filesystems in common use by Windows (FAT32, NTFS, CDFS, etc.) are restrictive because the names of files do not include information like versioning or localization. This means that two different versions of a file cannot exist in the same folder unless their names have versioning information. Fusion is the Windows loader technology that allows versioning and culture information to be used in the name of a .NET assembly that is stored on these filesystems. Despite being the exclusive system for loading a managed assembly into a process, Fusion is also currently used to load Win32 assemblies independent of managed assembly loading.<br /><br />Fusion uses a specific search order when it looks for an assembly:<br /><br />1. If the assembly is strongly named it will first look in the GAC. <br />2.
Fusion will then look for redirection information in the application's configuration file. If the library is strongly named, this file can specify that another version should be loaded, or it can specify an absolute path to a folder on the local hard disk, or the URL of a file on a web server. If the library is not strongly named, the configuration file can specify a subfolder beneath the application folder to be added to the search path. <br />3. Fusion will then look for the assembly in the application folder with either the extension .exe or .dll. <br />4. Fusion will look for a subfolder with the same name as the short name (PE file name) of the assembly and look for the assembly in that folder with either the extension .exe or .dll. <br /><br />If Fusion cannot find the assembly, the assembly image is bad, or the reference to the assembly doesn't match the version of the assembly found, it will throw an exception. In addition, information about the name of the assembly, and the paths that it checked, will be stored. This information may be viewed by using the Fusion log viewer (fuslogvw), or, if a custom location is configured, directly from the HTML log files generated.<br /><br />26. What is a satellite assembly?<br /><br />To support multilingual functionality in a .NET application, you can have modules that are customized for localization. These assemblies are called satellite assemblies. You can distribute these assemblies separately from the core modules. <br /><br />A definition from MSDN says something like this: "A .NET Framework assembly containing resources specific to a given language. 
Using satellite assemblies, you can place the resources for different languages in different assemblies, and the correct assembly is loaded into memory only if the user elects to view the application in that language."<br /><br />This means that you develop your application in a default language and add the flexibility to react to a change in locale. Say, for example, you developed your application in the en-US locale, and your application needs multilingual support. When you deploy your code in, say, India, you want to show labels and messages in the national language rather than in English.<br /><br />Satellite assemblies give you this flexibility. You create a simple text file with the translated strings, create resources from it, and put them into the bin\debug folder. That's it. The next time it runs, your code will read the CurrentCulture property of the current thread and load the appropriate resource accordingly.<br /><br />This is called the hub and spoke model. It requires that you place resources in specific locations so that they can be located and used easily. If you do not compile and name resources as expected, or if you do not place them in the correct locations, the common language runtime will not be able to locate them. As a result, the runtime uses the default resource set.<br /><br />Every assembly contains an assembly manifest, a set of metadata with information about the assembly. The assembly manifest contains these items: <br /><br />• The assembly name and version <br />• The culture or language the assembly supports (not required in all assemblies) <br />• The public key for any strong name assigned to the assembly (not required in all assemblies) <br />• A list of files in the assembly with hash information <br />• Information on exported types <br />• Information on referenced assemblies <br />In addition, you can add other information to the manifest by using assembly attributes. 
Assembly attributes are declared inside a file in an assembly, and are text strings that describe the assembly. For example, you can set a friendly name for an assembly with the AssemblyTitle attribute:<br /><br />[assembly: AssemblyTitle("My Friendly Assembly Name")]<br /> <br />27. How to create satellite assembly?<br /><br />• Create a folder with a specific culture name (for example, en-US) in the application's bin\debug folder. <br />• Create a .resx file in that folder. Place all translated strings into it. <br />• Create a .resources file by using the following commands from the .NET command prompt. (LocalizationSample is the name of the application namespace. If your application uses a nested namespace structure like MyApp.YourApp.MyName.YourName, just use the uppermost namespace, MyApp, for creating the resources files.) <br /> <br />resgen Strings.en-US.resx LocalizationSample.Strings.en-US.resources<br />al /embed:LocalizationSample.Strings.en-US.resources /out:LocalizationSample.resources.dll /c:en-US<br /><br /><br /><br />The above steps will create two files, LocalizationSample.Strings.en-US.resources and LocalizationSample.resources.dll. Here, LocalizationSample is the namespace of the application.<br /><br />• In the code, find the user's language; for example, en-US. This is culture specific. <br />• Give the assembly name as the name of the .resx file. In this case, it is Strings. <br /><br />Using a Satellite Assembly <br /><br />Follow these steps:<br /> <br />Thread.CurrentThread.CurrentCulture =<br /> CultureInfo.CreateSpecificCulture(specCult);<br />Thread.CurrentThread.CurrentUICulture =<br /> new CultureInfo(specCult);<br />ResourceManager resMgr =<br /> new ResourceManager(typeof(Form1).Namespace + "." +<br /> asmName, this.GetType().Assembly);<br />btnTest.Text = resMgr.GetString("Jayant");<br /><br /> <br />28. 
What is Shadow Copy?<br /><br />In order to replace a COM component on a live web server, it was necessary to stop the entire website, copy the new files, and then restart the website. This is not feasible for web servers that need to be running at all times. .NET components are different: they can be overwritten at any time using a mechanism called Shadow Copy, which prevents Portable Executable (PE) files such as DLLs and EXEs from being locked. Whenever new versions of the PEs are released, they are automatically detected by the CLR and the changed components are automatically loaded. They are used to process all new requests not currently executing, while the older version continues to serve the requests already in flight. Once the older version has drained its remaining requests, the update is complete.<br /> <br />29. What is DLL Hell? <br /><br />DLL hell is the problem that occurs when the installation of a newer application breaks or hinders other applications, because newer DLLs are copied into the system and the older applications are not compatible with them. .NET overcomes this problem by supporting multiple versions of an assembly at any given time. This is also called side-by-side component versioning. <br /> <br />30. What is GUID , why we use it and where?<br /><br />GUID is the short form of Globally Unique Identifier, a unique 128-bit number that is produced by the Windows OS or by some Windows applications to identify a particular component, application, file, database entry, and/or user. For instance, a Web site may generate a GUID and assign it to a user's browser to record and track the session. A GUID is also used in the Windows registry to identify COM DLLs. Knowing where to look in the registry and having the correct GUID yields a lot of information about a COM object (i.e., information in the type library, its physical location, etc.). Windows also identifies user accounts by a username (computer/domain and username) and assigns it a GUID. 
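In .NET code, GUIDs of this kind can also be generated programmatically with the System.Guid type, rather than with a tool. A minimal sketch (the class and variable names are illustrative):

```csharp
using System;

class GuidDemo
{
    static void Main()
    {
        Guid id = Guid.NewGuid();            // a fresh random 128-bit GUID
        Console.WriteLine(id);               // standard hyphenated form
        Console.WriteLine(id.ToString("N")); // the "N" format omits the hyphens
    }
}
```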
Some database administrators even use GUIDs as primary key values in databases. <br /><br />GUIDs can be created in a number of ways, but usually they are a combination of a few unique values based on a specific point in time (e.g., an IP address, network MAC address, clock date/time, etc.).<br /><br />31. How to create GUID?<br /><br />Start guidgen.exe; when you click the New GUID button in the Create GUID dialog box, guidgen.exe generates a GUID.<br /><br />To run guidgen.exe from the IDE<br /><br />• On the Tools menu, click Create GUID. The Create GUID tool appears with a GUID in the Result box. <br />• Select the format you want for the GUID. <br />• Click Copy. <br />• The GUID is copied to the Clipboard so that you can paste it into your source code. <br />• If you want to generate another GUID, click New GUID. <br /> <br />32. What is NameSpace?<br /><br />A namespace is a logical naming scheme for grouping related types. Class types that logically belong together can be placed into a common namespace. Namespaces prevent naming collisions and provide scoping. They are imported with "using" in C# or "Imports" in Visual Basic. It seems as if these directives specify a particular assembly, but they don't. A namespace can span multiple assemblies, and an assembly can define multiple namespaces. When the compiler needs the definition for a class type, it tracks through each of the imported namespaces to the type name and searches each referenced assembly until it is found. Namespaces can be nested. This is very similar to packages in Java as far as scoping is concerned. <br /> <br />33. What is Difference between NameSpace and Assembly ? <br /><br />The concept of a namespace is not related to that of an assembly. A single assembly may contain many types whose hierarchical names have different namespace roots, and a logical namespace root may span multiple assemblies. 
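The namespace ideas above can be sketched in a single C# file (all names below are made up for illustration): one source file, and hence one assembly, can define several namespaces, and types are reached either by their fully qualified names or via a using directive.

```csharp
using System;

// Two unrelated namespaces defined in the same assembly.
namespace MyCompany.Billing
{
    public class Invoice
    {
        public decimal Total { get; set; }
    }
}

namespace MyCompany.Shipping
{
    public class Parcel { }
}

class Program
{
    static void Main()
    {
        // Fully qualified name; a "using MyCompany.Billing;" directive
        // at the top would let us write just "Invoice".
        var invoice = new MyCompany.Billing.Invoice { Total = 99.50m };
        Console.WriteLine(invoice.Total);
    }
}
```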
In the .NET Framework, a namespace is a logical design-time naming convention, whereas an assembly establishes the name scope for types at run time. <br />Namespace:<br />• Namespace is a logical grouping unit. <br />• It is a collection of names wherein each name is unique. <br />• Namespaces form the logical boundary for a group of classes. <br />• The default namespace can be specified in Project Properties. <br /><br />Assembly: <br />• Assembly is a physical grouping unit. <br />• It is an output unit: a unit of deployment and a unit of versioning. Assemblies contain MSIL code. <br />• Assemblies are self-describing (e.g. metadata, manifest). <br />• An assembly is the primary building block of a .NET Framework application. It is a collection of functionality that is built, versioned, and deployed as a single implementation unit (as one or more files). All managed types and resources are marked either as accessible only within their implementation unit, or as accessible by code outside that unit. <br /> <br />34. How can you view Assembly? <br /><br />Using the ILDASM (IL Disassembler, Ildasm.exe) tool.<br /> <br />35. What is Manifest? <br /><br />Every assembly, whether static or dynamic, contains a collection of data that describes how the elements in the assembly relate to each other. The assembly manifest contains this assembly metadata. An assembly manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all the metadata needed to define the scope of the assembly and resolve references to resources and classes. The assembly manifest can be stored either in a PE file (an .exe or .dll) with Microsoft intermediate language (MSIL) code or in a standalone PE file that contains only assembly manifest information.<br /><br />For an assembly with one associated file, the manifest is incorporated into the PE file to form a single-file assembly. 
You can create a multifile assembly with a standalone manifest file or with the manifest incorporated into one of the PE files in the assembly.<br /><br />Each assembly's manifest performs the following functions: <br />• Enumerates the files that make up the assembly. <br />• Governs how references to the assembly's types and resources map to the files that contain their declarations and implementations. <br />• Enumerates other assemblies on which the assembly depends. <br />• Provides a level of indirection between consumers of the assembly and the assembly's implementation details. <br />• Renders the assembly self-describing. <br />Assembly Manifest Contents<br />The following list shows the information contained in the assembly manifest. The first four items (the assembly name, version number, culture, and strong name information) make up the assembly's identity.<br /><br />• Assembly name: A text string specifying the assembly's name.<br />• Version number: A major and minor version number, and a revision and build number. The common language runtime uses these numbers to enforce version policy.<br />• Culture: Information on the culture or language the assembly supports. This information should be used only to designate an assembly as a satellite assembly containing culture- or language-specific information. (An assembly with culture information is automatically assumed to be a satellite assembly.)<br />• Strong name information: The public key from the publisher, if the assembly has been given a strong name.<br />• List of all files in the assembly: A hash of each file contained in the assembly and a file name. Note that all files that make up the assembly must be in the same directory as the file containing the assembly manifest.<br />• Type reference information: Information used by the runtime to map a type reference to the file that contains its declaration and implementation. 
This is used for types that are exported from the assembly.<br />• Information on referenced assemblies: A list of other assemblies that are statically referenced by the assembly. Each reference includes the dependent assembly's name, assembly metadata (version, culture, operating system, and so on), and the public key, if the assembly is strongly named.<br /><br /><br />36. Where is version information stored of a assembly ? <br /><br />The manifest contains the version details. <br /> <br />37. Is versioning applicable to private assemblies?<br /><br />Versioning concepts apply only to shared (strongly named) assemblies; versioning is switched off for private assemblies. <br /> <br />38. What is strong names?<br /><br />Strong Name is a technology introduced with the .NET platform, and it brings many possibilities into .NET applications. But many .NET developers still see Strong Names as security enablers (which is very wrong!) and not as a technology for uniquely identifying assemblies.<br /><br />Assemblies can be assigned a cryptographic signature, called a strong name, which provides name uniqueness for the assembly and prevents someone from taking over the name of your assembly (name spoofing). If you are deploying an assembly that will be shared among many applications on the same machine, it must have a strong name. Even if you only use the assembly within your application, using a strong name ensures that the correct version of the assembly gets loaded.<br /><br />Strong Names are not a security enhancement; they enable unique identification and side-by-side code execution.<br />Strong Names are used for:<br />• Versioning <br />• Authentication <br /><br />Versioning solves the well-known problem called "DLL hell". Signed assemblies are unique, and Strong Names solve the problem of namespace collisions (developers can distribute their assemblies even with the same file names, as shown in the figure below). Assemblies signed with Strong Names are uniquely identified and are protected and stored in different spaces. 
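The ingredients of a strong name can be sketched as an AssemblyInfo.cs fragment. This is an illustrative sketch, not taken from the article; the version and key file names are made up, and the key file would be produced beforehand with the Strong Name tool.

```csharp
// Hypothetical AssemblyInfo.cs fragment.
// The key pair is assumed to have been generated with: sn -k keyPair.snk
using System.Reflection;

[assembly: AssemblyVersion("1.2.0.0")]       // part of the strong-named identity
[assembly: AssemblyKeyFile("keyPair.snk")]   // signs the assembly at build time
```

Together with the simple name and culture, the version and public key recorded by these attributes form the four-part identity that Fusion matches exactly when loading a strongly named assembly.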
<br /> <br />39. What is the difference between C# Boolean and C++ Boolean? <br /><br />In C#, Boolean values (true, false) do not equate to integers: you cannot assign an integer to a bool or test an integer as a condition, whereas in C++ any nonzero value is treated as true and zero as false. <br /> <br />40. What is Delay signing ? <br /><br />An organization can have a closely guarded key pair that developers do not have access to on a daily basis. The public key is often available, but access to the private key is restricted to only a few individuals. When developing assemblies with strong names, each assembly that references the strong-named target assembly contains the token of the public key used to give the target assembly a strong name. This requires that the public key be available during the development process.<br /><br />You can use delayed or partial signing at build time to reserve space in the portable executable (PE) file for the strong name signature, but defer the actual signing until some later stage (typically just before shipping the assembly).<br /><br />The following steps outline the process to delay sign an assembly: <br /><br />1. Obtain the public key portion of the key pair from the organization that will do the eventual signing. Typically this key is in the form of an .snk file, which can be created using the Strong Name tool (Sn.exe) provided by the .NET Framework SDK. <br /><br />2. Annotate the source code for the assembly with two custom attributes from System.Reflection: <br /><br />• AssemblyKeyFileAttribute, which passes the name of the file containing the public key as a parameter to its constructor. 
<br />• AssemblyDelaySignAttribute, which indicates that delay signing is being used by passing true as a parameter to its constructor. <br />For example: <br /><br />[Visual Basic] <br /> <Assembly:AssemblyKeyFileAttribute("myKey.snk")><br /> <Assembly:AssemblyDelaySignAttribute(True)><br /><br />[C#]<br /> [assembly:AssemblyKeyFileAttribute("myKey.snk")]<br /> [assembly:AssemblyDelaySignAttribute(true)]<br /><br />3. The compiler inserts the public key into the assembly manifest and reserves space in the PE file for the full strong name signature. The real public key must be stored while the assembly is built so that other assemblies that reference this assembly can obtain the key to store in their own assembly reference. <br />4. Because the assembly does not have a valid strong name signature, verification of that signature must be turned off. You can do this by using the -Vr option with the Strong Name tool. <br />The following example turns off verification for an assembly called myAssembly.dll. <br /><br />sn -Vr myAssembly.dll<br /><br />5. Later, usually just before shipping, you submit the assembly to your organization's signing authority for the actual strong name signing using the -R option with the Strong Name tool. <br />The following example signs an assembly called myAssembly.dll with a strong name using the sgKey.snk key pair. <br /><br />sn -R myAssembly.dll sgKey.snk<br /> <br /><br /><br />41. What is garbage collection? <br /><br />Short :<br />Garbage collection is a CLR feature which automatically manages memory. Programmers often forget<br />to release objects while coding. The CLR automatically releases objects when they are no longer referenced and in use. The CLR runs garbage collection non-deterministically to find unused objects and clean them up. One side effect of this non-deterministic behavior is that we cannot assume an object is destroyed as soon as it goes out of the scope of a function. 
Therefore, we should not put code into a class destructor to release resources.<br />Detailed :<br />Every program uses resources of one sort or another -- memory buffers, network connections, database resources and so on. In fact, in an object-oriented environment, every type identifies some resource available for a program's use. To use any of these resources, memory must be allocated to represent the type. <br />The steps required to access a resource are as follows: <br />1. Allocate memory for the type that represents the resource. <br />2. Initialize the memory to set the initial state of the resource and to make the resource usable. <br />3. Use the resource by accessing the instance members of the type (repeat as necessary). <br />4. Tear down the state of the resource to clean up. <br />5. Free the memory. <br />The garbage collector (GC) of .NET completely absolves the developer from tracking memory usage and knowing when to free memory. <br />The Microsoft .NET CLR (common language runtime) requires that all resources be allocated from the managed heap. You never free objects from the managed heap -- objects are automatically freed when they are no longer needed by the application. <br /><br />Memory is not infinite. The garbage collector must perform a collection in order to free some memory. The garbage collector's optimizing engine determines the best time to perform a collection (the exact criteria are guarded by Microsoft) based upon the allocations being made. When the garbage collector performs a collection, it checks for objects in the managed heap that are no longer being used by the application and performs the necessary operations to reclaim their memory. <br /><br />However, for automatic memory management, the garbage collector has to know the location of the roots -- i.e. it should know when an object is no longer in use by the application. This knowledge is made available to the GC in .NET by the inclusion of a concept known as metadata. 
Every data type used in .NET software includes metadata that describes it. With the help of metadata, the CLR knows the layout of each of the objects in memory, which helps the garbage collector in the compaction phase of garbage collection. Without this knowledge the garbage collector wouldn't know where one object instance ends and the next begins. <br /><br />Garbage collection algorithm <br /><br />Application roots <br />Every application has a set of roots. Roots identify storage locations, which refer to objects on the managed heap or to objects that are set to null. <br />For example: <br />• All the global and static object pointers in an application. <br />• Any local variable/parameter object pointers on a thread's stack. <br />• Any CPU registers containing pointers to objects in the managed heap. <br />• Pointers to objects in the freachable queue. <br /><br />The list of active roots is maintained by the just-in-time (JIT) compiler and common language runtime, and is made accessible to the garbage collector's algorithm. <br /><br />Implementation <br /><br />Garbage collection in .NET is done using tracing collection; specifically, the CLR implements a mark/compact collector. This method consists of two phases as described below. <br />Phase 1: Mark <br /><br />Find memory that can be reclaimed. <br />When the garbage collector starts running, it makes the assumption that all objects in the heap are garbage. In other words, it assumes that none of the application's roots refer to any objects in the heap. <br /><br />The following steps are included in phase one: <br />1. The GC identifies live object references or application roots. <br />2. It starts walking the roots and building a graph of all objects reachable from the roots. <br />3. If the GC attempts to add an object already present in the graph, then it stops walking down that path. This serves two purposes. 
First, it helps performance significantly, since it doesn't walk through a set of objects more than once. Second, it prevents infinite loops should you have any circular linked lists of objects. Thus cycles are handled properly. <br /><br />Once all the roots have been checked, the garbage collector's graph contains the set of all objects that are somehow reachable from the application's roots; any objects that are not in the graph are not accessible by the application, and are therefore considered garbage. <br /><br />Finalization <br /><br />The .NET Framework's garbage collection implicitly keeps track of the lifetime of the objects that an application creates, but falls short when it comes to the unmanaged resources (i.e. a file, a window or a network connection) that objects encapsulate. <br /><br />The unmanaged resources must be explicitly released once the application has finished using them. The .NET Framework provides the Object.Finalize method, a method that the garbage collector runs on the object to clean up its unmanaged resources, prior to reclaiming the memory used up by the object. Since the Finalize method does nothing by default, it must be overridden if explicit cleanup is required. <br /><br />It would not be surprising if you consider Finalize just another name for destructors in C++. Though both have been assigned the responsibility of freeing the resources used by objects, they have very different semantics. In C++, destructors are executed immediately when the object goes out of scope, whereas a Finalize method is called once garbage collection gets around to cleaning up an object. <br /><br />The potential existence of finalizers complicates the job of garbage collection in .NET by adding some extra steps before freeing an object. <br /><br />Whenever a new object with a Finalize method is allocated on the heap, a pointer to the object is placed in an internal data structure called the finalization queue. 
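The finalization machinery just described can be sketched in a few lines of C# (illustrative only; the class name is made up, and forcing a collection like this is for demonstration, not production code):

```csharp
using System;

class Finalizable
{
    // A C# destructor compiles to a Finalize override; because the type
    // has one, each instance is tracked in the finalization queue.
    ~Finalizable()
    {
        Console.WriteLine("Finalize ran on the finalizer thread");
    }
}

class Program
{
    static void Main()
    {
        new Finalizable();               // no reference kept: eligible for collection
        GC.Collect();                    // mark phase finds it unreachable;
                                         // its pointer moves to the freachable queue
        GC.WaitForPendingFinalizers();   // block until the finalizer thread drains it
    }
}
```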
When an object is not reachable, the garbage collector considers the object garbage. The garbage collector scans the finalization queue looking for pointers to these objects. When a pointer is found, the pointer is removed from the finalization queue and appended to another internal data structure called the freachable queue, making the object no longer a part of the garbage. At this point, the garbage collector has finished identifying garbage. The garbage collector compacts the reclaimable memory, and a special runtime thread empties the freachable queue, executing each object's Finalize method. <br /><br />The next time the garbage collector is invoked, it sees that the finalized objects are truly garbage, and the memory for those objects is then simply freed. <br /><br />Thus when an object requires finalization, it dies, then lives (resurrects) and finally dies again. It is recommended to avoid using a Finalize method unless required. Finalize methods increase memory pressure by not letting the memory and the resources used by the object be released until at least two garbage collections have occurred. Since you have no control over the order in which Finalize methods are executed, they may lead to unpredictable results. <br /><br />Weak references <br /><br />Weak references are a means of performance enhancement, used to reduce the pressure placed on the managed heap by large objects. <br /><br />When a root points to an object, it's called a strong reference to the object, and the object cannot be collected because the application's code can reach the object. <br /><br />When an object has a weak reference to it, it basically means that if there is a memory requirement and the garbage collector runs, the object can be collected; and when the application later attempts to access the object, the access will fail. On the other hand, to access a weakly referenced object, the application must obtain a strong reference to the object. 
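This weak-reference behavior can be sketched in C#. The sketch below is illustrative (names invented); note that in a debug build the JIT may keep locals alive longer, so the outcome of the forced collection is not guaranteed.

```csharp
using System;

class WeakRefDemo
{
    static void Main()
    {
        var buffer = new byte[1024 * 1024];
        // Short weak reference; passing trackResurrection: true to the
        // two-argument constructor would create a long weak reference.
        var weak = new WeakReference(buffer);

        buffer = null;   // drop the strong reference
        GC.Collect();    // demo only: force a collection

        // Reading Target yields a strong reference again, or null if
        // the object was already collected.
        object recovered = weak.Target;
        Console.WriteLine(recovered != null
            ? "Object still alive"
            : "Object was collected");
    }
}
```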
If the application obtains this strong reference before the garbage collector collects the object, then the GC cannot collect the object because a strong reference to the object exists. <br /><br />The managed heap contains two internal data structures whose sole purpose is to manage weak references: the short weak reference table and the long weak reference table. <br /><br />Weak references are of two types: <br /><br />1. A short weak reference doesn't track resurrection -- i.e. the object which has a short weak reference to itself is collected immediately without running its finalization method. <br />2. A long weak reference tracks resurrection -- i.e. the garbage collector collects the object pointed to by the long weak reference table only after determining that the object's storage is reclaimable. If the object has a Finalize method, that method has already been called and the object was not resurrected. <br /><br />These two tables simply contain pointers to objects allocated within the managed heap. Initially, both tables are empty. When you create a WeakReference object, an object is not allocated from the managed heap. Instead, an empty slot in one of the weak reference tables is located; short weak references use the short weak reference table and long weak references use the long weak reference table. <br /><br />Consider an example of what happens when the garbage collector runs. The diagrams (Figure 1 & 2) below show the state of all the internal data structures before and after the GC runs. <br /> <br />Now here's what happens when a garbage collection (GC) runs: <br />1. The garbage collector builds a graph of all the reachable objects. In the above example, the graph will include objects B, C, E, G. <br />2. The garbage collector scans the short weak reference table. 
If a pointer in the table refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the slot in the short weak reference table is set to null. In the above example, the slot of object D is set to null since it is not a part of the graph. <br />3. The garbage collector scans the finalization queue. If a pointer in the queue refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the pointer is moved from the finalization queue to the freachable queue. At this point, the object is added to the graph, since the object is now considered reachable. In the above example, though objects A, D, F are not included in the graph, they are treated as reachable objects because they are part of the finalization queue. The finalization queue thus gets emptied. <br />4. The garbage collector scans the long weak reference table. If a pointer in the table refers to an object that is not part of the graph (which now contains the objects pointed to by entries in the freachable queue), then the pointer identifies an unreachable object and the slot is set to null. Since both the objects C and F are a part of the graph (of the previous step), none of them are set to null in the long reference table. <br />5. The garbage collector compacts the memory, squeezing out the holes left by the unreachable objects. In the above example, object H is the only object that gets removed from the heap and its memory is reclaimed. <br /> <br />Generations <br /><br />Since garbage collection cannot complete without stopping the entire program, it can cause arbitrarily long pauses at arbitrary times during the execution of the program. Garbage collection pauses can also prevent programs from responding to events quickly enough to satisfy the requirements of real-time systems. <br /><br />One feature of the garbage collector that exists purely to improve performance is called generations. 
A generational garbage collector takes into account two facts that have been empirically observed in most programs in a variety of languages: <br />1. Newly created objects tend to have short lives. <br />2. The older an object is, the longer it will survive. <br /><br />Generational collectors group objects by age and collect younger objects more often than older objects. When initialized, the managed heap contains no objects. All new objects added to the heap can be said to be in generation 0, until the heap gets filled up, which invokes garbage collection. As most objects are short-lived, only a small percentage of young objects are likely to survive their first collection. Once an object survives the first garbage collection, it gets promoted to generation 1. Newer objects created after the GC can then be said to be in generation 0. The garbage collector gets invoked next only when the sub-heap of generation 0 gets filled up. All objects in generation 1 that survive get compacted and promoted to generation 2. All survivors in generation 0 also get compacted and promoted to generation 1. Generation 0 then contains no objects, but all newer objects after the GC go into generation 0. <br /><br />Thus, as objects "mature" (survive multiple garbage collections) in their current generation, they are moved to the next older generation. Generation 2 is the maximum generation supported by the runtime's garbage collector. When future collections occur, any surviving objects currently in generation 2 simply stay in generation 2. <br />Therefore, dividing the heap into generations of objects and collecting and compacting younger generation objects improves the efficiency of the basic underlying garbage collection algorithm by reclaiming a significant amount of space from the heap, and also being faster than if the collector had examined the objects in all generations. 
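The promotion behavior described above can be observed through the GC class. This is an illustrative sketch (forcing collections is for demonstration only); an object that survives a collection typically moves up one generation, up to GC.MaxGeneration.

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // newly allocated: generation 0

        GC.Collect();                             // obj survives (still referenced)
        Console.WriteLine(GC.GetGeneration(obj)); // typically promoted to generation 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically promoted to generation 2

        Console.WriteLine(GC.MaxGeneration);      // 2: the oldest generation
    }
}
```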
<br /><br />A garbage collector that can perform generational collections, each of which is guaranteed (or at least very likely) to require less than a certain maximum amount of time, can help make a runtime suitable for real-time environments and also prevent pauses that are noticeable to the user. <br /><br />Myths related to garbage collection <br /><br /><br />GC is necessarily slower than manual memory management. <br /><br />Counter explanation: Not necessarily. Modern garbage collectors appear to run as quickly as manual storage allocators (malloc/free or new/delete). Garbage collection probably will not run as quickly as a customized memory allocator designed for use in a specific program. On the other hand, the extra code required to make manual memory management work properly (for example, explicit reference counting) is often more expensive than a garbage collector would be. <br /><br />GC will necessarily make my program pause. <br /><br />Counter explanation: Since garbage collectors usually stop the entire program while seeking and collecting garbage objects, they cause pauses long enough to be noticed by the users. But with the advent of modern optimization techniques, these noticeable pauses can be largely eliminated. <br /><br />Manual memory management won't cause pauses. <br /><br />Counter explanation: Manual memory management does not guarantee performance. It may cause pauses for considerable periods either on allocation or deallocation. <br /><br />Programs with GC are huge and bloated; GC isn't suitable for small programs or systems. <br /><br />Counter explanation: Though using garbage collection is advantageous in complex systems, there is no reason for garbage collection to introduce any significant overhead at any scale. <br /><br />I've heard that GC uses twice as much memory. <br /><br />Counter explanation: This may be true of primitive GCs, but it is not generally true of .NET garbage collection. 
The data structures used for GC need be no larger than those for manual memory management. <br /><br /> <br />42. Is it true that objects don't always get destroyed immediately when the last reference goes away?<br /><br />Yes. The garbage collector offers no guarantees about the time when an object will be destroyed and its memory reclaimed. <br /> <br />43. Why doesn't the .NET runtime offer deterministic destruction? <br /><br />Because of the garbage collection algorithm. The .NET garbage collector works by periodically running through a list of all the objects that are currently being referenced by an application. All the objects that it doesn't find during this search are ready to be destroyed and the memory reclaimed. The implication of this algorithm is that the runtime doesn't get notified immediately when the final reference on an object goes away - it only finds out during the next 'sweep' of the heap.<br /><br />Furthermore, this type of algorithm works best by performing the garbage collection sweep as rarely as possible. Normally heap exhaustion is the trigger for a collection sweep.<br /> <br />44. Is the lack of deterministic destruction in .NET a problem? <br /><br />It's certainly an issue that affects component design. If you have objects that maintain expensive or scarce resources (e.g. database locks), you need to provide some way to tell the object to release the resource when it is done. Microsoft recommends that you provide a method called Dispose() for this purpose. However, this causes problems for distributed objects - in a distributed system who calls the Dispose() method? Some form of reference-counting or ownership-management mechanism is needed to handle distributed objects - unfortunately the runtime offers no help with this. <br /> <br />45. Should I implement Finalize on my class? Should I implement IDisposable? <br /><br />This issue is a little more complex than it first appears. 
There are really two categories of class that require deterministic destruction - the first category manipulates unmanaged types directly, whereas the second category manipulates managed types that require deterministic destruction. An example of the first category is a class with an IntPtr member representing an OS file handle. An example of the second category is a class with a System.IO.FileStream member.<br /><br />For the first category, it makes sense to implement IDisposable and override Finalize. This allows the object user to 'do the right thing' by calling Dispose, but also provides a fallback of freeing the unmanaged resource in the Finalizer, should the calling code fail in its duty. However, this logic does not apply to the second category of class, with only managed resources. In this case, implementing Finalize is pointless, as managed member objects cannot be accessed in the Finalizer. This is because there is no guarantee about the ordering of Finalizer execution. So only the Dispose method should be implemented. (If you think about it, it doesn't really make sense to call Dispose on member objects from a Finalizer anyway, as the member object's Finalizer will do the required cleanup.)<br /><br />Note that some developers argue that implementing a Finalizer is always a bad idea, as it hides a bug in your code (i.e. the lack of a Dispose call). A less radical approach is to implement Finalize but include a Debug.Assert at the start, thus signalling the problem in developer builds but allowing the cleanup to occur in release builds.<br /> <br /><br />46. Do I have any control over the garbage collection algorithm? <br /><br />A little. For example, the System.GC class exposes a Collect method, which forces the garbage collector to collect all unreferenced objects immediately.<br /><br />Also there is a gcConcurrent setting that can be specified via the application configuration file. 
This specifies whether or not the garbage collector performs some of its collection activities on a separate thread. The setting only applies on multi-processor machines, and defaults to true.<br /> <br />47. How can I find out what the garbage collector is doing? <br /><br />Lots of interesting statistics are exported from the .NET runtime via the '.NET CLR xxx' performance counters. Use Performance Monitor to view them.<br /> <br />48. What is the lapsed listener problem? <br /><br />The lapsed listener problem is one of the primary causes of leaks in .NET applications. It occurs when a subscriber (or 'listener') signs up for a publisher's event, but fails to unsubscribe. The failure to unsubscribe means that the publisher maintains a reference to the subscriber as long as the publisher is alive. For some publishers, this may be the duration of the application.<br /><br />This situation causes two problems. The obvious problem is the leakage of the subscriber object. The other problem is the performance degradation due to the publisher sending redundant notifications to 'zombie' subscribers. <br /><br />There are at least a couple of solutions to the problem. The simplest is to make sure the subscriber is unsubscribed from the publisher, typically by adding an Unsubscribe() method to the subscriber. <br /> <br />49. What is the difference between Finalize and Dispose (Garbage collection) ? <br /><br />Class instances often encapsulate control over resources that are not managed by the runtime, such as window handles (HWND), database connections, and so on. Therefore, you should provide both an explicit and an implicit way to free those resources. Provide implicit control by implementing the protected Finalize Method on an object (destructor syntax in C# and the Managed Extensions for C++). The garbage collector calls this method at some point after there are no longer any valid references to the object. 
In some cases, you might want to provide programmers using an object with the ability to explicitly release these external resources before the garbage collector frees the object. If an external resource is scarce or expensive, better performance can be achieved if the programmer explicitly releases resources when they are no longer being used. To provide explicit control, implement the Dispose method provided by the IDisposable Interface. The consumer of the object should call this method when it is done using the object. <br /><br />Dispose can be called even if other references to the object are alive. Note that even when you provide explicit control by way of Dispose, you should provide implicit cleanup using the Finalize method. Finalize provides a backup to prevent resources from permanently leaking if the programmer fails to call Dispose.<br /> <br />50. What is Reflection in .NET? Namespace? How will you load an assembly which is not referenced by current assembly? <br /><br />All .NET compilers produce metadata about the types defined in the modules they produce. This metadata is packaged along with the module (modules in turn are packaged together in assemblies), and can be accessed by a mechanism called reflection. The System.Reflection namespace contains classes that can be used to interrogate the types for a module/assembly.<br /><br />Using reflection to access .NET metadata is very similar to using ITypeLib/ITypeInfo to access type library data in COM, and it is used for similar purposes - e.g. determining data type sizes for marshaling data across context/process/machine boundaries.<br /><br />Reflection can also be used to dynamically invoke methods (see System.Type.InvokeMember), or even create types dynamically at run-time (see System.Reflection.Emit.TypeBuilder).<br /><br />Reflection generally means that a program can gain knowledge about its own structure. 
With .NET, Reflection describes the ability - depending on security settings - to discover the metadata of a .NET application and of the data types and functions contained therein.<br /><br />At runtime, an application can therefore determine its own functionality. For example, an application could be developed which can be extended with functionality by adding the respective assemblies - without changing the main program.<br /><br />51. When do I need to use GC.KeepAlive? <br /><br />It's very unintuitive, but the runtime can decide that an object is garbage much sooner than you expect. More specifically, an object can become garbage while a method is executing on the object, which is contrary to most developers' expectations. <br /><br />Example:<br /><br /> using System;<br /> using System.Runtime.InteropServices;<br /> class Win32<br /> {<br /> [DllImport("kernel32.dll")] <br /> public static extern IntPtr CreateEvent( IntPtr lpEventAttributes, <br /> bool bManualReset, bool bInitialState, string lpName);<br /> [DllImport("kernel32.dll", SetLastError=true)] <br /> public static extern bool CloseHandle(IntPtr hObject);<br /> [DllImport("kernel32.dll")] <br /> public static extern bool SetEvent(IntPtr hEvent);<br /> }<br /> class EventUser<br /> {<br /> public EventUser() <br /> { <br /> hEvent = Win32.CreateEvent( IntPtr.Zero, false, false, null ); <br /> }<br /> <br /> ~EventUser() <br /> { <br /> Win32.CloseHandle( hEvent ); <br /> Console.WriteLine("EventUser finalized");<br /> }<br /> public void UseEvent() <br /> { <br /> UseEventInStatic( this.hEvent ); <br /> }<br /> static void UseEventInStatic( IntPtr hEvent )<br /> {<br /> //GC.Collect();<br /> bool bSuccess = Win32.SetEvent( hEvent );<br /> Console.WriteLine( "SetEvent " + (bSuccess ? 
"succeeded" : "FAILED!") );<br /> }<br /> IntPtr hEvent;<br /> }<br /> class App<br /> {<br /> static void Main(string[] args)<br /> {<br /> EventUser eventUser = new EventUser();<br /> eventUser.UseEvent();<br /> }<br /> }<br />If you run this code, it'll probably work fine, and you'll get the following output:<br /> SetEvent succeeded<br /> EventDemo finalized<br />However, if you uncomment the GC.Collect() call in the UseEventInStatic() method, you'll get this output:<br /><br /> EventDemo finalized<br /> SetEvent FAILED!<br />(Note that you need to use a release build to reproduce this problem.)<br /><br />So what's happening here? Well, at the point where UseEvent() calls UseEventInStatic(), a copy is taken of the hEvent field, and there are no further references to the EventUser object anywhere in the code. So as far as the runtime is concerned, the EventUser object is garbage and can be collected. Normally of course the collection won't happen immediately, so you'll get away with it, but sooner or later a collection will occur at the wrong time, and your app will fail. <br /><br />A solution to this problem is to add a call to GC.KeepAlive(this) to the end of the UseEvent method <br /> <br />52. Explain how the objects are created and destroyed ? or Explain object life time? <br /><br />An instance of a class, an object, is created by using the New keyword. Initialization tasks often must be performed on new objects before they are used. Common initialization tasks include opening files, connecting to databases, and reading values of registry keys. Microsoft Visual Basic 2005 controls the initialization of new objects using procedures called constructors (special methods that allow control over initialization).<br /><br />After an object leaves scope, it is released by the common language runtime (CLR). Visual Basic 2005 controls the release of system resources using procedures called destructors. 
Together, constructors and destructors support the creation of robust and predictable class libraries.<br /> <br />Sub New and Sub Finalize<br /> <br />The Sub New and Sub Finalize procedures in Visual Basic 2005 initialize and destroy objects; they replace the Class_Initialize and Class_Terminate methods used in Visual Basic 6.0 and earlier versions. Unlike Class_Initialize, the Sub New constructor can run only once when a class is created. It cannot be called explicitly anywhere other than in the first line of code of another constructor from either the same class or from a derived class. Furthermore, the code in the Sub New method always runs before any other code in a class. Visual Basic 2005 implicitly creates a Sub New constructor at run time if you do not explicitly define a Sub New procedure for a class.<br /><br />Before releasing objects, the CLR automatically calls the Finalize method for objects that define a Sub Finalize procedure. The Finalize method can contain code that needs to execute just before an object is destroyed, such as code for closing files and saving state information. There is a slight performance penalty for executing Sub Finalize, so you should define a Sub Finalize method only when you need to release objects explicitly.<br /><br />The garbage collector in the CLR does not (and cannot) dispose of unmanaged objects, objects that the operating system executes directly, outside the CLR environment. This is because different unmanaged objects must be disposed of in different ways. That information is not directly associated with the unmanaged object; it must be found in the documentation for the object. A class that uses unmanaged objects must dispose of them in its Finalize method.<br /><br />The Finalize destructor is a protected method that can be called only from the class it belongs to, or from derived classes. 
The system calls Finalize automatically when an object is destroyed, so you should not explicitly call Finalize from outside of a derived class's Finalize implementation. <br /><br />Unlike Class_Terminate, which executes as soon as an object is set to nothing, there is usually a delay between when an object loses scope and when Visual Basic 2005 calls the Finalize destructor. Visual Basic 2005 allows for a second kind of destructor, Dispose, which can be explicitly called at any time to immediately release resources.<br /><br />A Finalize destructor should not throw exceptions, because they cannot be handled by the application and can cause the application to terminate.<br /> <br />IDisposable Interface<br /> <br />Class instances often control resources not managed by the CLR, such as Windows handles and database connections. These resources must be disposed of in the Finalize method of the class, so that they will be released when the object is destroyed by the garbage collector. However, the garbage collector destroys objects only when the CLR requires more free memory. This means that the resources may not be released until long after the object goes out of scope.<br />To supplement garbage collection, your classes can provide a mechanism to actively manage system resources if they implement the IDisposable interface. IDisposable has one method, Dispose, which clients should call when they finish using an object. You can use the Dispose method to immediately release resources and perform tasks such as closing files and database connections. Unlike the Finalize destructor, the Dispose method is not called automatically. Clients of a class must explicitly call Dispose when they want to immediately release resources. 
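The same explicit/implicit cleanup pair can be sketched in C#. This is a minimal illustration of the standard Dispose pattern; the class and its members are invented for the example and stand in for real resource fields:

```csharp
using System;

// Sketch of the Dispose/Finalize pattern for a class that owns resources.
class ResourceHolder : IDisposable
{
    private bool disposed = false;

    public bool IsDisposed => disposed;

    // Explicit cleanup: called by clients when they are done with the object.
    public void Dispose()
    {
        Dispose(true);
        // Cleanup has run, so the finalizer is redundant; suppress it.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // Called via Dispose(): safe to release managed members here.
            }
            // Always release unmanaged resources (handles, etc.) here.
            disposed = true;
        }
    }

    // Implicit cleanup: fallback if the caller forgets to call Dispose().
    ~ResourceHolder()
    {
        Dispose(false);
    }
}

class Demo
{
    static void Main()
    {
        // 'using' guarantees Dispose is called even if an exception occurs.
        using (var holder = new ResourceHolder())
        {
            // ... work with the resource ...
        }
        Console.WriteLine("holder disposed");
    }
}
```

Note that calling Dispose twice is harmless here, because the `disposed` flag makes the cleanup idempotent.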
<br />Implementing IDisposable<br />A class that implements the IDisposable interface should include these sections of code:<br /> <br />• A field for keeping track of whether the object has been disposed: <br /> Protected disposed As Boolean = False<br />• An overload of Dispose that frees the class's resources. This method should be called by the Dispose and Finalize methods of the class: <br /> Protected Overridable Sub Dispose(ByVal disposing As Boolean)<br />  If Not Me.disposed Then<br />   If disposing Then<br />    ' Insert code to free managed resources.<br />   End If<br />   ' Insert code to free unmanaged resources.<br />  End If<br />  Me.disposed = True<br /> End Sub<br />• An implementation of Dispose that contains only the following code: <br /> Public Sub Dispose() Implements IDisposable.Dispose<br />  Dispose(True)<br />  GC.SuppressFinalize(Me)<br /> End Sub<br />• An override of the Finalize method that contains only the following code: <br /> Protected Overrides Sub Finalize()<br />  Dispose(False)<br />  MyBase.Finalize()<br /> End Sub<br />Deriving from a Class that Implements IDisposable<br /> <br />A class that derives from a base class that implements the IDisposable interface does not need to override any of the base methods unless it uses additional resources that need to be disposed. In that situation, the derived class should override the base class's Dispose(disposing) method to dispose of the derived class's resources. This override must call the base class's Dispose(disposing) method.<br /> <br /> Protected Overrides Sub Dispose(ByVal disposing As Boolean)<br />  If Not Me.disposed Then<br />   If disposing Then<br />    ' Insert code to free managed resources.<br />   End If<br />   ' Insert code to free unmanaged resources.<br />  End If<br />  MyBase.Dispose(disposing)<br /> End Sub<br />A derived class should not override the base class's Dispose and Finalize methods. 
When those methods are called from an instance of the derived class, the base class's implementation of those methods call the derived class's override of the Dispose(disposing) method.<br /> <br />Visualization<br /> <br />The following diagram shows which methods are inherited and which methods are overridden in the derived class.<br /> <br /> <br /> <br />When this Dispose Finalize pattern is followed, the resources of the derived class and base class are correctly disposed. The following diagram shows which methods get called when the classes are disposed and finalized.<br /> <br /> <br /> <br />Garbage Collection and the Finalize Destructor<br /> <br />The .NET Framework uses the reference-tracing garbage collection system to periodically release unused resources. Visual Basic 6.0 and earlier versions used a different system called reference counting to manage resources. Although both systems perform the same function automatically, there are a few important differences.<br /><br />The CLR periodically destroys objects when the system determines that such objects are no longer needed. Objects are released more quickly when system resources are in short supply, and less frequently otherwise. The delay between when an object loses scope and when the CLR releases it means that, unlike with objects in Visual Basic 6.0 and earlier versions, you cannot determine exactly when the object will be destroyed. In such a situation, objects are said to have non-deterministic lifetime. In most cases, non-deterministic lifetime does not change how you write applications, as long as you remember that the Finalize destructor may not immediately execute when an object loses scope.<br /><br />Another difference between the garbage-collection systems involves the use of Nothing. To take advantage of reference counting in Visual Basic 6.0 and earlier versions, programmers sometimes assigned Nothing to object variables to release the references those variables held. 
If the variable held the last reference to the object, the object's resources were released immediately. In Visual Basic 2005, while there may be cases in which this procedure is still valuable, performing it never causes the referenced object to release its resources immediately. To release resources immediately, use the object's Dispose method, if available. The only time you should set a variable to Nothing is when its lifetime is long relative to the time the garbage collector takes to detect orphaned objects.<br /> <br /> <br />53. What are the different types of JIT ? <br /><br />In Microsoft .NET there are three types of JIT compilers:<br />• Pre-JIT. Pre-JIT compiles the complete MSIL code into native code in a single compilation cycle. This is done at the time of deployment of the application. <br />• Econo-JIT. Econo-JIT compiles only those methods that are called at runtime. However, these compiled methods are removed when they are not required. <br />• Normal-JIT. Normal-JIT compiles only those methods that are called at runtime. These methods are compiled the first time they are called, and then they are stored in cache. When the same methods are called again, the compiled code from cache is used for execution. <br /> <br />54. What are Value types and Reference types ? <br /><br />Reference Type:<br />Reference types are allocated on the managed CLR heap, just like object types.<br />A reference type is a data type that is stored as a reference to the value's location. The value of a reference type is the location of the sequence of bits that represent the type's data. Reference types can be self-describing types, pointer types, or interface types.<br /> <br />Value Type:<br /><br />Value types are allocated on the stack just like primitive types in VBScript, VB6 and C/C++. 
Value types are not instantiated using new and go out of scope when the function they are defined within returns.<br />Value types in the CLR are defined as types that derive from System.ValueType.<br /> <br />A value type is a data type that fully describes a value by specifying the sequence of bits that constitutes the value's representation. Type information for a value type instance is not stored with the instance at run time, but it is available in metadata. Value type instances can be treated as objects using boxing. <br /> <br />55. What is the concept of Boxing and Unboxing ? <br /><br />Boxing:<br /><br />The conversion of a value type instance to an object, which implies that the instance will carry full type information at run time and will be allocated in the heap. The Microsoft intermediate language (MSIL) instruction set's box instruction converts a value type to an object by making a copy of the value type and embedding it in a newly allocated object.<br /> <br />Un-Boxing:<br /><br />The conversion of an object instance to a value type. <br /> Dim x As Integer<br /> Dim y As Object<br /> x = 10<br /> ' Boxing process<br /> y = x<br /> ' Unboxing process<br /> x = y<br /> <br /><br />56. What is the difference between constants, readonly and static ? <br /><br />Constants : The value is fixed at compile time and can never be changed. <br />Read-only : The value will be initialized only once, at the declaration or in the constructor of the class, and cannot be changed afterwards. <br />Static : The member belongs to the type itself rather than to any instance; its value is shared by all instances and can be changed at any time.<br /><br /> <br />57. What's difference between VB.NET and C# ? <br /><br />Advantages of VB.NET <br /> <br />• Has support for optional parameters, which makes COM interoperability much easier. <br />• With Option Strict Off, late binding is supported. Legacy VB functionalities can be used by using the Microsoft.VisualBasic namespace. <br />• Has the WITH construct which is not in C#. <br />• The VB.NET part of Visual Studio .NET compiles your code in the background. 
While this is considered an advantage for small projects, people creating very large projects have found that the IDE slows down considerably as the project gets larger. <br />Advantages of C# <br /> <br />• XML documentation is generated from source code; this has now been incorporated into VB.NET in Whidbey. <br />• Operator overloading, which is not in current VB.NET but is being introduced in Whidbey. <br />• The using statement, which makes unmanaged resource disposal simple. <br />• Access to unsafe code. This allows pointer arithmetic etc., and can improve performance in some situations. However, it is not to be used lightly, as a lot of the normal safety of C# is lost (as the name implies). This is the major difference: you can write unsafe code in C# and not in VB.NET. <br /> <br /> <br />58. What's difference between System exceptions and Application exceptions?<br /><br />The difference between ApplicationException and SystemException is that SystemExceptions are thrown by the CLR, and ApplicationExceptions are thrown by applications. For example, SqlException inherits from SystemException. SystemException is included here to make this list complete; there should not be any circumstances where one would need to inherit from it.<br /> <br />System.Exception<br /> <br />If extending the base exception class with additional members, inherit from System.Exception. The name of such inherited classes should end with “Exception.” <br /><br />Properly, System.Exception should have been declared as abstract with a recommendation to be inherited only by concrete exception classes. Doing so would have avoided the question of which to use right from the start, and would have also helped MS with versioning. However, since things are what they are, in this situation it makes sense to inherit and extend System.Exception. The rule to tend towards a flat hierarchy wins. 
Inherit from System.Exception when creating a class which adds members.<br /> <br />System.ApplicationException<br /> <br />Applications often provide their own custom exception types (e.g. CmsException, SharePointException) which do not add properties or methods, but simply subclass System.ApplicationException with a new name. If more detailed exceptions are required for the application, they will then inherit from this base class. This makes it convenient to throw application-specific exceptions that can be identified distinctly inside a try-catch block. When doing the same for your own applications, inherit from System.ApplicationException. <br />For both System.Exception and System.ApplicationException, the rules are consistent in this way: they work with the Framework as it is, not as we would like it to be. Since ApplicationException exists and is in common use, it would be a deviation to derive application-specific exceptions directly from System.Exception. So while you still won't catch ApplicationException directly, you should certainly catch its descendants.<br /> <br />59. What is the namespace used for loading assemblies at run time, and name the methods? <br /><br />System.Reflection <br /> <br />60. What is Code Access Security? <br /><br />CAS is the part of the .NET security model that determines whether or not a piece of code is allowed to run, and what resources it can use when it is running. For example, it is CAS that will prevent a .NET web applet from formatting your hard disk. <br /><br />61. How does CAS work? <br /><br />The CAS security policy revolves around two key concepts - code groups and permissions. 
Each .NET assembly is a member of a particular code group, and each code group is granted the permissions specified in a named permission set.<br /><br />For example, using the default security policy, a control downloaded from a web site belongs to the 'Zone - Internet' code group, which adheres to the permissions defined by the 'Internet' named permission set. (Naturally the 'Internet' named permission set represents a very restrictive range of permissions.)<br /> <br />62. Who defines the CAS code groups? <br /><br />Microsoft defines some default ones, but you can modify these and even create your own. To see the code groups defined on your system, run 'caspol -lg' from the command-line. On my system it looks like this: <br /> <br /> Level = Machine<br /> Code Groups:<br /> 1. All code: Nothing<br /> 1.1. Zone - MyComputer: FullTrust<br /> 1.1.1. Honor SkipVerification requests: SkipVerification<br /> 1.2. Zone - Intranet: LocalIntranet<br /> 1.3. Zone - Internet: Internet<br /> 1.4. Zone - Untrusted: Nothing<br /> 1.5. Zone - Trusted: Internet<br /> 1.6. StrongName -<br /> 0024000004800000940000000602000000240000525341310004000003<br /> 000000CFCB3291AA715FE99D40D49040336F9056D7886FED46775BC7BB5430BA4444FEF8348EBD06<br /> F962F39776AE4DC3B7B04A7FE6F49F25F740423EBF2C0B89698D8D08AC48D69CED0FC8F83B465E08<br /> 07AC11EC1DCC7D054E807A43336DDE408A5393A48556123272CEEEE72F1660B71927D38561AABF5C<br /> AC1DF1734633C602F8F2D5: Everything<br />Note the hierarchy of code groups - the top of the hierarchy is the most general ('All code'), which is then sub-divided into several groups, each of which in turn can be sub-divided. Also note that (somewhat counter-intuitively) a sub-group can be associated with a more permissive permission set than its parent. <br /> <br />63. How do I define my own code group? <br /><br />Use caspol. 
For example, suppose you trust code from www.mydomain.com and you want it to have full access to your system, but you want to keep the default restrictions for all other internet sites. To achieve this, you would add a new code group as a sub-group of the 'Zone - Internet' group, like this:<br /> caspol -ag 1.3 -site www.mydomain.com FullTrust <br />Now if you run caspol -lg you will see that the new group has been added as group 1.3.1:<br /> ...<br /> 1.3. Zone - Internet: Internet<br /> 1.3.1. Site - www.mydomain.com: FullTrust<br /> ...<br />Note that the numeric label (1.3.1) is just a caspol invention to make the code groups easy to manipulate from the command-line. The underlying runtime never sees it.kalithhttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-26014801457228296962009-10-20T03:46:00.000-07:002009-10-20T03:48:19.680-07:00Difference between shadowing and overriddenShadowing :- This is a VB.NET concept by which you can provide a new implementation for the base class member without overriding the member. You can shadow a base class member in the derived class by using the keyword Shadows. The method signature, access level and return type of the shadowed member can be completely different from the base class member.<br /><br />Hiding :- This is a C# concept by which you can provide a new implementation for the base class member without overriding the member. You can hide a base class member in the derived class by using the keyword new. The method signature, access level and return type of the hidden member have to be the same as the base class member.<br /><br />Comparing the three :-<br /><br />1) The access level, signature and the return type can only be changed when you are shadowing with VB.NET. 
Hiding and overriding demand that these parameters remain the same.<br /><br />2) The difference shows up when you call the derived class object through a base class variable. In case of overriding, although you assign a derived class object to a base class variable, it will call the derived class function. In case of shadowing or hiding, the base class function will be called.kalithhttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-7491324270315765212009-10-20T02:55:00.000-07:002009-10-20T02:57:25.711-07:00Difference between Constant and Readonly<strong> Constant : </strong>Constants are the ones whose value remains the same at all times.<br />A constant is used if you want to define something at compile time.<br /><br /><strong>Read only:</strong><br /><br />If you don't know the value at compile time but can determine it at runtime, you can use readonly.<br /><br />Read-only fields are not allowed to be altered by the user, but can be altered by the object itself. Readonly fields are generally initialized in the constructor of the class.<br /><br />For example, the path of the application exe is read-only. If you copy the exe to some other directory the path will change, but it is still read-only.kalithhttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-73207793222809478502009-10-20T02:46:00.000-07:002009-10-20T02:47:03.197-07:00Interface Vs DelegateInterfaces allow us to extend an object's functionality; an interface is a contract between the interface and the object that implements it. It is used to simulate multiple inheritance in C#. On the other hand, we have delegates... They're just type-safe callbacks or function pointers. They allow us to notify that something has happened (Events). 
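For instance, a minimal C# sketch contrasting the two (the type and delegate names here are invented for the example):

```csharp
using System;

// An interface is a contract: any implementing type must provide Area().
interface IShape
{
    double Area();
}

class Square : IShape
{
    private readonly double side;
    public Square(double side) { this.side = side; }
    public double Area() => side * side;
}

class Program
{
    // A delegate is a type-safe function pointer, used here as a callback.
    delegate void Notify(string message);

    static void Main()
    {
        IShape shape = new Square(3);
        Console.WriteLine(shape.Area());          // prints 9

        Notify notify = msg => Console.WriteLine("Got: " + msg);
        notify("area computed");                  // the callback fires
    }
}
```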
As you can see, they are different, and so are their uses.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com1tag:blogger.com,1999:blog-661597201796672556.post-66937308202642907362009-10-19T04:05:00.000-07:002009-10-19T04:10:21.792-07:00Interview Questions on UMLWhat is UML? <br /><br />UML is the Unified Modeling Language. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a system. It allows you to create a blueprint of all aspects of the system before actually physically implementing it.<br /><br />What is modeling? What are the advantages of creating a model? <br /><br />Modeling is a proven and well-accepted engineering technique. A model is a simplification of reality; it is a blueprint of the actual system that needs to be built. A model helps visualize the system, specify its structure and behavior, provide templates for constructing the system, and document the system.<br /><br />What are the different views that are considered when building an object-oriented software system? <br /><br />Normally there are five views. Use Case view - exposes the requirements of the system. Design view - captures the vocabulary of the system. Process view - models the distribution of the system's processes and threads. Implementation view - addresses the physical implementation of the system. Deployment view - focuses on modeling the components required for deploying the system.<br />What are diagrams? Diagrams are graphical representations of a set of elements, most often shown as things and their relationships.<br /><br />What are the three major types of modeling used? <br />The three major types of modeling are structural, behavioral, and architectural.<br /><br />Mention the different kinds of modeling diagrams used. <br />There are nine commonly used modeling diagrams:
Use Case Diagram, Class Diagram, Object Diagram, Sequence Diagram, Statechart Diagram, Collaboration Diagram, Activity Diagram, Component Diagram, Deployment Diagram.<br /><br />What is Architecture? <br />Architecture covers not only the structural and behavioral aspects of a software system but also the software's usage, functionality, performance, reuse, and economic and technology constraints.<br /><br />What is SDLC? <br />SDLC is the Software Development Life Cycle. The SDLC of a system includes processes that are use-case driven, architecture-centric, and iterative and incremental. This life cycle is divided into phases. A phase is the time span between two milestones. The milestones are Inception, Elaboration, Construction, and Transition. Process workflows that evolve through these phases are Business Modeling, Requirement Gathering, Analysis and Design, Implementation, Testing, and Deployment. Supporting workflows are Configuration and Change Management, and Project Management.<br /><br />What are Relationships? <br />There are different kinds of relationships: dependencies, generalization, and association. A dependency is a relationship between two entities such that a change in the specification of one thing may affect the other. Most commonly, a dependency shows that one class uses another class as an argument in the signature of an operation. Generalization is the relationship in the class-subclass scenario; it is shown when one entity inherits from another. Associations are structural relationships, for example: a room has walls, a person works for a company. Aggregation is a type of association in which there is a has-a relationship. That is, if there are two classes, Room and Wall, the relationship between them is an association, further defined as an aggregation because a room has walls.<br /><br />How are the diagrams divided?
<br />The nine diagrams are divided into static diagrams and dynamic diagrams.<br />Static diagrams (also called structural diagrams): Class Diagram, Object Diagram, Component Diagram, Deployment Diagram.<br />Dynamic diagrams (also called behavioral diagrams): Use Case Diagram, Sequence Diagram, Collaboration Diagram, Activity Diagram, Statechart Diagram.<br /><br />What are Messages? A message is the specification of a communication; when a message is passed, it results in an action, which is in turn an executable statement.<br /><br />What is a Use Case? <br />A use case specifies the behavior of a system or a part of a system. Use cases are used to capture the behavior that needs to be developed. A use case involves the interaction of actors and the system.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-82261423150402977162009-09-28T21:52:00.000-07:002009-09-28T21:53:28.063-07:00What is observableCollection in WPFIn many cases the data that you work with is a collection of objects. For example, a common scenario in data binding is to use an ItemsControl such as a ListBox, ListView, or TreeView to display a collection of records.<br />You can enumerate over any collection that implements the IEnumerable interface. However, to set up dynamic bindings so that insertions or deletions in the collection update the UI automatically, the collection must implement the INotifyCollectionChanged interface. This interface exposes the CollectionChanged event, an event that should be raised whenever the underlying collection changes.<br />WPF provides the ObservableCollection(T) class, which is a built-in implementation of a data collection that implements the INotifyCollectionChanged interface.<br />Before implementing your own collection, consider using ObservableCollection(T) or one of the existing collection classes, such as List(T), Collection(T), and BindingList(T), among many others.
If you have an advanced scenario and want to implement your own collection, consider using IList, which provides a non-generic collection of objects that can be individually accessed by index. Implementing IList provides the best performance with the data binding engine.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-23659080494681852282009-09-17T23:08:00.000-07:002009-09-17T23:09:37.097-07:00What is SaaSSoftware as a service (or SaaS) is a way of delivering applications over the Internet as a service. Instead of installing and maintaining software, you simply access it via the Internet, freeing yourself from complex software and hardware management.<br /><br />SaaS applications are sometimes called Web-based software, on-demand software, or hosted software. Whatever the name, SaaS applications run on a SaaS provider’s servers. The provider manages access to the application, including security, availability, and performance.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0tag:blogger.com,1999:blog-661597201796672556.post-3587247969775653312009-09-12T04:53:00.001-07:002009-09-12T04:53:54.130-07:00What are the three kinds of routed events in WPF and how do they differ?Routed events in WPF are direct, tunneling, and bubbling. A direct event can be raised only by the element in which it originated. A bubbling event is raised first by the element in which it originates and then by each successive container up the visual tree. A tunneling event is raised first by the topmost container in the visual tree and then down through each successive container until it is finally raised by the element in which it originated. Tunneling and bubbling events allow elements of the user interface to respond to events raised by their contained elements.kalithttp://www.blogger.com/profile/06926361249526760368noreply@blogger.com0
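The tunneling/bubbling pairing can be sketched with a minimal WPF window. This is an illustration only, assuming a WPF project; RoutingDemoWindow is a hypothetical name, and a TextBlock is used as the leaf element because Button marks left-button MouseDown as handled internally, which would stop the bubbling route:

```csharp
using System.Windows;
using System.Windows.Controls;

// For a mouse press on the TextBlock, handlers fire in this order:
//   tunneling: Window -> StackPanel -> TextBlock   (PreviewMouseDown)
//   bubbling:  TextBlock -> StackPanel -> Window   (MouseDown)
public class RoutingDemoWindow : Window
{
    public RoutingDemoWindow()
    {
        var leaf = new TextBlock { Text = "Press here" };
        var panel = new StackPanel();
        panel.Children.Add(leaf);
        Content = panel;

        // Tunneling: Preview* events travel top-down through the visual tree.
        PreviewMouseDown += (s, e) => Title += " [Window tunnel]";
        panel.PreviewMouseDown += (s, e) => Title += " [Panel tunnel]";
        leaf.PreviewMouseDown += (s, e) => Title += " [Leaf tunnel]";

        // Bubbling: the paired events travel bottom-up from the source element.
        leaf.MouseDown += (s, e) => Title += " [Leaf bubble]";
        panel.MouseDown += (s, e) => Title += " [Panel bubble]";
        MouseDown += (s, e) => Title += " [Window bubble]";
    }
}
```

Setting e.Handled = true in any tunneling handler suppresses the remaining handlers along the route, which is why WPF input events come in Preview/non-Preview pairs.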