Thursday, February 28, 2008

The Cost of Function Use In A Where Clause

A common mistake in writing SQL statements is to wrap a WHERE clause's filtering columns inside a function. SQL Server performance is very dependent on SQL Server's ability to properly use indexes when it retrieves records. The lifeblood of proper query plan creation is using very deterministic filtering logic on indexed columns (some refer to these as sargable WHERE clauses).

The biggest problem we see is a filtering column wrapped inside a function: the query optimizer does not see the column, and if an index exists on that column, the index likely will not be used (of course, as with anything else in our hi-tech world, this is not always the case).

Take this example:

In the CadencedEventCustomer table, we have a nonclustered index on FullName1. The simple query below should use this index and should perform an index seek to find the qualifying records instantly (Seek = FAST).

SELECT *
FROM CadencedEventCustomer
WHERE IsNull(FullName1, '') = 'Ed Jones'


Our intention here (albeit incorrect) is to correctly handle records in which the FullName1 field is null. Our concern is that Null records will be missed (more on this later).

Since we are wrapping the FullName1 column in the IsNull function, the query optimizer doesn't see it and the index is not used.

Here is the query plan produced by SQL Server's query engine:

|--Index Scan(OBJECT:([CIPv3].[dbo].[CadencedEventCustomer].[nci_CadencedEventCustomer_FullName1]), WHERE:(isnull([CIPv3].[dbo].[CadencedEventCustomer].[FullName1],'')='Ed Jones'))

We see an Index Scan on the FullName1 index (the index exists, but a seek cannot be performed because the function wraps the indexed column). A scan means the entire index is read from beginning to end until SQL Server has found all the records satisfying the condition. On a table with millions of records, this obviously takes a long time.

The subtree cost of this query is 10.58. Subtree cost is a relative number indicating how expensive a query is for SQL Server to execute. The higher the relative cost, the bigger the hit on performance (CPU, IO, RAM, etc.). In my experience, anything in the double-digit range is getting pretty spendy and will not perform well in a production environment. In this case, the cost is attributed to the optimizer's inability to use an index, which means SQL Server has to scan all the data to return the query result...OUCH.

If we remove the IsNull() function our query looks like this:

SELECT *
FROM CadencedEventCustomer
WHERE FullName1 = 'Ed Jones'


This is a much cleaner WHERE clause and SQL Server will now see the indexed column and will use the index correctly:

|--Index Seek(OBJECT:([CIPv3].[dbo].[CadencedEventCustomer].[nci_CadencedEventCustomer_FullName1]), SEEK:([CIPv3].[dbo].[CadencedEventCustomer].[FullName1]='Ed Jones') ORDERED FORWARD)

Notice we now have a SEEK (fast). The query subtree cost is now 0.0093!

Keep in mind, this behavior is true for ALL functions (not just IsNull). We should try very hard not to wrap filtering WHERE clause columns within a function.
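This scan-versus-seek effect is easy to reproduce with any cost-based optimizer, not just SQL Server's. Here is a small sketch using Python's built-in sqlite3 module (the table and index names mirror the example above but are otherwise arbitrary); wrapping the column in ifnull() - SQLite's equivalent of IsNull() - forces a scan, while the bare comparison gets an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CadencedEventCustomer (FullName1 TEXT)")
con.execute("CREATE INDEX nci_FullName1 ON CadencedEventCustomer (FullName1)")

# Column wrapped in a function: the optimizer cannot use the index (scan).
wrapped = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM CadencedEventCustomer "
    "WHERE ifnull(FullName1, '') = 'Ed Jones'").fetchone()[3]

# Bare column comparison: the index is used (search/seek).
bare = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM CadencedEventCustomer "
    "WHERE FullName1 = 'Ed Jones'").fetchone()[3]

print(wrapped)  # e.g. SCAN CadencedEventCustomer ...
print(bare)     # e.g. SEARCH ... USING COVERING INDEX nci_FullName1 (FullName1=?)
```

The exact plan wording varies by SQLite version, but the SCAN versus SEARCH distinction is stable and is the same distinction the SQL Server plans above show.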

Now....as for IsNull. The reason we are using IsNull() in the first place is to return records where the value is null in the column (I recommend not allowing nulls in the database, but that topic is for another article).

However, if we look closely at this WHERE clause, we can immediately see that the net result of our query (in this case) is identical whether we use the IsNull function or not.

SELECT *
FROM CadencedEventCustomer
WHERE IsNull(FullName1, '') = 'Ed Jones'


What this query says is: return all records where FullName1 = 'Ed Jones', and if the FullName1 column holds a NULL value, assume that value is '' (an empty string) instead of NULL.

So....Null values will be replaced with '' which still does NOT Equal 'Ed Jones'. The net effect is that records with NULL will NOT be returned.

So if we change our query to this:

SELECT *
FROM CadencedEventCustomer
WHERE FullName1 = 'Ed Jones'


We would get exactly the same result.

So the question is, when DO we need IsNull?

Well....if the IsNull function replaces NULLs with a value that satisfies the comparison and returns TRUE, then the IsNull function is more useful (though still not needed, as we'll see below).

Take this example:

SELECT *
FROM CadencedEventCustomer
WHERE IsNull(FullName1, 'Ed Jones') = 'Ed Jones'


In this case, if we find Null values in the FullName1 column, we will replace them with 'Ed Jones'. Notice that 'Ed Jones' satisfies our comparison and will return true. So in a sense, this logic is forcing the Query to return Null values instead of filtering them out. In this case, we can argue that the IsNull function is useful.

Now, having said that, although the IsNull() logic is now useful, the problem remains that the column is hidden from the optimizer and we lose the relief the index provides.

We can rewrite this SQL statement to get the same result AND to still get index relief for the optimizer.

Essentially what we want is this:

Select All CadencedEventCustomers Where the FullName1 is 'Ed Jones' OR where the FullName1 is a null value.

So the correct rewrite to still obtain index relief is as follows:

SELECT *
FROM CadencedEventCustomer
WHERE (FullName1 = 'Ed Jones') OR (FullName1 IS NULL)


Now we get the best of both worlds. We get the correct logic producing the results we want, and SQL Server's query engine can see the column, so we get index relief.
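If you want to convince yourself the two predicates really are equivalent, a throwaway table makes it easy to check. A quick sketch in Python's sqlite3 (ifnull() plays the role of T-SQL's IsNull(); the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CadencedEventCustomer (FullName1 TEXT)")
con.executemany("INSERT INTO CadencedEventCustomer VALUES (?)",
                [("Ed Jones",), ("Al Smith",), (None,)])

# Original predicate: NULLs are replaced with a value that satisfies the test.
with_fn = con.execute(
    "SELECT count(*) FROM CadencedEventCustomer "
    "WHERE ifnull(FullName1, 'Ed Jones') = 'Ed Jones'").fetchone()[0]

# Sargable rewrite: match the value OR the NULL explicitly.
rewritten = con.execute(
    "SELECT count(*) FROM CadencedEventCustomer "
    "WHERE FullName1 = 'Ed Jones' OR FullName1 IS NULL").fetchone()[0]

print(with_fn, rewritten)  # both count the 'Ed Jones' row and the NULL row: 2 2
```

Both queries return the same two rows - the literal match and the NULL - which is exactly the behavior the IsNull() version was trying to achieve.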

A few examples other than IsNull()

I often see the use of Substring() and other string functions used in WHERE Clauses.

Take the following:

WHERE SUBSTRING(MasterDealer.Name, 1, 4) = 'Ford'

This is effectively hiding the MasterDealer.Name from the optimizer and is negating the use of any index on the Name column.

The fix

WHERE MasterDealer.Name LIKE 'Ford%'

This produces the exact same result, yet also provides index relief.
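A quick way to sanity-check this kind of rewrite is to run both predicates against sample data and compare the results. A sketch in Python's sqlite3 (substr() is SQLite's SUBSTRING; the dealer names are invented, and note that SQLite's LIKE is case-insensitive by default, much like a typical SQL Server collation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MasterDealer (Name TEXT)")
con.executemany("INSERT INTO MasterDealer VALUES (?)",
                [("Ford of Tacoma",), ("Ford",), ("Chevy Town",)])

# Function-wrapped version: hides Name from the optimizer.
via_substr = con.execute(
    "SELECT Name FROM MasterDealer WHERE substr(Name, 1, 4) = 'Ford' "
    "ORDER BY Name").fetchall()

# Sargable version: a prefix LIKE can use an index on Name.
via_like = con.execute(
    "SELECT Name FROM MasterDealer WHERE Name LIKE 'Ford%' "
    "ORDER BY Name").fetchall()

print(via_substr == via_like)  # True: same two Ford rows either way
```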

Another example working with dates

WHERE DateDiff(mm,PlacedOnQueue,GetDate()) >= 30

Again, this is a common technique used when working with DateTime data. This is, again, hiding the PlacedOnQueue column from the optimizer.

The fix

WHERE PlacedOnQueue < DateAdd(mm,-30,GetDate())

Virtually the same results (DATEDIFF counts crossed month boundaries, so rows right at the boundary can differ slightly), and this time we get index relief.
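The general pattern is: compute the cutoff once on the constant side, then compare the bare column against it. A sketch with Python's sqlite3 (the Queue table and ix_PlacedOnQueue index are hypothetical) showing that the wrapped column forces a scan while a precomputed cutoff allows an index range search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Queue (PlacedOnQueue TEXT)")
con.execute("CREATE INDEX ix_PlacedOnQueue ON Queue (PlacedOnQueue)")

# Column wrapped in a date calculation: the index cannot be used.
wrapped = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Queue "
    "WHERE julianday('now') - julianday(PlacedOnQueue) >= 900").fetchone()[3]

# Compute the cutoff once, then compare the bare column: index range search.
cutoff = con.execute("SELECT date('now', '-30 months')").fetchone()[0]
rewritten = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Queue WHERE PlacedOnQueue < ?",
    (cutoff,)).fetchone()[3]

print(wrapped)    # e.g. SCAN Queue ...
print(rewritten)  # e.g. SEARCH Queue USING COVERING INDEX ix_PlacedOnQueue (PlacedOnQueue<?)
```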

Keep in mind, this behavior holds true for all functions wrapping columns in your WHERE clause. So any time you feel the need to wrap a WHERE clause filtering column within a function, try really hard to find a way to rewrite the statement without the function. If you need help rewriting these types of queries, please feel free to email me.

By Gregory Jackson

Preventing Multiple instances of the same application

As many programmers have come across, sometimes you want to prevent your app from being launched more than once, and this is desirable for many reasons. For example, when your app writes to or reads from files, you may want only one running copy to prevent file locking and possible corruption. If what you're writing is only useful as a single instance, you can check for and prevent multiple instances at launch. The code snippet below details how to perform this check when your app starts.

using System;
using System.Diagnostics;
using System.Windows.Forms;

public partial class MainDesktop : Form
{
    public MainDesktop()
    {
        // If exactly one process has our name, we are the only instance.
        if (Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName).Length == 1)
        {
            // App initialization code here
        }
        else
        {
            this.Close();
        }
    }
}

Put this check in your app's startup code (here, the main form's constructor). When the app runs, if it finds another process with the same name, the new instance closes instantly.
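The same single-instance idea can be expressed in other languages by claiming an exclusive OS resource at startup instead of scanning process names (which can misfire if two executables happen to share a name). A rough Python sketch that uses a fixed localhost port as the lock - the port number here is an arbitrary choice for illustration:

```python
import socket

def acquire_single_instance_lock(port=47200):
    """Return a bound socket if we are the first instance, or None if
    another instance already holds the lock port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))  # fails with EADDRINUSE if already held
        return s
    except OSError:
        s.close()
        return None

lock = acquire_single_instance_lock()
if lock is None:
    print("Another instance is already running; exiting.")
else:
    print("First instance; starting up.")
    # ... run the app; keep `lock` alive for the process lifetime ...
```

As long as the first instance keeps the socket open, every later launch fails the bind and can exit immediately; the lock is released automatically when the process dies.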

Tuesday, February 26, 2008

Query Analyzer Tips and Tricks

If you've worked with Microsoft SQL Server at all, you've run across SQL Query Analyzer. Of course, this tool is essential for running ad hoc queries and executing SQL statements. But have you ever taken the time to really investigate its capabilities? The SQL Server developers built a lot of functionality into Query Analyzer, not all of which is obvious to the casual user. In this article, I'll offer you ten bits of Query Analyzer that you might not have looked at already.

1. Getting Database Object Information

You probably know that SQL Server stores metadata about all of the objects in a database. The system tables contain a wealth of information about column names, data types, identity seeds, and so on. But did you know that you can get that information with a single keystroke via Query Analyzer? Highlight the object name in any SQL statement and press Alt+F1. Figure 1 shows the results for a SQL Server table. If you don't have anything highlighted, Alt+F1 will give you information about the database itself. For an equally neat trick, highlight a SQL keyword and press Shift+F1; you'll go straight to the Books Online page that describes that keyword.

Object information in Query Analyzer

2. Executing Part of a SQL Statement

Sometimes it's convenient to execute only part of a complex SQL statement that you're developing. For example, you might be working on a stored procedure that batches many statements together, or a query that contains a subquery. No problem! Just highlight the part that you want to execute, and press F5 (or press the Execute Query toolbar button if you're a mouse sort of person). Query Analyzer will only execute the highlighted text. You can parse the highlighted text without executing it by pressing Ctrl+F5.

3. Alter Objects Fast with the Object Browser

I can't possibly be the only one who's ever needed to fix an existing stored procedure. Fortunately, using Query Analyzer means never having to write the ALTER PROC statement by hand. First, display the Object Browser by pressing F8, if it's not already showing. Expand the tree to show the object that you're interested in (this tip works with any object, not just stored procedures). Right-click on the object and select Script Object to New Window As -> Alter. Query Analyzer will open a new query window, and build the necessary ALTER PROC statement for you.

If you've never looked at them, take a few minutes to explore the Object Browser shortcut menus. You can send object scripts to the clipboard or to a file, execute stored procs, or build CREATE or DROP SQL statements, among other options.

4. Drag and Drop from the Object Browser

The Object Browser is also a drag and drop source. Drag a table to a query window, and you get the table's name. Drag the Columns node under a table, and you get all of the columns from the table, separated by commas. Drag a single column and you get the column name. Judicious use of this technique can make fast work of building things like INSERT INTO statements, as well as avoid spelling errors.

5. Templates are Your Friend

Query Analyzer supports templates - boilerplate files containing SQL statements - to help you build tricky SQL more quickly. These files have the extension .tql, and they're stored in folders underneath the Templates\SQL Query Analyzer directory if you've done a full client tools install. Use Ctrl+Shift+Ins or Edit -> Insert Template to open a template into the current query window. Many templates contain parameters, which are delimited by angle brackets. Press Ctrl+Shift+M and you'll get the Replace Template Parameters dialog box, as shown in Figure 2.

Inserting parameters in a template

Templates are just plain text files. If you find yourself needing to frequently insert some complex chunk of SQL, you can create and save your own template file to make life simpler in the future.

6. What's Up With This Query?

If you're faced with a query that has a performance issue, Query Analyzer is your first stop for gathering information. You have easy access to four different ways of looking at query performance:

  • Ctrl+L will show you the estimated execution plan before you run the query.
  • Ctrl+K will show you the actual execution plan after you run the query.
  • Ctrl+Shift+T will open the trace pane, showing you the trace events on the server as you run the query.
  • Ctrl+Shift+S will show you client-side statistics after you run the query.

There are items on the Query menu for each of these panes, just in case your brain is already too full to hold new shortcut keys.

7. Customize Connection Properties

When you fire up Query Analyzer and connect it to a database, it sets a few defaults - for instance, ANSI style null handling. If you'd like to tweak the connection properties for your own server environment, select Connection Properties from the Edit menu to open the dialog box shown in Figure 3, and click to your heart's content.

Editing connection properties

8. Get Back Wide Results

If you run a query that returns a column with lots of data, you'll discover that Query Analyzer truncates the results after the first 256 characters. This is especially annoying when you're working with FOR XML queries, which can return thousands of characters in an XML document format. Fortunately, this limitation is easy to modify. Select Options from the Tools menu, and navigate to the Results tab of the Options dialog box. Enter a new value for the Maximum Characters Per Column property and you're all set. While you're there, take the time to click around the rest of the Options dialog box; Query Analyzer lets you tweak quite a few things to match your own preferences.

9. Query Debugging

Query Analyzer contains a complete debugger. You can single step through stored procedures, inspect the value of local and global variables, supply values for parameters, inspect the callstack when multiple procedures are nested, and so on. The easiest way to get started is to right-click a stored procedure in the Object Browser window and select Debug. Figure 4 shows the debugging environment in action. This can be a real lifesaver when you're trying to figure out what's wrong with a complex stored procedure.

Debugging a stored procedure

10. The Tools Menu the Way You Want It

The Query Analyzer Tools menu is extensible. Select Customize from the Tools menu to open the Customize dialog box, and switch to the Tools tab. Here you can enter new values to appear on the menu, and specify the executable file to call for each entry. You can also pass the server name, database name, or user name to the external utility.

And Yes, There's More

I hope this article has convinced you that Query Analyzer can be a powerful tool for creating, executing, and debugging SQL statements. There are plenty of features that I didn't cram into this top ten. For example, the Query Analyzer editor supports bookmarks, forcing case, adjusting tabs, and more. If this is a product that you use on a frequent basis, you owe it to yourself to keep digging and find out how to use it more efficiently. The time spent learning will be amply repaid.

About the Author

Mike Gunderloy is the author of over 20 books and numerous articles on development topics, and the lead developer for Larkware. Check out his latest book, Coder to Developer from Sybex. When he's not writing code, Mike putters in the garden on his farm in eastern Washington state.

Monday, February 25, 2008

To Subreport or Not to Subreport (Crystal Report)

I don't know whether there is a general rule about when to use subreports. Many times you will see people recommend against the use of subreports because they slow reports down, and in general they are right, but there are some cases where using a subreport actually speeds a report up. Some cases when subreports make sense:

1-To avoid a geometric increase in records caused by joining multiple tables with one-to-many or many-to-many relationships. If two tables are joined by an ID field and the first table has two or more records per ID, then joining to a second table with two or more records per ID repeats the second table's records for each record in the first. Instead, you can use a subreport for the second table, place it in a group header for a group based on ID, link the subreport to the main report by ID, and the display will be limited to one set of records from the second table.

2-To show a limited set of information in the report header. For example, to show description fields for a parameter based on a numerical value.

3-To link data from two different datasources that otherwise cannot be linked, because CR doesn't support the use of multiple datasources in the visual linking expert, e.g., data from an Excel database and from an Oracle database.

4-To link data from two tables where one field is of a different datatype or length and therefore must be converted by formula in order to be linked to a second table. Again, you can't do this in the visual linking expert, but you can link a formula and a field in the subreport links.

5-Subreports also can help in complex reporting situations: charting on print-time formulas, creating ranks of data without sorting the displayed data, grouping on the number of records on a page, limiting the vertical display of multiple-column data, etc.

6-To display a summary report that displays summary data from many different analyses based on different datasets or based on different selection criteria.

7-On-demand subreports can be used to display detail on an as-needed basis.

8-To display data from two tables where there are no fields that can be linked.

9-To allow the sequential display of all records from one table, followed by all records from another.

10-To create multiple copies of the same data, e.g., on different pages.

11-To allow drill-down on each of several values in the same row.

This is all I can think of for now. Subreports are sometimes very useful, and even essential in many situations. Besides the (usually) slower speed caused by subreports, another downside is that to use subreport data in calculations in the main report, you have to use shared variables, which can become quite complex.

Thursday, February 21, 2008

INF: Cross-Database Ownership Chaining Behaviour Changes in SQL Server 2000 Service Pack 3

SUMMARY

Microsoft SQL Server 2000 Service Pack 3 (SP3) provides a new security-related setup option for configuring cross-database ownership chaining: Enable cross-database ownership chaining for all databases. This article discusses the cross-database ownership chaining behavior in SQL Server 2000 SP3. With this new option, you can control whether or not to permit cross-database ownership chaining. By default, the option is disabled; Microsoft recommends keeping the default because it makes your database server more secure.

MORE INFORMATION

Ownership Chaining

By default, all database objects have owners. When an object such as a view, a stored procedure, or a user-defined function references another object (for example, a table), an ownership chain is established. When the same user owns the source object (the view, stored procedure, or user-defined function) and all target objects (underlying tables, views, or other objects), the ownership chain is said to be unbroken. When the ownership chain is unbroken, SQL Server checks permissions on the source object but not on the target objects.

Cross-Database Ownership Chaining

Cross-database ownership chaining occurs when the source object depends on objects in another database. A cross-database ownership chain works in the same way as ownership chaining in a database, except that an unbroken ownership chain is based on all the object owners being mapped to the same login account. Therefore, in a cross-database ownership chain, if the source object in the source database and the target objects in the target databases are owned by the same login account, SQL Server does not check permissions on the target objects. If you have more than one database used by an application, and that application calls stored procedures or views in a database that is based on objects in another database, then cross-database ownership chaining is used. Applications that rely on cross-database ownership chaining may generate permission denied errors if cross-database ownership chaining option is turned off.

Risks Associated with Cross-Database Ownership Chaining

Microsoft recommends that you disable the cross-database ownership chaining option because of the actions that highly-privileged users can perform:

Database owners and members of the db_ddladmin or db_owner database roles can create objects that are owned by other users. These objects can potentially target objects in other databases. This means that if you enable cross-database ownership chaining, you must fully trust these users with data in all databases. To identify the members of the db_ddladmin and db_owner roles in the current database, execute the following Transact-SQL commands:

exec sp_helprolemember 'db_ddladmin'
exec sp_helprolemember 'db_owner'

Users with CREATE DATABASE permission can create new databases and attach existing databases. If cross-database ownership chaining is enabled, these users can access objects in other databases from newly created or attached databases.

Even though Microsoft recommends that you turn off cross-database ownership chaining for maximum security, there are some environments where you can fully trust your highly-privileged users; therefore, you can enable cross database ownership for specific databases to meet the requirements of specific applications.

How to Configure Cross-Database Ownership Chaining During Setup

In Microsoft SQL Server Service Pack 3 (SP3) Setup, a new dialog box has been added to allow the system administrator to control whether or not cross database ownership chaining will be permitted. If you select Enable cross-database ownership chaining for all databases during the SQL Server 2000 SP3 setup, you are enabling this option across all databases. This was the default behavior before SQL Server 2000 SP3. Regardless of the option that you select during setup, you can later modify server and database support for cross-database ownership chaining either by using Transact-SQL commands or from SQL Server Enterprise Manager.

How to Configure Cross-Database Ownership Chaining After Installation

To change the cross-database ownership chaining configuration, use the new options in the sp_configure and the sp_dboption stored procedures. Note If you detach and then reattach a database, you must re-enable cross-database ownership chaining. Configuring cross-database ownership chaining by using Transact-SQL commands:

Configure cross-database ownership chaining support for the instance of SQL Server with the new Cross DB Ownership Chaining option for sp_configure. When this option is set to 0, you can control cross-database ownership chaining at the database level by using sp_dboption. When this option is set to 1, you cannot restrict cross-database ownership chaining. This is the pre-SQL Server 2000 SP3 behavior. If you change this option, include the RECONFIGURE option to reconfigure the instance without having to restart it. For example, use the following command to allow cross-database ownership chaining in all databases:

EXEC sp_configure 'Cross DB Ownership Chaining', '1'
RECONFIGURE

Configure cross-database ownership chaining at the database level with the new db chaining option for sp_dboption. When this option is set to false, the database cannot participate in cross-database ownership chaining as either the source or the target database. When this option is set to true, the database can participate in a cross-database ownership chain. By default, this option is false for all user databases after you apply SQL Server 2000 SP3. The following command enables cross-database ownership chaining for the Northwind database:

EXEC sp_dboption 'Northwind', 'db chaining', 'true'

The effects of sp_dboption are manifested only when the sp_configure Cross DB Ownership Chaining option is set to 0. Also, to enable cross-database ownership chaining at the database level, you must enable this option on both the source and the target database.

Configuring cross-database ownership chaining by using SQL Enterprise Manager: To set this option for all databases, follow these steps:

1. Right-click <server>.

2. Click to select Properties.

3. Click Security.

4. Click to select Allow cross-database ownership chaining in the Ownership chaining section.

5. Click OK. You are prompted to stop and restart the SQL Server services.

6. Click OK.

To enable this option at the database level, follow these steps:

1. Right-click the <database>.

2. Click to select Properties.

3. Click Options.

4. Click to select Allow Cross Database Ownership Chaining in the Settings section.

Ownership Chains

When multiple database objects access each other sequentially, the sequence is known as a chain. Although such chains do not independently exist, when SQL Server 2005 traverses the links in a chain, SQL Server evaluates permissions on the constituent objects differently than it would if it were accessing the objects separately. These differences have important implications for managing security.

Ownership chaining enables managing access to multiple objects, such as multiple tables, by setting permissions on one object, such as a view. Ownership chaining also offers a slight performance advantage in scenarios that allow for skipping permission checks.

How Permissions Are Checked in a Chain

When an object is accessed through a chain, SQL Server first compares the owner of the object to the owner of the calling object. This is the previous link in the chain. If both objects have the same owner, permissions on the referenced object are not evaluated.

Example of Ownership Chaining

In the following illustration, the July2003 view is owned by Mary. She has granted Alex permission on the view; he has no other permissions on database objects in this instance. What happens when Alex selects the view?

Diagram of ownership chaining

  1. Alex executes SELECT * on the July2003 view. SQL Server checks permissions on the view and confirms that Alex has permission to select on it.
  2. The July2003 view requires information from the SalesXZ view. SQL Server checks the ownership of the SalesXZ view. Because this view has the same owner (Mary) as the view that calls it, permissions on SalesXZ are not checked. The required information is returned.
  3. The SalesXZ view requires information from the InvoicesXZ view. SQL Server checks the ownership of the InvoicesXZ view. Because this view has the same owner as the previous object, permissions on InvoicesXZ are not checked. The required information is returned. To this point, all items in the sequence have had one owner (Mary). This is known as an unbroken ownership chain.
  4. The InvoicesXZ view requires information from the AcctAgeXZ view. SQL Server checks the ownership of the AcctAgeXZ view. Because the owner of this view is different from the owner of the previous object (Sam, not Mary), full information about permissions on this view is retrieved. If the AcctAgeXZ view has permissions that allow access by Alex, information will be returned.
  5. The AcctAgeXZ view requires information from the ExpenseXZ table. SQL Server checks the ownership of the ExpenseXZ table. Because the owner of this table is different from the owner of the previous object (Joe, not Sam), full information about permissions on this table is retrieved. If the ExpenseXZ table has permissions that allow access by Alex, information is returned.
  6. When the July2003 view tries to retrieve information from the ProjectionsXZ table, the server first checks to see whether cross-database chaining is enabled between Database 1 and Database 2. If cross-database chaining is enabled, the server will check the ownership of the ProjectionsXZ table. Because this table has the same owner as the calling view (Mary), permissions on this table are not checked. The requested information is returned.
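Steps 1 through 5 of this walk can be modeled in a few lines of code. This is a hypothetical sketch (not SQL Server's implementation) of the rule the section describes - check permissions only where an object's owner differs from the owner of the previous link - using the names from the illustration; the specific grants are assumptions for the sketch:

```python
def walk_ownership_chain(chain, user):
    """chain is a list of (object_name, owner, users_granted_access) tuples,
    ordered from the calling object down. Returns the list of objects whose
    permissions were actually checked, and whether access succeeded."""
    checked = []
    prev_owner = None
    for name, owner, granted in chain:
        # Permissions are evaluated only at the first link and wherever
        # the owner changes from the previous link.
        if prev_owner is None or owner != prev_owner:
            checked.append(name)
            if user not in granted:
                return checked, False  # access denied at this link
        prev_owner = owner
    return checked, True

# The chain from the illustration (grants here are assumed for illustration).
chain = [
    ("July2003",   "Mary", {"Alex"}),
    ("SalesXZ",    "Mary", set()),
    ("InvoicesXZ", "Mary", set()),
    ("AcctAgeXZ",  "Sam",  {"Alex"}),
    ("ExpenseXZ",  "Joe",  {"Alex"}),
]
checked, ok = walk_ownership_chain(chain, "Alex")
print(checked, ok)  # ['July2003', 'AcctAgeXZ', 'ExpenseXZ'] True
```

Note that SalesXZ and InvoicesXZ never appear in the checked list even though Alex has no grants on them: the unbroken Mary-owned stretch of the chain skips those checks entirely, which is exactly why ownership chaining is both convenient and worth treating carefully.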

Cross-Database Ownership Chaining

SQL Server can be configured to allow ownership chaining between specific databases or across all databases inside a single instance of SQL Server. Cross-database ownership chaining is disabled by default, and should not be enabled unless it is specifically required.

Potential Threats

Ownership chaining is very useful in managing permissions on a database, but it does assume that object owners anticipate the full consequences of every decision to grant permission on a securable. In the previous illustration, Mary owns most of the underlying objects of the July2003 view. Because Mary has the right to make objects that she owns accessible to any other user, SQL Server behaves as though whenever Mary grants access to the first view in a chain she has made a conscious decision to share the views and table it references. In real life, this might not be a valid assumption. Production databases are far more complex than the one in the illustration, and the permissions that regulate access to them rarely map perfectly to the administrative structures of the organizations that use them.

You should understand that members of highly privileged database roles can use cross-database ownership chaining to access objects in databases external to their own. For example, if cross-database ownership chaining is enabled between database A and database B, a member of the db_owner fixed database role of either database can spoof her way into the other database. The process is simple: Diane (a member of db_owner in database A) creates user Stuart in database A. Stuart already exists as a user in database B. Diane then creates an object (owned by Stuart) in database A that calls any object owned by Stuart in database B. Because the calling and called objects have a common owner, permissions on the object in database B will not be checked when Diane accesses it through the object she has created.

More details can be found in the Cross Database Chaining topic in the MSDN Online Library.

Tuesday, February 19, 2008

HierarGrid - A hierarchical DataGrid that displays master-detail relations


Problem:

The ASP.NET DataGrid is a popular control that displays data in table form with editing, paging, and sorting capabilities.
However, it is only suitable for displaying a single DataTable - there is no support for parent-child relations.

Goal:

To create a control that provides the standard DataGrid functionality and at the same time can display parent-child relations using templates to display the child elements.

Procedure:

Create a custom control called HierarGrid that derives from the DataGrid and a custom DataGridColumn called HierarColumn.

The HierarGrid takes a DataSet that contains relations between the tables.

While iterating over the parent table, it checks the related tables for child rows; if one is found, it dynamically loads a template for the child row(s).

The template is rendered invisibly into the custom HierarColumn and when the user clicks the plus icon, the template content is copied via JavaScript into a newly created TableRow.


By Denis Bauer

Monday, February 18, 2008

AJAX Basics

When I first posted my Ajax links on this blog, some of my friends asked me to post more Ajax basics so that they could use the links more effectively. Today, while surfing the web, I found a very good article on Ajax basics, originally posted on 4GuysFromRolla.com by Scott Mitchell. I hope this helps.

You can find the download links of Ajax components, samples and other utilities on my previous blog post

http://www.dotnetglobe.com/2008/01/aspnet-ajax-useful-links.html

Introduction

Over the past several years web developers have started using JavaScript to make asynchronous postbacks to the web server that only transmit and receive the necessary data; these techniques are commonly referred to as AJAX. When properly implemented, AJAX-enabled web applications offer a highly interactive user interface whose responsiveness rivals that of desktop applications. Popular web applications like the social networking news site Digg and GMail are prime examples of AJAX techniques in action.

Since AJAX involves many disparate technologies at different layers in the networking stack, implementing AJAX without the use of an AJAX framework is difficult and error-prone. Fortunately, Microsoft has released a free AJAX framework for ASP.NET developers: Microsoft ASP.NET AJAX. This article is the first in a series of articles that examines the ASP.NET AJAX framework. This installment provides an overview of AJAX technologies and looks at getting started with Microsoft's framework. Future installments will focus on specific controls and scenarios. Read on to learn more!

A Brief History of Ajax

The client-server model is an architecture that involves two actors: a client and a server. The server passively waits for a request from a client and, upon receiving such a request, processes it and returns a reply. The client is responsible for initiating requests to the server, after which it waits for and then processes the data returned in the response. Web applications are classic examples of the client-server model. The client - a web browser, most often - sends a request to a web server for a particular resource. The resource may be static content like an HTML page or an image that the web server can simply return, or it may be dynamic content like an ASP.NET page that must first be processed on the web server before its generated markup can be sent back. Regardless, the interaction is the same: the client requests a particular resource, and the server returns it, be it the binary content of a JPG image or the HTML of a rendered ASP.NET page.

One drawback of the client-server model is latency. Clients must periodically communicate with the server to update it with the user's input, or to retrieve the latest data from the server. During these periods, the user must wait while the request/response lifecycle plays out. This delay is most clearly evidenced in ASP.NET applications when a postback occurs. Imagine an eCommerce website that lists products in a grid whose contents can be sorted and paged through. Stepping to the next page requires a postback to the server in order to retrieve the next page of products. Consequently, moving to the next page introduces a delay in the user experience that can take anywhere from less than a second to several seconds, depending on many factors (the user's Internet connection speed, network congestion, server load, database query duration, and so on).

The main culprit here is that the postback requires that all of the page's form fields be sent back to the server and that the entire web page's content be returned to the browser. This volume of exchanged data is overkill since all that is really needed by the client is information about the next page of products. AJAX mitigates these latency issues by using JavaScript to make asynchronous postbacks to the web server; these postbacks transmit and receive the minimum amount of data necessary to perform the requested operation. For a more thorough background of AJAX, be sure to read Jesse James Garrett's essay where he coined the term "Ajax": Ajax: A New Approach to Web Applications.

There are a number of AJAX frameworks available. Most ASP.NET control vendors offer commercial AJAX implementations, and there are many open source libraries as well. In early 2006 Microsoft released its own AJAX framework, Microsoft ASP.NET AJAX, which is the focus of this article series.

An Overview of Microsoft ASP.NET AJAX

Microsoft's ASP.NET AJAX framework was designed to work with ASP.NET 2.0 and future versions; it does not work with ASP.NET version 1.x applications. The ASP.NET AJAX framework will ship with Visual Studio 2008 and ASP.NET version 3.5. ASP.NET 2.0 developers, however, need to download and install the framework from Microsoft's website; the "Getting Started with Microsoft ASP.NET AJAX" section later in this article includes a discussion on installing ASP.NET AJAX in a 2.0 environment.

The ASP.NET AJAX framework consists of client-side and server-side logic. It includes a bevy of JavaScript libraries that simplify initiating an asynchronous postback and processing the response from the server. The client-side libraries also include classes that mimic the .NET Framework's core classes and data types. The server-side components include ASP.NET controls that, when added to a page, implement various AJAX techniques. One such example is the ScriptManager control, which adds references to the client-side script in the page so that the browser requesting the ASP.NET page downloads the appropriate JavaScript libraries as well. Consequently, you'll use the ScriptManager on any ASP.NET page where you want to utilize the ASP.NET AJAX framework.

In addition to the ScriptManager control, the ASP.NET AJAX framework includes a handful of other server-side controls, such as the UpdatePanel, the Timer, and the UpdateProgress controls. The UpdatePanel control allows you to define a portion of the page that will be updated by an asynchronous request. In short, it allows you to make partial postbacks rather than a full page postback. This improves the responsiveness of the page in two ways: first, when a partial postback occurs only the data relevant to that UpdatePanel is sent to the server, and only the corresponding data is returned; and, second, the partial page postback does not cause the entire page to be "re-drawn" by the browser, so there's no "flash" that is all too common when making full postbacks.

The UpdatePanel is one of the core pieces of the ASP.NET AJAX framework, and one which we will be examining later on in this article. Once an UpdatePanel has been added to a page, you can add the standard ASP.NET web controls - TextBoxes, Buttons, GridViews, DropDownLists, and so on - and they will automatically take advantage of AJAX techniques. For example, if you have a Button and a TextBox in an UpdatePanel and the Button is clicked, a partial postback will occur. The Button's Click event handler will be called on the server-side, as expected, and the value of the TextBox's Text property can be accessed as usual. Moreover, any other Web controls within the same UpdatePanel can have their properties read or assigned and they will be re-rendered and their output updated in the user's browser.

In addition to the base server-side controls (the ScriptManager, UpdatePanel, Timer, and so on), Microsoft offers an additional set of interactive controls via the AJAX Control Toolkit. This toolkit includes ratings controls, sliders, modal popup windows, and so forth.

Getting Started with Microsoft ASP.NET AJAX For ASP.NET 2.0 developers, the first step in working with Microsoft ASP.NET AJAX is to download the AJAX Extensions and, optionally, the AJAX Control Toolkit. (ASP.NET 3.5 developers will already have the ASP.NET AJAX framework installed.)

Note: This article only looks at working with the AJAX Extensions (the core of the framework) and leaves the Control Toolkit for a future installment.

To download the ASP.NET AJAX 1.0 framework, visit this page and click the Download button. The ASP.NET AJAX framework is packaged up as an MSI file. Once you've downloaded the MSI file to your computer, double-click it to install the framework. After downloading and installing the ASP.NET AJAX framework, start Visual Studio and choose to create a New Project. In the New Project dialog box you should see a new project type named "ASP.NET AJAX-Enabled Web Application."

Visual Studio includes a new Project Type named ASP.NET AJAX-Enabled Web Application.

Creating an ASP.NET AJAX-Enabled Web Application creates a new Web Application Project with the System.Web.Extensions assembly added as a reference. The System.Web.Extensions assembly contains the core client- and server-side pieces of Microsoft's ASP.NET AJAX framework. Also, the Toolbox includes an AJAX Extensions category with the core server-side AJAX controls.

Our First ASP.NET AJAX Example: Using the UpdatePanel

The UpdatePanel is useful in situations where you only want a portion of the page to postback rather than the entire page. Such a limited postback is called a partial postback, and is easy to implement using the UpdatePanel. As you know, many ASP.NET controls can cause postbacks: Button controls, when clicked; DropDownLists and CheckBoxes, when their AutoPostBack property is set to True; and so on. Under normal circumstances, when these controls cause a postback, the entire page is posted back. All form field values are sent from the browser to the server. The server then re-renders the entire page and returns the complete HTML, which is then redisplayed by the browser.

When these controls appear in an UpdatePanel, however, a partial page postback is initiated instead. Only the form fields in the UpdatePanel are sent to the server. The server then re-renders the page, but only sends back the markup for those controls in the UpdatePanel. The client-side script that initiated the partial postback receives the partial markup results from the server and seamlessly updates the display in the browser with the returned values. Consequently, the UpdatePanel improves the responsiveness of a page by reducing the amount of data exchanged between the client and the server and by "redrawing" only the portion of the screen that kicked off the partial page postback.

Let's take a look at the UpdatePanel in action. The following demo, which is downloadable at the end of this article, shows a simple example. The UpdatePanel in the demo includes only two controls: a Label and a Button. The Label Web control displays the text of a randomly selected joke. Clicking the Button loads a new randomly selected joke into the Label. If you are following along at your computer, start by adding a new ASP.NET page to the ASP.NET AJAX-Enabled Web Application we created back in the "Getting Started with Microsoft ASP.NET AJAX" section.

Whenever we use the ASP.NET AJAX framework in a page, we need to start by adding a ScriptManager control, so start by adding a ScriptManager to the page. Next, add an UpdatePanel to the page. Within that UpdatePanel, add a Label control and a Button control. After performing these steps, the declarative markup in your web page should look similar to the following:

<asp:ScriptManager ID="myScriptManager" runat="server">
</asp:ScriptManager>

<asp:UpdatePanel ID="JokeUpdatePanel" runat="server">
    <ContentTemplate>
        <asp:Label ID="JokeText" runat="server" Font-Italic="False"
                   Font-Names="Comic Sans MS" Font-Size="Large"></asp:Label>
        <br /><br />
        <asp:Button ID="NewJokeButton" runat="server" Text="Show Me a Random Joke!"
                    OnClick="NewJokeButton_Click" />
    </ContentTemplate>
</asp:UpdatePanel>

At this point, all that remains is to write the server-side code. When the page is first loaded we want to set the JokeText Label's Text property to a randomly selected joke; likewise, whenever the NewJokeButton is clicked, we want to refresh the Label's Text property with a new joke.

protected void Page_Load(object sender, EventArgs e)
{
    JokeText.Text = GetRandomJoke();
}

protected void NewJokeButton_Click(object sender, EventArgs e)
{
    JokeText.Text = GetRandomJoke();
}

private string GetRandomJoke()
{
    // Get a random number and pick one of five jokes
    Random r = new Random();
    switch (r.Next(5))
    {
        case 0:
            return "Why did the chicken cross the road? To get to the other side!!";
        case 1:
            return "How much do pirates pay for their earrings? A Buccaneer!";
        case 2:
            return "Why did the computer squeak? Because someone stepped on its mouse!";
        case 3:
            return "What is a golfer's favorite letter? Tee!";
        default:
            return "A child comes home from his first day at school. Mom asks, \"What did you learn today?\" \"Not enough,\" the kid replies, \"I have to go back tomorrow.\"";
    }
}

At this point we have a page that will utilize AJAX techniques to make a partial page postback when the Button in the UpdatePanel is clicked. Consequently, clicking the "Show Me a Random Joke!" button displays a new joke promptly without having the entire page refresh. Granted, this is an overly simple example since the page already is very lightweight, but this concept can be extended to more real-world scenarios (and will be, in future installments of this article series). For example, you might have a page that has several grids on it showing a plethora of data. You could place each grid in its own UpdatePanel. That way, when a user sorted or paged a grid, a partial postback would occur and the particular grid could be paged or sorted without requiring a full postback.

The takeaway here is that implementing AJAX techniques in an ASP.NET application using the ASP.NET AJAX framework is remarkably easy. The ScriptManager and UpdatePanel controls automatically handle all of the complexities involved with initiating the asynchronous postback and displaying the returned data.

Looking Forward...

This article only looked at a simple UpdatePanel example. In real-world scenarios, however, things aren't always as simple. For example, we might want to have some event external to the UpdatePanel trigger a partial postback. We've not yet looked at working directly with the client-side AJAX libraries; nor have we explored the wealth of controls in the AJAX Control Toolkit. These, and many more topics, will be covered in future installments. You can find the download links of Ajax components, samples and other utilities on my previous blog post: http://www.dotnetglobe.com/2008/01/aspnet-ajax-useful-links.html

Happy Programming!

By Scott Mitchell

Tuesday, February 12, 2008

.NET source code unveiled

In a surprise announcement within Scott Guthrie’s blog, Microsoft has promised to make the source code of the .NET 3.5 Framework libraries available under the Microsoft Reference Licence – essentially a “look but don’t touch” licence. You won’t be able to make use of the source code in your own applications, but it will mean that you can determine exactly how a class implements a particular feature.

It is planned that the release will begin with the .NET Base Class Libraries, including ASP.NET, Windows Forms, ADO.NET, XML and WPF. Later LINQ and WCF among others will be added. The source code will include comments and will be available as a single download that can be viewed in any text editor. Debugging support will also be available within VS 2008. The debugger will be able to dynamically download the .NET Framework symbols and source code and you will be able to set breakpoints and single step inside the .NET Framework code.

VS 2005 Intellisense in web.config files

I’ve seen a few questions about intellisense support for ASP.NET web.config files with VS 2005, and thought I’d blog a quick post to answer a few of them.

First the good news:

VS 2005 now supports intellisense for web.config files you add to your web project (yea!). It uses the new XML editor (which btw is much better in VS 2005), and has a built-in schema for all of the built-in .NET Framework settings:

Now one annoying gotcha:

There is one gotcha to be aware of, though, that can sometimes cause intellisense for the web.config file to stop working in the IDE. This happens when a default namespace is added to the root <configuration> element. For example, like so:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">

This doesn’t cause any runtime problems – but it does stop intellisense completion happening for the built-in .NET XML elements in the web.config file.

The bad news is that the built-in web admin tool (launched via the WebSite->ASP.NET Configuration menu item in VS 2005 and Visual Web Developer) always adds this xmlns namespace when it launches – so if you use this tool to manage users/roles you’ll end up having it added to your web.config file for you.

How to fix this gotcha:

To get intellisense back when you are editing the web.config file in the IDE, just delete the xmlns reference and have the root configuration element look like so:

<configuration>

Everything will then work fine again.

Friday, February 8, 2008

Programming with Google Maps API

The Google Maps API lets you embed Google Maps in your own web pages with JavaScript. The API provides a number of utilities for manipulating maps (just like on the http://maps.google.com web page) and adding content to the map through a variety of services, allowing you to create robust maps applications on your website.

How do I start?

  1. Sign up for a Google Maps API key.
  2. Read the Maps API Concepts.
  3. Check out some Maps examples.
  4. Read the Maps API Reference.

Google Maps API Examples

All of the examples contained within the Google Maps API documentation set are listed below for quick reference.

Map Examples

Event Examples

Controls Examples

Marker Examples

Polyline Examples

Polygon and Tile Overlay Examples

Custom Overlay Examples

Services Examples

Featured Video

These are just a few links and a brief introduction to the Google Maps API; very soon I will post my sample project using the Google Maps API.

You can also get the map API class reference at Google MAP API Class Reference

And you can get the full documentation and downloadable examples and reference at Google Maps API

Using LINQ with ASP.NET

One of the new things I’m super excited about right now is the LINQ family of technologies that are starting to come out (LINQ, DLINQ, XLINQ and others soon).

LINQ will be fully integrated with the next release of Visual Studio (code-name: Orcas) and it will include some very cool framework and tool support (including full intellisense and designer support). Last week the LINQ team released the May CTP drop of LINQ that you can download from here. What is cool about this CTP is that it works with VS 2005, and allows you to start learning more about it immediately. It incorporates a bunch of customer feedback (for example: support for stored procedures in DLINQ), and also includes a built-in ASP.NET Web-Site Project to enable you to leverage it with ASP.NET apps (note: you can also use LINQ with the new VS 2005 Web Application Project option as well).

Note: LINQ, DLINQ and XLINQ will be fully supported in both C# and VB. I am using C# for the examples below.

Step 0: Creating a C# LINQ ASP.NET Web Site

To create a new ASP.NET Web Site that can use LINQ/DLINQ/XLINQ and the new C# 3.0 language features, choose File->New Web Site in VS and select the “LINQ ASP.NET Web Site Template”:

This will create a web-site project with the following files in it by default:

Note that it includes a number of LINQ assemblies in the bin folder. It also adds the following setting to the app’s web.config file which tells both VS and ASP.NET to use the C# 3.0 compiler to compile and run the app:

<system.codedom>
    <compilers>
        <compiler language="c#;cs;csharp"
                  extension=".cs"
                  type="Microsoft.CSharp.CSharp3CodeProvider, CSharp3CodeDomProvider"/>
    </compilers>
</system.codedom>

Note that the C# 3.0 compiler and CodeDOM provider can run side-by-side with the C# 2.0 versions (so you don’t have to worry about it breaking VS or ASP.NET when you install it).

Step 1: Creating your first ASP.NET page using LINQ

Create a new page called Step1.aspx. Within the .aspx page add a GridView control like so:

<%@ Page Language="C#" CodeFile="Step1.aspx.cs" Inherits="Step1" %>

<html>
<body>
    <form id="form1" runat="server">
        <div>
            <h1>City Names</h1>
            <asp:GridView ID="GridView1" runat="server">
            </asp:GridView>
        </div>
    </form>
</body>
</html>

Within the code-behind file we’ll then write the canonical “hello world” LINQ sample – which involves searching and ordering a list of strings:


using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Query;

public partial class Step1 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string[] cities = { "London", "Amsterdam", "San Francisco", "Las Vegas",
                            "Boston", "Raleigh", "Chicago", "Charlestown",
                            "Helsinki", "Nice", "Dublin" };

        GridView1.DataSource = from city in cities
                               where city.Length > 4
                               orderby city
                               select city.ToUpper();

        GridView1.DataBind();
    }
}

In the above sample I’ve created an array of strings listing the cities I’ve visited from Jan->May of this year. I’m then using a LINQ query expression against the array. This query expression returns all cities whose names are longer than 4 characters, orders the results alphabetically, and transforms the city names into upper case.

LINQ queries return results of type: IEnumerable<T> -- where <T> is determined by the object type of the “select” clause. In the above sample “city” is a string, so the type-safe result is a generics based collection like so:

IEnumerable<string> result = from city in cities
                             where city.Length > 4
                             orderby city
                             select city.ToUpper();

Because ASP.NET controls already support databinding to any IEnumerable collection, we can easily assign this LINQ query result to the GridView and call its DataBind() method to generate this page output result:

Note that instead of using the GridView control I could have just as easily used the <asp:repeater>, <asp:datalist>, <asp:dropdownlist>, or any other ASP.NET list control (both those built into the product and ones built by other developers). For the purposes of these samples I’m just going to use the <asp:gridview> -- but again, know that you can use any.

Step 2: Using Richer Collections

Searching an array of strings is not terribly interesting (although sometimes actually useful). More interesting would be the ability to search and work against richer collections of our own making. The good news is that LINQ makes this easy. For example, to better track trips I can create a simple class called “Location” in my project below:

using System;

public class Location
{
    // Fields
    private string _country;
    private int _distance;
    private string _city;

    // Properties
    public string Country
    {
        get { return _country; }
        set { _country = value; }
    }

    public int Distance
    {
        get { return _distance; }
        set { _distance = value; }
    }

    public string City
    {
        get { return _city; }
        set { _city = value; }
    }
}

This exposes three public properties that track the Country, City name and Distance from Seattle. I can then create a Step2.aspx file with a GridView control that defines 3 columns like so:

<%@ Page Language="C#" CodeFile="Step2.aspx.cs" Inherits="Step2" %>

<html>
<body>
    <form id="form1" runat="server">
        <h1>Cities and their Distances</h1>
        <asp:GridView ID="GridView1" AutoGenerateColumns="false" runat="server">
            <Columns>
                <asp:BoundField HeaderText="Country" DataField="Country" />
                <asp:BoundField HeaderText="City" DataField="City" />
                <asp:BoundField HeaderText="Distance from Seattle" DataField="Distance" />
            </Columns>
        </asp:GridView>
    </form>
</body>
</html>

I can then populate a collection of Location objects and databind it to the Grid in my code-behind like so:

using System;
using System.Collections.Generic;
using System.Web;
using System.Query;

public partial class Step2 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        List<Location> cities = new List<Location>{
            new Location { City="London", Distance=4789, Country="UK" },
            new Location { City="Amsterdam", Distance=4869, Country="Netherlands" },
            new Location { City="San Francisco", Distance=684, Country="USA" },
            new Location { City="Las Vegas", Distance=872, Country="USA" },
            new Location { City="Boston", Distance=2488, Country="USA" },
            new Location { City="Raleigh", Distance=2363, Country="USA" },
            new Location { City="Chicago", Distance=1733, Country="USA" },
            new Location { City="Charleston", Distance=2421, Country="USA" },
            new Location { City="Helsinki", Distance=4771, Country="Finland" },
            new Location { City="Nice", Distance=5428, Country="France" },
            new Location { City="Dublin", Distance=4527, Country="Ireland" }
        };

        GridView1.DataSource = from location in cities
                               where location.Distance > 1000
                               orderby location.Country, location.City
                               select location;

        GridView1.DataBind();
    }
}

The above code-behind shows off a few cool features. The first is the new C# 3.0 support for creating class instances, and then using a terser syntax for setting properties on them:

new Location { City="London", Distance=4789, Country="UK" }

This is very useful when instantiating and adding classes within a collection like above (or within an anonymous type like we’ll see later). Note that rather than use an array this time, I am using a Generics based List collection of type “Location”. LINQ supports executing queries against any IEnumerable<T> collection, so can be used against any Generics or non-Generics based object collections you already have.
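To make the "any IEnumerable&lt;T&gt;" point concrete, here is a minimal standalone console sketch. Note one assumption: the May CTP described in this post exposes the query operators via System.Query, while the sketch below uses the System.Linq namespace that later releases ship with, so it compiles on its own:

```csharp
using System;
using System.Collections.Generic;
using System.Linq; // the May CTP used System.Query instead

class AnyCollectionDemo
{
    static void Main()
    {
        // LINQ works over any IEnumerable<T>: a List<int> here, but an
        // array, Dictionary, or your own custom collection works the same way.
        List<int> distances = new List<int> { 4789, 684, 872, 2488 };

        IEnumerable<int> longTrips = from d in distances
                                     where d > 1000
                                     orderby d
                                     select d;

        foreach (int d in longTrips)
            Console.WriteLine(d); // prints 2488, then 4789
    }
}
```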

For my LINQ query I’m then returning a collection of all cities that are more than 1000 miles away from Seattle. I’ve chosen to order the result in alphabetical order – first by country and then by city name. The result of this LINQ query is again dictated by the type of the “location” variable – so in this case of type “Location”:

IEnumerable<Location> result = from location in cities
                               where location.Distance > 1000
                               orderby location.Country, location.City
                               select location;

When I databind this result against the GridView I get a result like so:

Step 3: Refactoring the City Collection Slightly

Since we’ll be re-using this collection of cities in several other samples, I decided to encapsulate my travels in a “TravelOrganizer” class like so:

using System;
using System.Collections.Generic;

public class TravelOrganizer
{
    public List<Location> PlacesVisited
    {
        get
        {
            List<Location> cities = new List<Location>{
                new Location { City="London", Distance=4789, Country="UK" },
                new Location { City="Amsterdam", Distance=4869, Country="Netherlands" },
                new Location { City="San Francisco", Distance=684, Country="USA" },
                new Location { City="Las Vegas", Distance=872, Country="USA" },
                new Location { City="Boston", Distance=2488, Country="USA" },
                new Location { City="Raleigh", Distance=2363, Country="USA" },
                new Location { City="Chicago", Distance=1733, Country="USA" },
                new Location { City="Charleston", Distance=2421, Country="USA" },
                new Location { City="Helsinki", Distance=4771, Country="Finland" },
                new Location { City="Nice", Distance=5428, Country="France" },
                new Location { City="Dublin", Distance=4527, Country="Ireland" }
            };

            return cities;
        }
    }
}

This allows me to write just the code below in my code-behind to get the same result as before:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Query;

public partial class Step3 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TravelOrganizer travel = new TravelOrganizer();

        GridView1.DataSource = from location in travel.PlacesVisited
                               where location.Distance > 1000
                               orderby location.Country, location.City
                               select location;

        GridView1.DataBind();
    }
}

What is really cool about LINQ is that it is strongly-typed. What this means is that:

1) You get compile-time checking of all queries. Unlike SQL statements today (where you typically only find out at runtime if something is wrong), this means you will be able to check during development that your code is correct (for example: if I wrote “distanse” instead of “distance” above the compiler would catch it for me).

2) You will get intellisense within VS (and the free Visual Web Developer) when writing LINQ queries. This makes typing faster and also makes it much easier to work against both simple and complex collection and datasource object models.

Step 4: Skipping and Taking using .NET Standard Query Operators

LINQ comes with built-in support for many Standard Query Operators. These can be used within code by adding a “using System.Query” statement at the top of a class file, and can be applied to any sequence of data. For example, if I wanted to order the cities by distance and list the 2nd->6th farthest-away cities, I could write my code-behind file like so:

using System;
using System.Web.UI;
using System.Query;

public partial class Step4 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TravelOrganizer travel = new TravelOrganizer();

        GridView1.DataSource = (from location in travel.PlacesVisited
                                orderby location.Distance descending
                                select location).Skip(1).Take(5);

        GridView1.DataBind();
    }
}

Note how I am ordering the result by the distance (farthest to least). I am then using the “Skip” operator to skip over the first city, and the "Take" operator to only return the remaining 5.

What is really powerful is that the .NET Standard Query Operators are not a hard-coded list, and can be added to and replaced by any developer. This enables very powerful domain specific implementations. For example, when the Skip() and Take() operators are used with DLINQ – it translates the calls into back-end SQL logic that performs server-side paging (so that only a few rows are returned from the SQL database – regardless of whether it is from a table with 100,000+ rows of data). This means that you will be able to trivially build efficient web data paging over lots of relational data (note: until then you can use the techniques listed here).
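Since Skip and Take compose into a reusable paging operator, the idea can be sketched as a standalone console program. The Page helper below is a hypothetical extension method (not part of the framework), and the sketch assumes the shipping System.Linq namespace rather than the CTP's System.Query:

```csharp
using System;
using System.Collections.Generic;
using System.Linq; // System.Query in the May CTP

public static class PagingExtensions
{
    // Hypothetical helper: returns one "page" of any sequence.
    // pageIndex is zero-based; pageSize is the number of items per page.
    public static IEnumerable<T> Page<T>(this IEnumerable<T> source, int pageIndex, int pageSize)
    {
        return source.Skip(pageIndex * pageSize).Take(pageSize);
    }
}

class PagingDemo
{
    static void Main()
    {
        int[] distances = { 5428, 4869, 4789, 4771, 4527, 2488, 2421, 2363, 1733, 872, 684 };

        // Second page of five (items 6-10 of the descending order):
        // 2488, 2421, 2363, 1733, 872
        foreach (int d in distances.OrderByDescending(d => d).Page(1, 5))
            Console.WriteLine(d);
    }
}
```

Against in-memory collections this still enumerates the skipped items; the server-side paging described above only happens when a provider like DLINQ translates the operators to SQL.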

Step 5: More Fun with .NET Standard Query Operators

In addition to returning sequences of data, we can use .NET Standard Query Operators to return single or computed results. The samples below show how to do this:

<%@ Page Language="C#" CodeFile="Step5.aspx.cs" Inherits="Step5" %>

<html>
<body>
    <form id="form1" runat="server">
        <div>
            <h1>Aggregate Value Samples</h1>
            <div>
                <b>Farthest Distance City:</b>
                <asp:Label ID="MaxCityNameTxt" runat="server" Text="Label"></asp:Label>
                <asp:Label ID="MaxCityDistanceTxt" runat="server" Text="Label"></asp:Label>
            </div>
            <div>
                <b>Total Travel Distance (outside of US):</b>
                <asp:Label ID="TotalDistanceTxt" runat="server" Text="Label"></asp:Label>
            </div>
            <div>
                <b>Average Distance:</b>
                <asp:Label ID="AverageDistanceTxt" runat="server" Text="Label"></asp:Label>
            </div>
        </div>
    </form>
</body>
</html>

Step5.aspx.cs code-behind file:

using System;
using System.Collections.Generic;
using System.Web.UI;
using System.Query;

public partial class Step5 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TravelOrganizer travel = new TravelOrganizer();

        // Calculate farthest city away
        Location farthestCity = (from location in travel.PlacesVisited
                                 orderby location.Distance descending
                                 select location).First();

        MaxCityNameTxt.Text = farthestCity.City;
        MaxCityDistanceTxt.Text = "(" + farthestCity.Distance + " miles)";

        // Calculate total city distances of all cities outside US
        int totalDistance = (from location in travel.PlacesVisited
                             where location.Country != "USA"
                             select location).Sum(loc => loc.Distance);

        TotalDistanceTxt.Text = totalDistance + " miles";

        // Calculate average city distances of each city trip
        double averageDistance = travel.PlacesVisited.Average(loc => loc.Distance);

        AverageDistanceTxt.Text = averageDistance + " miles";
    }
}

Note that the last two examples above use the new Lambda Expression support – which enable fragments of code (like delegates) that can operate on top of data to compute a result. You can build your own .NET Query Operators that use these (for example: you could build domain specific ones to calculate shipping costs or payroll tax). Everything is strongly-typed, and will support intellisense and compilation checking support.
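As a sketch of building such a domain-specific operator, the TotalCost extension method below mirrors how Sum(loc => loc.Distance) accepts a lambda. Both the operator and the flat-rate shipping example are hypothetical illustrations, written against the shipping System.Linq namespace:

```csharp
using System;
using System.Collections.Generic;
using System.Linq; // System.Query in the May CTP

public static class MyOperators
{
    // Hypothetical domain-specific operator: sums a per-item cost
    // computed by the supplied lambda expression.
    public static decimal TotalCost<T>(this IEnumerable<T> items, Func<T, decimal> costOf)
    {
        decimal total = 0m;
        foreach (T item in items)
            total += costOf(item);
        return total;
    }
}

class CustomOperatorDemo
{
    static void Main()
    {
        int[] weights = { 2, 5, 10 };

        // Flat-rate example: $1.50 per unit of weight -> (2+5+10) * 1.50 = 25.5
        decimal shipping = weights.TotalCost(w => w * 1.50m);
        Console.WriteLine(shipping);
    }
}
```

Because the lambda is just a delegate parameter, the same operator works unchanged over any element type, and callers keep full intellisense and compile-time checking.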

The output of the above sample looks like so:

Step 6: Anonymous Types

One of the new C# and VB language features that LINQ can take advantage of is support for “Anonymous Types”. This allows you to easily create and use type structures inline without having to formally declare their object model (instead it can be inferred by the initialization of the data). This is very useful to “custom shape” data with LINQ queries.

For example, consider a scenario where you are working against a database or strongly-typed collection that has many properties – but you only really care about a few of them. Rather than create and work against the full type, it might be useful to only return those properties that you need. To see this in action we’ll create a step6.aspx file like so:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Step6.aspx.cs" Inherits="Step6" %>

<html>
<body>
    <form id="form1" runat="server">
        <div>
            <h1>Anonymous Type</h1>
            <asp:GridView ID="GridView1" runat="server">
            </asp:GridView>
        </div>
    </form>
</body>
</html>

And within our code-behind file we’ll write a LINQ query that uses anonymous types like so:

using System;
using System.Web.UI;
using System.Query;

public partial class Step6 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TravelOrganizer travel = new TravelOrganizer();

        GridView1.DataSource = from location in travel.PlacesVisited
                               orderby location.City
                               select new {
                                   City = location.City,
                                   Distance = location.Distance
                               };
        GridView1.DataBind();
    }
}

Note that instead of returning a “location” from our select clause like before, I am instead creating a new anonymous type that has two properties – “City” and “Distance”. The types of these properties are automatically inferred from the values of their initial assignments (in this case a string and an int), and when databound to the GridView they produce output like so:
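The inference step can be seen in isolation with a tiny sketch of my own (the `AnonymousTypeDemo` class and its values are illustrative, not from the original samples). The anonymous type never gets a declared name, yet every member access is still compile-time checked:

```csharp
public static class AnonymousTypeDemo
{
    public static string Describe()
    {
        // The compiler infers City as string and Distance as int
        // from the initializer values; misspelling a property name
        // or assigning the wrong type is a compile error.
        var stop = new { City = "Moscow", Distance = 5245 };
        return stop.City + " (" + stop.Distance + " miles)";
    }
}
```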

Step 7: Anonymous Types (again)

The previous sample showed a basic example of using anonymous types to custom-shape the output of a LINQ query. The below sample provides a richer and more practical scenario. It transforms our list of cities into a hierarchical result collection – where we group the results around countries using an anonymous type that we define that contains the country name, a sub-collection list of city details, and the sum of the total distance of all cities within the country (computed using a lambda expression like we demonstrated in step5 above):

using System;
using System.Web.UI;
using System.Query;

public partial class Step7 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TravelOrganizer travel = new TravelOrganizer();

        GridView1.DataSource = from location in travel.PlacesVisited
                               group location by location.Country into loc
                               select new {
                                   Country = loc.Key,
                                   Cities = loc,
                                   TotalDistance = loc.Sum(dist => dist.Distance)
                               };
        GridView1.DataBind();
    }
}

The GridView on our .aspx page is then defined like so:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Step7.aspx.cs" Inherits="Step7" %>

<html>
<body>
    <form id="form1" runat="server">
        <div>
            <h1>Groupings with Anonymous Classes</h1>
            <asp:GridView ID="GridView1" AutoGenerateColumns="false" runat="server">
                <Columns>
                    <asp:BoundField HeaderText="Country" DataField="Country" />
                    <asp:TemplateField HeaderText="Cities">
                        <ItemTemplate>
                            <asp:BulletedList ID="BulletedList1" runat="server"
                                DataSource='<%#Eval("Cities")%>' DataValueField="City"/>
                        </ItemTemplate>
                    </asp:TemplateField>
                    <asp:BoundField HeaderText="Total Distance" DataField="TotalDistance" />
                </Columns>
            </asp:GridView>
        </div>
    </form>
</body>
</html>

Notice how I’ve added a GridView TemplateField column for the “Cities” column – and within it an <asp:BulletedList> control (a new control built into ASP.NET 2.0) that databinds its values to the Cities property of the hierarchical result created by our LINQ query above. This generates output like so:

Note that all of the databind syntax and hierarchical binding support in the .aspx page above is fully supported in ASP.NET 2.0 today – so you can use this same technique with any existing app you have now. What is new (and I think very cool) is the data shaping capabilities provided by anonymous types and LINQ – which makes binding hierarchical data against ASP.NET controls very easy.

Next Steps

All of my samples above ran against in-memory collections. They show how you will be able to use LINQ against any .NET object model (including the ones you already have).

For example, if you use DLINQ to generate a Northwinds database mapping of Suppliers and their Products (no code is required to set this up), the below code is all you need to write to obtain and databind a hierarchical database result against a GridView like we did above (note: we are using the same data-shaping technique as our previous sample to only require fetching two columns from the database, and automatically join the products of each supplier as a hierarchical group result):

using System;
using System.Web.UI; // needed for System.Web.UI.Page (missing from the original listing)
using System.Query;

public partial class Data_Data2 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Northwind db = new Northwind();

        GridView1.DataSource = from x in db.Suppliers
                               where x.Country == "USA"
                               orderby x.Country
                               select new {
                                   x.CompanyName,
                                   x.Country,
                                   x.Products
                               };
        GridView1.DataBind();
    }
}

No custom SQL syntax or code is required – this is all that needs to be written to efficiently fetch and populate hierarchical data now (note: only the rows and columns needed will be fetched -- DLINQ can use the remote function support within LINQ so that it does not need to materialize or fetch the full database table or all columns from a row). And it is all type-safe, with full compiler checking, intellisense, and debugging supported.
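The "only fetch what's needed" behavior rests on deferred execution: a LINQ query is a description of work, not a result, until something enumerates it (such as the `DataBind()` call above). A minimal in-memory sketch of my own illustrating this (the `DeferredDemo` class and city values are invented for the example; it uses the released `System.Linq` namespace rather than the CTP's `System.Query`):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class DeferredDemo
{
    public static List<string> Run()
    {
        var cities = new List<string> { "Seattle", "London", "Paris" };

        // Building the query executes nothing yet -- it only captures
        // the filter and projection to apply later.
        var query = from c in cities
                    where c.StartsWith("L")
                    select c.ToUpper();

        // Changing the source before enumeration still affects the result,
        // because the query runs only when it is iterated.
        cities.Add("Lisbon");

        return query.ToList(); // executes here: LONDON, LISBON
    }
}
```

The same principle is what lets DLINQ translate the query into SQL at enumeration time, fetching only the rows and columns the final shape requires.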

Even better, the ability to plug-in new LINQ providers (of which DLINQ and XLINQ are just two examples) is completely open – so developers who either build or use existing data providers today (for example: O/R database mappers) can easily integrate their implementations with LINQ to have a seamless developer experience. Once you know LINQ you will know all the basics needed to program against any of them.

Summary

Hopefully this provides a glimpse of some of the cool new things coming. You can try it all out by downloading the May CTP drop of LINQ from here. You can also download and run all of the samples built above from this .ZIP file here.

Original Post can be found at http://weblogs.asp.net/scottgu/archive/2006/05/14/Using-LINQ-with-ASP.NET-_2800_Part-1_2900_.aspx