
MSDN Magazine is now available in digital format!

If you are an MSDN Program subscriber, MSDN Magazine is a free benefit of your MSDN subscription. To register for your digital subscription, go to http://msdn.microsoft.com/subscriptions, sign in (link upper left), and then click on MSDN Magazine Subscription in the My MSDN Subscription box on the right. You'll be redirected to the registration page for this exclusive offer. Submit the form to start your free digital subscription.

If you are not an MSDN Program subscriber and would like to apply for your own digital version (12 issues for $25.00), please apply here: https://cmp-sub.halldata.com/mfdigital

If you would like to learn more about an MSDN Program subscription, please click here: http://msdn.microsoft.com/en-us/subscriptions/aa718656.aspx
JULY 2009 VOL 24 NO 7

FEATURES
Composite Web Apps With Prism, Shawn Wildermuth, page 35
RESTful Services With ASP.NET MVC, Aaron Skonnard, page 48
Distributed Caching On The Path To Scalability, Iqbal Khan, page 62

COLUMNS
Toolbox: Static Analysis Database Tools, Managing Remote Computers, And More, Scott Mitchell, page 11
CLR Inside Out: Building Tuple, Matt Ellis, page 14
Basic Instincts: Stay Error Free With Error Corrections, Dustin Campbell, page 21
Cutting Edge: Comparing Web Forms And ASP.NET MVC, Dino Esposito, page 28
Test Run: Request-Response Testing With F#, James McCaffrey, page 72
Service Station: More On REST, Jon Flanders, page 79
Extreme ASP.NET: Guiding Principles For Your ASP.NET MVC Applications, Scott Allen, page 85
Wicked Code: Taking Silverlight Deep Zoom To The Next Level, Jeff Prosise, page 90
Foundations: Securing The .NET Service Bus, Juval Lowy, page 99

THIS MONTH at msdn.microsoft.com/magazine:
USABILITY IN PRACTICE: Usability Testing, Dr. Charles B. Kreitzberg & Ambrose Little
INSIDE WINDOWS 7: Introducing The Taskbar APIs, Yochay Kiriaty & Sasha Goldshtein
TESTABLE MVC: Unit Testing ASP.NET MVC, Justin Etheredge


JULY 2009 VOLUME 24 NUMBER 7

LUCINDA ROWLEY Director

EDITORIAL: mmeditor@microsoft.com
HOWARD DIERKING Editor-in-Chief

WEB SITE
MICHAEL RICHTER Webmaster

CONTRIBUTING EDITORS Don Box, Keith Brown, Dino Esposito, Juval Lowy, Dr. James McCaffrey, Fritz Onion, John Papa, Ted Pattison, Charles Petzold, Jeff Prosise, Jeffrey Richter, John Robbins, Aaron Skonnard, Stephen Toub

MSDN Magazine (ISSN # 1528-4859) is published monthly by TechWeb, a division of United Business Media LLC., 600 Community Drive, Manhasset, NY 11030, 516-562-5000. Periodicals Postage Paid at Manhasset, NY and at additional mailing offices. Back issue rates: U.S. $10. All others: $12. Basic one-year subscription rates: U.S. $45. Canada and Mexico $55. Registered for GST as TechWeb, a division of United Business Media LLC., GST No. R13288078, Customer No. 2116057. Canada Post: Publications Mail Agreement #40612608. Canada Returns to be sent to Bleuchip International, P.O. Box 25542, London, ON N6C 6B2. All other foreign orders $70, must be prepaid in U.S. dollars drawn on a U.S. bank. Circulation Department, MSDN Magazine, P.O. Box 1081, Skokie, IL 60076-8081, fax 847-763-9583. Subscribers may call from 8:00 AM to 4:00 PM CST M-F. In the U.S. and Canada 888-847-6188; all others 847-763-9606. U.K. subscribers may call Jill Sutcliffe at Parkway Gordon 01-49-1875-386. Manuscript submissions and all other correspondence should be sent to MSDN Magazine, 6th Floor, 1290 Avenue of the Americas, New York, NY 10104. Copyright © 2009 Microsoft Corporation. All rights reserved; reproduction in part or in whole without permission is prohibited.

POSTMASTER: Send address changes to MSDN Magazine, P.O. Box 1081, Skokie, IL 60076-8081

READERS: Order, renew, or make payment for subscriptions; order back issues; and submit customer service inquiries via the Web at http://msdn.microsoft.com/msdnmag/service.

PUBLISHER: Jill Thiry jthiry@techweb.com

ADVERTISING SALES: 785-838-7573 / dtimmons@techweb.com
David Timmons Associate Publisher
Jon Hampson Regional Account Director, Northeast US and EMEA
Ed Day Regional Account Manager, West & South US and APAC
Brenner Fuller Strategic Account Manager, West & South US and APAC
Michele Hurabiell-Beasley Strategic Account Manager, Key Accounts
Julie Thibault Strategic Account Manager, Northeast US and EMEA
Meagon Marshall Account Manager
Pete C. Scibilia Production Manager / 516-562-5134
Wright's Reprints 877-652-5295 / ubmreprints@wrightsreprints.com

ONLINE SERVICES: 415-947-6158 / mposth@techweb.com
Mark Posth Community Manager
Meagon Marshall Online Specialist

AUDIENCE SERVICES: 516-562-7833 / kmcaleer@techweb.com
Karen McAleer Audience Development Director

SUBSCRIBER SERVICES: 800-677-2452 / lawrencecs@techinsights.com

TechWeb, a division of United Business Media LLC. The Global Leader in Business Technology Media
Tony L. Uphoff CEO
Bob Evans SVP and Content Director
Eric Faurot SVP, Live Events Group
Joseph Braue SVP, Light Reading Communications Group
John Siefert VP and Publisher, InformationWeek and TechWeb Network
Scott Vaughan VP, Marketing Services
John Ecke VP, Financial Technology Group
Jill Thiry Publisher, MSDN Magazine and TechNet Magazine
John Dennehy General Manager
Fritz Nelson Executive Producer, TechWeb TV
Scott Popowitz Senior Group Director, Audience Development
Beth Rivera Senior Human Resources Manager

Printed in the USA
EDITOR’S NOTE

Viva la Evolution!
In this issue of MSDN Magazine, we will take a more focused look at the state of Web development. This is actually a pretty relevant topic for me at the moment, as I've been in and out of meetings over the past month in which we've been trying to define and come to consensus around what MSDN (not just the magazine) should be and how it should evolve. As such, I want to give you my perspective as a content publisher on how the problem domain of the Web has evolved and how Web applications are going to need to evolve to solve them.

It's interesting to see how far the pendulum has swung in terms of the fundamental problems related to content and the Web. Not too long ago, the major challenge was making content available. Evolution happened quickly, however—and handwritten HTML was replaced by WYSIWYG editors, and then the whole editing environment was supplanted entirely by blogs and wikis. The problem of publishing effectively disappeared. Anyone with an idea and an Internet connection could publish content for all to consume.

Before the novelty of this newly democratized world of content could sink in, the core problem shifted from making content available to finding the right content. Enter the rise of search engines. Now, I've got no problem with search—I use it as a regular part of doing business. For example, have you ever submitted an article idea and received a response similar to, "There's already a sufficient amount of coverage on this topic?" You can blame your search engine for that. For all of its effectiveness in finding content, however, I think that search has effectively enabled folks in the content business to gloss over some more fundamental problems in our approach to publishing and managing content on the Web—particularly with respect to information architecture.

As I look over all of the content that can be found today on MSDN, I am overwhelmed at both the sheer volume of content and the lack of cohesion among the various content elements. I'm also bothered by the fact that I can search for MSDN Magazine articles on WCF and be presented with an article that was based on an early CTP of Indigo. I'm not suggesting that such an article should be outright pulled down—there may be value in the conceptual explanations. However, there should absolutely be more attention given to lifetime management policy.

So why am I bringing up issues of content management? For one, it's a domain that I'm pretty familiar with. More important, however, I think it highlights where the problems are now shifting on the Web. For example, we've solved the problem of publishing and we have a decent solution for finding content. The next major Web evolution is in composing experiences that bring together lots of disparate elements into a single context. This means thinking about how to break apart various application and content silos such that a combination of content and application services can be put together into any context, as decided by the user—in short, this means thinking of everything as a mashup. For me, it means that if my intent—my context—is to read an article about WCF, I shouldn't have to stop what I'm doing so that I can navigate to a different Web application if I have a question to post, only to later rely on the back button to return to the article. I should be able to construct a single context consisting of a content item and a forums application service.

Getting to this next step is going to require some work, in both refactoring to break application silos into services and learning to think of Web applications as compositions. I'm therefore excited to see articles like Shawn Wildermuth's article on composite Silverlight applications (Page 35) and Aaron Skonnard's article on REST and the ASP.NET MVC framework (Page 48). Finally, for more on refactoring Web applications, check James Kovacs' series on brownfield development, aptly titled "Extreme ASP.NET Makeover," at msdn.microsoft.com/magazine.

Viva la evolution!

THANKS TO THE FOLLOWING MICROSOFT TECHNICAL EXPERTS FOR THEIR HELP WITH THIS ISSUE: Adrian Bateman, Sam Bent, Laurent Bugnion, Matt Ellis, Mike Fourie, Steve Fox, Don Funk, Alex Gorev, Aaron Hallberg, Luke Hoban, Robert Horvick, John Hrvatin, Bret Humphrey, Katy King, Brian Kretzler, Bertrand LeRoy, Dan Moseley, Stephen Powell, and Delian Tchoparinov.

Visit us at msdn.microsoft.com/magazine. Questions, comments, or suggestions for MSDN Magazine? Send them to the editor: mmeditor@microsoft.com.

© 2009 Microsoft Corporation. All rights reserved.


Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, you are not permitted to reproduce, store, or introduce into a retrieval system MSDN Magazine or any part of MSDN
Magazine. If you have purchased or have otherwise properly acquired a copy of MSDN Magazine in paper format, you are permitted to physically transfer this paper copy in unmodified form. Otherwise, you are not permitted to transmit
copies of MSDN Magazine (or any part of MSDN Magazine) in any form or by any means without the express written permission of Microsoft Corporation.
A listing of Microsoft Corporation trademarks can be found at microsoft.com/library/toolbar/3.0/trademarks/en-us.mspx. Other trademarks or trade names mentioned herein are the property of their respective owners.
MSDN Magazine is published by United Business Media LLC. United Business Media LLC is an independent company not affiliated with Microsoft Corporation. Microsoft Corporation is solely responsible for the editorial contents of this
magazine. The recommendations and technical guidelines in MSDN Magazine are based on specific environments and configurations. These recommendations or guidelines may not apply to dissimilar configurations. Microsoft Corporation
does not make any representation or warranty, express or implied, with respect to any code or other information herein and disclaims any liability whatsoever for any use of such code or other information. MSDN Magazine, MSDN, and
Microsoft logos are used by United Business Media under license from owner.

SCOTT MITCHELL TOOLBOX

Static Analysis Database Tools, Managing Remote Computers, and More

Static Analysis for Your Database Design
Software design standards are an important aspect of building reliable and maintainable applications. Most companies have some sort of coding standards, such as naming conventions and security guidelines. Static code analysis tools, such as FxCop and StyleCop, are useful for evaluating an application's intermediate code or source code to ensure that it conforms to the standards recommended by Microsoft or defined by your company. (FxCop and StyleCop were reviewed in the December 2008 issue, available at msdn.microsoft.com/en-us/magazine/dd263071.aspx.)

But what about database design and configuration? Many companies have naming conventions for tables and their columns, as well as guidelines for column data types, such as under what circumstances NULLable columns are permissible. The same applies for the use and cascade behavior of foreign key constraints and triggers. There may also be configuration standards that apply to login credentials, backup schedules, and other concerns.

ApexSQL Enforce (version 2008.04) is a static analysis tool for Microsoft SQL Server databases. When you first run ApexSQL Enforce, you are prompted to select the rule base to use. Each rule is associated with a database object type, such as the server, a table, a trigger, a column, or a primary key. The rules are classified further into categories such as Server Configuration, Database Modeling, Performance, and so on. Each rule is assigned one out of six possible severities, ranging from Info to Critical. A Best Practices rule base ships with ApexSQL Enforce and contains more than 80 rules, including ones that apply to database maintenance and administration. The Critical severity rule, for example, requires that the database be backed up at least every seven days. There are also data modeling rules, such as a High severity rule that ensures all tables have a primary key and a Tip rule that recommends against naming tables with a "tbl" prefix.

After selecting the rule base and the particular rules to apply, you choose the database to analyze. ApexSQL Enforce runs the selected rules against the specified database and displays any rule violations in a separate tab. Each rule violation includes a summary, noting the rule that was violated and the database object in violation; a description of the violated rule; and advice on how to remedy the violation. For some rules, ApexSQL Enforce can provide the T-SQL syntax to fix the violation.

ApexSQL Enforce offers a high degree of customization and flexibility. For instance, you can create your own rule bases or modify the built-in Best Practices rule base.

[Figure: ApexSQL Enforce]

Send your questions and comments for Scott to toolsmm@microsoft.com.

All prices confirmed at press time are subject to change. The opinions expressed in this column are solely those of the author and do not necessarily reflect the opinions at Microsoft.
Rules are defined using C# or Visual Basic code. Creating and modifying these rules is straightforward thanks to a built-in rule editor that offers syntax highlighting and IntelliSense support. ApexSQL Enforce can be run from its graphical user interface or from the command line, and violation reports can be exported to XML or rendered into an HTML report.

Static analysis is a useful and automated technique for ensuring that your software design adheres to prescribed standards. ApexSQL Enforce is a powerful and customizable tool for applying static analysis to your databases.

Price: $999
www.apexsql.com/sql_tools_enforce.asp

.NET Podcasts
Podcasts are audio or video productions that are available for download over the Internet and are typically played on portable MP3 players, such as the Apple iPod or the Microsoft Zune. Unlike streaming audio shows, podcasts are prerecorded and syndicated. You can configure your MP3 player's software to automatically download the latest shows of your favorite podcasts—they are great listening material for your daily commute.

There are a number of very interesting, educational, and well-produced podcasts specifically for Microsoft .NET developers. One of my favorite .NET-focused shows is Hanselminutes, a weekly audio podcast hosted by Scott Hanselman, a senior program manager at Microsoft. Most episodes are around 30 minutes in length, focus on a single topic, and include a guest who is a pioneer or an expert on the topic of discussion. What I like best about the show is the wide range of subject matter, with episodes on specific technologies—such as JavaScript, ADO.NET Data Services, and parallel programming in .NET—as well as shows that discuss software development models like Scrum, test-driven development, and the SOLID principles of object-oriented design.

What sets Hanselminutes apart from many other podcasts is its consistent schedule and level of quality. A new Hanselminutes episode has appeared nearly every single week since January 2006, with over 155 episodes so far. All past episodes (as well as each week's new episode) are available for download in several popular multimedia formats. And Scott does a great job interviewing his guests and keeping the dialogue interesting and on-topic.

Price: Free
hanselminutes.com

Manage Remote Computers from One Program
Tools like Virtual Network Computing (VNC) and the Microsoft Remote Desktop Protocol (RDP) make it easy to log into and manage remote computers from your home or office and have long been used by system administrators to manage remote computer assets. If you are tasked with managing many different computers, or routinely find yourself with multiple remote connections open at the same time, check out Terminals (version 1.7e), an open-source project that consolidates managing and running remote connections. When a remote connection is launched from Terminals, it is displayed in a new tab within the Terminals user interface. This tabbed UI streamlines multiple, simultaneous remote connections into a single window. And Terminals works with a variety of protocols, including VNC, RDP, Virtual Machine Remote Control (VMRC), Remote Access Service (RAS), Telecommunication network (Telnet), and Secure Shell (SSH).

When Terminals is launched for the first time, it searches your computer for remote connection files and adds any discovered connections to the Favorites window. You can edit these automatically added entries, or you can manually add your own remote connections to Favorites. And with a few clicks of the mouse, you can customize these remote connections' display and behavior properties, supply connection credentials, or categorize the connections using tags. Double-clicking one of the remote connections in the Favorites window causes Terminals to connect to that computer in a new tab. There's also a History view that shows what connections were made today, yesterday, in the past week, and so on.

Terminals includes a number of helpful tools and utilities. There's a feature that makes taking screen captures of remote connections as easy as clicking a button, and a manager that catalogues and organizes your screen captures. Terminals also offers a suite of network-related utilities. For example, you'll find tools for performing a Whois or DNS lookup from within Terminals, along with tools for examining the shares on a local or remote computer. Other tools show information about the local computer's Network Interface Controllers (NICs), the open connections, and the packet traffic. There's also one-click access to common system administration and network maintenance configuration. The Terminals toolbar contains shortcut icons to programs like the Registry Editor (regedit.exe), the Computer Management Console, the Control Panel, and the local computer's Internet Properties, Network Connections, and Services Manager, among many others.

Terminals is a nifty application that consolidates working with remote connections into a single, simple interface. If you regularly connect to remote computers, give Terminals a try.

Price: Free, open-source
terminals.codeplex.com

[Figure: Terminals]
The Bookshelf
Most .NET developers are familiar with ADO.NET, the data access library that has been part of the .NET Framework since its inception. One challenge of using ADO.NET is that the developer must always keep the details of the underlying data store in mind. When querying data, the developer must be cognizant of the tables to query, their relationships, what JOINs are needed and whether they are INNER or OUTER JOINs. When retrieving data from a DataSet or a DataReader, the developer must recall the column names and their data types.

Over the past several years, Microsoft has been developing the ADO.NET Entity Framework, a new library for accessing data. When using the Entity Framework, you do not run queries against the database. Instead, you query the Entity Data Model, which is a set of classes in your application that model the database's structure in an object-oriented manner. In addition to the Entity Data Model, the Entity Framework maintains a logical model of the database and a map that indicates how the objects in the logical model correspond to the objects in the Entity Data Model. While the Entity Framework is not a replacement for ADO.NET, it is an important tool that Microsoft is investing in and is being used in technologies like ADO.NET Data Services and ASP.NET Dynamic Data.

One of the best ways to learn about the Entity Framework is to read Julia Lerman's book, Programming Entity Framework (O'Reilly, 2009). Julia's book starts with a solid overview of the design goals of the Entity Framework, its pros and cons, and where it fits into Microsoft's data access story. The reader is shown how to create an Entity Data Model using the Designer, how to query against this model, how to insert, update, and delete entities, and how to display and manage data via the Entity Framework in WinForms, Windows Presentation Foundation (WPF), and ASP.NET applications, as well as in Web Services and Windows Communication Foundation (WCF) services.

At nearly 800 pages, Programming Entity Framework is fairly hefty, but it offers a solid grounding in using the Entity Framework. The book assumes its readers are intermediate to advanced .NET developers who are already familiar with database concepts, ADO.NET, LINQ, and other core .NET features and spends no time introducing these topics. Instead, the book is packed with walkthroughs that illustrate the use of Entity Framework in various scenarios. It also does a great job pointing out what this first version of the Entity Framework can and cannot do and what use cases are difficult or tricky to implement, along with workarounds where appropriate.

Julia's use of real-world examples really helped me to understand and become familiar with using the Entity Framework. In the second chapter, Julia introduces a simple database with a handful of tables, views, and stored procedures. She uses this database over the next several chapters to highlight the features of the Designer and to demonstrate working with entities using both LINQ to Entities and Entity SQL queries. Later, in Chapter 7, Julia unveils a more complex, real-world database that contains many more tables and more types of relationships. She uses this new database throughout the remainder of the book to illustrate more advanced topics, such as customizing entities and the Entity Data Model, transaction processing, handling exceptions raised by the Entity Framework, and so on. These two databases, along with the complete sample code, are available for download from the book's Web site, LearnEntityFramework.com.

Price: $54.99, 792 pages
oreilly.com

[Figure: Programming Entity Framework]

SCOTT MITCHELL, author of numerous books and founder of GuysFromRolla.com, is an MVP who has been working with Microsoft Web technologies since 1998. Scott is an independent consultant, trainer, and writer. Reach him at Mitchell@guysfromrolla.com or via his blog at ScottOnWriting.NET.
CLR INSIDE OUT MATT ELLIS

Building Tuple

The upcoming . release of Microsoft .NET Framework introduces With the tuple type, we’ve been able to remove the cast opera-
a new type called System.Tuple. System.Tuple is a fixed-size col- tors from our code, giving us better error checking at compile time.
lection of heterogeneously typed data. Like an array, a tuple has a When constructing and using elements from the tuple, the compiler
fixed size that can’t be changed once it has been created. Unlike an will ensure our types are correct.
array, each element in a tuple may be a different type, and a tuple As this example shows, it’s obvious that a tuple is a useful type
is able to guarantee strong typing for each element. Consider try- to have in the Microsoft .NET Framework. At first, it might seem
ing to use an array to store a string and integer value together, as like a very easy enhancement. However, we’ll discuss some of the
in the following: interesting design challenges that the team faced while designing
static void Main(string[] args) { System.Tuple. We’ll also explore more about what we added to the
object[] t = new object[2];
t[0] = "Hello"; Microsoft .NET Framework and why these additions were made.
t[1] = 4;

PrintStringAndInt((string) t[0], (int) t[1]); Existing Tuple Types


}
There is already one example of a tuple floating around the
static void PrintStringAndInt(string s, int i) { Microsoft .NET Framework, in the System.Collections.Generic
Console.WriteLine("{0} {1}", s, i);
} namespace: KeyValuePair. While KeyValuePair<TKey, TValue>
The above code is both awkward to read and dangerous from a can be thought of as the same as Tuple<T, T>, since they are
type safety point of view. We have no guarantee that the elements both types that hold two things, KeyValuePair feels different from
of the tuple are the correct type when we go to use them. What if, Tuple because it evokes a relationship between the two values it
instead of an integer, we put a string in the second element in our stores (and with good reason, as it supports the Dictionary class).
tuple, as follows: Furthermore, tuples can be arbitrarily sized, whereas KeyValuePair
static void Main(string[] args) { holds only two things: a key and a value.
object[] t = new object[2];
t[0] = "Hello";
Within the framework, many teams created, each for its own use,
t[1] = "World"; a private version of a tuple but didn’t share their versions across
PrintStringAndInt((string) t[0], (int) t[1]); teams. The Base Class Libraries (BCL) team looked at all of these
} uses when designing their version of tuple. After other teams heard
static void PrintStringAndInt(string s, int i) { that the BCL team would be adding a tuple in Microsoft .NET
Console.WriteLine("{0} {1}", s, i);
}
Framework , other teams felt a strong desire to start using tuples
This code would compile, but at run time, the call to in their code, too. The Managed Extensibility Framework (MEF)
PrintStringAndInt would fail with an InvalidCastException when team even took a look at the draft of one of BCL’s specs before im-
we tried to cast the string “World” to an integer. Let’s now take a plementing a version of tuple in their code. They later released this
look at what the code might look like with a tuple type: version as part of a consumer technology preview, with a support-
static void Main(string[] args) { ing comment that it was a temporary implementation until tuples
Tuple<string, int> t = new Tuple<string, int>("Hello", 4); were added to the Microsoft .NET Framework!
PrintStringAndInt(t.Item1, t.Item2);
}

static void PrintStringAndInt(string s, int i) {


Interoperability
Console.WriteLine("{0} {1}", s, i); One of the biggest set of customers inside Microsoft for tuple was
}
the language teams themselves. While C# and VB.NET languages
do not have the concept of a tuple as part of the core language, it
is a common feature of many functional languages. When target-
This column is based on a prerelease version of the Microsoft .NET Framework 4.
Details are subject to change. ing such languages to the Micrososft.NET Framework, language
Send your questions and comments to clrinout@microsoft.com.
developers had to define a managed representation of a tuple,
which leads to unnecessary duplication. One language suffering
14 msdn magazine
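To make the KeyValuePair comparison concrete, here is a minimal sketch of the two types used side by side. The method names and values are purely illustrative and are not part of any framework API.

// KeyValuePair implies a key/value relationship between its two items;
// Tuple implies no such relationship, the items are simply positional.
using System;
using System.Collections.Generic;

class PairVersusTuple {
  static KeyValuePair<string, int> LookUpScore(string player) {
    return new KeyValuePair<string, int>(player, 42);
  }

  static Tuple<string, int> PlayerAndScore() {
    return Tuple.Create("Alice", 42);
  }

  static void Main() {
    KeyValuePair<string, int> kvp = LookUpScore("Alice");
    Console.WriteLine("{0} -> {1}", kvp.Key, kvp.Value);

    Tuple<string, int> t = PlayerAndScore();
    Console.WriteLine("{0}, {1}", t.Item1, t.Item2);
  }
}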
Interoperability
One of the biggest sets of customers inside Microsoft for tuple was the language teams themselves. While the C# and VB.NET languages do not have the concept of a tuple as part of the core language, it is a common feature of many functional languages. When targeting such languages to the Microsoft .NET Framework, language developers had to define a managed representation of a tuple, which leads to unnecessary duplication. One language suffering that problem was F#, which previously had defined its own tuple type in FSharp.Core.dll but will now use the tuple added in Microsoft .NET Framework 4.

In addition to enabling the removal of duplicate types, having a common type ensures that it is easy to call functions across language boundaries. Consider what would happen if C# had added a tuple type (as well as new syntax to support it) to the language, but didn't use the same managed representation as F#. If this were the case, any time you wanted to call a method from an F# assembly that took a tuple as an argument, you would be unable to use the normal C# syntax for a tuple or pass an existing C# tuple. Instead, you would be forced to convert your C# tuple to an F# tuple, and then call the method. It's our hope that by providing a common tuple type, we'll make interoperability between managed languages with tuples that much easier.

Using Tuple from C#
While some languages like F# have special syntax for tuples, you can use the new common tuple type from any language. Revisiting the first example, we can see that while useful, tuples can be overly verbose in languages without syntax for a tuple:

class Program {
  static void Main(string[] args) {
    Tuple<string, int> t = new Tuple<string, int>("Hello", 4);
    PrintStringAndInt(t.Item1, t.Item2);
  }
  static void PrintStringAndInt(string s, int i) {
    Console.WriteLine("{0} {1}", s, i);
  }
}

Using the var keyword from C# 3.0, we can remove the type signature on the tuple variable, which allows for somewhat more readable code.

var t = new Tuple<string, int>("Hello", 4);

We've also added some factory methods to a static Tuple class, which makes it easier to build tuples in a language that supports type inference, like C#.

var t = Tuple.Create("Hello", 4);

Don't be fooled by the apparent lack of types here. The type of t is still Tuple<string, int> and the compiler won't allow you to treat the elements as different types.

Now that we've seen how the tuple type works, we can take a look at the design behind the type itself.

Reference or Value Type?
At first glance, there isn't much to the tuple type, and it seems like something that could be designed and implemented over a long weekend. However, looks are often deceiving, and there were some very interesting design decisions that needed to be made during its development.

The first major decision was whether to treat tuples as a reference or a value type. Since they are immutable, any time you want to change the values of a tuple, you have to create a new one. If they are reference types, this means there can be lots of garbage generated if you are changing elements in a tuple in a tight loop. F# tuples were reference types, but there was a feeling from the team that they could realize a performance improvement if two-, and perhaps three-, element tuples were value types instead. Some teams that had created internal tuples had used value instead of reference types, because their scenarios were very sensitive to creating lots of managed objects. They found that using a value type gave them better performance.

In our first draft of the tuple specification, we kept the two-, three-, and four-element tuples as value types, with the rest being reference types. However, during a design meeting that included representatives from other languages, it was decided that this "split" design would be confusing, due to the slightly different semantics between the two types. Consistency in behavior and design was determined to be of higher priority than potential performance increases. Based on this input, we changed the design so that all tuples are reference types, although we asked the F# team to do some performance investigation to see if it experienced a speedup when using a value type for some sizes of tuples. It had a good way to test this, since its compiler, written in F#, was a good example of a large program that used tuples in a variety of scenarios. In the end, the F# team found that it did not get a performance improvement when some tuples were value types instead of reference types. This made us feel better about our decision to use reference types for tuple.

Tuples of Arbitrary Length
Theoretically, a tuple can contain as many elements as needed, but we were able to create only a finite number of classes to represent tuples. This raised two major questions: how many generic parameters should the largest tuple have, and how should the larger tuples be encoded? Deciding on how many generic parameters to have in the largest tuple ended up being somewhat arbitrary. Since Microsoft .NET Framework 4 introduces new versions of the Action and Func delegates that take up to eight generic parameters, we chose to have System.Tuple work the same way. One nice feature of this design is that there's a correspondence between these two types. We could add an Apply method to each tuple that takes a correspondingly sized Action or Func of the same type and pass each element from the tuple as an argument to the delegate. While we ultimately didn't add this in order to keep the surface area of the type clean, it's something that could be added in a later release or with extension methods. There is certainly a nice symmetry in the number of generic parameters that belong to Tuple, Action, and Func. An aside: after System.Tuple was finished being designed, the owners of Action and Func created more instances of each type; they went up to 16 generic parameters. Ultimately, we decided not to follow suit: the decision on sizing was somewhat arbitrary to begin with, and we didn't feel the time we would spend adding eight more tuple types would be worth it.
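The Apply method described above never shipped, but it is easy to sketch as extension methods. The code below is purely illustrative and is not part of the .NET Framework; it simply shows the correspondence between the Tuple and Action/Func arities for the two-element case.

using System;

static class TupleApplyExtensions {
  // Pass each element of the tuple as an argument to the delegate.
  public static void Apply<T1, T2>(this Tuple<T1, T2> t, Action<T1, T2> action) {
    action(t.Item1, t.Item2);
  }

  public static TResult Apply<T1, T2, TResult>(this Tuple<T1, T2> t, Func<T1, T2, TResult> func) {
    return func(t.Item1, t.Item2);
  }
}

class ApplyDemo {
  static void Main() {
    var t = Tuple.Create("Hello", 4);
    t.Apply((s, i) => Console.WriteLine("{0} {1}", s, i));
    Console.WriteLine(t.Apply((s, i) => s.Length + i));
  }
}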
Once we answered our first question, we still had to figure out a way to represent tuples with more than eight elements. We decided that the last element of an eight-element tuple would be called "Rest" and we would require it to be a tuple. The elements of this tuple would be the eighth, ninth, and so on, elements of the overall tuple. Therefore, users who wanted an eight-element tuple would create two instances of the tuple classes. The first would be the eight-element tuple, which would contain the first seven items of the tuple, and the second would be a one-element tuple that held the last item. In C#, it might look like this:

Tuple.Create(1, 2, 3, 4, 5, 6, 7, Tuple.Create(8));

In languages like F# that already have concrete syntax for tuples, the compiler handles this encoding for you.

Properties
For languages that provide syntax for interacting with tuples, the names of the properties used to access each element aren't very interesting, since they are shielded from the developer. This is a very important question for languages like C#, where you have to interact with the type directly. We started with the idea of using Item1, Item2, and so on as the property names for the elements, but we received some interesting feedback from our framework design reviewers. They said that while these property names make sense when someone thinks about them at first glance, it gives the feeling that the type was auto-generated, instead of designed. They suggested that we name the properties First, Second, and so on, which felt more accessible. Ultimately, we rejected this feedback for a few reasons: first, we liked the experience of being able to change the element you access by changing one character of the property name; second, we felt that the English names were difficult to use (typing Fourth or Sixth is more work than Item4 or Item6) and would lead to a very weird IntelliSense experience, since property names would be alphabetized instead of showing up in numerical order.

Interfaces Implemented
System.Tuple implements very few interfaces and no generic interfaces. The F# team was very concerned about tuple being a very lightweight type, because it uses so many different generic instantiations of the type across its product. If Tuple were to implement a large number of generic interfaces, they felt it would have an impact on their working set and NGen image size. In light of this argument, we were unable to find compelling reasons for Tuple to implement interfaces like IEquatable<T> and IComparable<T>, even though it overrides Equals and implements IComparable.

Structural Equality, Comparison, and Equivalence
The most interesting challenge we faced when designing Tuple was to figure out how to support the following:
• structural equality
• structural comparison
• partial equivalence relations
We did as much design work around these concepts as we did for Tuple itself. Structural equality and comparison relate to what Equals means for a type like Tuple that simply holds other data.

In F#, equality over tuples and arrays is structural. This means that two arrays or tuples are equal if all their elements are the same. This differs from C#. By default, the contents of arrays and tuples don't matter for equality. It is the location in memory that is important.

Since this idea of structural equality and comparison is already part of the F# specification, the F# team had already come up with a partial solution to the problem. However, it applied only to the types that the team created. Since it also needed structural equality over arrays, the compiler generated special code to test if it was doing equality comparison on an array. If so, it would do a structural comparison instead of just calling the Equals method on Array. The design team iterated on a few different designs on how to solve these sorts of structural equality and comparison problems. It settled on creating a few interfaces that structural types needed to implement.

IStructuralEquatable and IStructuralComparable
The IStructuralEquatable and IStructuralComparable interfaces provide a way for a type to opt in to structural equality or comparison. They also provide a way for a comparer to be used for each element in the object. Using these interfaces, as well as a specially defined comparer, we can have deep equality over tuples and arrays when it is required, but we don't have to force the semantics on all users of the type. The design is deceptively simple:

public interface IStructuralComparable {
  Int32 CompareTo(Object other, IComparer comparer);
}

public interface IStructuralEquatable {
  Boolean Equals(Object other, IEqualityComparer comparer);
  Int32 GetHashCode(IEqualityComparer comparer);
}

These interfaces work by separating the act of iterating over elements in the structural type, for which the implementer of the interface is responsible, from the actual comparison, for which the caller is responsible. Comparers can decide if they want to provide a deep structural equality (by recursively calling the IStructuralEquatable or IStructuralComparable methods, if the elements implement them), a shallow one (by not doing so), or something completely different. Both Tuple and Array now implement both of these interfaces explicitly.
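Here is a small sketch of the structural interfaces in use, assuming the .NET Framework 4, which also ships a StructuralComparisons class exposing ready-made structural comparers. The values are illustrative.

using System;
using System.Collections;

class StructuralDemo {
  static void Main() {
    var a = Tuple.Create(1, "one");
    var b = Tuple.Create(1, "one");

    // Two distinct objects...
    Console.WriteLine(ReferenceEquals(a, b));   // False

    // ...that are structurally equal, element by element.
    IStructuralEquatable sa = a;
    Console.WriteLine(sa.Equals(b, StructuralComparisons.StructuralEqualityComparer));   // True

    // Arrays opt in to the same interfaces.
    int[] x = { 1, 2, 3 };
    int[] y = { 1, 2, 3 };
    IStructuralEquatable sx = x;
    Console.WriteLine(sx.Equals(y, StructuralComparisons.StructuralEqualityComparer));   // True
  }
}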
Partial Equivalence Relations
Tuple also needed to support the semantics of partial equivalence relations. An example of a partial equivalence relation in the Microsoft .NET Framework is the relationship between NaN and other floating-point numbers. For example: NaN<NaN is false, but the same holds for NaN>NaN and NaN == NaN. This is due to NaN being fundamentally incomparable, since it does not represent any number. While we are able to encode this sort of relationship with operators, the same does not hold for the CompareTo method on IComparable. This is because there is no value that CompareTo can return that signals the two values are incomparable.

F# requires that the structural equality on tuples also works with partial equivalence relationships. Therefore in F#, [NaN, NaN] == [NaN, NaN] is false, but so is [NaN, NaN] != [NaN, NaN].

Our first tentative solution was to have overloaded operators on Tuple. This worked by using operators on the underlying types, if they existed, and by falling back to Equals or CompareTo, if they did not. A second option was to create a new interface like IComparable but with a different return type so that it could signal cases where things were not comparable. Ultimately, we decided that we would hold off on building something like this until we saw more examples of partial equivalence being needed across the Microsoft .NET Framework. Instead, we recommended that F# implement this sort of logic in the IComparer and IEqualityComparer implementations that it passes into the IStructural variants of the CompareTo and Equals methods, detecting these cases and using some sort of out-of-band signaling mechanism when it encounters NaNs.

Worth the Effort
Though it took a great deal more design iteration than anyone expected, we were able to create a tuple type that we feel is flexible enough for use in a variety of languages, regardless of syntactic support for tuples. At the same time, we've created interfaces that help describe the important concepts of structural equality and comparison, which have value in the Microsoft .NET Framework outside of tuple itself.

Over the past few months, the F# team has updated its compiler to use System.Tuple as the underlying type for all F# tuples. This ensures that we can start building toward a common tuple type across the Microsoft .NET ecosystem. In addition to this exciting development, tuple was demoed at this year's Professional Developers Conference as a new feature for Microsoft .NET Framework 4 and received much applause from the crowd. Watching the video and seeing how excited developers are to start using tuples has made all the time spent on this deceptively simple feature all the more worthwhile.

MATT ELLIS is a Software Design Engineer on the Base Class Libraries team responsible for diagnostics, isolated storage, and other little features like Tuple. When he's not thinking about frameworks or programming language design, he spends his time with his wife Victoria and their two dogs, Nibbler and Snoopy.



DUSTIN CAMPBELL BASIC INSTINCTS

Stay Error Free with Error Corrections

One of the most useful features of the Microsoft Visual Basic editing experience in Microsoft Visual Studio is background compilation. You're probably already familiar with this feature, as it is similar in spirit to the spelling and grammar checker found in Microsoft Word. As you type code, the Visual Basic compiler runs in the background. When you type an error, it is underlined with a squiggle in the editor and added to the Error List window.

Prior to Visual Studio 2005, the only guidance for fixing errors came primarily from the error messages themselves. When possible, the Visual Basic compiler generates error messages that are intended to explain not only what the error is, but also how to fix it.

Let's look at an example. Suppose that you have the following Visual Basic code:

Module Module1
  Sub Main()
    Dim i As Integer = "1"c
  End Sub
End Module

The code above produces a compiler error because it isn't clear which conversion from Char to Integer is expected. Should the compiler insert a conversion to produce the ASCII value of "1"c, or should it insert a conversion that produces the numerical value of "1"c? The compiler has no option but to generate this error message:

"'Char' values cannot be converted to 'Integer'. Use 'Microsoft.VisualBasic.AscW' to interpret a character as a Unicode value or 'Microsoft.VisualBasic.Val' to interpret it as a digit."

While the preceding error message goes to great lengths to explain how the problem can be fixed, it requires more coding on your part to actually fix it.

Introducing Error Corrections
To make fixing errors easier, the Error Correction UI was introduced. In Visual Studio 2005 and 2008, when an error is typed in the editor, a small red bar is added to the squiggle underline that indicates that a smart tag is available at that location, as shown in Figure 1.

Figure 1 Error Squiggle with a Smart Tag Indicator

If you hover the mouse over the smart tag indicator, the collapsed smart tag will appear along with a tooltip description. This is pictured in Figure 2.

Figure 2 Collapsed Smart Tag

In Figure 3, clicking on the smart tag expands the error correction UI with the options that are available for this particular code error.

Figure 3 Expanded Smart Tag with Available Error Corrections

The error correction UI makes it easy to fix errors quickly and accurately.

This article is partly based on a prerelease version of Visual Studio. All information is subject to change. Send your questions and comments to instinct@microsoft.com.
The preview windows clearly show what changes will be made to your code, and it's a simple matter of clicking on either of the two hyperlinks to apply a fix.

Sometimes, a particular compiler error is fixable in some cases, but not all. In these doubtful instances, the Visual Basic IDE will optimistically create a smart tag indicator for that error, primarily to improve performance. It is much quicker to create a smart tag indicator for errors that can be fixed some of the time than to check each error to determine if it can definitely be fixed before creating the indicator. When you hover over a smart tag indicator that doesn't have any error corrections, you will see the "no correction suggestions" message shown in Figure 4.

Figure 4 No Correction Suggestions

Applying Error Corrections with the Keyboard
Error corrections can be applied by using the keyboard as well as the mouse. If the editor caret is located on an error squiggle with a smart tag indicator, you can press Shift+Alt+F10 to immediately expand the smart tag and show the error correction options. With the smart tag expanded, the Up-Arrow and Down-Arrow keys will navigate the options and you can apply an error correction by pressing the Enter key. If you dislike the Shift+Alt+F10 keyboard shortcut, Ctrl+. will also display the smart tag. (I personally find Ctrl+. much easier to remember and to press!)

Fixing Invalid Code
Often, code will contain an error simply because it has an invalid structure. One common mistake is to write code that has unbalanced parentheses. Can you spot the error in the code below?

Module Module1
  Sub Main()
    Dim quad = Function(a, b, c) _
      Function(x) (a * x * x) + (b * x) + c

    Dim f = quad(1.0, -79.0, 1601.0)

    Console.Write(f(42.0)
  End Sub
End Module

When code does not compile because it is invalid (e.g., the missing parenthesis in the call to Console.Write above), the Visual Basic IDE will suggest error corrections that make your code valid if applied. In Figure 5, the error correction offers to insert the missing parenthesis.

Figure 5 Error Correction for Inserting Parenthesis

Error corrections for invalid code aren't limited to code within a method body. For example, suppose that you have a read-only property and would like to convert it into a full property with a setter:

Class Person
  Private _name As String
  Public ReadOnly Property Name() As String
    Get
      Return _name
    End Get
  End Property
End Class

To add a setter in the code above, you would
1. Move the editor caret after End Get.
2. Press Enter.
3. Type Set.
4. Press Enter again.
That would leave you with the following invalid code:

Class Person
  Private _name As String
  Public ReadOnly Property Name() As String
    Get
      Return _name
    End Get
    Set(ByVal value As String)
    End Set
  End Property
End Class

Figure 6 Error Corrections for a Read-only Property with a Setter

To make the property valid, the ReadOnly keyword needs to be removed. Now, deleting the keyword is trivial, but it is easier and more efficient if you use the error correction to do the job:
1. Press Up to move the editor caret to the error squiggle.
2. Press Ctrl+. to display the suggested error corrections shown in Figure 6.
3. Press Down to focus the first error correction.
4. Press Enter to apply.

Figure 6 Error Corrections for a Read-only Property with a Setter

As you can see, not only do error corrections help you write the correct code, they can be a big time saver!

Using the Right Type Conversions

By default, Visual Basic leaves strict typing off. This allows you to take advantage of the Visual Basic compiler's support for implicit data type conversions and late binding:

Module Module1
  Sub Main()
    Dim s As String = 42
  End Sub
End Module

Given the code above, the compiler will properly insert a type conversion from Integer to String. However, if you use strict typing, Visual Basic error corrections will be available to help you insert the same type conversion that the compiler would have, if strict typing were off. Figure 7 shows the error correction for the code above when strict typing is enabled.

Figure 7 Error Correction to Convert an Integer to a String
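For comparison, a minimal sketch of the strictly typed version, with one explicit form of the conversion written out, looks like this (assume Option Strict On is set for the file or project):

Option Strict On

Module Module1
  Sub Main()
    ' With strict typing on, the implicit Integer-to-String conversion is an
    ' error; writing the conversion explicitly (for example, with CStr) fixes it.
    Dim s As String = CStr(42)
  End Sub
End Module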

Correcting Spelling Mistakes

Sometimes a code error might simply be a spelling mistake. For spelling errors in type names, Visual Basic will provide error corrections that suggest existing types with similar names. Consider the following code, which contains a spelling mistake:

Class Class1
End Class

Class Class2
End Class

Module Module1
  Sub Main()
    Dim c As New Clss
  End Sub
End Module

The code above mistakenly attempts to instantiate a type named Clss, but none exists. In this situation, two error corrections are available, one to change Clss to Class1 and another to change Clss to Class2, as shown in Figure 8.

Figure 8 Error Correction for Spelling Mistake

Currently, the spell checker is fairly simple and works only for type names. However, this is an area that the Visual Basic team would like to improve in a future release.

Importing Namespaces

After shipping Visual Studio 2008, the Visual Basic team received several requests for additional error corrections. Chief among these was an error correction for automatically adding Imports statements. It can be pretty irritating to start using a type and realize that you've forgotten to import its namespace. For example, suppose you were using the System.IO.File class to open and read a file from disk:
Module Module1
  Sub Main()
    Dim fileName = Path.Combine("C:\Temp", "Sizes.txt")
    Using f = File.OpenRead(fileName)

    End Using
  End Sub
End Module

This code results in an error if you haven't imported the System.IO namespace yet. In Visual Studio 2010, there are two possible error corrections for this error:
1. Import the namespace.
2. Qualify the reference.
This is shown in Figure 9. By choosing the first suggested error correction, we can easily import the System.IO namespace.

Figure 9 Error Correction for Adding Imports Statements

This begs the question: what if importing a namespace would change the meaning of your code? Suppose that the code above will use the Windows Presentation Framework (WPF) to create a System.Windows.Shapes.Rectangle, using data read from the Sizes.txt file. With the WPF references added, you start by instantiating a new Rectangle:

Imports System.IO

Module Module1
  Sub Main()
    Dim fileName = Path.Combine("C:\Temp", "Sizes.txt")
    Using f = File.OpenRead(fileName)
      Dim r As New Rectangle
    End Using
  End Sub
End Module

Like before, the namespace hasn't been imported yet, so you use the error correction UI pictured in Figure 10 to import it for you. However, this time there's a problem. The System.Windows.Shapes namespace also includes a type named Path, so importing the namespace would change the meaning of your code.

Figure 10 Error Corrections for Importing System.Windows.Shapes

Fortunately, the logic to avoid these situations is built into the error correction for adding Imports statements. The error correction recognizes the problem and displays the Add Imports Validation Error dialog, pictured in Figure 11.

Figure 11 The Add Imports Validation Error Dialog

This dialog gives you two options for tweaking the error correction, so the meaning of your code isn't changed:
1. Import 'System.Windows.Shapes' and qualify the affected identifiers.
2. Do not import 'System.Windows.Shapes', but change 'Rectangle' to 'Windows.Shapes.Rectangle'.
By picking the first option, an Imports statement is added for System.Windows.Shapes, and the reference to Path is fully qualified:

Imports System.IO
Imports System.Windows.Shapes

Module Module1
  Sub Main()
    Dim fileName = System.IO.Path.Combine("C:\Temp", "Sizes.txt")
Using f = File.OpenRead(fileName)
Dim r As New Rectangle
End Using
End Sub
End Module
Again, this error correction is available only in Visual Studio 2010. If you are using Visual Studio 2008, you won't see suggestions for adding Imports statements.

Generating New Types and Members


Visual Basic isn't done with error corrections yet! Visual Studio 2010 will introduce a new feature called Generate from Usage, which allows you to easily generate new types and members. For Visual Basic, this feature is implemented as a set of error corrections, so you can access them through the existing error corrections. In many cases, the Generate from Usage error corrections offer suggestions for compiler errors that previously had none. For example, look back at Figure 4. In Visual Studio 2008, that code has no error corrections available. But in Visual Studio 2010, the new Generate from Usage error corrections are suggested. Figure 12 shows the difference in the error corrections between Visual Studio 2008 and Visual Studio 2010 for the same code.

Figure 12 Generate Class
By choosing the first suggestion, a new class named Customer is
generated in a new file. Now, you can continue to write code that
uses members of the Customer class and use the error corrections
to stub out the members for you. For example, Figure 13 shows
the error corrections after you’ve assigned a value to a member of
a class that hasn't been declared yet.

Figure 13 Generate Property or Field
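To give a concrete picture, a minimal sketch of the kind of usage code that produces these suggestions might look like the following (neither Customer nor its Name member exists yet at this point):

Module Module1
  Sub Main()
    ' Customer doesn't exist yet, so Generate from Usage offers to create it...
    Dim c As New Customer
    ' ...and assigning to an undeclared member offers to generate a property or field.
    c.Name = "Contoso"
  End Sub
End Module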
After choosing the first suggestion, the generated Customer
class looks like this:
Class Customer
Property Name As String
End Class
In Visual Studio 2010, you will be able to leverage the Generate from Usage error corrections for maximum efficiency.

Conclusion
Error corrections are an essential part of the Visual Basic coding
experience. Not only do they help you immediately spot and fix
errors, but you can use them to write code more efficiently. This
article only scratches the surface of the hundreds of suggestions
offered by Visual Basic. And with Visual Studio 2010, there is an error correction for most coding errors.
Don’t forget to use Ctrl+.! 

DUSTIN CAMPBELL is the Microsoft Visual Basic IDE Program Manager on the
Microsoft Visual Studio Languages Team. He works primarily on the editor and
debugger product features. As a programming language nut, he also contributes
to other languages in Visual Studio, such as C# and F#. Before joining Microsoft,
Dustin helped to develop the award-winning CodeRush and Refactor! tools at
Developer Express Inc. Dustin’s favorite color is blue.
CUTTING EDGE DINO ESPOSITO

Comparing Web Forms and ASP.NET MVC


I first saw Microsoft ASP.NET in action in 1999, when it was tentatively named ASP+. At that time, building a Web application on the Microsoft platform was a matter of assembling a bunch of Active Server Pages (ASP). In a typical ASP page, you find HTML literals interspersed with code blocks. In code blocks, script code (mostly VBScript) is used to incorporate data generated by COM objects, such as components from the ActiveX Data Objects (ADO) framework.
Undoubtedly, the introduction of ASP.NET in the late 1990s was a big step forward, as it represented a smart way to automate the production of HTML displayed in a client's browser. ASP.NET simplified a number of everyday tasks and, more importantly, enabled developers to work at a higher level of abstraction. This allowed them to focus more on the core functions of the Web application rather than on common tasks around Web page design.
Based on server controls, ASP.NET allows developers to build real-world Web sites and applications with minimal HTML and JavaScript skills. The whole point of ASP.NET is productivity, achieved through powerful tools integrated in the runtime as well as the provision of development facilities, such as server controls, user controls, postback events, viewstate, forms authentication, and intrinsic objects. The model behind ASP.NET is called Web Forms and it was clearly inspired by the desktop Windows Forms model (in turn deeply inspired by the Visual Basic Rapid Application Development philosophy).
So why did Microsoft release "another" ASP.NET framework, called ASP.NET MVC? In this article, I'll explore the pros and cons of both ASP.NET Web Forms and ASP.NET MVC.

Send your questions and comments for Dino to cutting@microsoft.com.

Benefits of ASP.NET Web Forms
As noted, ASP.NET Web Forms is stable and mature, and it is supported by heaps of third party controls and tools.
One of the keys to the rapid adoption of ASP.NET is certainly the point-and-click metaphor taken from Windows desktop development. With Web Forms, Microsoft basically extended the Visual Basic programming model to the Web. Desktop development, however, is stateful whereas the Web is inherently stateless. So the Web Forms model essentially abstracted a number of features to provide a simulated stateful model for Web developers. As a result, you didn't have to be a Web expert with a lot of HTML and JavaScript knowledge to write effective Web applications.
To simulate stateful programming over the Web, ASP.NET Web Forms introduced features such as viewstate, postbacks, and an overall event-driven paradigm. For example, the developer can, say, double-click on a button to automatically generate a stub of code that will handle the user's clicking to the server. To get started writing an ASP.NET Web application, you just need to know the basics of .NET development and the programming interface of some ad hoc components, like server controls. Server controls that generate HTML programmatically, and the runtime pipeline, contribute significantly to a fast development cycle.
At the end of the day, key features of ASP.NET Web Forms are just the componentization of some ASP best practices. For example, postback, auto-population of input fields, authentication and authorization before page rendering, server controls, and compilation of the page, are not features devised and created from scratch for ASP.NET Web Forms. All of them evolved from ASP best practices. Ten years ago, ASP developers (including myself) were looking for exactly the set of features that Web Forms ended up providing. Furthermore, ASP.NET Web Forms generally exceeded our expectations by providing a full abstraction layer atop the whole Web stack: JavaScript, CSS, HTML.
To write an ASP page, you did need to know quite a bit about the Web and script languages. To write an ASP.NET page, in contrast, you need to know primarily about .NET and its compiled languages. Productivity and rapid development of data-driven, line-of-business applications have been the selling points of ASP.NET Web Forms. Until now, that is.

Drawbacks of Web Forms
Like many other things in this imperfect world, ASP.NET Web Forms is not free of issues. Years of experience prove beyond any reasonable doubt that separation of concerns (SoC) has not been a natural fit with the Web Forms paradigm.
Automated testing of an ASP.NET Web Forms application is hard, and not just because of a lack of SoC. ASP.NET Web Forms is
based on a monolithic runtime environment that can be extended, to some extent, but it is not a pluggable and flexible system. It's nearly impossible to test an ASP.NET application without spinning up the whole runtime.
To achieve statefulness, the last known state of each server page is stored within the client page as a hidden field—the viewstate. Though viewstate has too often been held up as an example of what's wrong with Web Forms, it is not the monster it's made out to be. In fact, using a viewstate-like structure in classic ASP was a cutting-edge solution. From an ASP perspective, simulation of statefulness (that is, postback, viewstate, controls) was a great achievement, and the same was also said at first about how Web Forms isolated itself from HTML and JavaScript details.
For modern Web pages, abstraction from HTML is a serious issue as it hinders accessibility, browser compatibility, and integration with popular JavaScript frameworks like jQuery, Dojo, and PrototypeJS. The postback model that defaults each page to post to itself makes it harder for search engines to rank ASP.NET pages high. Search engines and spiders work better with links with parameters, better if rationalized to human-readable strings. The postback model goes in the opposite direction. Also, an excessively large viewstate is problematic because the keyword the rank is based on may be located past the viewstate, and therefore far from the top of the document. Some engines recognize a lower rank in this case.

Benefits of ASP.NET MVC
Some of the recognized issues with the Web Forms model can be fixed within ASP.NET 3.5. You can disable or control the size of the viewstate. (Even though very few developers seem to have noticed, the size of the viewstate decreased significantly in the transition from ASP.NET 1.x to ASP.NET 2.0 when Microsoft introduced a much more efficient serialization algorithm. In ASP.NET 4.0, you should also expect improvements in the way in which viewstate can be disabled and controlled.) You can use an ad hoc HTTP module to do URL rewriting or, better yet, you can use the newest Web routing API from ASP.NET 3.5 SP1. In ASP.NET 4.0, you can minutely control the ID of elements, including scoped elements. Likewise, in ASP.NET 4.0 the integration of external JavaScript frameworks will be simpler and more effective. Finally, the history management API in ASP.NET 3.5 SP1 made AJAX and postback work together while producing a search-engine-friendly page.
In many aspects, the Web Forms model in ASP.NET 4.0 is a better environment that tackles some of the aforementioned flaws. So what's the point of ASP.NET MVC?
You'll find a good introduction to ASP.NET MVC in the March 2009 issue of MSDN Magazine (msdn.microsoft.com/en-us/magazine/cc337884.aspx), where Chris Tavares explains the basics of ASP.NET development without Web Forms. In summary, ASP.NET MVC is a completely new framework for building ASP.NET applications, designed from the ground up with SoC and testability in mind. When you write an ASP.NET MVC application, you think in terms of controllers and views. You make your decisions about how to pass data to the view and how to expose your middle tier to the controllers. The controller chooses which view to display based on the requested URL and pertinent data. Each request is resolved by invoking a method on a controller class. No postbacks are ever required to service a user request. No viewstate is ever required to persist the state of the page. No arrays of black-box server controls exist to produce the HTML for the browser.
With ASP.NET MVC, you rediscover the good old taste of the Web—stateless behavior, full control over every single bit of HTML, total script and CSS freedom.
The HTML served to the browser is generated by a separate, and replaceable, engine. There's no dependency on ASPX physical server files. ASPX files may still be part of your project, but they now serve as plain HTML templates, along with their code-behind classes. The default view engine is based on the Web Forms rendering engine, but you can use other pluggable engines such as nVelocity or XSLT. (For more details, have a look at the MVCContrib Web site at mvccontrib.codeplex.com.)
The runtime environment is largely the same as in ASP.NET Web Forms, but the request cycle is simpler and more direct. An essential part of the Web Forms model, the page lifecycle, is no longer necessary in ASP.NET MVC. It should be noted, though, that the default view engine in ASP.NET MVC is still based on the Web Forms rendering engine. This is the trick that allows you to use master pages and some server controls in ASP.NET MVC views. As long as the view engine is based on Web Forms, the view is an ASPX file with a regular code-behind class where you can handle classic events such as Init, Load, PreRender, plus control-specific events such as RowDataBound for a GridView control. If you switch off the default view engine, you no longer need Page_Load or other events of the standard page lifecycle. Figure 1 compares the run-time stack for Web Forms and ASP.NET MVC.
It should be noted that the "Page lifecycle" boxes in Figure 1 have been collapsed for readability and include several additional events each. Anyway, the run-time stack of ASP.NET MVC is simpler and the difference is due to the lack of a page lifecycle. However, this makes it problematic to maintain the state of visual elements across page requests. State can be stored into Session or Cache, but this decision is left to the developer.

Figure 1 The Run-Time Stack at a Glance
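To give a feel for that request model, here is a minimal controller sketch of my own (the names are invented, and the code assumes the System.Web.Mvc namespace from ASP.NET MVC 1.0); a URL such as /Customer/Detail/5 simply invokes a method and returns a view:

using System.Web.Mvc;

public class CustomerController : Controller
{
    // Invoked directly by the routing system; no postback or viewstate is involved.
    public ActionResult Detail(int id)
    {
        // In a real application the model would come from your middle tier.
        var model = new { CustomerId = id };
        return View(model);
    }
}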
Figure 2 The Sequence Diagram of an ASP.NET MVC Request

Figure 2 illustrates the sequence of an ASP.NET MVC request. The MVC acronym stands for Model-View-Controller. However, you should note that the pattern depicted in Figure 2 doesn't exactly match the classic formulation of the MVC pattern. In particular, in the original MVC paper, Model and View are tied together through an Observer relationship. The MVC pattern, though, is deliberately loosely defined and, even more importantly, was devised when the Web was still to come. Adapting MVC to the Web moved it toward the model in Figure 2, which is also known as Model2. In general, when you talk or read about MVC be aware that there are quite a few slightly different flavors of it within the literature.

Drawbacks of ASP.NET MVC
So ASP.NET MVC brings to the table a clean design with a neat separation of concerns, a leaner run-time stack, full control over HTML, an unparalleled level of extensibility, and a working environment that enables, not penalizes, test-driven development (TDD). Is ASP.NET MVC, therefore, a paradise for Web developers?
Just like with Web Forms, what some perceive as a clear strength of ASP.NET MVC, others may see as a weakness. For example, full control over HTML, JavaScript, and CSS in ASP.NET MVC means that you enter the Web elements manually. Much of this pain can be mitigated, however, with some of the more recent JavaScript libraries and even a different view engine. In general, though, there's no sort of component model to help you with the generation of HTML, as there is in the Web Forms approach. Currently, HTML helpers and user controls are the only tools you can leverage to write HTML more quickly. As a result, some ASP.NET developers may see ASP.NET MVC as taking a step backward in terms of usability and productivity. Another point to be made, regarding the impact of ASP.NET MVC on everyday development, is that it requires some upfront familiarity with the MVC pattern. You need to know how controllers

This Makes You


Look Better.
Introducing DataParts, Data Visualization
Tools For SharePoint 2007
DataParts is a powerful new way to add interactive
business intelligence to SharePoint portals. With
DataParts, visualizing and analyzing data becomes
remarkably easy – and code free. DataParts includes
our complete suite of advanced lists, card views,
charts, digital panels and gauges as web parts that
can be easily configured in just minutes for the type
of data desired.

WSS 3.0 and Visit SoftwareFX.com for free trial versions, interactive
MOSS 2007 demos and more information about our latest products.
32 msdn magazine Cutting Edge
SharePoint is a trademark or a registered trademark of Microsoft Corporation. DataParts is a registered trademark of Software FX, Inc. Other names are trademarks or registered trademarks of their respective owners.
and views work together in the ASP.NET implementation. In other words, ASP.NET MVC is not something you can easily learn by experimenting. In my experience, this may be the source of decreased productivity for the average Web Forms developer.

The Right Perspective
As an architect or developer, it is essential that you understand the structural differences between the frameworks so that you can make a thoughtful decision. All in all, ASP.NET Web Forms and ASP.NET MVC are functionally equivalent in the sense that a skilled team can successfully use either to build any Web solution.
The skills, education, and attitude of the team, though, are the key points to bear in mind. As you may have figured out yourself, most of the features presented as a plus for either framework may also be seen as a minus, and vice versa. Full control over HTML, for example, may be a lifesaver to one person but a nightmare to another. Personally, I was shocked the first time I saw the content of a nontrivial view page in ASP.NET MVC. But when I showed the same page to a customer whose application was still using a significant number of ASP pages, well, he was relieved. If you have accessibility as a strict requirement, you probably want to take full control over the HTML being displayed. And this is not entirely possible with Web Forms. On the other hand, if you're building a heavy data-driven application, you'll welcome the set of data-bound controls and statefulness offered by Web Forms.
Correctly, Microsoft has not positioned ASP.NET MVC as a replacement for ASP.NET Web Forms. Web Forms is definitely a pattern that works for Web applications. At the same time, Ruby-on-Rails has proved that MVC can also be a successful pattern for Web applications; and ASP.NET MVC confirms this.
In the end, Web Forms and ASP.NET MVC have pros, cons, and structural differences that affect various levels. Generalizing, I'd say that Web Forms embraces the RAD philosophy whereas ASP.NET MVC is TDD-oriented. Further, Web Forms goes toward an abstraction of the Web that simulates a stateful environment, while ASP.NET MVC leverages the natural statelessness of the Web and guides you towards building applications that are loosely coupled and inherently testable, search-engine friendly, and with full control of HTML.

What's the Perfect Model for ASP.NET?
After using Web Forms for years, I recognize a number of its drawbacks, and ASP.NET MVC addresses them quite well: testability, HTML control, and separation of concerns. But though I see ASP.NET MVC as an equally valid option at this time, I don't believe it to be the silver bullet for every Web application. In my opinion, ASP.NET MVC today lacks some level of abstraction for creating standard pieces of HTML. HTML helpers are just an interesting attempt to speed up HTML creation. I hope to see in the near future a new generation of MVC-specific server controls, as easy and quick to learn and use as Web Forms server controls, but

This Makes Your


Life Easier.
Introducing VTC, The Virtual Training Center
For SharePoint 2007
With VTC, IT and help desk personnel will no longer be
overloaded with SharePoint questions and training tasks.
VTC delivers a complete program of expertly produced,
self-paced tutorial modules designed to empower every
user and maximize the value of every SharePoint feature.
VTC installs in minutes on your server – providing instant
on-demand access for everyone in your organization.

msdnmagazine.com July 2009 33


Data visualization for every need, every platform
totally unbound from the postback and viewstate model. My hopes are for a System.Web.Mvc.GridView control that saves me from writing a loop to generate an HTML table, while offering column templates, server-side data-binding events, and styling options. What would be the difference between such an MVC GridView and today's Web Forms GridView? The MVC GridView would only emit HTML plus, optionally, some row-specific JavaScript, but it wouldn't manage things like paging and sorting. Paging and sorting will be delegated to other specific controls or plain links created by the developer. In this way, the MVC GridView could bring some RAD flavor to ASP.NET MVC, speeding up the development of pages without precluding handcrafted pages.
Going back to the root of the problem, the key difference between Web Forms and ASP.NET MVC is the underlying pattern. Web Forms is a model based on the "Page Controller" pattern. Web Forms are UI-focused and centered around the concept of a page: the page gets input, posts back, and determines the output for the browser. The development environment was therefore devised to enable rapid prototyping, via wizards and rich designers. Any user action ends up in a method on the code-behind class of each page. At that point, though, nothing prevents you from using proper SoC and nothing really stops architects from imposing patterns like MVC, Model-View-Presenter (MVP) or even Model-View-View-Model (MVVM).
The Web Forms architecture does not encourage SoC, but it doesn't prevent it either. The Web Forms architecture makes it seductive to opt for drag-and-drop of controls, to code logic right in event stubs without further separation, and to drop data sources right on the page, which couples the UI directly to the database. MVC is neither prohibited nor a blasphemy in Web Forms; it's just that very few developers practice it because it requires working around much of the Web Forms infrastructure.
Testability is a different story. ASP.NET Web Forms doesn't prevent unit testing, but it requires much discipline and repetitive boilerplate coding to do so. As long as you code your way to SoC you can test and reuse presentation and business logic. Sure, you likely don't have the Visual Studio project template to create a test project for you. You need to become familiar with testing and mocking frameworks and manage those projects yourself. But this can be done. From a testability perspective, though, a real difference exists between Web Forms and ASP.NET MVC. In Web Forms, you just don't have the flexibility of ASP.NET MVC. This is a true limitation. ASP.NET MVC is designed with testability in mind, which means that the framework architecture guides the developer to write code that is inherently testable, that is isolated from the context or connected to it via contracted interfaces. Even more importantly, in ASP.NET MVC intrinsic objects are mockable as they expose interfaces and base classes. From a testing standpoint, the best you can do in Web Forms is to move your logic into separate and easily testable classes. Then, you test the presentation (ASPX and code-behind) by sending HTTP requests and checking the results. You can't do this in Web Forms without spinning up the whole ASP.NET runtime, however. In ASP.NET MVC, the majority of testing is to assure that the data being passed into the view is correct. In addition, you can mock up intrinsic objects and run your tests in a genuinely isolated environment.
We should also note that control over HTML and SEO-friendly URLs, both advantages of ASP.NET MVC, can be achieved to some extent in Web Forms. ASP.NET 3.5 SP1, in particular, includes the URL Routing and History API for SEO. CSS adapters, instead, are the tools to leverage to try to control HTML in Web Forms. Integration with JavaScript and AJAX frameworks is, frankly, no longer an issue in Web Forms.

Undisputable Facts
ASP.NET Web Forms and ASP.NET MVC are not competitors in the sense that one is supposed to replace the other. You have to choose one, but different applications may force you to make different choices. In the end, it's really like choosing between a car and a motorcycle when making a trip. Each trip requires a choice, and having both vehicles available should be seen as an opportunity, not as a curse. Here are some facts about the frameworks:
• Web Forms is hard to test.
• ASP.NET MVC requires you to manage the generation of HTML at a more detailed level.
• ASP.NET MVC is not the only way to get SoC in ASP.NET.
• Web Forms allows you to learn as you go.
• Viewstate can be controlled or disabled.
• Web Forms was designed to abstract the Web machinery.
• ASP.NET MVC exposes Web architecture.
• ASP.NET MVC was designed with testability and Dependency Injection in mind.
• ASP.NET MVC takes you towards a better design of the code.
• ASP.NET MVC is young and lacks a component model.
• ASP.NET MVC is not anti-Web Forms.
ASP.NET MVC was not created to replace Web Forms but to partner it. ASP.NET MVC turns some of the weaker elements of Web Forms into its own internal strengths. However, problems such as lack of testability, SoC, SEO, and HTML control can be avoided or reduced in Web Forms with some discipline and good design, though the framework itself doesn't provide enough guidance.

At the End of the Day
We have seen that there are pros and cons in both Web Forms and ASP.NET MVC. Many developers, however, seem to favor ASP.NET MVC because it represents the only way to get SoC and testability into their applications. Is it really the only way? No. However, ASP.NET MVC makes it easier and natural to achieve SoC and write more testable code. ASP.NET MVC doesn't magically transform every developer into an expert architect and doesn't prevent developers from writing bloated and poorly designed code. At the end of the day, both Web Forms and ASP.NET MVC help to build applications that are designed and implemented to deal effectively with the complexity of real-world solutions. No software magic exists, and none is yet supported by the ASP.NET platform.

DINO ESPOSITO is an architect at IDesign and the co-author of "Microsoft .NET: Architecting Applications for the Enterprise" (Microsoft Press, 2008). Based in Italy, Dino is a frequent speaker at industry events worldwide. You can join his blog at weblogs.asp.net/despos.
SILVERLIGHT

Composite Web Apps with Prism
Shawn Wildermuth

Your first experience with Silverlight was probably something small: a video player, a simple charting application, or even a menu. These types of applications are simple and straightforward to design, and segmenting them into rigorous layers with separate responsibilities is overkill.
Problems surface, however, when you try to apply a tightly coupled style to large applications. As the number of moving parts grows, the simple style of application development falls apart. Part of the remedy is layering (see my article "Model-View-ViewModel In Silverlight 2 Apps" at msdn.microsoft.com/en-us/magazine/dd458800.aspx), but a tightly coupled architecture is just one of a number of problems that need to be solved in large Silverlight projects.
In this article, I show you how to build an application using the composition techniques of the Composite Application Library from the Prism project. The example I develop is a simple editor of database data.

Why Prism?
As requirements change and a project matures, it is helpful if you can change parts of the application without having these changes cascade throughout the system. Modularizing an application allows you to build application components separately (and loosely coupled) and to change whole parts of your application without affecting the rest of the code.
Also, you might not want to load all the pieces of your application at once. Imagine a customer management application in which users log on and can then manage their prospect pipeline as well as check e-mail from any of their prospects. If a user checks e-mail several times a day but manages the pipeline only every day or two, why load the code to manage the pipeline until it is needed? It would be great if an application supported on-demand loading of parts of the application, a situation that can be addressed by modularizing the application.
The patterns & practices team at Microsoft created a project called Prism (or CompositeWPF) that is meant to address problems like these for Windows Presentation Foundation (WPF) applications, and Prism has been updated to support Silverlight as well. The Prism package is a mix of framework and guidance for building applications. The framework, called the Composite Application Library (CAL), enables the following:
• Application modularity: Build applications from partitioned components.
• UI composition: Allows loosely coupled components to form user interfaces without discrete knowledge of the rest of the application.
• Service location: Separate horizontal services (for example, logging and authentication) from vertical services (business logic) to promote clean layering of an application.
The CAL is written with these same design principles in mind, and for application developers it is a buffet-style framework—take what you need and leave the rest. Figure 1 shows the basic layout of the CAL in relation to your own application.
The CAL supports these services to aid you in composing your application from smaller parts. This means the CAL handles which pieces are loaded (and when) as well as providing base functionality. You can decide which of these capabilities help you do your job and which might get in the way.

This article discusses:
• Silverlight 2
• Application composition
• Dependency injection
Technologies discussed:
Silverlight 2, Prism
Code download available at:
code.msdn.microsoft.com/mag200907Prism
Figure 1 Composite Application Library

My example in this article uses as much of the CAL as possible. It is a shell application that uses the CAL to load several modules at run time, place views in regions (as shown in Figure 2), and support services. But before we get to that code, you need to understand some basic concepts about dependency injection (also called Inversion of Control, or IoC). Many of the CAL's features rely on dependency injection, so understanding the basics will help you develop the architecture of your Silverlight project with Prism.

Figure 2 Composite Application Architecture

Introducing Dependency Injection

In typical development, a project starts with an entry point (an executable, a default.aspx page, and so on). You might develop your application as one giant project, but in most cases some level of modularity exists in that your application loads a number of assemblies that are part of the project. The main assembly knows what assemblies it needs and creates hard references to those pieces. At compile time, the main project knows about all the referenced assemblies, and the user interface consists of static controls. The application is in control of what code it needs and usually knows all the code it might use. This becomes a problem, however, because development takes place inside the main application project. As a monolithic application grows, build time and conflicting changes can slow down development.
Dependency injection aims to reverse this situation by providing instructions that set up dependencies at run time. Instead of the project controlling these dependencies, a piece of code called a container is responsible for injecting them.
But why is this important? For one thing, modularizing your code should make it easier to test. Being able to swap out a project's dependencies enables cleaner testing so that only the code to be tested can be the source of a test failing, instead of code somewhere in the nested chain of dependencies. Here's a concrete example. Imagine you have a component that other developers use to look up addresses for particular companies. Your component depends on a data access component that retrieves the data for you. When you test your component, you start by testing it against the database, and some of the tests fail. But because the schema and builds of the database are constantly changing, you don't know whether your tests are failing because of your own code or the data access code. With your component's hard dependency on the data access component, testing the application becomes unreliable and causes churn while you track down failures in your code or in other's code.
Your component might look something like this:

public class AddressComponent
{
  DataAccessComponent data = new DataAccessComponent();

  public AddressComponent()
  {
  }

  ...
}

Instead of a hard-wired component, you could accept an interface that represents your data access, as shown here:

public interface IDataAccess
{
  ...
}

public class AddressComponent
{
  IDataAccess data;

  public AddressComponent(IDataAccess da)
  {
    data = da;
  }

  ...
}
Ordinarily, an interface is used so you can create a version
that allows you to adjust your code. This approach is often called
“mocking.” Mocking means creating an implementation of the
dependency that does not actually represent the real version.
Literally, you’re creating a mock implementation.
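As a small illustration (my sketch, not code from the article; the members are left as placeholders because the article elides the IDataAccess members), a mock simply implements the interface with canned data and is handed to the component under test:

// Hypothetical mock: implements IDataAccess without touching a database.
public class MockDataAccess : IDataAccess
{
  // Implement the IDataAccess members here with hard-coded test data.
}

public class AddressComponentTests
{
  public void LookupUsesOnlyTheCodeUnderTest()
  {
    // Inject the mock; a failing test now points at AddressComponent itself.
    var component = new AddressComponent(new MockDataAccess());
    // ... assertions against the component's behavior go here ...
  }
}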
This approach is better because the dependency (IDataAccess)
can be injected into the project during construction of the object.
The implementation of the IDataAccess component will depend
on the requirements (testing or real).
That’s essentially how dependency injection works, but how
is the injection handled? The job of the container is to handle
creation of the types, which it does by allowing you to register
types and then resolving them. For example, assume you have
a concrete class that implements the IDataAccess interface.
During start up of the application, you can tell the container
to register the type. Anywhere else in your application where
you need the type, you can ask the container to resolve the
type, as shown here:

public void App_Startup()
{
  container.RegisterType<IDataAccess, DbDataAccess>();
}
...

public void GetData()
{
  IDataAccess acc = container.Resolve<IDataAccess>();
}

Depending on the situation (testing or production), you can swap out the implementation of IDataAccess simply by changing the registration. Additionally, the container can handle construction injection of dependencies. If an object that needs to be created by the container's constructor takes an interface that the container can resolve, it resolves the type and passes it to the constructor, as shown in Figure 3.

Figure 3 Type Resolution by the Container

public class AddressComponent : IAddressComponent
{
  IDataAccess data;

  public AddressComponent(IDataAccess da)
  {
    data = da;
  }
}
...

public void App_Startup()
{
  container.RegisterType<IAddressComponent, AddressComponent>();
  container.RegisterType<IDataAccess, DbDataAccess>();
}

public void GetAddresses()
{
  // When we ask the container to create the AddressComponent,
  // it sees that a constructor takes a IDataAccess object
  // so it automatically resolves that dependency
  IAddressComponent addr = container.Resolve<IAddressComponent>();
}

Notice that the AddressComponent's constructor takes an implementation of IDataAccess. When the constructor creates the AddressComponent class during resolution, it automatically creates the instance of IDataAccess and passes it to the AddressComponent.
When you register types with the container, you also tell the container to deal with the lifetime of the type in special ways. For example, if you are working with a logging component, you might want to treat it as a singleton so that every part of the application that needs logging does not get its own copy (which is the default behavior). To do this, you can supply an implementation of the abstract LifetimeManager class. Several lifetime managers are supported. ContainerControlledLifetimeManager is a singleton per process and PerThreadLifetimeManager is a singleton per thread. For ExternallyControlledLifetimeManager, the container holds a weak reference to the singleton. If the object is released externally, the container creates a new instance, otherwise it returns the live object contained in the weak reference.
You use the LifetimeManager class by specifying it when registering a type. Here's an example:

container.RegisterType<IAddressComponent, AddressComponent>(
  new ContainerControlledLifetimeManager());

In the CAL, the IoC container is based on the Unity framework from the patterns & practices group. I'll use the Unity container in the following examples, but there are also a number of open source alternatives to the Unity IoC container, such as Ninject, Spring.NET, Castle, and StructureMap. If you are familiar with and already using an IoC container other than Unity, you can supply your own container (although it takes a little more effort).

Startup Behavior

Ordinarily in a Silverlight application, the startup behavior is simply to create the main XAML page's class and assign it to the application's RootVisual property. In a composite application, this work is still required, but instead of creating the XAML page class, a composite application typically uses a bootstrapping class to handle startup behavior.
To start, you need a new class that derives from the UnityBootstrapper class. This class is in the Microsoft.Practices.Composite.UnityExtensions assembly. The bootstrapper contains overridable methods that handle different parts of startup behavior. Often, you will not override every startup method, only the ones necessary. The two methods you must override are CreateShell and GetModuleCatalog.
The CreateShell method is where the main XAML class is created. This is typically called the shell because it is the visual container for the application's components. My example includes a bootstrapper that creates a new instance of the Shell class and assigns it to RootVisual before returning this new Shell class, as shown here:

public class Bootstrapper : UnityBootstrapper
{
  protected override DependencyObject CreateShell()
  {
    Shell theShell = new Shell();
    App.Current.RootVisual = theShell;
    return theShell;
  }

  protected override IModuleCatalog GetModuleCatalog()
  {
    ...
  }
}

The GetModuleCatalog method, which I'll explain in the next section, returns the list of modules to load.
Now that you have a bootstrapper class, you can use it in your Silverlight application's startup method. Usually, you create a new instance of the bootstrapper class and call its Run method, as shown in Figure 4.
The bootstrapper is also involved in registering types with the container that different parts of the application require. To accomplish this, you override the ConfigureContainer method of the bootstrapper. This gives you a chance to register any types that are going to be used by the rest of the application. Figure 5 shows the code.
Here, the code registers an interface for a class that implements the IShellProvider interface, which is created in our example and
is not part of the CAL framework. That way we can use it in our implementation of the CreateShell method. We can resolve the interface and then use it to create an instance of the shell so we can assign it to RootVisual and return it. This methodology may seem like extra work, but as you delve into how the CAL helps you build your application, it becomes clear how this bootstrapper is helping you.

Figure 4 Creating an Instance of the Bootstrapper

public partial class App : Application
{
  public App()
  {
    this.Startup += this.Application_Startup;
    this.Exit += this.Application_Exit;
    this.UnhandledException += this.Application_UnhandledException;

    InitializeComponent();
  }

  private void Application_Startup(object sender, StartupEventArgs e)
  {
    Bootstrapper boot = new Bootstrapper();
    boot.Run();
  }

  ...
}

Figure 5 Registering Types

public class Bootstrapper : UnityBootstrapper
{
  protected override void ConfigureContainer()
  {
    Container.RegisterType<IShellProvider, Shell>();
    base.ConfigureContainer();
  }

  protected override DependencyObject CreateShell()
  {
    // Get the provider for the shell
    IShellProvider shellProvider = Container.Resolve<IShellProvider>();

    // Tell the provider to create the shell
    UIElement theShell = shellProvider.CreateShell();

    // Assign the shell to the root visual of our App
    App.Current.RootVisual = theShell;

    // Return the Shell
    return theShell;
  }

  protected override IModuleCatalog GetModuleCatalog()
  {
    ...
  }
}

Modularity

In a typical .NET environment, the assembly is the main unit of work. This designation allows developers to work on their code separately from each other. In the CAL, each of these units of work is a module, and for the CAL to use a module, it needs a class that can communicate the module's startup behavior. This class also needs to support the IModule interface. The IModule interface requires a single method called Initialize that allows the module to set itself up to be used in the rest of the application. The example includes a ServerLogger module that contains the logging capabilities for our application. The ServerLoggerModule class supports the IModule interface as shown here:

public class ServerLoggerModule : IModule
{
  public void Initialize()
  {
    ...
  }
}

The problem is that we don't know what we want to initialize in our module. Since it's a ServerLogging module, it seems logical that we want to register a type that does logging for us. We want to use the container to register the type so that whoever needs the logging facility can simply use our implementation without knowing the exact type of logging it performs.
We get the container by creating a constructor that takes the IUnityContainer interface. If you remember the discussion of dependency injection, the container uses constructor injection to add types that it knows about. IUnityContainer represents the container in our application, so if we add that constructor, we can then save it and use it in our initialization like so:

public class ServerLoggerModule : IModule
{
  IUnityContainer theContainer;

  public ServerLoggerModule(IUnityContainer container)
  {
    theContainer = container;
  }

  public void Initialize()
  {
    theContainer.RegisterType<ILoggerFacade, ServerBasedLogger>(
      new ContainerControlledLifetimeManager());
  }
}

Once initialized, this module is responsible for the logging implementation for the application. But how does this module get loaded?
When using the CAL to compose an application, you need to create a ModuleCatalog that contains all the modules for the application. You create this catalog by overriding the bootstrapper's GetModuleCatalog call. In Silverlight, you can populate this catalog with code or with XAML.
With code, you create a new instance of the ModuleCatalog class and populate it with the modules. For example, look at this:

protected override IModuleCatalog GetModuleCatalog()
{
  var logModule = new ModuleInfo()
  {
    ModuleName = "ServerLogger",
    ModuleType =
      "ServerLogger.ServerLoggerModule, ServerLogger, Version = 1.0.0.0"
  };

  var catalog = new ModuleCatalog();
  catalog.AddModule(logModule);

  return catalog;
}

Here, I simply add a single module called ServerLogger, the type defined in the ModuleInfo's ModuleType property. In addition, you
can specify dependencies between modules. Because some modules might depend on others, using dependencies helps the catalog know the order in which to bring in the dependencies. Using the ModuleInfo.DependsOn property, you can specify which named modules are required to load another module.
You can load the catalog directly from a XAML file, as shown here:

protected override IModuleCatalog GetModuleCatalog()
{
  var catalog = ModuleCatalog.CreateFromXaml(new Uri("catalog.xaml",
    UriKind.Relative));
  return catalog;
}

The XAML file contains the same type information you can create with code. The benefit of using XAML is that you can change it on the fly. (Imagine retrieving the XAML file from a server or from another location based on which user logged on.) An example of a catalog.xaml file is shown in Figure 6.

Figure 6 A Sample Catalog.xaml File

<m:ModuleCatalog
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:sys="clr-namespace:System;assembly=mscorlib"
  xmlns:m="clr-namespace:Microsoft.Practices.Composite.Modularity;
    assembly=Microsoft.Practices.Composite">
  <m:ModuleInfoGroup InitializationMode="WhenAvailable">
    <m:ModuleInfo ModuleName="GameEditor.Client.Data"
      ModuleType="GameEditor.Client.Data.GameEditorDataModule,
        GameEditor.Client.Data, Version=1.0.0.0"/>
    <m:ModuleInfo ModuleName="GameEditor.GameList"
      ModuleType="GameEditor.GameList.GameListModule,
        GameEditor.GameList, Version=1.0.0.0"
      InitializationMode="WhenAvailable">
      <m:ModuleInfo.DependsOn>
        <sys:String>GameEditor.Client.Data</sys:String>
      </m:ModuleInfo.DependsOn>
    </m:ModuleInfo>
  </m:ModuleInfoGroup>
</m:ModuleCatalog>

In this XAML catalog, the group includes two modules and the second module depends on the first. You could use a specific XAML catalog based on roles or permissions, as you could with code.
Once the catalog is loaded by the bootstrapper, it attempts to create instances of the module classes and allow them to initialize themselves. In the code examples here, the types have to be referenced by the application (therefore, already loaded into memory) for this catalog to work.
This is where this facility becomes indispensible to Silverlight. Although the unit of work is the assembly, you can specify a .xap file that contains the module or modules. To do this, you specify a Ref value in ModuleInfo. The Ref value is a path to the .xap file that contains the module:

protected override IModuleCatalog GetModuleCatalog()
{
  var logModule = new ModuleInfo()
  {
    ModuleName = "ServerLogger",
    ModuleType =
      "ServerLogger.ServerLoggerModule, ServerLogger, Version= 1.0.0.0",
    Ref = "ServerLogger.xap"
  };

  var catalog = new ModuleCatalog();
  catalog.AddModule(logModule);

  return catalog;
}

When you specify a .xap file, the bootstrapper knows that the assembly is not available and goes out to the server and retrieves the .xap file asynchronously. Once the .xap file is loaded, Prism loads the assembly and creates the module type and initializes the module.
For .xap files that contain multiple modules, you can create a ModuleGroup that contains a set of ModuleInfo objects and set the Ref of the ModuleGroup to load all those modules from a single .xap file:

var modGroup = new ModuleInfoGroup();
modGroup.Ref = "MyMods.xap";
modGroup.Add(logModule);
modGroup.Add(dataModule);
modGroup.Add(viewModule);

var catalog = new ModuleCatalog();
catalog.AddGroup(modGroup);

For Silverlight applications, this is a way to compose your applications from multiple .xap files, which allows you to version different sections of your composed application separately.
When creating Silverlight modules to be housed in a .xap file, you create a Silverlight Application (not a Silverlight Library). Then you reference all the module projects you want to put in the .xap file. You need to remove the app.xaml and page.xaml files because this .xap file will not be loaded and run like a typical .xap file. The .xap file is just a container (could be a .zip file, it doesn't matter). Also, if you are referencing projects that are already referenced in the main project, you can change those references to Copy Local=false in the properties because you don't need the assemblies in the .xap file (the main application has already loaded them, so the catalog will not try to load them a second time.)
But loading a huge application with multiple calls across the wire does not seem like it would help performance. That is where the ModuleInfo's InitializationMode property comes into play. InitializationMode supports two modes: WhenAvailable, in which the .xap file is loaded asynchronously and then initialized (this is the default behavior), and OnDemand, in which the .xap is loaded when explicitly requested. Since the module catalog does not know the types in the modules until initialization, resolving types that are initialized with OnDemand will fail.
On-demand support for modules and groups allows you to load certain functionality in a large application as needed. Startup time is accelerated, and other required code can be loaded as users interact with an application. This is a great feature to use when you have authorization to separate parts of an application. Users who need only a few parts of the application do not have to download code they'll never use.
To load a module on demand, you need access to an IModuleManager interface. Most often, you request this in the constructor of the class that needs to load a module on demand. Then you use IModuleManager to load the module by calling LoadModule, as shown in Figure 7.
Figure 7 Calling LoadModule

public class GameListViewModel : IGameListViewModel
{
  IModuleManager theModuleManager = null;

  public GameListViewModel(IModuleManager modMgr)
  {
    theModuleManager = modMgr;
  }

  void theModel_LoadGamesComplete(object sender,
    LoadEntityCompleteEventArgs<Game> e)
  {
    ...
    // Since we now have games, let's load the detail pane
    theModuleManager.LoadModule("GameEditor.GameDetails");
  }
}

Modules are simply the unit of modularization in your applications. In Silverlight, treat a module much like you would a library project, but with the extra work of module initialization, you can decouple your modules from the main project.

UI Composition
In a typical Explorer application, the left pane displays a list or tree of information and the right side contains details about the item selected in the left pane. In the CAL, these areas are called regions. The CAL supports defining regions directly in XAML by using an attached property on the RegionManager class. This property allows you to specify regions in your shell and then indicate what views should be hosted in the region. For example, our shell has two regions, LookupRegion and DetailRegion, as shown here:

<UserControl
  ...
  xmlns:rg=
    "clr-namespace:Microsoft.Practices.Composite.Presentation.Regions;
    assembly=Microsoft.Practices.Composite.Presentation">
  ...
  <ScrollViewer rg:RegionManager.RegionName="LookupRegion" />
  <ScrollViewer rg:RegionManager.RegionName="DetailRegion" />
</UserControl>

A RegionName can be applied to an ItemsControl and its derived controls (for example, ListBox); Selector and its derived controls (for example, TabControl); and ContentControl and its derived controls (for example, ScrollViewer).

Once you define regions, you can direct modules to load their views into the regions by using the IRegionManager interface, as shown here:

public class GameListModule : IModule
{
  IRegionManager regionManager = null;

  public GameListModule(IRegionManager mgr)
  {
    regionManager = mgr;
  }

  public void Initialize()
  {
    // Build the View
    var view = new GameListView();

    // Show it in the region
    regionManager.AddToRegion("LookupRegion", view);
  }
}

This facility allows you to define regions in your application where views can appear and then have the modules define how to place views into the region, allowing the shell to be completely ignorant of the views.

The behavior of the regions might be different depending on the control type being hosted. The example uses a ScrollViewer so that one and only one view can be added to the region. In contrast, ItemsControl regions allow for multiple views. As each view is added, it shows up as a new item in the ItemsControl. That facility makes it easier to build functionality like a dashboard.

If you are using an MVVM pattern to define your views, you can mix the regions and the service location aspects of the container to make your views and view models ignorant of each other and then let the module join them at run time. For example, if I change the GameListModule, I can register views and view models with the container and then join them before applying the view to the region, as shown in Figure 8. This approach allows you to use UI composition while maintaining the strict separation of MVVM.

Figure 8 Joining Views in a Region

public class GameListModule : IModule
{
  IRegionManager regionManager = null;
  IUnityContainer container = null;

  public GameListModule(IUnityContainer con, IRegionManager mgr)
  {
    regionManager = mgr;
    container = con;
  }

  public void Initialize()
  {
    RegisterServices();

    // Build the View
    var view = container.Resolve<IGameListView>();

    // Get an Implementation of IViewModel
    var viewModel = container.Resolve<IGameListViewModel>();

    // Marry Them
    view.ApplyModel(viewModel);

    // Show it in the region
    regionManager.AddToRegion("LookupRegion", view);
  }

  void RegisterServices()
  {
    container.RegisterType<IGameListView, GameListView>();
    container.RegisterType<IGameListViewModel, GameListViewModel>();
  }
}

Event Aggregation
After you have multiple views in your applications through UI composition, you face a common problem. Even though you have built independent views to support better testing and development, there are often touch points where the views cannot be completely isolated. They are logically coupled because they need to communicate, but you want to keep them as loosely coupled as possible regardless of the logical coupling.


To enable loose coupling and cross-view communication, the CAL supports a service called event aggregation. Event aggregation allows access to different parts of the code to publishers and consumers of global events. Such access provides a straightforward way of communicating without being tightly coupled and is accomplished using the CAL's IEventAggregator interface. IEventAggregator allows you to publish and subscribe to events across the different modules of your application.

Before you can communicate, you need a class that derives from EventBase. Typically, you create a simple event that derives from the CompositePresentationEvent<T> class. This generic class allows you to specify the payload of the event you are going to publish. In this case, the GameListViewModel is going to publish an event after a game is selected so that other controls that want to change their context as the user selects a game can subscribe to that event. Our event class looks like the following:

public class GameSelectedEvent : CompositePresentationEvent<Game>
{
}

Once the event is defined, the event aggregator can publish the event by calling its GetEvent method. This retrieves the singleton event that is going to be aggregated. The first one who calls this method creates the singleton. From the event, you can call the Publish method to create the event. Publishing the event is like firing an event. You do not need to publish the event until it needs to send information. For example, when a game is selected in the GameList, our example publishes the selected game using the new event:

// Fire Selection Changed with Global Event
theEventAggregator.GetEvent<GameSelectedEvent>().Publish(o as Game);

In other parts of your composed application, you can subscribe to the event to be called after the event is published. The Subscribe method of the event allows you to specify the method to be called when the event is published, an option that allows you to request threading semantics for calling the event (for example, the UI thread is commonly used), and whether to have the aggregator hold a reference to the passed in information so that it is not subject to garbage collection:

// Register for the aggregated event
aggregator.GetEvent<GameSelectedEvent>().Subscribe(SetGame,
  ThreadOption.UIThread,
  false);

As a subscriber, you can also specify filters that are called only in specific situations. Imagine an event that returns the state of an application and a filter that is called only during certain data states.

The event aggregator allows you to have communication between your modules without causing tight coupling. If you subscribe to an event that is never published or publish an event that is never subscribed, your code will never fail.

Delegate Commands
In Silverlight (unlike WPF), a true commanding infrastructure does not exist. This forces the use of code-behind in views to accomplish tasks that would be accomplished more readily directly in XAML using the commanding infrastructure. Until Silverlight supports this facility, the CAL supports a class that helps solve this problem: the DelegateCommand.

To get started with DelegateCommand, you need to define the DelegateCommand in your ViewModel so you can data-bind to it. In the ViewModel, you would create a new DelegateCommand. The DelegateCommand expects the type of data to be sent to it (often Object if no data is used) and one or two callback methods (or lambda functions). The first of these methods is the action to execute when the command is fired. Optionally, you can specify a second callback to be called to test whether the command can be fired. The idea is to enable the disabling of objects in the UI (buttons, for example) when it is not valid to fire the command. For example, our GameDetailsViewModel contains a command to support saving data:

// Create the DelegateCommand
SaveCommand = new DelegateCommand<object>(c => Save(), c => CanSave());

When SaveCommand is executed, it calls the Save method on our ViewModel. The CanSave method is then called to make sure the command is valid. This allows the DelegateCommand to disable the UI if necessary. As the state of the view changes, you can call the DelegateCommand.RaiseCanExecuteChanged method to force a new inspection of the CanSave method to enable or disable the UI as necessary.

To bind this to XAML, use the Click.Command attached property that is in the Microsoft.Practices.Composite.Presentation.Commands namespace. Then bind the value of the command to be the command you have in your ViewModel, like so:

<Button Content="Save"
  cmd:Click.Command="{Binding SaveCommand}"
  Style="{StaticResource ourButton}"
  Grid.Column="1" />

Now when the Click event is fired, our command is executed. If you want, you can specify a command parameter to be sent to the command so you can reuse it.

As you might be wondering, the only command that exists in the CAL is the Click event for a button (or any other selector). But the classes you can use to write your own commands are fairly straightforward. The sample code includes a command for SelectionChanged on a ListBox/ComboBox. This command is called the SelectorCommandBehavior and derives from the CommandBehaviorBase<T> class. Looking at the custom command behavior implementation will provide you with a starting place to write your own command behaviors.

Wrapping Up
There are definite pain points when developing large Silverlight applications. By building your applications with loose coupling and modularity, you gain benefits and can be agile in responding to change. The Prism project from Microsoft provides the tools and guidance to allow that agility to come to the surface of your project. While Prism is not a one-size-fits-all approach, the modularity of the CAL means you can use what fits in your specific scenarios and leave the rest.

SHAWN WILDERMUTH is a Microsoft MVP (C#) and the founder of Wildermuth Consulting Services. He is the author of several books and numerous articles. In addition, Shawn currently runs the Silverlight Tour, teaching Silverlight around the country. He can be contacted at shawn@wildermuthconsulting.com.
RESTful XHTML

Building XHTML-Based RESTful Services with ASP.NET MVC

Aaron Skonnard

A RESTful service is a web of resources that programs can navigate. When designing a RESTful service, you have to think carefully about how your web will work. This means designing resource representations with links that facilitate navigation, describing service input somehow, and considering how consumers will navigate around your service at run time. Getting these things right is often overlooked, but they're central to realizing the full potential REST has to offer.

Today, humans navigate sites using Web browsers that know how to render HTML and other popular content types. HTML provides the syntax and semantics for establishing links between resources (<a> element) and for describing and submitting application input (<form> and <input> elements).

This article discusses:
• REST
• XHTML
• ASP.NET MVC

Technologies discussed:
REST, XHTML, ASP.NET

Code download available at:
msdn.microsoft.com/mag200907REST

When a user clicks on an <a> element in the rendered page, the browser knows to issue an HTTP GET request for the target resource and render the response. When a browser encounters a <form> element, it knows how to render the form description into a user interface that the user can fill out and submit using either a GET or POST request. When the user presses a submit button, the browser encodes the data and sends it using the specified request. These two features are largely responsible for the success of the Web.

Using links in conjunction with the universal HTTP interface makes it possible to redirect requests to new locations over time and change certain aspects of security on the fly without changing the client code. A standard approach for forms means that you can add or remove input properties and change default values, again without changing the client code. Both features are very useful for building applications that evolve over time.

Your RESTful services should also somehow provide these two features through whatever resource representation you decide to use. For example, if you're designing a custom XML dialect for your service, you should probably come up with your own elements for establishing links and describing service input that will guide consumers through your web. Or you can simply use XHTML.

Most developers don't immediately consider XHTML as an option for "services," but that's actually one of the ways it was intended to be used. XHTML documents are by definition well-formed XML, which allows for automated processing using standard XML
APIs. And since XHTML is also HTML, it comes with <a>, <form>, and <input> elements for modeling link navigation and service input as I described earlier. The only thing that's a little strange at first is how you model user-defined data structures—however, you can model classes and fields with <div> and <span> elements and collections of entities with <ol> and <li> elements. I'll walk through how to do this in more detail later in the article.

To summarize, there are several reasons to consider XHTML as the default representation for your RESTful services. First, you can leverage the syntax and semantics for important elements like <a>, <form>, and <input> instead of inventing your own. Second, you'll end up with services that feel a lot like sites because they'll be browsable by both users and applications. The XHTML is still interpreted by a human—it's just a programmer during development instead of a user at runtime. This simplifies things throughout the development process and makes it easier for consumers to learn how your service works. And finally, you can leverage standard Web development frameworks to build your RESTful services. ASP.NET MVC is one such framework that provides an inherently RESTful model for building XHTML-based services. This article walks through some XHTML design concepts and then shows you how to build a complete XHTML-based RESTful service that you can download from the MSDN Magazine site.

XHTML: Representing Data and Links
Before I dive into the details of ASP.NET MVC, let's first look at how you can represent common data structures and collections in XHTML. This approach isn't the only way to accomplish this, but it's a fairly common practice in XHTML-based services today.

Throughout this article, I'll describe how to implement a simple bookmark service. The service allows users to create, retrieve, update, and delete bookmarks and navigate a web of bookmarks in a variety of ways. Suppose you have a C# class representing a bookmark that looks like this:

public class Bookmark
{
  public int Id { get; set; }
  public string Title { get; set; }
  public string Url { get; set; }
  public string User { get; set; }
}

The first question is how can you represent a Bookmark instance in XHTML? One approach is to combine <div>, <span>, and <a> elements, where <div> elements represent structures, <span> elements represent fields, and <a> elements represent identity and links to other resources. In addition, you can annotate these elements with the XHTML "class" attribute to provide additional type metadata. Here's a complete example:

<div class="bookmark">
  <span class="bookmark-id">25</span>
  <span class="bookmark-title">Aaron's Blog</span>
  <a class="bookmark-url-link" href="http://pluralsight.com/aaron"
    >http://pluralsight.com/aaron</a>
  <span class="bookmark-username">skonnard</span>
</div>

The next question is how will consumers process this information? Since it's well-formed XML, consumers can use any XML API to extract the bookmark information. Most .NET programmers will probably find that XLinq provides the most natural programming model for consuming XHTML programmatically. In addition, you can go one step further by enhancing XLinq with some helpful XHTML-focused extension methods that make the programming model even easier.

Throughout this article, I'll use a set of XLinq extension methods that I've included in the downloadable sample code. These extensions give you a good idea of what's possible. The following code shows how to consume the bookmark XHTML shown previously using XLinq and some of these extensions:

var bookmarkDetails = bookmarkDoc.Body().Struct("bookmark");
Console.WriteLine(bookmarkDetails["bookmark-id"].Value);
Console.WriteLine(bookmarkDetails["bookmark-url"].Value);
Console.WriteLine(bookmarkDetails["bookmark-title"].Value);

Now, if you want to improve how this bookmark renders in a browser, you can add a Cascading Stylesheet (CSS) to control browser-specific rendering details or add some additional UI elements and text that don't compromise the consumer's ability to extract the data of interest. For example, the following XHTML will be easier for humans to process, but you can still use the previous .NET code sample to process the information without any modification:

<h1>Bookmark Details: 3</h1>
<div class="bookmark">
  BookmarkID: <span class="bookmark-id">25</span><br />
  Title: <span class="bookmark-title">Aaron's Blog</span><br />
  Url: <a class="bookmark-url-link" href="http://pluralsight.com/aaron"
    >http://pluralsight.com/aaron</a><br />
  Username: <span class="bookmark-username">skonnard</span><br />
</div>

Collections of resources aren't hard to model either. You can represent a list of bookmarks with a combination of <ol>, <li>, and <a> elements as shown here:

<ol class="bookmark-list">
  <li><a class="bookmark-link" href="/bookmarks/1">Pluralsight Home</a></li>
  <li><a class="bookmark-link" href="/bookmarks/2">Pluralsight On-Demand!</a></li>
  <li><a class="bookmark-link" href="/bookmarks/3">Aaron's Blog</a></li>
  <li><a class="bookmark-link" href="/bookmarks/4">Fritz's Blog</a></li>
  <li><a class="bookmark-link" href="/bookmarks/5">Keith's Blog</a></li>
</ol>

The following code shows how to print this list of bookmarks to the console:

var bookmarks = bookmarksDoc.Body().Ol("bookmark-list").Lis();
bookmarks.Each(bm => Console.WriteLine("{0}: {1}",
  bm.Anchor().AnchorText, bm.Anchor().AnchorLink));

Notice how each <li> contains an <a> element that links to a specific bookmark. If you were to navigate one of the anchor elements, you would retrieve the bookmark details representation shown earlier. As you begin to define links between resources like this, your service starts becoming a web of linked resources.

It's pretty obvious how humans can navigate between resources using a Web browser, but how about consuming applications? A consuming application just needs to programmatically locate the anchor element of interest and then issue a GET request targeting the URI specified in the "href" attribute. These details can also be hidden behind an XLinq extension method that encapsulates anchor navigation.

The following code shows how to navigate to the first bookmark in the bookmark list and then to the target bookmark URL.
The resulting XHTML is printed to the console:

var bookmarkDoc = bookmarks.First().Anchor().Navigate();
var bookmarkDetails = bookmarkDoc.Body().Struct("bookmark");
var targetDoc = bookmarkDetails.Anchor("bookmark-url-link").Navigate();
Console.WriteLine(targetDoc);

Once you start thinking about building consumers that navigate your service as a web of resources, you're officially starting to think in a more RESTful way.

XHTML: Representing Input with Forms
Now let's say a consumer wants to create a new bookmark in the system. How does the consumer figure out what data to send and how to send it without WSDL? The answer is easy: XHTML forms.

The consumer first issues a GET request to the URI for retrieving the create bookmark form. The service returns a form that looks something like this:

<h1>Create Bookmark</h1>
<form action="/bookmark/create" class="create-bookmark-form" method="post">
  <p>
    <label for="bookmark-title">Title:</label><br />
    <input id="bookmark-title" name="bookmark-title" type="text" value="" />
  </p>
  <p>
    <label for="bookmark-url">Url:</label><br />
    <input id="bookmark-url" name="bookmark-url" type="text" value="" />
  </p>
  <p><input type="submit" value="Create" name="submit" /></p>
</form>

The form describes how to build an HTTP POST request for creating a new bookmark. The form indicates that you need to provide the bookmark-title and bookmark-url fields. In this example, bookmark-id will be autogenerated during creation, and bookmark-username will be derived from the logged-in user identity. The form also tells you what you need to send and how to send it.

When this form is rendered in a browser, a human can simply fill out the form and click Submit to create a new bookmark. A consuming application basically does the same thing by submitting the form programmatically. Again, this process can be made easier by using some form-based extension methods, shown here:

var createBookmarkForm = createBookmarkDoc.Body().Form("create-bookmark-form");
createBookmarkForm["bookmark-title"] = "Windows Live";
createBookmarkForm["bookmark-url"] = "http://live.com/";
createBookmarkForm.Submit();

When this code runs, the Submit method generates an HTTP POST request targeting the "action" URL, and the input fields are formatted together as a URL-encoded string (application/x-www-form-urlencoded). In the end, it's no different from using the browser—the result is a new bookmark.

Although today's browsers support GET and POST only for the form method, nothing is stopping you from also specifying PUT or DELETE as the form "method" when targeting nonbrowser consumers. The Submit extension method performs equally well for any HTTP method you specify.

Understanding the ASP.NET MVC Architecture
The ASP.NET MVC architecture is based on the popular model-view-controller design pattern that has been around for decades. Figure 1 illustrates the various ASP.NET MVC components and how they relate to one another. ASP.NET MVC comes with a routing engine that sits in front of the other MVC components. The routing engine receives incoming HTTP requests and routes them to a controller method. The routing engine relies on a centralized set of routes that you define in Global.asax.

Figure 1 ASP.NET MVC Architecture

The centralized routes define mappings between URL patterns and specific controller methods and arguments. When you generate links, you use these routes to generate the links appropriately. This makes it easy to modify your URL design throughout the development process in one central location.

It's the job of the controller to extract information from the incoming request and to interact with the user-defined model layer. The model layer can be anything (Linq to SQL, ADO.NET Entity Framework, NHibernate, and so on)—it's the layer that performs business logic and talks to the underlying database. Notice how the model is not within the System.Web.Mvc namespace. Once the controller has finished using the model, it creates a view, supplying the view with model data for the view to use while rendering the output.

In the following sections, I'll walk through the process of implementing a complete bookmark service using the ASP.NET MVC architecture. The service supports multiple users and both public and private bookmarks. Users can browse all public bookmarks and filter them based on username or tags, and they can fully manage (CRUD) their own collection of private bookmarks.

To get started, you need to create an ASP.NET MVC project. You'll find the ASP.NET MVC Web Application project template in the list of Web project types. The default project template gives you a sample MVC starter application that you can actually run by pressing F5.

Notice how the solution structure provides directories for Models, Views, and Controllers—this is where you place the code for these different components. The default template comes with two controllers: one for managing user accounts (AccountController), and another for supporting requests to the home directory (HomeController). Both of these are used in the bookmark service.

Implementing the Model
The first thing you should focus on is the model for the bookmark service. I've built a simple SQL Server database that contains three tables for managing bookmark information—Bookmark, Tag, and BookmarkTag (see Figure 2)—and they're pretty self-explanatory.

The only caveat is that the example relies on the built-in ASP.NET forms authentication and membership service, which is provided
by the default AccountController that comes with the project, to manage the service user accounts. Hence, user account information will be stored in a different database (aspnetdb.mdf), separate from the bookmark information. The username is simply stored in the Bookmark table.

Figure 2 Bookmark Service Linq to SQL Model

It's the job of the model to provide business objects and logic on top of the database. For this example, I've defined the Linq to SQL model shown in Figure 2. This model, defined in BookmarksModel.dbml, generates the C# code found in BookmarksModel.designer.cs. You'll find classes named Bookmark, Tag, and BookmarkTag. You'll also find a BookmarksModelDataContext class, which bootstraps the entities.

At this point, you can decide to work directly with the Linq to SQL classes as your MVC model layer, or you can go a step further by defining a higher-level repository class that defines the logical business operations and shields the controller/view from even more of the underlying data manipulation details. Figure 3 shows the definition for the BookmarksRepository class used in the bookmark service.

Figure 3 BookmarksRepository Class

public class BookmarksRepository
{
  // generated Linq to SQL DataContext class
  BookmarksModelDataContext db = new BookmarksModelDataContext();

  // query methods
  public IQueryable<Bookmark> FindAllBookmarks() { ... }
  public IQueryable<Bookmark> FindAllPublicBookmarks() { ... }
  public IQueryable<Bookmark> FindBookmarksByUser(string username) { ... }
  public IQueryable<Bookmark> FindPublicBookmarksByUser(string username) { ... }
  public IQueryable<Bookmark> FindBookmarksByTag(string tag) { ... }
  public Bookmark FindBookmarkById(int id) { ... }
  public IQueryable<string> FindUsers() { ... }
  public IQueryable<Tag> FindTags() { ... }
  public Tag FindTag(string tag) { ... }

  // insert/delete methods
  public void AddBookmark(Bookmark bm) { ... }
  public void AddTag(Tag t) { ... }
  public void AddTagForBookmark(string tagText, Bookmark bm) { ... }
  public void DeleteBookmark(Bookmark bm) { ... }

  // persistence
  public void Save() { ... }
}

Implementing the Controller
The controller is the piece of code responsible for managing the HTTP request life cycle. When a request arrives, the ASP.NET MVC routing engine determines which controller to use (based on the URL) and then routes the request to the controller by calling a specific method on it. Hence, when you write a controller, you're writing the entry points that will be called by the routing engine.

For the bookmark service, we want to allow authenticated users to create, edit, and delete bookmarks. When creating bookmarks, users should be able to mark them as public (shared) or private. All users should be able to browse public bookmarks and filter them by username or tags. Private bookmarks, however, should be visible only to the owner. Consumers should also be able to retrieve the details for a particular bookmark, assuming they are authorized to do so. We should also make it possible to browse all users and tags in the system as a way to navigate the public bookmarks associated with them.

Figure 4 BookmarkController Class

[HandleError]
public class BookmarkController : Controller
{
  // underlying model
  BookmarksRepository bmRepository = new BookmarksRepository();

  // query methods
  public ActionResult BookmarkIndex() { ... }
  public ActionResult BookmarksByUserIndex(string username) { ... }
  public ActionResult BookmarksByTagIndex(string tag) { ... }
  public ActionResult UserIndex() { ... }
  public ActionResult TagIndex() { ... }
  public ActionResult Details(int id) { ... }

  // create bookmark
  [Authorize]
  public ActionResult Create() { ... }
  [Authorize]
  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult Create(FormCollection collection) { ... }

  // update bookmark
  [Authorize]
  public ActionResult Edit(int id) { ... }
  [Authorize]
  [AcceptVerbs(HttpVerbs.Put | HttpVerbs.Post)]
  public ActionResult Edit(int id, FormCollection collection) { ... }

  // delete bookmark
  [Authorize]
  public ActionResult Delete(int id) { ... }
  [Authorize]
  [AcceptVerbs(HttpVerbs.Delete | HttpVerbs.Post)]
  public ActionResult Delete(int id, FormCollection collection) { ... }
}

Figure 4 shows the methods the BookmarkController class needs to support the service requirements I just described. The first three query methods make it possible to retrieve all public bookmarks, bookmarks by user, or bookmarks by tag. The class also includes methods for retrieving users and tags and for retrieving the details of a specific bookmark instance. All these methods respond


to HTTP GET requests, but each one will be bound to a different URI template when the routes are defined.

The rest of the methods are for creating, editing, and deleting bookmark instances. Notice that there are two methods for each logical operation—one for retrieving the input form, and another for responding to the form submission—and each one of these methods requires authorization. The Authorize attribute ensures that the caller is authenticated and authorized to access the controller method. (The attribute also allows you to specify users and roles.) If an unauthenticated or unauthorized user attempts to access a controller method annotated with [Authorize], the authorization filter automatically redirects the user to the AccountController's Logon method, which presents the "Logon" view to the consumer.

You use the AcceptVerbs attribute to specify which HTTP verbs a particular controller method will handle (the default is GET). A single method can handle multiple verbs by ORing the HttpVerb values together. The reason I've bound the second Edit method to both PUT and POST is to accommodate browsers. This configuration allows browsers to invoke the operation using POST, while nonbrowser consumers can use PUT (which is more correct). I've done something similar on the second Delete method, binding it to both DELETE and POST. I still have to be careful that my method implementations ensure idempotency, which is a requirement for both PUT and DELETE.

Let's look at how a few of these methods have been implemented. First is BookmarkIndex:

public ActionResult BookmarkIndex()
{
  var bms = bmRepository.FindAllPublicBookmarks().ToList();
  return View("BookmarkIndex", bms);
}

This implementation simply retrieves the list of public Bookmark entities from the repository and then returns a view called BookmarkIndex (passing in the list of Bookmark entities). The view is responsible for displaying the list of Bookmark entities supplied to it by the controller.

The Details method looks up the target bookmark and returns a 404 Not Found error if it doesn't exist. Then it ensures that the user is authorized to view the bookmark. If so, it returns the Details view, supplying the identified Bookmark entity. Otherwise it returns an Unauthorized response to the consumer.

public ActionResult Details(int id)
{
  var bm = bmRepository.FindBookmarkById(id);
  if (bm == null)
    throw new HttpException(404, "Not Found");
  if (!bm.Shared)
  {
    if (!bm.Username.Equals(HttpContext.User.Identity.Name))
      return new HttpUnauthorizedResult();
  }
  return View("Details", bm);
}

As a final example, let's look at the two Create methods. The first is actually quite simple—it returns the Create view to present the form description for creating a new bookmark:

[Authorize]
public ActionResult Create()
{
  return View("Create");
}

Figure 5 shows the second Create method, which responds to the form submission request. It creates a new Bookmark entity from the bookmark information found in the incoming FormCollection object and then saves it to the database. It also updates the database with any new tags that were associated with the bookmark and then redirects users to their lists of personal bookmarks to indicate success.

Figure 5 The Create Method That Responds to a Form Submission Request

[Authorize]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(FormCollection collection)
{
  try
  {
    Bookmark bm = new Bookmark();
    bm.Title = collection["bookmark-title"];
    bm.Url = collection["bookmark-url"];
    bm.Shared = collection["bookmark-shared"].Contains("true");
    bm.LastModified = DateTime.Now;
    bm.Username = HttpContext.User.Identity.Name;
    bm.Tags = collection["bookmark-tags"];

    bmRepository.AddBookmark(bm);
    bmRepository.Save();

    ... // create any new tags that are necessary

    return RedirectToAction("BookmarksByUserIndex",
      new { username = HttpContext.User.Identity.Name });
  }
  catch
  {
    return View("Error");
  }
}

I don't have space to cover the entire controller implementation, but these code samples should give you a taste for the kind of code you write in the controller.

Designing URIs with Routes
The next thing you need to do is define URL routes that map to the various BookmarkController methods. You define your application routes in Global.asax within the RegisterRoutes method. When you first create an MVC project, your Global.asax will contain the default routing code shown in Figure 6.

The single call to MapRoute creates a default routing rule that acts like a catchall for all URIs. This route outlines that the first path segment represents the controller name, the second path segment represents the action name (controller method), and the third path segment represents an ID value. This single rule can handle the following URIs and route them to the appropriate controller method:

/Account/Logon
/Bookmark/Create
/Bookmark/Details/25
/Bookmark/Edit/25

Figure 7 shows the routes I'm using for this bookmark service example. With these additional routes in place, consumers can browse to "/users" to retrieve the list of users, "/tags" to retrieve the list of tags, or "/bookmarks" to retrieve the list of public bookmarks.
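To see what a consumer receives from one of these URIs, you can issue a plain HTTP GET against it. The following is only a minimal sketch, not part of the sample application; the host and port are placeholders for wherever the MVC application happens to be running:

using System;
using System.Net;

class FetchPublicBookmarks
{
  static void Main()
  {
    // Issue a plain GET for the public bookmark list and dump the XHTML.
    // The address below is a placeholder for the local development server.
    using (var client = new WebClient())
    {
      string xhtml = client.DownloadString("http://localhost:63965/bookmarks");
      Console.WriteLine(xhtml);
    }
  }
}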
Figure 6 Default Routing Code

public class MvcApplication : System.Web.HttpApplication
{
  public static void RegisterRoutes(RouteCollection routes)
  {
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
      "Default",                        // Route name
      "{controller}/{action}/{id}",     // URL with parameters
      new { controller = "Home",        // Parameter defaults
            action = "Index", id = "" }
    );
  }

  protected void Application_Start()
  {
    RegisterRoutes(RouteTable.Routes);
  }
}

Figure 7 Bookmark Service Routes

public static void RegisterRoutes(RouteCollection routes)
{
  routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
  // custom routes
  routes.MapRoute("Users", "users",
    new { controller = "Bookmark", action = "UserIndex" });
  routes.MapRoute("Tags", "tags",
    new { controller = "Bookmark", action = "TagIndex" });
  routes.MapRoute("Bookmarks", "bookmarks",
    new { controller = "Bookmark", action = "BookmarkIndex" });
  routes.MapRoute("BookmarksByTag", "tags/{tag}",
    new { controller = "Bookmark", action = "BookmarksByTagIndex", tag = "" });
  routes.MapRoute("BookmarksByUser", "users/{username}",
    new { controller = "Bookmark", action = "BookmarksByUserIndex", username = "" });
  // default route
  routes.MapRoute("Default", "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" });
}

Consumers can also browse to "/tags/{tagname}" or "/users/{username}" to filter bookmarks by tag or username, respectively. All other URIs are handled by the default route shown in Figure 6.

Implementing the Views
Up to this point, most of what we've done applies to both MVC "sites" and "services." All MVC applications need models, controllers, and routes. Most of what's different about building MVC "services" is found in the view. Instead of producing a traditional HTML view for human consumption, a service must produce a view that's appropriate for both human and programmatic consumption.

We're going to use XHTML for our default service representation and apply the techniques described earlier for mapping bookmark data to XHTML. We'll map data entities to <div> and <span> elements, and we'll represent collections using a combination of <ol> and <li>. We'll also annotate these elements with the "class" attribute to provide consumers with additional type metadata.

ASP.NET MVC "views" are just .aspx pages that define a view template. The .aspx pages are organized by controller name within the Views directory. Each view can be associated with an ASP.NET master page to maintain a consistent template. Figure 8 shows the master page for the bookmark service.

Figure 8 Bookmark Service Master Page

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
  <title><asp:ContentPlaceHolder ID="Title" runat="server" /></title>
</head>
<body>
  <div style="text-align:right">
    <% Html.RenderPartial("LogOnUserControl"); %>
  </div>
  <h1><asp:ContentPlaceHolder ID="Heading" runat="server" /></h1>
  <hr />
  <asp:ContentPlaceHolder ID="MainContent" runat="server" />
  <hr />
  <div class="nav-links-footer">
    <%=Html.ActionLink("Home", "Index", "Home", null,
      new { @class = "root-link" } )%> |
    <%=Html.ActionLink("Public Bookmarks", "BookmarkIndex", "Bookmark", null,
      new { @class = "public-bookmarks-link" } )%> |
    <%=Html.ActionLink("User Bookmarks", "BookmarksByUserIndex", "Bookmark",
      new { username = HttpContext.Current.User.Identity.Name },
      new { @class = "my-bookmarks-link" })%> |
    <%=Html.ActionLink("Users", "UserIndex", "Bookmark", null,
      new { @class = "users-link" } )%> |
    <%=Html.ActionLink("Tags", "TagIndex", "Bookmark", null,
      new { @class = "tags-link" } )%>
  </div>
</body>
</html>

The master page defines three placeholders: one for the page title, another for the <h1> heading, and another for the main content area. These placeholders will be filled in by each individual view. In addition, the master page displays a login control at the top of the page, and it provides a footer containing the root service links to simplify navigation. Notice how I'm using the Html.ActionLink method to generate these links based on the predefined routes and controller actions.

Figure 9 shows the main Bookmark Index view you get back when you browse to "/bookmarks". It displays the list of bookmarks using a combination of <ol>, <li>, and <a> elements.

Figure 9 Bookmark Index View

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
  Inherits="System.Web.Mvc.ViewPage<IEnumerable<MvcBookmarkService.Models.Bookmark>>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="Title" runat="server">
  Public Bookmarks</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="Heading" runat="server">
  Public Bookmarks</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="MainContent" runat="server">
  <%= Html.ActionLink("Create bookmark", "Create", "Bookmark",
    new { id = "" },
    new { @class = "create-bookmark-form-link" } )%>
  <ol class="bookmark-list">
    <% foreach (var item in Model) { %>
      <li><%= Html.ActionLink(Html.Encode(item.Title), "Details", "Bookmark",
        new { id = item.BookmarkID }, new { @class = "bookmark-link" })%></li>
    <% } %>
  </ol>
</asp:Content>


The <ol> elements are annotated with class="bookmark-list", and each <a> element is annotated with class="bookmark-link". This view also provides a link to retrieve the Create bookmark form description (right above the list). If you navigate to the link, the Create view shown in Figure 10 comes into action.

Figure 10 Bookmark Create View

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
  Inherits="System.Web.Mvc.ViewPage<MvcBookmarkService.Models.Bookmark>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="Title" runat="server">
  Create Bookmark</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="Heading" runat="server">
  Create Bookmark</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="MainContent" runat="server">
  <% using (Html.BeginForm("Create", "Bookmark", FormMethod.Post,
    new { @class = "create-bookmark-form" } ))
  {%>
    <p>
      <label for="Title">Title:</label><br />
      <%= Html.TextBox("bookmark-title")%>
    </p>
    <p>
      <label for="Url">Url:</label><br />
      <%= Html.TextBox("bookmark-url")%>
    </p>
    <p>
      <label for="Tags">Tags:</label><br />
      <%= Html.TextBox("bookmark-tags")%>
    </p>
    <p>
      <label for="Shared">Share with public: </label>
      <%= Html.CheckBox("bookmark-shared")%>
    </p>
    <p>
      <input type="submit" value="Create" name="submit" />
    </p>
  <% } %>
</asp:Content>

The Create view produces a simple XHTML form, but the <form> element is annotated with class="create-bookmark-form", and each <input> element has been given a contextual name/ID value that identifies each bookmark field. This form gives consumers a complete XHTML description of how to programmatically create a new bookmark using our service (by simply submitting the form).

As a final example, Figure 11 shows the beginning of the Bookmark Details view. Here I'm using a <div> element to represent the bookmark structure (class="bookmark") along with <span> and <a> elements to represent the bookmark fields. Each carries a "class" attribute to specify the field name.

Figure 11 Bookmark Details View

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
  Inherits="System.Web.Mvc.ViewPage<MvcBookmarkService.Models.Bookmark>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="Title" runat="server">
  Bookmark Details: <%= Model.BookmarkID %></asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="Heading" runat="server">
  Bookmark Details: <%= Model.BookmarkID %></asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="MainContent" runat="server">
  <br />
  <div class="bookmark">
    BookmarkID: <span class="bookmark-id"><%= Html.Encode(Model.BookmarkID) %></span><br />
    Title: <span class="bookmark-title"><%= Html.Encode(Model.Title) %></span><br />
    Url: <a class="bookmark-url-link" href="<%= Html.Encode(Model.Url) %>">
      <%= Html.Encode(Model.Url) %></a><br />
    Username: <%=Html.ActionLink(Model.Username, "BookmarksByUserIndex", "Bookmark",
      new { username=Model.Username }, new { @class="bookmark-username-link" }) %><br />
    ...

Again, I don't have space to look at all the view examples in detail, but I hope this illustrates how you can produce clean XHTML result sets that are easy for both applications and humans to consume.

Consuming the Bookmark Service
The easiest way to consume the bookmark service is through a Web browser. Thanks to the XHTML design, you should be able to browse to the service's root URL and begin navigation from there. Figure 12 shows what the browser looks like when you browse to the root of the bookmark service and log in. You can click Public Bookmarks to navigate to the list of all public bookmarks, and then navigate to a specific bookmark in the list. If you click Edit, you can actually edit the bookmark details (see Figure 13). The service is fully usable from any Web browser.

Figure 12 Browsing to the MVC Bookmark Service

Figure 13 Editing a Specific Bookmark
Figure 14 Writing a .NET Client to Consume the Bookmark Service using XLinq

class Program
{
  static void Main(string[] args)
  {
    // navigate to the root of the service
    Console.WriteLine("Navigating to the root of the service...");
    Uri uri = new Uri("http://localhost:63965/");
    CookieContainer cookies = new CookieContainer();
    var doc = XDocument.Load(uri.ToString());
    doc.AddAnnotation(uri);

    // navigate to public bookmarks
    Console.WriteLine("Navigating to public bookmarks...");
    var links = doc.Body().Ul("nav-links").Lis();
    var bookmarksLink = links.Where(l => l.HasAnchor("public-bookmarks-link")).First();
    var bookmarksDoc = bookmarksLink.Anchor().Navigate();
    bookmarksDoc.AddAnnotation(cookies);

    // display list of bookmarks
    Console.WriteLine("\nPublic bookmarks found in the system:");
    var bookmarks = bookmarksDoc.Body().Ol("bookmark-list").Lis();
    bookmarks.Each(bm => Console.WriteLine("{0}: {1}",
      bm.Anchor().AnchorText, bm.Anchor().AnchorLink));

    // navigate to the first bookmark in the list
    Console.WriteLine("\nNavigating to the first bookmark in the list...");
    var bookmarkDoc = bookmarks.First().Anchor().Navigate();
    var bookmarkDetails = bookmarkDoc.Body().Struct("bookmark");

    // print the bookmark details out to the console window
    Console.WriteLine("Bookmark details:");
    Console.WriteLine("bookmark-id: {0}", bookmarkDetails["bookmark-id"].Value);
    Console.WriteLine("bookmark-url-link: {0}",
      bookmarkDetails["bookmark-url-link"].Value);
    Console.WriteLine("bookmark-title: {0}",
      bookmarkDetails["bookmark-title"].Value);
    Console.WriteLine("bookmark-shared: {0}",
      bookmarkDetails["bookmark-shared"].Value);
    Console.WriteLine("bookmark-last-modified: {0}",
      bookmarkDetails["bookmark-last-modified"].Value);

    // retrieving login form
    Console.WriteLine("\nRetrieving login form...");
    Uri logonUri = new Uri("http://localhost:63965/Account/Logon");
    var logonDoc = XDocument.Load(logonUri.ToString());
    logonDoc.AddAnnotation(logonUri);
    logonDoc.AddAnnotation(cookies);

    // logging on as skonnard
    Console.WriteLine("Logging in as 'skonnard'");
    var logonForm = logonDoc.Body().Form("account-logon-form");
    logonForm["username"] = "skonnard";
    logonForm["password"] = "password";
    logonForm.Submit();
    Console.WriteLine("Login successful!");

    // create a new bookmark as 'skonnard'
    var createBookmarkDoc = bookmarksDoc.Body().Anchor(
      "create-bookmark-form-link").Navigate();
    createBookmarkDoc.AddAnnotation(cookies);
    var createBookmarkForm = createBookmarkDoc.Body().Form("create-bookmark-form");
    createBookmarkForm["bookmark-title"] = "Test from console!";
    createBookmarkForm["bookmark-url"] = "http://live.com/";
    createBookmarkForm["bookmark-tags"] = "Microsoft, Search";
    createBookmarkForm.Submit();
    Console.WriteLine("\nBookmark created!");
  }
}

While you're browsing around the service, select View Source occasionally in your browser, and you'll notice how simple the resulting XHTML looks, which again makes it easy to program against.

Figure 14 shows the code for a complete .NET client application that consumes the bookmark service. It uses the set of XLinq extension methods I described earlier to simplify the XHTML and HTTP processing details. What's interesting about this sample is that it acts more like a human—it needs only the root URI to navigate to everything else of interest exposed by the bookmark service.

The client starts by navigating to the root address, and then it looks for the link to the public bookmarks. Next it navigates to the public bookmark list and identifies a specific bookmark of interest (in this case, the first one). Next it navigates to the bookmark details and displays them to the console window. Then it retrieves the login form and performs a login using a set of credentials. Once logged in, the application retrieves the create bookmark form, fills it out, and submits a new bookmark to the system.

There are a few key observations to make at this point. First, the console application is capable of doing everything a human can do through the Web browser. That's the killer feature of this XHTML design style. Second, consumers need only to be hard-coded against the root URIs exposed by the service. All other URIs are discoverable at run time by navigating links found within the XHTML. And finally, processing XHTML structures isn't that much different from anything else—it's just data. Plus, this type of code only gets easier as you move toward dynamic languages in future versions of .NET.

Ultimately, ASP.NET MVC provides an inherently RESTful framework for implementing a web of XHTML-based resources that can be consumed by both humans and applications simultaneously. You can download the entire sample application shown in this article from the MSDN Magazine Web site.

For another complete example of an XHTML-based RESTful service found in the real world, browse to the Microsoft/TechNet Publishing System (MTPS) Content Services at labs.msdn.microsoft.com/RESTAPI/. This service uses many of the practices I've outlined in this article.

Acknowledgments
Thanks to both Tim Ewald and Craig Andera, whose creative thinking in this area provided fuel for my article. Tim also provided the XLinq extension methods found in the accompanying sample application.

AARON SKONNARD is a cofounder of Pluralsight, a Microsoft training provider offering both instructor-led and on-demand developer courses. These days Aaron spends most of his time recording Pluralsight On-Demand! courses focused on Cloud Computing, Windows Azure, WCF and REST. You can reach him at http://pluralsight.com/aaron and http://twitter.com/skonnard.
SCALE OUT

Distributed Caching
on the Path
to Scalability
Iqbal Khan

If you're developing an ASP.NET application, Web services or a high-performance computing (HPC) application, you're likely to encounter major scalability issues as you try to scale and put more load on your application. With an ASP.NET application, bottlenecks occur in two data stores. The first is the application data that resides in the database, and the other is ASP.NET session state data that is typically stored in one of three modes (InProc, StateServer, or SqlServer) provided by Microsoft. All three have major scalability issues.

Web services typically do not use session state, but they do have scalability bottlenecks when it comes to application data. Just like ASP.NET applications, Web services can be hosted in IIS and deployed in a Web farm for scalability.

HPC applications that are designed to perform massive parallel processing also have scalability problems because the data store does not scale in the same manner. HPC (also called grid computing) has traditionally used Java, but as .NET gains market share, it is becoming more popular for HPC applications as well. HPC applications are deployed to hundreds and sometimes thousands of computers for parallel processing, and they often need to operate on large amounts of data and share intermediate results with other computers. HPC applications use a database or a shared file system as a data store, and both of these do not scale very well.

Distributed Caching
Caching is a well-known concept in both the hardware and software worlds. Traditionally, caching has been a stand-alone mechanism, but that is not workable anymore in most environments because applications now run on multiple servers and in multiple processes within each server.

In-memory distributed caching is a form of caching that allows the cache to span multiple servers so that it can grow in size and in transactional capacity. Distributed caching has become feasible now for a number of reasons. First, memory has become very cheap, and you can stuff computers with many gigabytes at throwaway prices. Second, network cards have become very fast, with 1Gbit now standard everywhere and 10Gbit gaining traction. Finally, unlike a database server, which usually requires a high-end machine, distributed caching works well on lower cost machines (like those used for Web servers), which allows you to add more machines easily.

This article discusses:
• Distributed caching
• Scalability
• Database synchronization
Technologies discussed:
ASP.NET, Web services, HPC applications
Distributed caching is scalable because of the architecture it employs. It distributes its work across multiple servers but still gives you a logical view of a single cache. For application data, a distributed cache keeps a copy of a subset of the data in the database. This is meant to be a temporary store, which might mean hours, days or weeks. In a lot of situations, the data being used in an application does not need to be stored permanently. In ASP.NET, for example, session data is temporary and needed for maybe a few minutes to a few hours at most.

Similarly, in HPC, large portions of the processing require storing intermediate data in a data store, and this is also temporary in nature. The final outcome of HPC might be stored in a database, however. Figure 1 shows a typical configuration of a distributed cache in an enterprise.

Figure 1 Distributed Cache Shared by Various Apps in an Enterprise

Must-Have Features in a Distributed Cache
Traditionally, developers have considered caching only static data, meaning data that never changes throughout the life of the application. But that data is usually only a very small subset of the data that an application processes. Although you can keep static data in the cache, the real value comes if you can cache dynamic or transactional data—data that keeps changing every few minutes. You still cache it because within that time span, you might fetch it tens of times and save that many trips to the database. If you multiply that by thousands of users who are trying to perform operations simultaneously, you can imagine how many fewer reads you have on the database.

But if you cache dynamic data, the cache has to have certain features designed to avoid data integrity problems. A typical cache should have features for expiring and evicting items, as well as other capabilities. I'll explore these features in the following sections.

Expirations
Expirations are at the top of the list. They let you specify how long data should stay in the cache before the cache automatically removes it. You can specify two types of expirations: absolute-time expiration and sliding-time (or idle-time) expiration.

If the data in your cache also exists in a master data source, you know that this data can be changed in the database by users or applications that might not have access to the cache. When that happens, the data in the cache becomes stale. If you're able to estimate how long this data can be safely kept in the cache, you can specify absolute-time expiration—something like, "Expire this item N minutes from now" or "Expire this item at midnight tonight."

One interesting variation to absolute expiration is whether your cache can reload an updated copy of the cached item directly from your data source. This is possible only if your cache provides a read-through feature (see later sections) and allows you to register a read-through handler that reloads cached items when absolute expiration occurs. Except for a few commercial caches, most caches do not support this feature.

You can use idle-time (sliding-time) expiration to expire an item if it is not used for a given period of time. You can specify something like, "Expire this item if nobody reads or updates it for N minutes." This approach is useful when your application needs the data temporarily, but once the application is finished using it, you want the cache to automatically expire it. ASP.NET session state is a good candidate for idle-time expiration.

Absolute-time expiration helps you avoid situations in which the cache has a stale copy of the data or a copy that is older than the master copy in the database. Idle-time expiration is meant to clean up the cache after your application no longer needs certain data. Instead of having your application keep track of necessary cleanup, you let the cache take care of it.
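Both expiration styles are already available in ASP.NET Cache, which this article uses for several of its examples. The following minimal sketch is illustrative only; the keys, values, and time spans are arbitrary, but the Cache.Insert overload and the NoAbsoluteExpiration/NoSlidingExpiration constants are the real API:

using System;
using System.Web;
using System.Web.Caching;

public static class ExpirationExamples
{
    // Assumes it runs where an ASP.NET cache is available (for example,
    // inside a Web application); product and sessionInfo are placeholders.
    public static void CacheWithExpirations(object product, object sessionInfo)
    {
        Cache cache = HttpRuntime.Cache;

        // Absolute-time expiration: remove the item 10 minutes from now,
        // no matter how often it is read in the meantime.
        cache.Insert("product:42", product, null,
            DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);

        // Sliding (idle-time) expiration: remove the item only if nobody
        // reads or updates it for 20 minutes.
        cache.Insert("session:abc", sessionInfo, null,
            Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(20));
    }
}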
Evictions
Most distributed caches are in-memory and do not persist the cache to disk. This means that in most situations, memory is limited and the cache size cannot grow beyond a certain limit, which could be the total memory available or much less than that if you have other applications running on the same machine.

Either way, a distributed cache should allow you to specify a maximum cache size (ideally, in terms of memory size). When the cache reaches this size, it should start removing cached items to make room for new ones, a process usually referred to as evictions.

Evictions are made based on various algorithms. The most popular is least recently used (LRU), where those cached items that have not been touched for the longest time are removed. Another algorithm is least frequently used (LFU). Here, those items that have been touched the least number of times are removed. There are a few other variations, but these two are the most popular.
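To make the LRU policy concrete, here is a bare-bones, illustrative sketch of an in-process LRU store. It is not how any particular distributed cache implements eviction, and it is not thread-safe; it only shows the bookkeeping involved: a dictionary for lookups plus a linked list ordered by recency.

using System.Collections.Generic;

public class LruCache<TKey, TValue>
{
    private readonly int _maxItems;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _usage =
        new LinkedList<KeyValuePair<TKey, TValue>>();   // front = most recently used

    public LruCache(int maxItems) { _maxItems = maxItems; }

    public void Put(TKey key, TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _usage.Remove(node);                  // the key is being refreshed
        }
        else if (_map.Count >= _maxItems)
        {
            // Evict the least recently used entry (the tail of the list).
            LinkedListNode<KeyValuePair<TKey, TValue>> lru = _usage.Last;
            _map.Remove(lru.Value.Key);
            _usage.RemoveLast();
        }
        node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _map[key] = node;
        _usage.AddFirst(node);                    // newest access goes to the front
    }

    public bool TryGet(TKey key, out TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _usage.Remove(node);                  // touching an item
            _usage.AddFirst(node);                // makes it the most recent
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }
}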
Caching Relational Data
Most data comes from a relational database, but even if it does not, it is relational in nature, meaning that different pieces of data are related to one another—for example, a customer object and an order object.
When you have these relationships, you need to handle them in a cache. That means the cache should know about the relationship between a customer and an order. If you update or remove a customer from the cache, you might want the cache to also automatically update or remove related order objects from the cache. This helps maintain data integrity in many situations.

But again, if a cache cannot keep track of these relationships, you have to do it, and that makes your application much more cumbersome and complex. It's a lot easier if you just tell the cache at the time data is added that an order is associated with a customer, and the cache then knows that if that customer is updated or removed, related orders also have to be updated or removed from the cache.

ASP.NET Cache has a really cool feature called CacheDependency, which allows you to keep track of relationships between different cached items. Some commercial caches also have this feature. Figure 2 shows an example of how ASP.NET Cache works.

Figure 2 Keeping Track of Relationships Among Cached Items

using System.Web.Caching;

...

public void CreateKeyDependency()
{
    Cache["key1"] = "Value 1";

    // Make key2 dependent on key1.
    String[] dependencyKey = new String[1];
    dependencyKey[0] = "key1";
    CacheDependency dep1 = new CacheDependency(null, dependencyKey);

    Cache.Insert("key2", "Value 2", dep1);
}

This is a multilayer dependency, meaning that A can depend on B and B can depend on C. So, if your application removes C, A and B have to be removed from the cache as well.

Synchronizing a Cache with Other Environments
Expirations and cache dependency features are intended to help you keep the cache fresh and correct. You also need to synchronize your cache with data sources that you and your cache don't have access to so that changes in those data sources are reflected in your cache to keep it fresh.

For example, let's say your cache is written using the Microsoft .NET Framework, but you have Java or C++ applications modifying data in your master data source. You want these applications to notify your cache when certain data changes in the master data sources so that your cache can invalidate a corresponding cached item.

Ideally, your cache should support this capability. If it doesn't, this burden falls on your application. ASP.NET Cache provides this feature through CacheDependency, as do some commercial caching solutions. It allows you to specify that a certain cached item depends on a file and that whenever this file is updated or removed, the cache discovers this and invalidates the cached item. Invalidating the item forces your application to fetch the latest copy of this object the next time your application needs it and does not find it in the cache.

If you had a large number of items in the cache, many of them might have file dependencies, and for those you might have thousands of files in a special folder. Each file has a special name associated with that cached item. When some other application—whether written in .NET or not—changes the data in the master data source, that application can communicate to the cache through an update of the file time stamp.
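In ASP.NET Cache, the file-based flavor of this mechanism looks roughly like the following sketch. The CacheDependency(fileName) constructor and the Cache.Insert overload are the real API; the file path, cache key, and naming convention are invented for illustration:

using System.Web;
using System.Web.Caching;

public static class FileDependencyExample
{
    public static void CacheWithFileDependency(object customer)
    {
        // Hypothetical signal file that an external (even non-.NET)
        // application touches whenever this customer changes.
        string signalFile = @"C:\CacheSignals\Customers_ALFKI.txt";

        // The cached item is invalidated automatically when the file's
        // time stamp changes or the file is deleted.
        CacheDependency fileDep = new CacheDependency(signalFile);
        HttpRuntime.Cache.Insert("Customers:CustomerID:ALFKI", customer, fileDep);
    }
}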
Database Synchronization
The need for database synchronization arises because the database is being shared across multiple applications, and not all those applications have access to your cache. If your application is the only application updating the database and it can also easily update the cache, you probably don't need the database synchronization capability. But in a real-life environment, that's not always the case. Whenever a third party or any other application changes the database data, you want the cache to reflect that change. The cache reflects changes by reloading the data, or at least by not having the older data in the cache.

If the cache has an old copy and the database a new copy, you now have a data integrity problem because you don't know which copy is right. Of course, the database is always right, but you don't always go to the database. You get data from the cache because your application trusts that the cache will always be correct or that the cache will be correct enough for its needs.

Synchronizing with the database can mean invalidating the related item in the cache so that the next time your application needs it, it will fetch it from the database. One interesting variant to this process is when the cache automatically reloads an updated copy of the object when the data changes in the database. However, this is possible only if your cache allows you to provide a read-through handler (see the next section) and then uses it to reload the cached item from the database. However, only some of the commercial caches support this feature, and none of the free ones do.

ASP.NET Cache has a SqlCacheDependency feature (see Figure 3) that allows you to synchronize the cache with a SQL Server 2005/2008 or Oracle 10g R2 or later version database—basically any database that has the .NET CLR built into it. Some of the commercial caches also provide this capability.

Figure 3 Using SqlDependency to Synchronize with a Relational Database

using System.Web.Caching;
using System.Data.SqlClient;

...

public void CreateSqlDependency(Customers cust, SqlConnection conn)
{
    // Make cust dependent on a corresponding row in the
    // Customers table in Northwind database
    string sql = "SELECT CustomerID FROM Customers WHERE ";
    sql += "CustomerID = @ID";
    SqlCommand cmd = new SqlCommand(sql, conn);
    cmd.Parameters.Add("@ID", System.Data.SqlDbType.VarChar);
    cmd.Parameters["@ID"].Value = cust.CustomerID;

    SqlCacheDependency dep = new SqlCacheDependency(cmd);
    string key = "Customers:CustomerID:" + cust.CustomerID;

    Cache.Insert(key, cust, dep);
}
ASP.NET Cache SqlCacheDependency allows you to specify a SQL string to match one or more rows in a table in the database. If this row is ever updated, the DBMS fires a .NET event that your cache catches. It then knows which cached item is related to this row in the database and invalidates that cached item.

One capability that ASP.NET Cache does not provide but that some commercial solutions do is polling-based database synchronization. This capability is helpful in two situations. First, if your DBMS does not have the .NET CLR built into it, you cannot benefit from SqlCacheDependency. In that case, it would be nice if your cache could poll your database on configurable intervals and detect changes in certain rows in a table. If those rows have changed, your cache invalidates their corresponding cached items.

The second situation is when data in your database is frequently changing and .NET events are becoming too chatty. This occurs because a separate .NET event is fired for each SqlCacheDependency change, and if you have thousands of rows that are updated frequently, this could easily crowd your cache. In such cases, it is much more efficient to rely on polling, where with one database query you can fetch hundreds or thousands of rows that have changed and then invalidate corresponding cached items. Of course, polling creates a slight delay in synchronization, but this is acceptable in many cases.
Read-Through
In a nutshell, read-through is a feature that allows your cache to directly read data from your data source, whatever that may be. You write a read-through handler and register it with your cache, and then your cache calls this handler at appropriate times. Figure 4 shows an example.

Because a distributed cache usually lives outside your application, it is shared across multiple instances of your application or even multiple applications. One important capability of a read-through handler is that the data you cache is fetched by the cache directly from the database. Hence, your applications don't have to have database code. They can just fetch data from the cache, and if the cache doesn't have it, the cache goes and takes it from the database.

You gain even more important benefits if you combine read-through capabilities with expirations. Whenever an item in the cache expires, the cache automatically reloads it by calling your read-through handler. You save a lot of traffic to the database with this mechanism. The cache uses only one thread, one database trip, to reload that data from the database, whereas you might have thousands of users trying to access that same data. If you did not have read-through capability, all those users would be going directly to the database, inundating the database with thousands of parallel requests.

Read-through allows you to establish an enterprise-level data grid, meaning a data store that not only is highly scalable, but can also refresh itself from master data sources. This provides your applications with an alternate data source from which to read data and relieves a lot of pressure on your databases.

As mentioned earlier, databases are always the bottleneck in high-transaction environments, and they become bottlenecks due mostly to excessive read operations, which also slow down write operations. Having a cache that serves as an enterprise-level data grid above your database gives your applications a major performance and scalability boost.

However, keep in mind that read-through is not a substitute for performing some complex joined queries in the database. A typical cache does not let you do these types of queries. A read-through capability works well for individual object read operations but not in operations involving complex joined queries, which you always need to perform on the database.

Figure 4 Example of a Read-Through Handler for SQL Server

using System.Web.Caching;
using System.Data.SqlClient;
using Company.MyDistributedCache;

...

public class SqlReadThruProvider : IReadThruProvider
{
    private SqlConnection _connection;

    // Called upon startup to initialize connection
    public void Start(IDictionary parameters)
    {
        _connection = new SqlConnection((string)parameters["connstring"]);
        _connection.Open();
    }

    // Called at the end to close connection
    public void Stop() { _connection.Close(); }

    // Responsible for loading object from external data source
    public object Load(string key, ref CacheDependency dep)
    {
        string sql = "SELECT * FROM Customers WHERE ";
        sql += "CustomerID = @ID";
        SqlCommand cmd = new SqlCommand(sql, _connection);
        cmd.Parameters.Add("@ID", System.Data.SqlDbType.VarChar);

        // Let's extract actual customerID from "key"
        int keyFormatLen = "Customers:CustomerID:".Length;
        string custId = key.Substring(keyFormatLen,
            key.Length - keyFormatLen);
        cmd.Parameters["@ID"].Value = custId;

        // fetch the row in the table
        SqlDataReader reader = cmd.ExecuteReader();

        // copy data from "reader" to "cust" object
        Customers cust = new Customers();
        FillCustomers(reader, cust);

        // specify a SqlCacheDependency for this object
        dep = new SqlCacheDependency(cmd);
        return cust;
    }
}

Write Through, Write Behind
Write-through is just like read-through: you provide a handler, and the cache calls the handler, which writes the data to the database whenever you update the cache. One major benefit is that your application doesn't have to write directly to the database because the cache does it for you. This simplifies your application code because the cache, rather than your application, has the data access code.
Normally, your application issues an update to the cache (for example, Add, Insert, or Remove). The cache updates itself first and then issues an update call to the database through your write-through handler. Your application waits until both the cache and the database are updated.

What if you want to wait for the cache to be updated, but you don't want to wait for the database to be updated because that slows down your application's performance? That's where write-behind comes in, which uses the same write-through handler but updates the cache synchronously and the database asynchronously. This means that your application waits for the cache to be updated, but you don't wait for the database to be updated.

You know that the database update is queued up and that the database is updated fairly quickly by the cache. This is another way to improve your application performance. You have to write to the database anyway, but why wait? If the cache has the data, you don't even suffer the consequences of other instances of your application not finding the data in the database because you just updated the cache, and the other instances of your application will find the data in the cache and won't need to go to the database.
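The article doesn't include a write-through listing, so the following is a purely hypothetical sketch of what such a handler might look like, modeled on the read-through provider in Figure 4. The IWriteThruProvider interface, the Customers stand-in class, and the SQL statement are all invented for illustration; a real product defines its own contract and decides whether to invoke it synchronously (write-through) or from a queue (write-behind):

using System.Collections;
using System.Data.SqlClient;

// Stand-in for the sample's customer type.
public class Customers
{
    public string CustomerID;
    public string City;
}

// Hypothetical contract a cache might call when items are written.
public interface IWriteThruProvider
{
    void Start(IDictionary parameters);   // open connections
    void Save(string key, object value);  // called on Add/Insert
    void Remove(string key);              // called on Remove
    void Stop();                          // close connections
}

public class SqlWriteThruProvider : IWriteThruProvider
{
    private SqlConnection _connection;

    public void Start(IDictionary parameters)
    {
        _connection = new SqlConnection((string)parameters["connstring"]);
        _connection.Open();
    }

    public void Save(string key, object value)
    {
        Customers cust = (Customers)value;
        string sql = "UPDATE Customers SET City = @City WHERE CustomerID = @ID";
        SqlCommand cmd = new SqlCommand(sql, _connection);
        cmd.Parameters.AddWithValue("@City", cust.City);
        cmd.Parameters.AddWithValue("@ID", cust.CustomerID);

        // With write-through the cache waits for this call; with
        // write-behind the cache queues it and returns immediately.
        cmd.ExecuteNonQuery();
    }

    public void Remove(string key)
    {
        // Delete or archive the corresponding row here.
    }

    public void Stop() { _connection.Close(); }
}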
Cache Query
Normally, your application finds objects in the cache based on a key, just like a hash table, as you've seen in the source code examples above. You have the key, and the value is your object. But sometimes you need to search for objects based on attributes other than the key. Therefore, your cache needs to provide the capability for you to search or query the cache.

There are a couple of ways you can do this. One is to search on the attributes of the object. The other involves situations in which you've assigned arbitrary tags to cached objects and want to search based on the tags. Attribute-based searching is currently available only in some commercial solutions through object query languages, but tag-based searching is available in commercial caches and in Microsoft Velocity.

Let's say you've saved a customer object. You could say, "Give me all the customers where the city is San Francisco," when you want only customer objects, even though your cache has employees, customers, orders, order items, and more. When you issue a SQL-like query such as the one shown in Figure 5, it finds the objects that match your criteria.

Figure 5 Using a LINQ Query to Search Items in the Cache

using Company.MyDistributedCache;

...

public List<Customers> FindCustomersByCity(Cache cache, string city)
{
    // Search cache with a LINQ query
    List<Customers> custs = (from cust in cache.Customers
                             where cust.City == city
                             select cust).ToList();
    return custs;
}

Tagging lets you attach multiple arbitrary tags to a specific object, and the same tag can be associated with multiple objects. Tags are usually string-based, and tagging also allows you to categorize objects into groups and then find the objects later through these tags or groups.
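Tag APIs differ from product to product; the sketch below is hypothetical and simply illustrates the idea. The AddWithTags and GetByTag methods (and the Cache type from the placeholder Company.MyDistributedCache namespace used in Figure 5) are invented for this example:

using System.Collections.Generic;
using Company.MyDistributedCache;

public static class TaggingExample
{
    public static void TagAndFind(Cache cache, Customers cust)
    {
        // Attach several arbitrary tags when the object is added.
        cache.AddWithTags("Customers:CustomerID:" + cust.CustomerID, cust,
            new string[] { "customer", "region:west", "premium" });

        // Later, fetch every cached object that carries a given tag,
        // regardless of its key.
        IList<object> westCoastItems = cache.GetByTag("region:west");
    }
}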
Event Propagation
You might not always need event propagation in your cache, but it is an important feature that you should know about. It's a good feature to have if you have distributed applications, HPC applications, or multiple applications sharing data through a cache. What event propagation does is ask the cache to fire events when certain things happen in the cache. Your applications can capture these events and take appropriate actions in response.

Say your application has fetched some object from the cache and is displaying it to the user. You might be interested to know if anybody updates or removes this object from the cache while it is displayed. In this case, your application will be notified, and you can update the user interface.

This is, of course, a very simple example. In other cases, you might have a distributed application where some instances of your application are producing data and other instances need to consume it. The producers can inform the consumers when data is ready by firing an event through the cache that the consumers receive. There are many examples of this type, where collaboration or data sharing through the cache can be achieved through event propagation.
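As with tagging, notification APIs are product-specific. The following hypothetical sketch only illustrates the shape of such code; RegisterKeyNotification, the CacheItemAction enumeration, and the callback signature are invented for this example:

using System;
using Company.MyDistributedCache;

public enum CacheItemAction { Updated, Removed }   // invented for the sketch

public class OrderStatusView
{
    public void WatchCachedOrder(Cache cache, string orderKey)
    {
        // Ask the cache to call back when this key is updated or removed,
        // so the consumer (here, a UI) can react.
        cache.RegisterKeyNotification(orderKey,
            delegate(string key, CacheItemAction action)
            {
                if (action == CacheItemAction.Updated)
                    Console.WriteLine("Refresh the display: {0} changed", key);
                else
                    Console.WriteLine("{0} was removed from the cache", key);
            });
    }
}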
ed cache provides high availability. If any one cache server goes
Cache Performance and Scalability
When considering the caching features discussed in the previous sections, you must not forget the main reasons you're thinking of using a distributed cache, which are to improve performance and, more important, to improve the scalability of your application. Also, because your cache runs in a production environment as a server, it must also provide high availability.

Scalability is the fundamental problem a distributed cache addresses. A scalable cache is one that can maintain performance even when you increase the transaction load on it. So, if you have an ASP.NET application in a Web farm and you grow your Web farm from five Web servers to a much larger number of Web servers, you should be able to grow the number of cache servers proportionately and keep the same response time. This is something you cannot do with a database.

A distributed cache avoids the scalability problems that a database usually faces because it is much simpler in nature than a DBMS and also because it uses different storage mechanisms (also known as caching topologies) than a DBMS. These include replicated, partitioned, and client cache topologies.

In most distributed cache situations, you have two or more cache servers hosting the cache. I'll use the term "cache cluster" to indicate two or more cache servers joined together to form one logical cache. A replicated cache copies the entire cache on each cache server in the cache cluster. This means that a replicated cache provides high availability. If any one cache server goes down, you don't lose any data in the cache because another copy is immediately available to the application. It's also an extremely efficient topology and provides great scalability if your application needs to do a lot of read-intensive operations. As you add more cache servers, you add that much more read-transaction capacity to your cache cluster. But a replicated cache is not the ideal topology for write-intensive operations. If you are updating the cache as frequently as you are reading it, don't use the replicated topology.

A partitioned cache breaks up the cache into partitions and then stores one partition on each cache server in the cluster. This topology is the most scalable for transactional data caching (when writes to the cache are as frequent as reads). As you add more cache servers to the cluster, you increase not only the transaction capacity but also the storage capacity of the cache, since all those partitions together form the entire cache.

Many distributed caches provide a variant of a partitioned cache for high availability, where each partition is also replicated so that one cache server contains a partition and a copy or a backup of another server's partition. This way, you don't lose any data if any one server goes down. Some caching solutions allow you to create more than one copy of each partition for added reliability.

Another very powerful caching topology is client cache (also called near cache), which is very useful if your cache resides in a remote dedicated caching tier. The idea behind a client cache is that each client keeps a working set of the cache close by (even within the application's process) on the client machine. However, just as a distributed cache has to be synchronized with the database through different means (as discussed earlier), a client cache needs to be synchronized with the distributed cache. Some commercial caching solutions provide this synchronization mechanism, but most provide only a stand-alone client cache without any synchronization.

In the same way that a distributed cache reduces traffic to the database, a client cache reduces traffic to the distributed cache. It is not only faster than the distributed cache because it is closer to the application (and can also be InProc), it also improves the scalability of the distributed cache by reducing trips to the distributed cache. Of course, a client cache is a good approach only when you are performing many more reads than writes. If the number of reads and writes are equal, don't use a client cache. Writes will become slower because you now have to update both the client cache and the distributed cache.

High Availability
Because a distributed cache runs as a server in your production environment, and in many cases serves as the only data store for your application (for example, ASP.NET session state), the cache must provide high availability. This means that your cache must be very stable so that it never crashes and provides the ability to make configuration changes without stopping the cache.

Most users of a distributed cache require the cache to run without any interruptions for months at a time. Whenever they have to stop the cache, it is usually during a scheduled down time. That is why high availability is so critical for a distributed cache. Here are a few questions to keep in mind when evaluating whether a caching solution provides high availability.
• Can you bring one of the cache servers down without stopping the entire cache?
• Can you add a new cache server without stopping the cache?
• Can you add new clients without stopping the cache?

In most caches, you specify a maximum cache size so that the cache doesn't exceed that amount of data. The cache size is based on how much memory you have available on that system. Can you change that capacity? Let's say you initially set the cache size to one value but now want to increase it. Can you do that without stopping the cache?

Those are the types of questions you want to consider. How many of these configuration changes really require the cache to be restarted? The fewer, the better. Other than the caching features, the first criterion for having a cache that can run in a production environment is how much uptime the cache is going to give you.

Performance
Simply put, if accessing the cache is not faster than accessing your database, there is no need to have it. Having said that, what should you expect in terms of performance from a good distributed cache?

The first thing to remember is that a distributed cache is usually OutProc or remote, so access time will never be as fast as that of a stand-alone InProc cache (for example, ASP.NET Cache). In an InProc stand-alone cache, you can read small (roughly 1KB) objects at an extremely high rate. With an OutProc or a remote cache, this number drops significantly, because every read crosses a process or network boundary, and the throughput of an individual cache server (from all clients hitting it) is far lower. You can achieve some of this InProc performance by using a client cache (in InProc mode), but that is only for read operations and not for write operations. You sacrifice some performance in order to gain scalability, but the slower performance is still much faster than database access.

Gaining Popularity
A distributed cache as a concept and as a best practice is gaining more popularity. Only a few years ago, very few people in the .NET space knew about it, although the Java community has been ahead of .NET in this area. With the explosive growth in application transactions, databases are stressed beyond their limits, and distributed caching is now accepted as a vital part of any scalable application architecture.

IQBAL KHAN is president and technology evangelist at Alachisoft (www.alachisoft.com). Alachisoft provides NCache, an industry-leading .NET distributed cache for boosting performance and scalability in enterprise applications. Iqbal has an MS in computer science from Indiana University, Bloomington. You can reach him at iqbal@alachisoft.com.
TEST RUN JAMES MCCAFFREY

Request-Response Testing with F#

The F# programming language has several characteristics that make it well-suited for software test automation. In this month's column, I show you how to use F# to perform HTTP request-response testing for ASP.NET Web applications. Specifically, I create a short test-harness program that simulates a user exercising an ASP.NET application. The F# harness programmatically posts HTTP request information to the application on a Web server. It then fetches the HTTP response stream and examines the HTML text for an expected value of some sort to determine a pass/fail result. In addition to being a useful testing technique in its own right, learning how to perform HTTP request-response testing with F# provides you with an excellent way to learn about the F# language. This column assumes you have basic familiarity with ASP.NET technology and intermediate .NET programming skills with C# or VB.NET, but does not assume you have any experience with the F# language. However, even if you are new to ASP.NET and test automation in general, you should still be able to follow this month's column without too much difficulty. To see where I'm headed, take a look at Figures 1 and 2.

Figure 1 illustrates the example ASP.NET Web application under test that I use. The system under test is a simple but representative Web application, named MiniCalc.

Figure 1 ASP.NET Web Application Under Test

I deliberately keep my ASP.NET Web application under test as simple as possible so that I don't obscure the key points in the test automation. Realistic Web applications are significantly more complex than the dummy MiniCalc application shown in Figure 1, but the F# testing techniques I describe here easily generalize to complex applications. The MiniCalc Web application accepts two integers and an instruction to add or multiply. It then sends the values to a Web server where the result is computed. The server creates the HTML response and sends it back to the client browser where the result is displayed to four decimal places. Figure 2 shows an F# test harness in action.

My F# harness is named project.exe and does not accept any command-line arguments. For simplicity, I have hard-coded information including the URL of the Web application, test case input, and test case expected results. I will explain how you can parameterize the test harness later. My harness begins by echoing the target URL of localhost:15900/MiniCalc/Default.aspx. Notice that I am using the Visual Studio development Web server rather than IIS, so I specify a port number (15900) instead of the IIS default port 80. In test case 001, my F# harness programmatically posts information that corresponds to a user typing 5 into control TextBox1, typing a 3 into TextBox2, selecting RadioButton1 (to indicate an addition operation), and clicking on Button1 (to calculate). Behind the scenes, the harness captures the HTTP response from the Web server and then searches the response for an indication that 8.0000 (the correct result of 5 + 3) is in the TextBox3 result control. The harness tracks the number of test cases that pass and the number that fail (which is a surprisingly interesting operation in F#) and displays those results after all test cases have been processed.

In the sections of this column that follow, I briefly describe the Web application under test so you'll know exactly what is being tested. Next, I walk you through the details of creating lightweight HTTP request-response automation using the F# language. I wrap up by describing how you can modify the techniques I've presented to meet your own needs. I also present a few opinions about why you should take time to investigate F#. I think you'll find the information presented here interesting and a useful addition to your testing toolset.

Send your questions and comments for James to testrun@microsoft.com.
Code download is available at code.msdn.microsoft.com/mag200907Testing.

The Application Under Test
Let's take a look at the code for the MiniCalc ASP.NET Web application, which is the target of my test automation. I created the MiniCalc application using Visual Studio 2008 to take advantage of the built-in development Web server. After launching Visual Studio, I clicked on File | New | Web Site. In order to avoid the ASP.NET code-behind mechanism and keep all the code for my Web application in a single file, I selected the Empty Web Site option. To avoid using IIS, I selected the File System option from the Location field drop-down control.
I decided to use C# for the MiniCalc application, but the F# test harness I present in this column works with ASP.NET applications written in VB.NET. Additionally, with slight modifications the harness can target non-.NET Web applications that are written using technologies such as classic ASP, CGI, PHP, JSP, Ruby, and so on. Because I intend to use the Visual Studio development server, I specified a standard location in the file system, such as C:\MyWebApps\MiniCalc, rather than an IIS-specific location, such as C:\Inetpub\wwwroot\MiniCalc. I clicked OK on the New Web Site dialog to generate the structure of my Web application.

Next, I went to the Solution Explorer window, right-clicked on the MiniCalc project name, and selected Add New Item from the context menu. I then selected Web Form from the list of installed templates and accepted the Default.aspx file name. I cleared the "Place code in separate file" option and then clicked the Add button.

Next, I double-clicked on the Default.aspx file name in Solution Explorer to edit the template-generated code. I deleted all the template code and replaced it with the code shown in Figure 3.

Figure 3 MiniCalc Web Application Under Test Source

<%@ Page Language="C#" %>
<script language="C#" runat="server">
private void Button1_Click(object sender, System.EventArgs e)
{
  int alpha = int.Parse(TextBox1.Text.Trim());
  int beta = int.Parse(TextBox2.Text.Trim());

  if (RadioButton1.Checked) {
    TextBox3.Text = Sum(alpha, beta).ToString("F4");
  }
  else if (RadioButton2.Checked) {
    TextBox3.Text = Product(alpha, beta).ToString("F4");
  }
  else
    TextBox3.Text = "Select method";
}
private static double Sum(int a, int b) {
  double ans = a + b;
  return ans;
}
private static double Product(int a, int b) {
  double ans = a * b;
  return ans;
}
</script>

<html>
<head>
<style type="text/css">
fieldset { width: 16em }
body { font-size: 10pt; font-family: Arial }
</style>
<title>Default.aspx</title>
</head>
<body bgColor="#ffcc99">
<h3>MiniCalc by ASP.NET</h3>
<form method="post" name="theForm" id="theForm"
  runat="server" action="Default.aspx">
<p><asp:Label id="Label1" runat="server">
Enter integer:&nbsp&nbsp</asp:Label>
<asp:TextBox id="TextBox1" width="100" runat="server" /></p>
<p><asp:Label id="Label2" runat="server">
Enter another:&nbsp</asp:Label>
<asp:TextBox id="TextBox2" width="100" runat="server" /></p>
<p></p>
<fieldset>
<legend>Arithmetic Operation</legend>
<p><asp:RadioButton id="RadioButton1" GroupName="Operation"
  runat="server"/>Addition</p>
<p><asp:RadioButton id="RadioButton2" GroupName="Operation"
  runat="server"/>Multiplication</p>
<p></p>
</fieldset>
<p><asp:Button id="Button1" runat="server" text=" Calculate "
  onclick="Button1_Click" /> </p>
<p><asp:TextBox id="TextBox3" width="120" runat="server" /></p>
</form>
</body>
</html>

In order to keep my source code small in size and easy to understand, I am not using good coding practices in this Web application. In particular, I do not do any error checking and I use a somewhat haphazard design approach by combining server-side controls (such as <asp:TextBox>) with plain HTML (such as <fieldset>). The most important parts of the code listing in Figure 3 for you to note are the IDs of my ASP.NET server-side controls. I use default IDs Label1 (user prompt), TextBox1 and TextBox2 (input for two integers), RadioButton1 and RadioButton2 (choice of addition or multiplication), Button1 (calculate), and TextBox3 (result). To perform automated HTTP request-response testing for an ASP.NET application, you must know the IDs of the application's controls. In this situation, I have the source code available because I am creating the application myself; but even if you are testing a Web application you didn't write, you can always examine the application by using a Web browser's View Source functionality.

To verify that my Web application under test was built correctly, I hit the <F5> key. I clicked OK on the resulting Debugging Not Enabled dialog to instruct Visual Studio to modify the Web application's Web.config file. Visual Studio then started the development server and assigned a random port number to the Web application—in this case 15900, as you can see in Figure 1. If I had wanted to specify a port number, I could have selected the MiniCalc project in Solution Explorer, and then clicked on the View main menu item and selected the Properties Window option. This would display a "Use dynamic ports" option set to a default of True, and I could change the value to False and enter a value in the "Port number" field.

Figure 2 Request-Response Testing with F#
Notice that the action attribute of my <form> element is set to Default.aspx. In other words, every time a user submits a request, the same Default.aspx page code is executed. This gives my MiniCalc Web application the feel of a single application rather than a sequence of different Web pages. Because HTTP is a stateless protocol, ASP.NET accomplishes the application effect by maintaining the Web application's state in a special hidden value type, called the ViewState. As we'll see shortly, dealing with an ASP.NET application's ViewState is the key to programmatically posting data to the application.

ASP.NET Request-Response Testing with F#
Now that we've seen the Web application under test, let's go over the F# test harness program that produced the screenshot in Figure 2. F# is tentatively scheduled to ship with the next version of Visual Studio. For this column, I decided to use the September 2008 Community Technical Preview (CTP) version of F# and Visual Studio 2008. By the time you read this column, you may have other options for using F#. In any case, the CTP version I used was extremely stable, and I installed it without any difficulty. The F# installation process places everything you need to write F# programs into Visual Studio.

I started by launching Visual Studio and then clicking File | New | Project. On the New Project dialog, I selected the Visual F# project type, then selected the F# Application template to create a command-line application. I named my project, "Project." The project name will become the name of the resulting executable (here, project.exe), so a more descriptive name, such as Request-Response-Harness, might have been preferable. After specifying a location for my F# project and clicking OK, I double-clicked on file Program.fs in the Visual Studio Solution Explorer window to open an editing window. The overall structure for my F# harness is listed in Figure 4.

Figure 4 F# Test Harness Structure

#light
open System
open System.Text
open System.Net
open System.IO
open System.Web

printfn "\nBegin F# HTTP request-response test of MiniCalc Web App\n"
let url = "http://localhost:15900/MiniCalc/Default.aspx"
printfn "URL under test = %s \n" url

// define function to get ViewState & EventValidation
// set up test case data
try
  // set numPass & numFail counters to 0
  // iterate and process each test case
  // display number pass & number fail
  printfn "\nEnd F# test run"
  Console.ReadLine() |> ignore
with
| :? System.OverflowException as e ->
  printfn "Fatal overflow exception %s" e.Message
  Console.ReadLine() |> ignore
| :? System.Exception as e ->
  printfn "Fatal: %s" e.Message
  Console.ReadLine() |> ignore

// end source

The first line in my F# source code, #light, instructs the F# compiler to use lightweight syntax, where indentation and newlines in part determine program structure. This is standard practice for F# programs. I use the F# open keyword to bring the relevant .NET namespaces into scope, similar to the using keyword in C# or the imports keyword in Visual Basic. Notice that because I am using light syntax, I do not use an explicit statement terminator such as the semicolon character used by C#, JavaScript, and other C-related languages. I open System.Text in order to easily use the Encoding class to convert text to bytes. The System.Net and System.IO namespaces house the key classes needed to programmatically post data to a Web application. The System.Web namespace is not visible by default to an F# application, so I explicitly added that namespace by right-clicking on the project name in Visual Studio and then selecting the Add Reference option. I open System.Web so that I can use the HttpUtility class to perform URL-encoding. My next instruction is straightforward:

printfn "\nBegin F# HTTP request-response test of MiniCalc Web App\n"

The printfn() library function displays information to the command shell as you'd expect. Strings in F# are delimited by double-quotes and can contain embedded escape sequences such as \n for a newline. One of the key characteristics of F# is that almost everything revolves around functions, and most functions return a value that must be explicitly dealt with. However, the printfn() function does not return a value. Next I set my target URL:

let url = "http://localhost:15900/MiniCalc/Default.aspx"

I use the let keyword and the = operator to bind a string value to an identifier named url. (Many members of the F# team prefer to use the term symbol rather than identifier.) Notice the port number in the URL string. In this case, I do not specify the data type for identifier url so the F# compiler will have to infer the type. I could have explicitly typed identifier url as a string:

let url : string = " . . . "

Instead of hard-coding the URL into the F# source, I could fetch it as a command-line argument using the built-in Sys.argv array:

let url = Sys.argv.[1]

Sys.argv.[0] is the program name, .[1] is the first argument, and so on. Next I echo the target URL:

printfn "URL under test = %s \n" url

The printfn() function uses C-language style formatting, where %s indicates a string. Other common specifiers include %d for integer, %x for hexadecimal, %.2f for floating point with two decimals, and a generic %A for all types. F# supports exception handling using a try-with construction. If an exception is thrown in the try block, control will be transferred to the with block where the exception will be handled. Notice that with light syntax I do not use curly braces to define begin and end points for a code block. Instead, all statements that are indented the same number of blank spaces (you cannot use tab characters) are part of the same code block. The last line in my try block is a call to ReadLine(), so that my harness will pause execution and hold the command shell on the screen:

Console.ReadLine() |> ignore

ReadLine() returns a string, and in F#, the return value must be accounted for or the compiler will generate an error. Here I use the
|> pipe operator to send the return value to the built-in ignore object. As an alternative for discarding a return value in most situations, you can use the special _ (underscore) identifier like this:

let _ = Console.ReadLine()

However, for subtle F# syntax reasons, this approach does not work as the last statement of a code block. My exception handling code uses an interesting F# construction:

| :? System.OverflowException as e ->
  printfn "Fatal overflow exception %s" e.Message
| :? System.Exception as e ->
  printfn "Fatal: %s" e.Message

You can interpret the first part of this code to mean, "If the exception matches type System.OverflowException, then assign the exception to an identifier named e, and then print a string message that includes the exception Message text." The | token is the match operator. In other words, if an exception is thrown, my program attempts to match the exception with two patterns. The :? operator tests for types rather than values.

Harness Details
Now that I've explained the overall structure of my F# test harness, let's look at the details. If you refer to Figure 4, you'll see that I define a function that fetches ViewState and EventValidation information from the MiniCalc application under test. In F#, you must define functions in source code before you call them. As I described earlier, because HTTP is a stateless protocol, ASP.NET uses a special ViewState value to maintain state. A ViewState value is a Base64-encoded string that represents the state of the ASP.NET Web application after every request-response round trip. An EventValidation value is somewhat similar to a ViewState value, but is used for security purposes to help prevent script-insertion attacks. As it turns out, if you want to programmatically post to an ASP.NET Web application, you must send the application's current ViewState value and current EventValidation value. The code for the function that fetches ViewState and EventValidation values is listed in Figure 5.

Figure 5 Function to Fetch ViewState and EventValidation

let getVSandEV (url : string) =
  let wc = new WebClient()
  let st = wc.OpenRead(url)
  let sr = new StreamReader(st)
  let res = sr.ReadToEnd()
  sr.Close()
  st.Close()

  let startI = res.IndexOf("id=\"__VIEWSTATE\"", 0) + 24
  let endI = res.IndexOf("\"", startI)
  let viewState = res.Substring(startI, endI - startI)

  let startI = res.IndexOf("id=\"__EVENTVALIDATION\"", 0) + 30
  let endI = res.IndexOf("\"", startI)
  let eventValidation = res.Substring(startI, endI - startI)

  (viewState, eventValidation)

Notice that I use the let keyword and the = operator to define an F# function. I name my function getVSandEV and specify a single input parameter named url that has type string. My function signature does not explicitly indicate the return type, but I could have done so, as I'll explain in a moment. My function begins by instantiating a WebClient object:

let wc = new WebClient()

Because I placed an open statement to the System.Net namespace that houses the WebClient class, I do not need to fully qualify the class name. Next, I send a priming request to the target URL:

let st = wc.OpenRead(url)
let sr = new StreamReader(st)
let res = sr.ReadToEnd()

I use the OpenRead() method, combined with a StreamReader object and its ReadToEnd() method, to send a request to the target Web application, grab the entire HTML source of the response as a string, and bind that result to an identifier named res. In most cases, I suggest you explicitly type F# identifiers, such as let res : string = sr.ReadToEnd(), but in this situation, I let the F# compiler infer the type of identifier res. At this point, I need to parse out the ViewState and EventValidation values from the string value bound to the res identifier. The part of the string value bound to res that has the ViewState value resembles:

<input type="hidden" name="__VIEWSTATE"
  id="__VIEWSTATE" value="/wEPDwUK . . . Zmv==" />

First, I find the location of the beginning of the ViewState value using the String.IndexOf() method:

let startI = res.IndexOf("id=\"__VIEWSTATE\"", 0) + 24

The 0 argument means begin searching res at index position 0, which is the beginning of the response string. Notice that because my target string contains double-quote characters, I delimit the target string by using an \" escape sequence. Once I find where my target value begins, the actual ViewState value starts 24 characters from that index. Note that "__VIEWSTATE" has two leading underscore characters, not just one. This is a pretty crude way to parse for the initial ViewState value and makes my code brittle. However, in this situation, I am performing quick and easy test automation and I'm willing to accept the possibility that my code may break. Now I find the location within res where the ViewState value ends:

let endI = res.IndexOf("\"", startI)

I search for the first occurrence of any double-quote character after the beginning of the ViewState value I just found (at startI) and bind that location with identifier endI. Now that I know where the ViewState value begins and ends within the response string res, I can extract the value using the Substring method:

let viewState = res.Substring(startI, endI - startI)

The two arguments to the Substring() method are the index within res to begin extracting from, and the number of characters to extract (not the index to end extracting, which is a common mistake). As always with indexing methods, you've got to be very careful to avoid starting or ending one character too soon or too late. The identifier viewState now holds the initial ViewState value for the target ASP.NET Web application. Extracting the EventValidation value follows the same pattern:

let startI = res.IndexOf("id=\"__EVENTVALIDATION\"", 0) + 30
let endI = res.IndexOf("\"", startI)
let eventValidation = res.Substring(startI, endI - startI)

Notice that I reuse my startI and endI identifiers and bind them to new values. This is legal in F# when the code is inside a method definition, as it is here, but not legal in top-level code outside of a method. At this point, I have my two values bound to identifiers viewState and eventValidation. I use a neat F# feature to effectively return both values simultaneously:

(viewState, eventValidation)


I do not use an explicit return keyword; the last line of an F# function automatically represents the return value. Here I use parentheses to construct an F# tuple, which you can think of as a set of values. As I mentioned above, I could have defined my getVSandEV() function in a way that explicitly indicates the return value:

let getVSandEV (url : string) : (string * string) = . . .

The (string * string) notation means my function returns a tuple of two strings. Now that I've defined my helper function, I set up my test case data:

let testCases =
  [| "001,TextBox1=5&TextBox2=3&Operation=RadioButton1" +
     "&Button1=clicked,value=\"8.0000\",Add 5 and 3"
     "002,TextBox1=5&TextBox2=3&Operation=RadioButton2" +
     "&Button1=clicked,value=\"15.0000\",Multiply 5 and 3"
     "003,TextBox1=0&TextBox2=0&Operation=RadioButton1" +
     "&Button1=clicked,value=\"0.0000\",Add 0 and 0"
  |]

Here I create an immutable F# array named testCases that contains three strings. The [| . . . |] notation is used by F# to delimit an array. Because my three strings are quite long, I break each string into two parts and use the + string concatenation operator to stitch them back together. F# also accepts the ^ character for string concatenation. The first part of the first string in array testCases, 001, is a test case ID. After a comma delimiter, I have TextBox1=5&TextBox2=3 that, when posted to the MiniCalc Web application, will simulate a user typing these values. The third name-value pair (Operation=RadioButton1) is a way to simulate a user selecting a RadioButton control item—in this case, the RadioButton that corresponds to addition. You might have incorrectly guessed (as I originally did) at something like RadioButton1=checked. However, RadioButton1 is a value of the Operation control, not a control itself. The fourth name-value pair (Button1=clicked) is somewhat misleading. I need to supply a value for Button1 to simulate that it has been clicked by a user, but any value will work. So, I could have used "Button1=yadda" or even just "Button1=" if I had wanted to. But "Button1=clicked" is more descriptive. The next field (value="8.0000") in my test case data is an expected value in the form of a string that I'll look for in the HTML response stream. The final field (Add 5 and 3) is a simple comment.

Using an F# array to store my test case data is perhaps the simplest approach, but there are several alternatives. I could have replaced the [| . . . |] delimiters with plain square brackets such as: let testCases = [ " . . ." ] to create an F# List. Lists are common and thematic with older functional programming languages such as LISP and Prolog, but in this situation, I'd gain no advantage by using a List. Alternatively, I could have created an F# mutable array like this:

let testCases = Array.create 3 ""
testCases.[0] <- "001,. . ."
testCases.[1] <- "002,. . ."
testCases.[2] <- "003,. . ."

The use of mutable arrays is quite rare in F# and, again, I'd gain no advantage by using one here; I mention the possibility mostly to show you mutable array syntax. If you refer back to the overall test harness structure shown in Figure 4, you can see that I am now ready to start processing each test case. I begin by setting up counter identifiers to store the number of test cases that pass and fail:

let numPass = ref 0
let numFail = ref 0

In most cases, I'd simply bind these identifiers to 0, for example, "let numPass = 0". However, I need to initialize the counters outside a function and increment each counter inside the function (as we'll see shortly), but print the final values outside the function. This causes a minor problem, because of scope visibility issues. In awkward situations like this, one approach is to use the ref keyword as I've done here. I'll explain how to work with ref identifiers shortly. Now comes the trickiest syntax part of my F# test harness. I process each test case:

testCases |>
Seq.iter(fun (testCase : string) ->
  // parse current test case
  // get ViewState and EventValidation
  // send HTTP request and get response
  // check response for expected value
  // print pass or fail
)

I use the |> pipe operator to send my testCases array into the Seq.iter() method, which will process each test case item in the array. Seq.iter() expects an argument that is a function that will be applied to every item in the sequence. I could write a function explicitly, but a thematic F# approach is to define an anonymous function on the fly using the "fun" keyword. My anonymous function accepts a single-string parameter named testCase, which will be bound to each item in turn in the testCases array. Here, the parameter testCase must be type string because each item in the array testCases is a string, so I could have omitted the explicit typing for testCase. Inside my anonymous function, I parse out the values in the current test case:

let delimits = [|',';'~'|];
let tokens = testCase.Split(delimits)
let caseID = tokens.[0]
let input = tokens.[1]
let expected = tokens.[2]
let comment = tokens.[3]

Notice I use the F# [| . . |] syntax with a semicolon delimiter to specify an array of characters as an argument for the String.Split() method. Next, I call the getVSandEV() function I defined earlier:

let (vs, ev) = getVSandEV url

Recall that getVSandEV() accepts a single-string argument and returns a tuple of two strings. I use parentheses to capture the tuple, deconstruct the tuple values, and bind them to identifiers vs and ev. Notice that in F#, I do not need to use parentheses when calling a program-defined function. Next, I build up my actual POST data:

let data = input +
  "&__VIEWSTATE=" + HttpUtility.UrlEncode(vs) +
  "&__EVENTVALIDATION=" + HttpUtility.UrlEncode(ev)
let buffer = Encoding.ASCII.GetBytes(data)
msdnmagazine.com July 2009 77


I take the value bound to identifier input and concatenate the the ref keyword. We’ve seen that F# uses = to bind an initial value
required ViewState and EventValidation values. Because ViewState to an identifier, and the <- operator to update a mutable identifier.
and EventValidation values are Base encoded, they may contain This code shows that F# uses the := operator to change the value
characters that are not valid in an HTTP POST request (in par- referenced by an identifier. Notice, too, that because I am working
ticular, = padding characters), so I use the UrlEncode() method with ref objects, I use the ! operator to dereference and get the value
in the System.Web namespace to convert any such troublesome associated with the ref object. After each test case has been processed,
characters into an escape sequence such as %D. I use the Get- I pause for a key-press and then delay execution for  second:
Bytes() method to convert my string into an array of bytes. Now let _ = Console.ReadLine()
System.Threading.Thread.Sleep(1000)
I can create an HTTP request object: ) // end anonymous function
let req = WebRequest.Create(url) :?> HttpWebRequest Here I use the common F# idiom to discard the return value of
req.Method <- "POST"
req.ContentType <- "application/x-www-form-urlencoded" Console.ReadLine() by assigning the return to the special_identifier.
req.ContentLength <- int64 buffer.Length Instead, I could have piped the return to “ignore”, as I explained
I instantiate an HttpWebRequest object using a factory mecha- earlier. Notice that because I did not open namespace System.Thread-
nism of the WebRequest class. I must cast the return value of the ing, I must fully qualify my call to the Thread.Sleep() method. After
Create() method, so I use the downcast :?> operator, which you can all test case input in the array testCase has been processed by the
interpret to mean “and cast as type xxx”. Because the properties of anonymous function in Seq.iter(), control is transferred to the end
my .NET object are mutable, I use the <- operator to assign values of of the F# harness and I can print summary results:
“POST”, “application/x-www-form-urlencoded”, and buffer.Length printfn "Number pass = %d" !numPass
to the Method, ContentType, and ContentLength properties of the printfn "Number fail = %d" !numFail
printfn "\nEnd F# test run"
request object. Notice that to cast the Length property of identifier
Now my harness is ready to run. Once I make sure the Visual
buffer as int, I use syntax similar to C-based languages rather
Studio development Web server is running, I can execute my har-
than using the :?> operator. In F#, you use C-style cast syntax with
ness from a command shell, as shown in Figure 2.
.NET value types, such as int, and use :?> with reference types,
such as HttpWebRequest. With the request object created, I can Wrap Up
fire it off to the Web application under test: The example F# harness I’ve presented here should give you a solid
let reqSt = req.GetRequestStream()
reqSt.Write(buffer, 0, buffer.Length) base to create a test harness that meets your own particular testing situ-
reqSt.Flush() ation. In general, the most difficult problem you’ll face is determining
reqSt.Close()
what data to post to your Web application under test. One good way
The Write() method does not actually write its byte array argu- to do this is to manually exercise your Web application and capture
ment, so I explicitly call the Flush() method to do so. Now I can fetch HTTP request data with a sniffer tool to see what information is be-
the HTTP response and bind it to an identifier named html: ing sent to the Web server that is hosting the application. There are
let res = req.GetResponse() :?> HttpWebResponse
let resSt = res.GetResponseStream() many such tools available on the Internet as free downloads.
let sr = new StreamReader(resSt) Let me address why you might want to investigate the F# lan-
let html = sr.ReadToEnd()
guage. After all, learning any new programming language requires
In F#, when you call a .NET method that accepts only a single a significant investment of time. Learning F# will require some
argument, you can omit the parentheses in the call. However, .NET effort, especially if you are new to functional programming. Here
methods with no arguments, those with two or more arguments,
are four reasons I decided to learn F#. First, F# has gotten very good
and constructor calls all require parentheses. Therefore, the F# team
informal technical reviews from several of my colleagues whose
suggests always using parentheses when calling .NET methods.
opinions I respect. Second, it never hurts to add a new technology
Now I display my test case input:
to your toolset and resume. Third, I believe that learning a new pro-
printfn "============================"
printfn "Test case: %s" caseID gramming language helps you understand other languages better
printfn "Comment : %s" comment and use them more effectively. Fourth, I’m finding the F# language
printfn "Input : %s" input
printfn "Expected : %s" expected interesting and just plain enjoyable to learn.
And then I examine the HTML response to look for the Acknowledgement: My thanks to Tim Ng, a senior development
expected string: lead on the F# team, who provided much technical advice for this
if html.IndexOf(expected) >= 0 then article. „
printfn "Pass"
numPass := !numPass + 1
else
printfn " ** FAIL **"
numFail := !numFail + 1 DR. JAMES MCCAFFREY works for Volt Information Sciences, Inc., where he man-
I use the String.IndexOf() method, which returns - if its string ages technical training for software engineers working at Microsoft’s Redmond,
Washington campus. He has worked on several Microsoft products including
argument is not found, or the index location of the argument if the Internet Explorer and MSN Search. James is the author of “.NET Test Automa-
argument is found. Notice that in F#, if . . then syntax uses an explicit tion Recipes” (Apress, ). James can be reached at jmccaffrey@volt.com or
then keyword. Recall that I declared my two counter identifiers using v-jammc@microsoft.com.
78 msdn magazine Test Run
JON FLANDERS SERVICE STATION

More on REST

In the last two columns, I’ve described the basics of REST and used when a particular feature of SOAP is needed, and the advan-
talked about exposing and consuming Web feeds. In this column, tages of REST make it generally the best option otherwise.
I’ll answer a number of questions that often come up when I make
presentations or conduct training sessions on using REST to build What about security? Isn’t SOAP
service-based applications. more secure than REST?
This question touches one of my pet peeves because the answer
Which is better, REST or SOAP? is clearly no. It is just as easy to make a RESTful service secure as
This is one of the most common questions I get about REST, and it is to make a SOAP-based service secure. In the majority of cases
it is probably the least fair. Both REST and SOAP are often termed involving either REST or SOAP, the security system is the same:
“Web services,” and one is often used in place of the other, but they some form of HTTP-based authentication plus Secure Sockets
are totally different approaches. REST is an architectural style for Layer (SSL). Although technically the technology for secure con-
building client-server applications. SOAP is a protocol specifica- versations over HTTP is now called Transport Layer Security (TLS),
tion for exchanging data between two endpoints. SSL is still the name most commonly used.
What is true is that a SOAP-based service, because of the extra
protocols specified in the various WS-* specifications, does support
Because REST relies on the end-to-end message security. This means that if you pass SOAP
messages from endpoint to endpoint to endpoint, over the same or
semantics of HTTP, requests for different protocols, the message is secure. If your application needs
this particular feature, SOAP plus WS-* is definitely the way to go.
data can be cached. REST probably wouldn’t be an option here because of its dependence
on HTTP, and inherently you’d be designing a multiprotocol applica-
tion. I believe that the fact that SOAP with WS-* enables end-to-end
Comparing REST with the remote procedure call (RPC) style of message-level security is the source of the misconception that SOAP-
building client-server applications would be more accurate. RPC is based services are more secure than RESTful services.
a style (rather than a protocol, which is what SOAP is) of building Another area in which the WS-* folks have spent a lot of time
client-server applications in which a proxy (generally generated from and effort recently is federated security. The simple idea behind
metadata) is used in the client’s address space to communicate with federated identity is to create trust between two companies, where
the server and the proxy’s interface mimics the server’s interface. authenticated users from one company can be trusted and considered
Although SOAP doesn’t require the RPC style, most modern SOAP authenticated by another company without the second company
toolkits are geared toward (at least they default to) using RPC. having to maintain the authentication information (username
In contrast to RPC, REST lacks the metadata-generated proxy and password, typically). The various WS-* specifications have
(see the next question for more information), which means that implementations from all the major vendors, and Microsoft is in-
the client is less coupled to the service. Also, because REST relies tegrating the ideas into Active Directory through Active Directory
on the semantics of HTTP, requests for data (GET requests) can be Federation Services (ADFS).
cached. RPC systems generally have no such infrastructure (and even In the realm of federated security, the WS-* arena certainly has
when performing RPC using SOAP over HTTP, SOAP responses more standards than the RESTful arena (and this will probably
can’t be cached because SOAP uses the HTTP POST verb, which is always continue to be the case), but there are efforts to support
considered unsafe). SOAP intentionally eschews HTTP, specifically federated security in the world of REST. OpenID is one such effort.
to allow SOAP to work over other protocols, so it’s actually a little The .NET Service Bus (part of Windows Azure) also contains a
disingenuous to call SOAP-based services Web services.
My perspective is that both REST and SOAP can be used to Send your questions and comments to sstation@microsoft.com.
implement similar functionality, but in general SOAP should be
July 2009 79
DevConnections ...
Providing the vision

One Place, One Time ...

BONUS PRODUCT GIVEAWAY


>> The first 400 paid attendees will be mailed SQL Server 2008 standard with one CAL

DEVCONNECTIONS ROCKS the technology industry as the LARGEST and


most EXCITING event focused on MICROSOFT TECHNOLOGY and YOUR needs.

Scott Guthrie Thomas Rizzo Steve Riley Quentin Clark Dave Mendlen
Microsoft Microsoft Microsoft Microsoft Microsoft
Corporate Vice Director, Senior Security General Manager Director of
President, .NET SharePoint Group Strategist of Database Developer
Developer Division Engine Group Marketing

Technology+Solutions=Impact
Enter:
MSDN
intelligence into the online
discount code when
to keep you and your company registering online
competitive in today’s market! before August 1st to
receive a $200
discount!

DevConnections Fall ‘09


Make CONNECTIONS the CONFERENCE
you bring your whole team to this year!

NOVEMBER 9-12, 2009


LAS VEGAS, NEVADA
MANDALAY BAY RESORT & CASINO

I 125+ MICROSOFT AND

Exciting Announcements: INDUSTRY EXPERTS


I 200+ IN-DEPTH SESSIONS
Be among the first to get the BONUS:
I UNPARALLELED WORKSHOPS Book 3 nights
insiders scoop on the products
I EXCITING ANNOUNCEMENTS by August 1st at
and technology you rely on! Mandalay Bay
Hot Trends in Software and receive a
As a DevConnections attendee, you and $100 Mandalay
I CLOUD COMPUTING
Bay certificate!
your colleagues can attend all of the
I SILVERLIGHT (based on a 3-night
Connections shows, and cross between minimum stay)
I BONUS MOBILE TRACK
all of the sessions, at the same time for
I WINDOWS 7
the same price.
I ENERGYNET

* REGISTER BY AUGUST 1ST to receive the “special” room rate of $149 @ MANDALAY BAY

CHECK WEBSITE FOR DESCRIPTIONS OF SESSIONS AND WORKSHOPS

www.DevConnections.com • 800.438.6720 • 203.268.3204 • Register Today!


federated identity service, which works just as well with HTTP (and Since one of the driving points behind creating the SOAP
therefore REST) as it does with SOAP-based services. specification was to create an interoperable way to communicate
between different platforms and different languages, many people
What about transactions? are surprised by this assertion. But a funny thing happened on the
Here is another area in which SOAP and WS-* have explicit way to widespread interoperability: the WS-* specifications (and
support for an “advanced” feature and REST has none. WS-Atomic vendors’ implementations of said specifications) made SOAP services
Transactions supports distributed, two-phase commit transactional less interoperable rather than more interoperable.
semantics over SOAP-based services. REST has no support for The problem in the SOAP and WS-* arena is the large number
distributed transactions. of different standards (and versions of each of those standards) to
Generally speaking, if you want something like transactions in a choose from. And when a particular vendor chooses to implement
RESTful system, you create a new resource. (Creating a new resource a particular standard, that vendor often provides an implemen-
whenever you run into a problem with a RESTful system generally tation that is just slightly different from another vendor’s (or all
solves most problems.) You can have a resource called Transaction. others). This leads to problems whenever you have to cross vendor
When your client needs to do something transactional (such as boundaries (languages and operating system).
transferring money between two bank accounts), the client cre- Of course, even to use SOAP you need a SOAP toolkit on your
ates a Transaction resource that specifies all the correct resources platform, which most (but not all) platforms have today. And then
affected (in my example, the two bank accounts) by doing a POST you have to deal with myriad WS-* specifications and figure out
to the Transaction factory URI. The client can then perform updates which to use (or not to use) and how that affects interoperability.
by sending a PUT to the transaction URI and close the transaction To be honest, it’s kind of a mess out there.
by sending a DELETE to the URI. In terms of platforms, REST has the advantage because all you
This, of course, requires some amount of hand-coding and explicit need to use REST is an HTTP stack (either on the client or the
control over your system, whereas the WS-Atomic Transactions system server). Since almost every platform and device has that today, I
is more automatic because (in the case of Windows Communication would argue that REST has the widest interoperability. Given that
Foundation) it is tied to your runtime’s plumbing. mobile devices, household devices, POS devices, DVD players, and
TVs all have Internet connectivity, there are more and more plat-
forms for which having a full SOAP toolkit is impossible or unlikely.
In terms of platforms, REST has And even if you do have a SOAP toolkit for a particular platform,
the chance of it working with another platform’s implementation
the advantage because all you is not %.

need to use REST is an HTTP stack But what about metadata? So what if REST is so
interoperable—there’s no WSDL with REST, and
(either on the client or the server). without WSDL, I can’t generate a client-side proxy
If your system absolutely needs atomic transactional semantics to call a service. REST is hard to use.
across diverse systems, WS-Atomic Transactions is probably the It’s true that in the world of REST, there is no direct support for
way to go. Using distributed transactions in this way may or may not generating a client from server-side-generated metadata, as there
be smart because it increases the coupling between the two systems is in the world of SOAP with Web Service Description Language
and creates potential problems if you aren’t controlling the code on (WSDL). A couple of efforts are being made to get such support
both ends. But the most important thing is to use the right tool for into REST, one being a parallel specification, known as WADL
the right job (once you’ve figure out what the right job is). (Web Application Description Language). The other is a push to use
In defense of REST, I think it is fair to say that given today’s WSDL . to describe RESTful endpoints. I often say that REST is
distributed, service-oriented architectures, coupling two endpoints simple, but simple doesn’t always mean easy. SOAP is easy (because
so tightly using a distributed transaction may not be the best design. of WSDL), but easy doesn’t always mean simple.
On the other hand, some situations call for this type of functionality, Yes, using WSDL makes generating a proxy for a SOAP-based
and if you need it, use SOAP and WS-Atomic Transactions. service easier than writing the code to call a RESTful service. But
once you generate that proxy, you still have to learn the API. Noth-
ing in the WSDL tells you which method to call first or second or
What about interoperability? Isn’t SOAP supposed whether you need to call the methods in any particular order at all.
to be about interoperability? Isn’t SOAP more These are all things you need to figure out after you generate the
interoperable than REST? proxy and are prototyping the code to use the service.
If you define interoperability as the technical ability to commu- Building a client against a RESTful service means you are learn-
nicate between two divergent endpoints, I assert that REST wins ing the service and how it works as you build the client. Once
the interoperability battle hands down. you have finished, you have a complete understanding of the
82 msdn magazine Service Station
service, its resources, and the interaction you can have with those doesn’t have support for distributed transactions, but does that
resources. To me, this is a big benefit. Since RESTful services follow mean ASP.NET isn’t useful for enterprises?"
the constraints of REST (at least they are supposed to), there is a My point is that not every technology solves every problem,
convention that you can easily follow as you determine the different and there are plenty of technologies that don’t support the typical
parts of the service. features people think of when they think of enterprises but that are
Also, out in the wilds of developer-land, most services are wrapped incredibly helpful for enterprises nonetheless.
in something often called a "service agent," which is another layer

Having a metadata-generated
of indirection to protect clients from changes in the service layer.
This may be needed in either REST or SOAP.

proxy in REST also reduces the


Another point is that metadata-generated proxies are part of
what SOAP was meant to get away from in the RPC era, namely

chances of taking advantage of


local-remote transparency. The concept of having an API on the
client that matches the API on the server was considered to be a bad

hyperlinking. Using hypertext as


idea, but that’s exactly what happens in most SOAP-based services.
Having a metadata-generated proxy in REST also reduces the chances

the engine of application state is


of taking advantage of hyperlinking. Using hypertext as the engine
of application state (HATEOAS) is one of the constraints of REST,

one of the constraints of REST.


and using it requires a more loosely coupled client API.
The last point I’ll make is that as support for REST becomes
more ubiquitous, building clients will get easier and easier. If
you look at the Windows Communication Foundation (WCF) In fact, when I think of enterprise applications, I often think of
REST starter kit ( codeplex.com/aspnet/Wiki/View.aspx?title=WCF%20 speed and scalability—scalability being one of the main differences
REST), it includes facilities that head in this direction. The new between REST and SOAP. SOAP services are much harder to scale
HttpClient API makes using HTTP much easier than using the than RESTful services, which is, of course, one of the reasons that
.NET WebRequest/WebResponse API. Also, there is a new Paste REST is often chosen as the architecture for services that are exposed
as XML Serializable tool, which allows you to copy a piece of XML via the Internet (like Facebook, MySpace, Twitter, and so on).
(say from the documentation of a RESTful endpoint) and generate Inside enterprises, applications also often need to scale as well.
a .NET type that can represent that XML instance in your appli- Using REST means that you can take advantage of HTTP caching
cation. This is similar to what the WCF tools do automatically and other features, like Conditional GET, that aid in scaling services.
for the whole service with WSDL. Over time, these tools will Many of these techniques can’t be used with SOAP because SOAP
become much more sophisticated, further simplifying the client uses POST only over HTTP.
experience in WCF when using RESTful services.
Bottom Line
What if I want to use a transport other than HTTP? I hope that after you read this column, you’ll think that the
The common (somewhat sarcastic) answer from the REST answer to “Which is better, REST or SOAP?” is “It depends.” Both
community here is, “Go ahead, there isn’t anything stopping you.” the REST architectural style and SOAP and the WS-* protocols
Realistically, however, REST is currently tied to HTTP, if only have advantages and disadvantages when it comes to building
because most developers and teams of developers do not have the services. Those of us in the RESTafarian camp (yes, I must give
time for the engineering effort necessary to get the semantics of full disclosure here: I am definitely in that camp) believe that for
REST to work over, say, TCP/IP. most service situations, REST provides more benefits than SOAP
The common answer is technically correct, because nothing is or WS-*. On the other hand, SOAP and WS-* have some features
stopping you from implementing the concepts of REST over other that are easy (and possible) to implement using REST. When you
protocols, but until vendors add support for this, I find it a dubious need those specific features, you definitely want to use runtimes
proposition for most. and toolkits that can provide those features. Although this col-
umn wasn’t specifically about WCF, one nice feature of adopting
WCF is that it supports both REST and SOAP/WS-*. Moving back
After all that information, aren’t you telling me and forth between the two worlds becomes easier if you have one
that REST is good for Internet-facing applications, programming and runtime model to learn. „
and SOAP for enterprise applications?
If you’ve read the rest of this column, you can probably imagine
that I think this statement is generalized and false. Often I hear this
JON FLANDERS is an independent consultant, speaker, and trainer for Pluralsight.
sentiment after discussing the lack of explicit distributed transac- He specializes in BizTalk Server, Windows Workflow Foundation, and Windows
tion support in REST versus the explicit support in WS-Atomic Communication Foundation. You can contact Jon at masteringbiztalk.com/
Transactions. My retort is generally something like "Well, ASP.NET blogs/jon.
msdnmagazine.com July 2009 83
NEED TO HIRE DEVELOPERS?

WE HAVE OVER 6 MILLION OF THEM.


JOBS.CODEPROJECT.COM
Posting your job vacancies on jobs.codeproject.com
is the most cost effective way of reaching millions
of skilled developers, managers, and architects.
Try us today and get results.
Post your job descriptions now.
jobs.codeproject.com

New: product catalog, code-signing certificates!


K. SCOTT ALLEN EXTREME ASP.NET

Guiding Principles for Your


ASP.NET MVC Applications
Although the release of Microsoft ASP.NET MVC
. is relatively recent, many of the patterns and
These principles whose name reveals its intended usage, e.g., failed-
Validation is a piece of code that gives me more in-
principles surrounding the framework have been
around for a long time. The Model View Controller
are not rules, formation than reading #FF. These qualities
of the CSS approach make the application easier to
(MVC) pattern itself was passed down by Smalltalk
developers from the s, and is in use in a variety
but ideals and maintain and change, which is the simplicity we are
striving to achieve.
of frameworks and across a wide range of languages
and diverse platforms. Thus, we can learn a lot about
values that help Using CSS in combination with HTML exhibits
another characteristic of simplicity, which is how CSS
how to make the best use of the framework just by
looking around our software ecosystem.
you build great and HTML can have separate and distinct respon-
sibilities. HTML becomes responsible for only the
In this article, I want to lay out some principles you
should follow when working with the ASP.NET MVC
ASP.NET MVC structure of a Web page, and CSS becomes respon-
sible for the look of a Web page. This separation of
framework. These principles are not rules, but ideals
and values you should cherish and keep in the forefront
applications. responsibilities, or “separation of concerns,” is an im-
mensely powerful weapon in the battle against soft-
of your mind when building ASP.NET MVC applica- ware complexity. Have you ever gone in to modify
tions that need to survive and evolve beyond a single release. an object that represents a Web page only to find it is responsible
for data access, data binding, business rule validation and coloring
Simplicity Through Separation the background of a failed input field red? Was it easy to modify
When I was a young boy, I used to love jigsaw puzzles. It was the object? Was it easy to focus on the one change request you had
a marvelous feeling to build simple scenes, like the picture of an to implement inside this complex puzzle of logic?
apple, from a collection of complex and irregularly shaped puzzle The MVC design pattern has proved itself successful over time in
pieces. This love of complexity is a virus I fight on a daily basis as part because the pattern forces a separation of responsibilities. This
a software developer. allows us to focus on specific tasks, hide our implementation details
As developers, we want to work in the opposite direction of the and make changes that don’t interfere with other components. In
jigsaw puzzle. Instead of making the simple things complex, we MVC, for example, the view component is only responsible for pre-
should strive to make complex things simple. Many of the tech- senting data. When inside the view, we can focus on presentation
nologies around us are in place to help us move toward simplicity; and not worry about where data originates. We can make changes
we just have to make effective use of the technologies. to the view without worrying about data access code.
Let’s take CSS and HTML as examples, because these are two With the goal of simplicity through separation in mind, let’s
technologies that every Web developer should be familiar with. If look at specific guidance on each of the three components in the
I want to draw a user’s attention to required questions they failed model-view-controller pattern.
to answer on my Web pages, then I might change the background
color of the failed questions to red. I could do this by specifying Trouble-Free Controllers
the color red as a style on each failed question, or I could create In an MVC pattern, controllers are in the middle of the action.
a CSS rule specifying the color red and point any failed question Controllers handle incoming requests, interact with the model and
elements to this style rule. select views for rendering. Because of their position, controllers
Although the CSS solution might require a little more work– can be difficult to implement while maintaining a separation of
because I write both CSS and HTML instead of just HTML–in concerns. You have to remain diligent and not let your controllers
the end it will prove itself a simpler solution. Using a CSS rule become the centerpieces of your application logic.
removes duplicate style information from my application, and I
can easily change the color of all failed questions by modifying a Send your questions and comments to xtrmasp@microsoft.com
single style rule. The CSS approach also allows me to create a rule
July 2009 85
As an example, I want to call out a contrast in two different ver- during an HTTP POST operation and never during an HTTP
sions of Oxite. Oxite is an open source content management sys- GET operation. With an HTTP GET operation, the parameters
tem and blog engine built with the ASP.NET MVC framework and required for the action live in the URL. This means that a mali-
hosted on CodePlex at codeplex.com/oxite/. In an early version of Oxite cious user could send you an e-mail with an image, and point the
(circa December ), the controller action responsible for saving source of the image to a URL that can delete records. You don’t
a blog post was  lines of C# code. The action was responsible for need to push any buttons. You only have to view the image to de-
validating the post, managing the relationship between a post and lete a record. Stephen Walther includes some more details on this
its tags, managing the relationship between a post and its parent, scenario in his blog post, “Don’t use Delete Links because they cre-
deciding if a post is new or an edited version of an existing post ate Security Holes” (stephenwalther.com/blog/archive/2009/01/21/asp.net-
and saving the post to a repository. Contrast your mental image of mvc-tip-46-ndash-donrsquot-use-delete-links-because.aspx).
this code with the current version, as shown in Figure 1.
It is obvious that the Oxite team has spent some time refactor- Plain Views
ing this controller action. All of the validation logic and relation- Views have a clear role when using the MVC design pattern on
ship management hides behind an IPostService object, leaving the the Web. They present the model to the user. However, what does
controller free to focus on its role as a mediator in the request. A “present” mean in this context? Presentation could involve HTML,
good rule of thumb for a controller action is that once an action JavaScript, CSS, JSON, XML, and a heavy dose of server-side
method exceeds  to  lines of code, you should consider refac- code. Views run the risk of becoming complicated with the many
toring to produce a simpler action method. You’ll often find that modes of presentation. Your job is to keep the views as simple as
the extra lines of code in the action represent business logic that possible–not only because the view will be easier to maintain, but
would lead a happier life inside your business objects. also because it is the hardest component to test. One of the biggest
complexity risks in a view is to turn it into “tag soup.”
Controllers and Security “Tag soup” is a programmer’s term for HTML that is difficult for
While our primary focus is on simplicity, I’d be amiss if I didn’t the programmer to read and maintain, and possibly even difficult
introduce some important security-related principles for con- for the Web browser to parse. You look at the source code and it
trollers. First, the controller must play a role in avoiding a Cross- looks like soup–the chef stirred all the ingredients together into
site Request Forgery (CSRF). Phil Haack, program manager of an illegible sea of broth.
the ASP.NET team, recently blogged on this topic at haacked.com/ Avoiding tag soup can be as easy as making sure the HTML in
archive/2009/04/02/anatomy-of-csrf-attack.aspx. He detailed how the con- a Web form view is both well formed and well formatted. Both of
troller and view can work together to avoid an attack. To summarize these steps are easy to achieve in Microsoft Visual Studio, which
Phil’s post: Whenever you have actions that authenticated users can highlights malformed HTML and gives you the option to format
invoke, then you should have the view render an antiforgery token HTML (Ctrl K + Ctrl +D is the shortcut to format an entire docu-
to the client, and have the controller action validate this token by ment). However, there are additional challenges in Web form views,
applying the ValidateAntiForgeryToken attribute. This solution is as demonstrated in the code below.
simple to implement, but you have to remember to put these safe- <% if ((bool)ViewData["isLoggedIn"])
{ %>
guards in place. <img src="<%= ViewData["loggedInImage"] %>" />
Second, controllers should make use of the AcceptVerbs attri- <% }
else
bute to restrict certain actions to the operations of HTTP POST. {%>
A controller action that modifies data, like ultimately deleting an <img src="<%= ViewData["anonymousImage"] %>"
onclick="login();" />
Order from the Orders table, should allow itself to execute only <% }%>
This code is an ugly amalgamation of HTML, C#, JavaScript and
Figure 1 Controller Action string literals. There are, however, a few guidelines we can follow
to avoid such code.
[ActionName("ItemEdit"), AcceptVerbs(HttpVerbs.Post)]
public virtual object SaveEdit(PostAddress postAddress, Post postInput)
{ HTML Helpers
Post post = postService.GetPost(postAddress); HTML helpers are extension methods that you can use inside a
ValidationStateDictionary validationState; view to encapsulate the creation of HTML and hide some simple
presentation logic. In the above code, we have an IF condition that
postService.EditPost(post, postInput, out validationState);
complicates the view. We also have image tags that are complicated
if (!validationState.IsValid) by including server-side code inside the attributes. Rob Conery
{
ModelState.AddModelErrors(validationState); of Microsoft lives by a rule that he explains in his post, “ASP.NET
return Edit(postAddress); MVC: Avoiding Tag Soup” (blog.wekeroad.com/blog/asp-net-mvc-avoiding-
}
tag-soup/). His rule is, “If there’s an IF, make a Helper.” He then goes
return Redirect(Url.Post(postInput)); on to show the implementation of an HTML helper to build for a
}
pager that displays links to navigate a paged list of items.
86 msdn magazine Extreme ASP.NET
HTML helpers can also help you avoid the ugly intermixing of $(function() {
$("#loginImage").click(login);
server-side code and HTML. Fortunately, the MVC framework }
includes a number of helpers. You can find more of them in the
MVC Futures project (codeplex.com/aspnet/) and in the MVC Con- Partial Views
trib project (MVCContrib.org). Make sure to download both to look Another technique you can use to manage the complexity of
at the helpers available in these libraries. Some of them are spe- views is partial views. Partial views in the Web forms view engine
cifically designed to work with strongly typed models, which is are user control files with a .ascx extension. Just like in ASP.NET
our next topic. Web forms, we can use partial views to encapsulate HTML and
code that we might want to reuse across multiple views (like a log-
Strong Typing in display). We can also use partials to break down a complicated
You can put views into one of two categories: those that use the view into smaller pieces.
ViewData dictionary and those that use strong typing. Views that We can strongly type partial views by deriving from System.Web.
use the ViewData dictionary derive from System.Web.Mvc.View- Mvc.ViewUserControl<T>. The add new view wizard in an MVC
Page. Our tag soup example uses this approach. Strongly typed application allows you to select a checkbox to choose between a
view pages derive from System.Web.Mvc.ViewPage<T>, where T view and a partial view, and also allows you to select the strongly
is a generic parameter to specify the type of the model. I recom- typed model (see Figure 2). There is also an HTML helper avail-
mend you use strong typing. able (RenderPartial) in the framework to make the job of using a
Strong typing means your view knows exactly the type of model partial view easy:
you expect it to work with. You don’t have to guess at the magic string <% Html.RenderPartial("_LoginStatus"); %>
required to pull a particular piece of information from the ViewData Views and Logic

dictionary. Instead, you can just use code like Model.Username To paraphrase Albert Einstein, views should be as simple as pos-
and have IntelliSense aid you in the quest for data. In addition, sible, but not simpler. Views that contain any business logic or data
by running the aspnet_compiler, you can catch any errors that access logic violate the spirit of the MVC design pattern and the
you might have made when accessing the model. When you add principle of least surprise. No one familiar with the MVC pattern
a new view to an MVC project, the add view wizard makes it easy will expect to find business logic lurking inside a view.
to select a strongly typed model, so you shouldn’t have problems Although the Web forms view engine allows you to create an
in the setup. In general, you’ll find that removing magic string lit- associated code-behind file for a view, doing so will reduce any
erals and relying on strongly typed views will help you maintain benefits you might have otherwise obtained by using MVC. Code-
your software in the long run. behind can only encourage the practice of putting more logic into a

Unobtrusive JavaScript
We know that CSS and HTML can work together to separate the
structure of a view from the visual presentation of the view in the
browser. I strongly encourage you to use CSS as a tool to maintain
this separation. However, modern Web pages have three primary
concerns: structure, presentation and behavior. JavaScript imple-
ments this behavior. Including JavaScript in your view can be just
as disruptive as including excessive amounts of style information
and server-side code. Pages rich with JavaScript behavior demand
a separation of structure from behavior so you can focus on the
pieces independently.
Unobtrusive JavaScript is the practice of removing all signs of
JavaScript from a view and placing any JavaScript you need into
an external .js file. Unobtrusive JavaScript means you do not have
any JavaScript functions defined in the view, and you do not have
any on-click attributes, including JavaScript code in your HTML.
Libraries like jQuery and Microsoft’s ASP.NET AJAX libraries make
it easy to reach into a page and wire up your events from the out-
side. All you need to do is add a <link> tag in your view to include
the external .js file with your view behaviors.
As an example, the following jQuery code will run after the view
renders on the client to wire up the click event of an element with an
ID of “loginImage” to a JavaScript function by the name of “login.”
No on-click attribute required! Figure 2 Add View Wizard
msdnmagazine.com July 2009 87
view and, even worse, can introduce the traditional ASP.NET page list the patients in a hospital and include some aggregate calculations
life-cycle events to a view. Page life-cycle events are one feature of on their length of stay, such as minimums, maximums, averages and
ASP.NET that the MVC framework tries very hard to hide. standard deviations. If your business logic never makes use of this
information, you’ll find it only clutters your logic with complexity.
Views and Security Remember, our goal is simplicity through separation.
One word on security for views: Make sure you protect your
application from cross-site scripting attacks (XSS) by HTML
encoding output in a view. XSS attacks are enormously popular
these days because they are relatively easy to execute. Although
ViewModels help you gain
not all output technically needs encoding, I suggest you err on the
side of safety and encode everything by default. Fortunately, there
simplicity through separation.
is already an HTML helper available for HTML encoding:
<%= Html.Encode(Model.Message) %> ViewModels are the ideal abstraction to separate and isolate the
In addition, ASP.NET will validate incoming requests by default. model that your view requires from the model that your business
This request validation feature will look for requests containing un- requires. ViewModels introduce some additional work into your
encoded HTML input. If ASP.NET finds such a request, it will throw project, because you’ll need to define additional classes to repre-
an exception before the request reaches a controller. In some cases, sent the ViewModels and map information into the ViewModel
you may find you need to turn this feature off so you can accept properties. However, this additional work can reap tremendous
HTML from the user. You can turn it off on a controller action by benefits because you are free to change your business logic without
using the ValidateRequest attribute, but be careful with the input breaking the presentation layer, and vice versa. Also, you don’t nec-
that arrives. The people behind XSS attacks have become excep- essarily need a ViewModel class for each view. It’s quite common
tionally clever at hiding malicious input inside of requests. to share a ViewModel class among related views.

Big Models Unit Tests


At last we’ve reached a place where we aren’t building simple In this article, we’ve stressed simplicity. The MVC design pattern
pieces of software, right? Doesn’t the model represent our business can guide us toward simplicity, but we still need to follow some prin-
logic with rich behaviors and complex rules? ciples. We also need to keep an eye out for places where complexity
It depends. is creeping into our software. A good technique to discover com-
Technically, the model that the controller hands over to a view plexity is to write unit tests. Unit tests not only help you maintain
doesn’t need any behavior or rules, because the view shouldn’t be the quality of your software, but can also help you spot problems
using any of these rules or behaviors in the model, if they exist. in a design (see Jeremy Miller’s article on designing for testability
The view only wants to pick out data from the model to present at msdn.microsoft.com/en-us/magazine/dd263069.aspx).
to the user. Thus, even though you can use your business objects Perhaps you’ve been reading this article as a longtime Web
as the models for your views, many developers follow the guid- forms programmer who never felt comfortable writing unit tests.
ance of creating view-specific models. We also call these types of It’s possible you found unit tests too difficult to implement in the
models ViewModels. ASP.NET Web forms environment. This is your chance to start!
ViewModels are simple data transfer objects–they contain The ASP.NET MVC team designed the framework with testabil-
all state but no behavior of any significance. In other words, ity in mind. Start simple and let your unit testing experience grow.
you’ll implement lots of properties on your ViewModels but no Eventually, you should find that unit tests are yet another weapon
methods. ViewModels have the pleasant effect of giving a view in the battle against the ever-growing complexity of software. „
exactly the data that it needs to present–no more and no less.
While you optimize the design of your business objects for your
business logic, you optimize the design of your ViewModels for
your views.
Even when you do want to consume business objects from your K. SCOTT ALLEN is a member of the Pluralsight technical staff and founder of
view, many times you will struggle to introduce functionality that OdeToCode. You can reach Scott at scott@OdeToCode.com or read his blog at
your business layer doesn’t require. For example, suppose you need to odetocode.com/blogs/scott.

88 msdn magazine Extreme ASP.NET


WICKED CODE JEFF PROSISE

Taking Silverlight Deep Zoom


to the Next Level
After Silverlight Deep Zoom was introduced to the world at MIX mouse if you pan too quickly. Try it. Take any Deep Zoom app
, the buzz surrounding it persisted for weeks. An outgrowth of created by Deep Zoom Composer and position the mouse cur-
the Seadragon project at Microsoft Live Labs (livelabs.com/seadragon), sor over an identifiable point or pixel in the scene. Then move
Deep Zoom is a Silverlight adaptation of a technology for presenting the mouse quickly back and forth and up and down a few times.
vast amounts of pictorial data to users in a highly bandwidth-efficient Observe that when you stop and the scene springs back to the
manner. Sister adaptations that target Windows Mobile and AJAX cursor position, the cursor is no longer located at the point it was
are available and serve to increase the reach of the platform. when you started. The more and faster you move, the greater the
If you haven’t seen Deep Zoom before, drop what you’re doing difference. It’s not a deal breaker, but try the same experiment on
and visit the canonical Deep Zoom site at memorabilia.hardrock.com. the Hard Rock Memorabilia site and you’ll find that the scene
Use the mouse to pan around in the scene and the mouse wheel to reliably snaps back to the original cursor position no matter how
zoom in and out. Thanks to Deep Zoom, you don’t have to down- hard you try to fool it.
load gigabytes (or terabytes) of imagery to browse the Hard Rock Figure 1 shows how to modify Deep Zoom Composer’s code to
Café’s vast memorabilia collection. Deep Zoom downloads only fix the problem. First, declare two new fields named lastViewpor-
the pixels it needs at the resolution it needs, and in Silverlight, the tOrigin and lastMousePosition in the Page class in Page.xaml.cs.
complexity of Deep Zoom is masked behind a remarkable control (While you’re at it, delete the fields named dragOffset and current-
named MultiScaleImage. Once a Deep Zoom scene is composed Position, because they’re not needed.) Then rewrite the MouseLeft-
(typically with Deep Zoom Composer, which you can download ButtonDown and MouseMove handlers as shown. You’ll find that
for free from go.microsoft.com/fwlink/?LinkId=148861), presenting the
scene in a browser requires little more than declaring a Multi- Figure 1 Fixing Deep Zoom Composer’s Panning Code
ScaleImage control and pointing the control’s Source property to
Point lastViewportOrigin;
Deep Zoom Composer’s output. Supporting interactive panning Point lastMousePosition;
and zooming requires a little mouse-handling code that interacts ...
this.MouseLeftButtonDown += delegate(object sender, MouseButtonEventArgs
with the control, but these days Deep Zoom Composer will even e)
provide that for you. {
mouseButtonPressed = true;
Despite the ease with which a basic Deep Zoom application can mouseIsDragging = false;
be built, you’re missing out on the true richness of Deep Zoom lastViewportOrigin = msi.ViewportOrigin;
lastMousePosition = e.GetPosition(msi);
if you go no further than Deep Zoom Composer takes you. Did };
you know, for example, that you can programmatically manipu-
this.MouseMove += delegate(object sender, MouseEventArgs e)
late the images in a Deep Zoom scene, that you can create meta- {
data and associate it with each image, or that Deep Zoom images if (mouseIsDragging)
{
can come from a database or be composed on the fly? Some of Point pos = e.GetPosition(msi);
the truly remarkable Deep Zoom applications out there rely on a Point origin = lastViewportOrigin;
origin.X += (lastMousePosition.X - pos.X) /
little-known feature of Deep Zoom that adds a whole new dimen- msi.ActualWidth * msi.ViewportWidth;
sion to the platform. origin.Y += (lastMousePosition.Y - pos.Y) /
msi.ActualWidth * msi.ViewportWidth;
If you care to take Silverlight Deep Zoom to the next level, here msi.ViewportOrigin = lastViewportOrigin = origin;
are three ways to do just that. lastMousePosition = pos;
}
};
Fixing Composer’s Panning Logic
First things first: if you want to get more out of Deep Zoom,
the first thing you should know is not to trust the mouse-han- Send your questions and comments for Jeff to wicked@microsoft.com.
dling code emitted by Deep Zoom Composer. The code that pans Code download available at code.msdn.microsoft.com/mag200907DeepZoom.
around the scene in response to MouseMove events “loses” the
90 msdn magazine
the scene snaps back to precisely the original cursor position when The nine images featured in DeepZoomTravelDemo are photos I
you stop moving the mouse, and if you’re as fastidious as I am about snapped on some of my overseas trips. I imported them into Deep
these things, you’ll be able to sleep at night once more. Zoom Composer, arranged them in a grid, and exported the scene
(making sure to select “Export as Collection”). Then I imported the
Accessing Sub-Images and Metadata output from Deep Zoom Composer into a Silverlight project and
You may have noticed that when you export a project from added zooming and panning logic similar to that in the preced-
Deep Zoom Composer, you’re offered the choice of exporting as ing section. To keep the download size manageable (MB versus
a composition or as a collection The latter option comes with one MB), I deleted the bottom two layers of the image pyramid
very desirable benefit: rather than exporting a Deep Zoom scene that Composer generated before I uploaded the app to the MSDN
containing all the images you added lumped together into one Code Gallery. The version that you download works just fine, but
monolithic image, it exports a scene containing individually ad- when you zoom, the images get grainy a lot quicker than they do
dressable sub-images. The sub-images are exposed through the in the original version.
MultiScaleImage control’s SubImages property, and because they Displaying image metadata as DeepZoomTravelDemo does pres-
are individually addressable objects, the sub-images can be ma- ents two challenges to the developer. First, where do you store the
nipulated, animated, and fumigated (just kidding!) to add sparkle metadata, and how do you associate it with images in the scene?
and interactivity to Deep Zoom applications. Second, how do you correlate the items in the MultiScaleImage
Each item in the SubImages collection is an instance of Multi- control’s SubImages collection with images in the scene since
ScaleSubImage, which derives from DependencyObject and includes the MultiScaleSubImage class provides no information relating
the properties AspectRatio, Opacity, ZIndex, ViewportOrigin, and the two?
ViewportWidth. The latter two combine to determine a sub-image’s The first task—storing the metadata—is accomplished by enter-
size and position in a Deep Zoom scene. Be aware that when a ing a text string into the Tag box displayed in the lower right cor-
MultiScaleImage control first loads, its SubImages property is empty. ner of Deep Zoom Composer when an image is selected. I used it
Your first opportunity to iterate over the sub-images is when the to store each image’s title and description, separated by plus signs.
control fires its ImageOpenSucceeded event. Composer writes the tags to the Metadata.xml file created when
One use for the SubImages property is to hit-test individual you export the project. Each image in the scene is represented by
images in order to display metadata—titles, descriptions, and so an <Image> element in Metadata.xml, and each <Image> element
forth—in response to clicks or mouseovers. Another use for it is contains a sub-element named <Tag> that contains the correspond-
to programmatically rearrange the images in a Deep Zoom scene. ing tag. Figure 3 shows the <Image> element written into Metadata.
The DeepZoomTravelDemo application shown in Figure 2 dem- xml for the image in the upper left corner of the scene. Composer’s
onstrates how to do both. When you position the mouse over one tag editing interface is somewhat clumsy since the Tag box is so
of the images in the scene, a partially transparent information panel small, but you can always edit the Metadata.xml file manually as I
appears on the right containing an image title and description. And did to tag each image with a title and description.
when you click the Shuffle button in the upper left corner, the im- It would be great if the MultiScaleSubImage class had a Tag
ages rearrange themselves in random order. property that was automatically initialized with the content of the
<Tag> element; but it doesn’t, so you have to improvise. First, you
can write a bit of code that downloads Metadata.xml and parses
the tags from it. Second, you can use the <ZOrder> elements in
Metadata.xml to correlate <Image> elements with images in the

Figure 3 An <Image> Element in Metadata.xml


<Image>
<FileName>
C:\Users\Jeff\Documents\Expression\Deep Zoom Composer
Projects\DeepZoomTravelDemo\source images\great wall of china.jpg
</FileName>
<x>0</x>
<y>0</y>
<Width>0.316957210776545</Width>
<Height>0.313807531380753</Height>
<ZOrder>1</ZOrder>
<Tag>
Great Wall of China+The Great Wall of China near Badaling, about an
hour
north of Beijing. This portion of the Great Wall has been restored
and
offers outstanding views of the surrounding mountains.
</Tag>
</Image>
Figure 2 DeepZoomTravelDemo
msdnmagazine.com July 2009 91
If the scene contains nine images (and the MultiScaleImage control's SubImages collection therefore contains nine MultiScaleSubImage objects), SubImages[0] corresponds to the image whose <ZOrder> is 1, SubImages[1] corresponds to the image whose <ZOrder> is 2, and so on.

DeepZoomTravelDemo uses this correlation to store image titles and descriptions. At startup, the Page constructor uses a WebClient object to initiate an asynchronous download of Metadata.xml from the server's ClientBin folder (see Figure 4). When the download is complete, the WebClient_OpenReadCompleted method parses the downloaded XML with an XmlReader and initializes the field named _Metadata with an array of SubImageInfo objects containing information about the images in the scene, including titles and descriptions. The class is shown here:

public class SubImageInfo
{
  public string Caption { get; set; }
  public string Description { get; set; }
  public int Index { get; set; }
}

The <ZOrder> values read from Metadata.xml are used to order the SubImageInfo objects in the _Metadata array, ensuring that the order of the items in the _Metadata array is identical to the order of the items in MultiScaleImage's SubImages collection. In other words, _Metadata[0] contains the title and description for SubImages[0], _Metadata[1] contains the title and description for SubImages[1], and so on. Incidentally, I used XmlReader rather than LINQ to XML to avoid increasing the size of the XAP file by introducing an extra assembly required by LINQ to XML (System.Xml.Linq.dll).

Now that _Metadata is initialized with SubImageInfo objects containing titles and descriptions, the next step is to write the code to display the titles and descriptions. That happens in Figure 5. The MouseMove handler that pans the Deep Zoom scene if the left mouse button is down behaves differently if the left mouse button is up: it hit-tests the scene to determine whether the cursor is currently over one of the sub-images. Hit testing is performed by the helper method named GetSubImageIndex, which returns -1 if the cursor isn't over a sub-image or a 0-based image index if it is. That index identifies both a sub-image in MultiScaleImage.SubImages and a SubImageInfo object in _Metadata. A few lines of code copy the title and description from the SubImageInfo object to a pair of TextBlocks, and one more line of code triggers an animation that displays the information panel if it's not already displayed. Note that GetSubImageIndex checks the sub-images for hits in reverse order since the final sub-image in the MultiScaleImage control's SubImages collection is highest in the Z-order, the next-to-last sub-image is second highest in the Z-order, and so on.

In addition to supporting mouseovers, DeepZoomTravelDemo lets you rearrange the images in the scene. If you haven't already, try clicking the Shuffle button in the upper left corner of the scene. (In fact, click it several times; the images will assume a different order each time.) The rearranging is performed by the Shuffle method in Figure 6, which creates an array containing all the images' ViewportOrigins, reorders the array using a random-number generator, and then creates a Storyboard and a series of PointAnimations to move the sub-images to the positions contained in the reordered array. The key here is that the MultiScaleImage control's SubImages property exposes the sub-images to your code, and you can modify a sub-image's ViewportOrigin property to change its position in the scene.
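Incidentally, if XAP size were not a concern, the same Metadata.xml parsing could be written with LINQ to XML. The following is only a sketch of that alternative (it is not part of DeepZoomTravelDemo); it assumes references to System.Linq, System.Xml and System.Xml.Linq, and the element names come from the Metadata.xml shown in Figure 3:

// Sketch only: a LINQ to XML version of the Metadata.xml parsing.
// It requires System.Xml.Linq.dll, which is exactly the extra assembly
// the XmlReader-based code avoids in order to keep the XAP small.
private SubImageInfo[] ParseMetadata(System.IO.Stream stream)
{
  XDocument doc = XDocument.Load(XmlReader.Create(stream));

  return doc.Descendants("Image")
    .Select(img =>
    {
      // <Tag> holds "title+description", as shown in Figure 3
      string[] parts =
        (((string)img.Element("Tag")) ?? String.Empty).Split('+');

      return new SubImageInfo
      {
        Index = (int)img.Element("ZOrder"),
        Caption = parts[0],
        Description = parts.Length > 1 ? parts[1] : String.Empty
      };
    })
    .OrderBy(info => info.Index) // lines up with the SubImages ordering
    .ToArray();
}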

Figure 4 Downloading Metadata.xml and Correlating Metadata with Sub-Images

private SubImageInfo[] _Metadata;
...
public Page()
{
  InitializeComponent();

  // Register mousewheel event handler
  HtmlPage.Window.AttachEvent("DOMMouseScroll", OnMouseWheelTurned);
  HtmlPage.Window.AttachEvent("onmousewheel", OnMouseWheelTurned);
  HtmlPage.Document.AttachEvent("onmousewheel", OnMouseWheelTurned);

  // Fetch Metadata.xml from the server
  WebClient wc = new WebClient();
  wc.OpenReadCompleted += new
    OpenReadCompletedEventHandler(WebClient_OpenReadCompleted);
  wc.OpenReadAsync(new Uri("Metadata.xml", UriKind.Relative));
}

private void WebClient_OpenReadCompleted(object sender,
  OpenReadCompletedEventArgs e)
{
  if (e.Error != null)
  {
    MessageBox.Show("Unable to load XML metadata");
    return;
  }

  // Create a collection of SubImageInfo objects from Metadata.xml
  List<SubImageInfo> images = new List<SubImageInfo>();

  try
  {
    XmlReader reader = XmlReader.Create(e.Result);
    SubImageInfo info = null;

    while (reader.Read())
    {
      if (reader.NodeType == XmlNodeType.Element &&
        reader.Name == "Image")
        info = new SubImageInfo();
      else if (reader.NodeType == XmlNodeType.Element &&
        reader.Name == "ZOrder")
        info.Index = reader.ReadElementContentAsInt();
      else if (reader.NodeType == XmlNodeType.Element &&
        reader.Name == "Tag")
      {
        string[] substrings =
          reader.ReadElementContentAsString().Split('+');
        info.Caption = substrings[0];
        if (substrings.Length > 1)
          info.Description = substrings[1];
        else
          info.Description = String.Empty;
      }
      else if (reader.NodeType == XmlNodeType.EndElement &&
        reader.Name == "Image")
        images.Add(info);
    }
  }
  catch (XmlException)
  {
    MessageBox.Show("Error parsing XML metadata");
  }

  // Populate the _Metadata array with ordered data
  _Metadata = new SubImageInfo[images.Count];

  foreach (SubImageInfo image in images)
    _Metadata[image.Index - 1] = image;
}





Figure 5 Hit-Testing the Sub-Images

private int _LastIndex = -1;
...
private void MSI_MouseMove(object sender, MouseEventArgs e)
{
  if (_Dragging)
  {
    // If the left mouse button is down, pan the Deep Zoom scene
    ...
  }
  else
  {
    // If the left mouse button isn't down, update the infobar
    if (_Metadata != null)
    {
      int index = GetSubImageIndex(e.GetPosition(MSI));

      if (index != _LastIndex)
      {
        _LastIndex = index;

        if (index != -1)
        {
          Caption.Text = _Metadata[index].Caption;
          Description.Text = _Metadata[index].Description;
          FadeIn.Begin();
        }
        else
        {
          FadeOut.Begin();
        }
      }
    }
  }
}

private int GetSubImageIndex(Point point)
{
  // Hit-test each sub-image in the MultiScaleImage control to determine
  // whether "point" lies within a sub-image
  for (int i = MSI.SubImages.Count - 1; i >= 0; i--)
  {
    MultiScaleSubImage image = MSI.SubImages[i];
    double width = MSI.ActualWidth /
      (MSI.ViewportWidth * image.ViewportWidth);
    double height = MSI.ActualWidth /
      (MSI.ViewportWidth * image.ViewportWidth * image.AspectRatio);

    Point pos = MSI.LogicalToElementPoint(new Point(
      -image.ViewportOrigin.X / image.ViewportWidth,
      -image.ViewportOrigin.Y / image.ViewportWidth));
    Rect rect = new Rect(pos.X, pos.Y, width, height);

    if (rect.Contains(point))
    {
      // Return the image index
      return i;
    }
  }

  // No corresponding sub-image
  return -1;
}

Figure 6 Shuffling the Sub-Images

private void Shuffle()
{
  // Create a randomly ordered list of sub-image viewport origins
  List<Point> origins = new List<Point>();

  foreach (MultiScaleSubImage image in MSI.SubImages)
    origins.Add(image.ViewportOrigin);

  Random rand = new Random();
  int count = origins.Count;

  for (int i = 0; i < count; i++)
  {
    Point origin = origins[i];
    origins.RemoveAt(i);
    origins.Insert(rand.Next(count), origin);
  }

  // Create a Storyboard and animations for shuffling
  Storyboard sb = new Storyboard();

  for (int i = 0; i < count; i++)
  {
    PointAnimation animation = new PointAnimation();
    animation.Duration = TimeSpan.FromMilliseconds(250);
    animation.To = origins[i];
    Storyboard.SetTarget(animation, MSI.SubImages[i]);
    Storyboard.SetTargetProperty(animation,
      new PropertyPath("ViewportOrigin"));
    sb.Children.Add(animation);
  }

  // Run the animations
  sb.Begin();
}

As I researched this column, I found several blog entries that provided helpful information. One was Jaime Rodriguez's "Working with Collections in Deep Zoom" (go.microsoft.com/fwlink/?LinkId=148862). Another was "Deep Zoom Composer—Filtering by Tag Sample" (blog.kirupa.com/?p=212), which was written by a member of the Expression Blend team and presents a technique for filtering Deep Zoom images based on image tags. Implementing mouseovers, rearranging the images in a scene, and filtering images based on tag data are but a few of the features made possible by the ability to address the individual sub-images in a Deep Zoom scene and associate metadata with them.

Dynamic Deep Zoom: Supplying Image Pixels at Run Time
Deep Zoom Composer's export feature generates all the data needed by a MultiScaleImage control. That data includes an XML file (dzc_output.xml) that references other XML files, which in turn reference the individual images in the scene. Composer's output also includes hundreds (sometimes thousands) of image tiles generated from those images. The tiles form an image pyramid, with each level of the pyramid containing a tiled version of the original image and each level representing a different resolution.
The level at the top of the pyramid, for example, might contain a single tile with a 256x256 rendition of the image. The next level down would contain four 256x256 tiles which, put together, form a 512x512 version of the image. The next level down would contain sixteen 256x256 tiles representing different parts of a 1,024x1,024 image, and so on. Deep Zoom Composer generates as many levels as necessary to depict the original image in its native resolution. As a user zooms and pans in a Deep Zoom scene, the MultiScaleImage control is constantly firing off HTTP requests to the server to fetch image tiles at the proper resolution. It also does some slick blending work to smooth the transition from one level to another.

What you probably don't realize about the MultiScaleImage control is that it doesn't require Deep Zoom Composer. Composer is really just a tool for quickly and easily creating Deep Zoom projects that incorporate scenes built from static images. As an alternative to providing MultiScaleImage with static content, you can generate content at run time in response to requests from MultiScaleImage and download that content to the client.

Why would you ever need to generate Deep Zoom content at run time? Developers ask me how to do this all the time. "Is it possible to supply image data to Deep Zoom dynamically?" The reason is that it enables a whole new genre of Deep Zoom applications that fetch image tiles from databases and that generate image tiles on the fly.

Want an example? Check out the Deep Earth project at codeplex.com/deepearth and an example of Deep Earth at work at deepearth.soulsolutions.com.au/. Deep Earth is referred to as a mapping control powered by the combination of Microsoft's Silverlight platform and the DeepZoom (MultiScaleImage) control. In other words, it's a control you can drop into a Silverlight application to expose the vast amount of geographic data available from Microsoft Virtual Earth through a Deep Zoom front end. You can start in outer space and zoom all the way down to your front lawn. And the zooming is amazingly smooth, thanks to the work done behind the scenes by MultiScaleImage and the Deep Zoom runtime.

Deep Earth isn't driven by a bunch of XML files and image tiles output by Deep Zoom Composer; it supplies image tiles to the MultiScaleImage control dynamically, and it fetches image tiles from Virtual Earth. Users of Deep Earth refer to this as "dynamic Deep Zoom."

The application depicted in Figure 7 demonstrates the basics of dynamic Deep Zoom. MandelbrotDemo provides a Deep Zoom window into the Mandelbrot set—probably the most famous fractal in the world. The Mandelbrot set is infinitely complex, which means that you can zoom in forever and the level of detail will never decrease. Mandelbrot viewers are common in the software world, but few are as slick as the one that uses Deep Zoom. Try it; run MandelbrotDemo and zoom in on some of the swirling regions at the edge of the Mandelbrot set (at the boundary between black and bright colors). You can't zoom in forever because even a dynamic Deep Zoom scene has a finite width and height, but the scene's dimensions can be very, very large (up to billions of pixels per side).

The first step in implementing dynamic Deep Zoom is to derive from Silverlight's MultiScaleTileSource class, which is found in the System.Windows.Media namespace of System.Windows.dll, and override the GetTileLayers method. Each time the MultiScaleImage control needs a tile, it calls GetTileLayers. Your job is to create an image tile and return it to the MultiScaleImage control by adding it to the IList passed in GetTileLayers' parameter list. Other parameters input to GetTileLayers specify the zoom level (literally, the level of the image pyramid from which tiles are being requested) and the X and Y position within that level of the pyramid of the tile that is being requested. Just as X, Y, and Z values are sufficient to identify a point in 3D coordinate space, an X value, a Y value, and a level uniquely identify an image tile in a Deep Zoom image pyramid.

Figure 8 MultiScaleTileSource Derivative

public class MandelbrotTileSource : MultiScaleTileSource
{
  private int _width;  // Tile width
  private int _height; // Tile height

  public MandelbrotTileSource(int imageWidth, int imageHeight,
    int tileWidth, int tileHeight) :
    base(imageWidth, imageHeight, tileWidth, tileHeight, 0)
  {
    _width = tileWidth;
    _height = tileHeight;
  }

  protected override void GetTileLayers(int level, int posx, int posy,
    IList<object> sources)
  {
    string source = string.Format(
      "http://localhost:50216/MandelbrotImageGenerator.ashx?" +
      "level={0}&x={1}&y={2}&width={3}&height={4}",
      level, posx, posy, _width, _height);

    sources.Add(new Uri(source, UriKind.Absolute));
  }
}
Figure 7 Two Views of the Mandelbrot Set
Figure 8 shows the MultiScaleTileSource-derived class featured in MandelbrotDemo. The GetTileLayers override does little more than submit an HTTP request for the image tile to the server. The endpoint for the request is an HTTP handler named MandelbrotImageGenerator.ashx. Before we examine the handler, however, let's see how MandelbrotTileSource is wired up to a MultiScaleImage control.

Figure 9 Registering a Deep Zoom Tile Source

public Page()
{
  InitializeComponent();

  // Point MultiScaleImage control to dynamic tile source
  MSI.Source = new MandelbrotTileSource((int)Math.Pow(2, 30),
    (int)Math.Pow(2, 30), 128, 128);

  // Register mousewheel event handler
  HtmlPage.Window.AttachEvent("DOMMouseScroll", OnMouseWheelTurned);
  HtmlPage.Window.AttachEvent("onmousewheel", OnMouseWheelTurned);
  HtmlPage.Document.AttachEvent("onmousewheel", OnMouseWheelTurned);
}

Figure 9 shows an excerpt from MandelbrotDemo's Page.xaml.cs file—specifically, the XAML code-behind class's constructor. The key statement is the one that creates a MandelbrotTileSource object and assigns a reference to it to the MultiScaleImage control's Source property. For static Deep Zoom, you set Source to the URI of dzc_output.xml. For dynamic Deep Zoom, you point it to a MultiScaleTileSource object instead. The MandelbrotTileSource object created here specifies that the image being served up measures 2^30 pixels on each side and is divided into 128x128-pixel tiles.

The work of generating the image tiles is performed by MandelbrotImageGenerator.ashx back on the server (see Figure 10). After retrieving input parameters from the query string, it creates a bitmap depicting the requested tile and writes the image bits into the HTTP response. DrawMandelbrotTile does the pixel generation. When called, it converts the X-Y-level value identifying the image tile that was requested into coordinates in the complex plane (a mathematical plane in which real numbers are graphed along the X axis and imaginary numbers—numbers that incorporate the square root of -1—are graphed along the Y axis). Then it iterates through all the points in the complex plane that correspond to pixels in the image tile, checking each point to determine whether it belongs to the Mandelbrot set and assigning the corresponding pixel a color representing its relationship to the Mandelbrot set (more on this in a moment).
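To make the level/x/y addressing concrete, here is a small sketch (mine, not part of the sample code) that uses the same conventions as DrawMandelbrotTile in Figure 10: at a given level the virtual image is 2^level pixels on a side, so the number of tiles per side is 2^level divided by the tile size, and a tile's x/y position selects a slice of the region of the complex plane from -2.0 - 1.5i to 1.0 + 1.5i:

// Sketch: how a (level, x, y) tile request maps onto the scene and onto the
// complex plane, mirroring the cx/cy and r0/i0 math in Figure 10.
static void DescribeTile(int level, int posx, int posy, int tileSize)
{
  // Number of tiles in each direction at this pyramid level
  int tilesPerSide = Math.Max(1, (int)Math.Pow(2, level) / tileSize);

  // Upper-left corner of the tile in the complex plane
  // (the handler renders the square from -2.0 - 1.5i to 1.0 + 1.5i)
  double r0 = -2.0 + (3.0 * posx / tilesPerSide);
  double i0 = -1.5 + (3.0 * posy / tilesPerSide);

  Console.WriteLine(
    "Level {0}: {1}x{1} tiles; tile ({2},{3}) starts at {4} + {5}i",
    level, tilesPerSide, posx, posy, r0, i0);
}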

Figure 10 HTTP Handler for Generating Deep Zoom Image Tiles


public class MandelbrotImageGenerator : IHttpHandler
{
  private const int _max = 128;      // Maximum number of iterations
  private const double _escape = 4;  // Escape value squared

  public void ProcessRequest(HttpContext context)
  {
    // Grab input parameters
    int level = Int32.Parse(context.Request["level"]);
    int x = Int32.Parse(context.Request["x"]);
    int y = Int32.Parse(context.Request["y"]);
    int width = Int32.Parse(context.Request["width"]);
    int height = Int32.Parse(context.Request["height"]);

    // Generate the bitmap
    Bitmap bitmap = DrawMandelbrotTile(level, x, y, width, height);

    // Set the response's content type to image/jpeg
    context.Response.ContentType = "image/jpeg";

    // Write the image to the HTTP response
    bitmap.Save(context.Response.OutputStream, ImageFormat.Jpeg);

    // Clean up and return
    bitmap.Dispose();
  }

  public bool IsReusable
  {
    get { return true; }
  }

  private Bitmap DrawMandelbrotTile(int level, int posx, int posy,
    int width, int height)
  {
    // Create a bitmap to represent the requested tile
    Bitmap tile = new Bitmap(width, height);

    // Compute the number of tiles in each direction at this level
    int cx = Math.Max(1, (int)Math.Pow(2, level) / width);
    int cy = Math.Max(1, (int)Math.Pow(2, level) / height);

    // Compute starting values for real and imaginary components
    // (from -2.0 - 1.5i to 1.0 + 1.5i)
    double r0 = -2.0 + (3.0 * posx / cx);
    double i0 = -1.5 + (3.0 * posy / cy);

    // Compute increments for real and imaginary components
    double dr = (3.0 / cx) / (width - 1);
    double di = (3.0 / cy) / (height - 1);

    // Iterate by row and column checking each pixel for
    // inclusion in the Mandelbrot set
    for (int x = 0; x < width; x++)
    {
      double cr = r0 + (x * dr);

      for (int y = 0; y < height; y++)
      {
        double ci = i0 + (y * di);
        double zr = cr;
        double zi = ci;
        int count = 0;

        while (count < _max)
        {
          double zr2 = zr * zr;
          double zi2 = zi * zi;

          if (zr2 + zi2 > _escape)
          {
            tile.SetPixel(x, y,
              ColorMapper.GetColor(count, _max));
            break;
          }

          zi = ci + (2.0 * zr * zi);
          zr = cr + zr2 - zi2;
          count++;
        }

        if (count == _max)
          tile.SetPixel(x, y, Color.Black);
      }
    }

    // Return the bitmap
    return tile;
  }
}



Making Dynamic Deep Zoom Even Better
I have been fascinated by fractals ever since I discovered them some 20 years ago. I wrote my first Mandelbrot viewer in the early 1990s, if memory serves me correctly. My bookshelf of oldie-but-goodie computer books still contains a pristine copy of a book titled Fractal Image Compression by Barnsley and Hurd that I used in a research project on data compression in the mid-90s. And one of my favorite books of all time is Chaos by James Gleick (Penguin, 2008).

Building an interactive and visually compelling Mandelbrot viewer for browsers is something I've wanted to do since the first day I laid eyes on Silverlight. Dynamic Deep Zoom made it possible. The downside, of course, is that the images are generated on the server and downloaded to the client, leading to unwanted latency and also increasing the load on the server.

Silverlight 2 doesn't include an API for generating bitmaps on the client, but you can generate them anyway using Joe Stegman's Silverlight PNG encoder (from his blog at go.microsoft.com/fwlink/?LinkId=148864). Minh Nguyen used it to build Mandelbrot Explorer, which you can read all about at his blog at go.microsoft.com/fwlink/?LinkId=148865. Silverlight 3, which will be in beta by the time you read this, has a bitmap API, but the problem remains that Deep Zoom wants to pull images from the server. It is unclear at the moment whether the next version of Deep Zoom will have a client-side story, but if it does, you can bet that I'll be revising MandelbrotDemo for Silverlight 3 to work entirely on the client.

Sadly, there is virtually no documentation on Silverlight's MultiScaleTileSource class. Lest you think I'm a genius for figuring all this out (anyone that knows me will attest that I am not), let me give credit where credit is due. As I wrestled with the meaning of the input parameters and how to map Deep Zoom X-Y-level values to the complex plane, I found an excellent blog post by Mike Ormond at go.microsoft.com/fwlink/?LinkId=148863. His post provided key insights into dynamic Deep Zoom and also referenced another blog post at warp.povusers.org/Mandelbrot/ that describes an efficient approach to computing the Mandelbrot set. My work was probably halved by work others had done before me.

One final note on my implementation: virtually every application that renders the Mandelbrot set uses a different color scheme. I chose a scheme that assigns pixels representing coordinates that belong to the Mandelbrot set black, and pixels representing coordinates outside the Mandelbrot set RGB colors. The further a coordinate lies from the Mandelbrot set, the "cooler" or bluer the color; the closer it lies to the Mandelbrot set, the "hotter" the color. Distance from the Mandelbrot set is determined by how rapidly the point escapes to infinity. In the code here, this is the number of iterations it takes DrawMandelbrotTile's while loop to determine that the point is not part of the Mandelbrot set. The fewer the iterations, the further the point lies from the set of points that make up the Mandelbrot set. I factored the code that generates an RGB color value from the iteration count into a separate class named ColorMapper (Figure 11). If you want to experiment with different color schemes, simply modify the GetColor method. You can see the results of a gray-scale rendering by replacing its body with the following:

int val = (count * 255) / max;
return Color.FromArgb(val, val, val);

Figure 11 ColorMapper Class

public class ColorMapper
{
  public static Color GetColor(int count, int max)
  {
    int h = max >> 1; // Divide max by 2
    int q = max >> 2; // Divide max by 4
    int r = (count * 255) / max;
    int g = ((count % h) * 255) / h;
    int b = ((count % q) * 255) / q;

    return Color.FromArgb(r, g, b);
  }
}

DeepZoomTools.dll
A final tidbit of information regarding Deep Zoom that you might find useful involves an assembly named DeepZoomTools.dll. Deep Zoom Composer uses this assembly to generate tiled images and metadata from the scenes that you build. In theory, you could use it to build composition tools of your own. I say "in theory" because there's precious little out there in terms of documentation. Find out more about DeepZoomTools.dll at go.microsoft.com/fwlink/?LinkId=148866. And shoot me an e-mail if you come up with some unique, creative uses for Deep Zoom but aren't quite sure how to make it do what you want it to do.

JEFF PROSISE is a contributing editor to MSDN Magazine and the author of several books, including Programming Microsoft .NET (Microsoft Press, 2002). He's also cofounder of Wintellect (www.wintellect.com), a software consulting and education firm that specializes in Microsoft .NET. Have a comment on this column? Contact Jeff at wicked@microsoft.com.
JUVAL LOWY FOUNDATIONS

Securing the .NET Service Bus

In my April 2009 column, I presented the .NET Services Bus and described how you can utilize relay bindings to connect your application and customers across almost all network boundaries. However, if just anyone were allowed to relay messages to your service, or if any service could receive your client calls, the relay service would be a dangerous proposition. You need to protect the transfer of the message from the client to the service via the relay service. In this column, I show you how to secure the .NET Services Bus and also provide some helper classes and utilities to automate many of the details.

The .NET Services Bus mandates that the service must always authenticate itself to receive relayed messages. Clients, on the other hand, may or may not authenticate themselves. Typically (and by default), clients do authenticate, but the relayed service may decide to waive the client's .NET Services Bus authentication.

The .NET Services Bus offers three different authentication mechanisms—CardSpace, password, or certificate—and it is up to the solution administrator to associate these using the solution page shown in Figure 1.

Figure 1 Configuring Solution Authentication Options

A single solution can support multiple authentication options, and for each option the administrator can add multiple credentials. For example, the administrator might configure three passwords, two cards, and a single certificate. Presenting any one of these credentials is enough to authenticate against the relay service. Also, the service and the client can use different authentication methods. For example, the service can use a password, and the client a certificate.

Send your questions and comments to mmnet30@microsoft.com.
Code download available at code.msdn.microsoft.com/mag200907Foundations.

Configuring Authentication
The enum TransportClientCredentialType, shown here, represents the available credential options:

public enum TransportClientCredentialType
{
  CardSpace,
  UserNamePassword,
  X509Certificate,
  Unauthenticated,
  FederationViaCardSpace,
  AutomaticRenewal
}

In TransportClientCredentialType, Client refers to a client of the .NET Services Bus—that is, both the client and the relayed service.

The preferred authentication mechanism and the credentials themselves are configured using an endpoint behavior called TransportClientEndpointBehavior, defined in Figure 2. Using an endpoint behavior (as opposed to a service behavior) provides two advantages. First, a service host can choose a different authentication mechanism for each endpoint. Second, an endpoint behavior offers a unified programming model for the client and the service because the client has only endpoint behaviors.

CardSpace Authentication
CardSpace is the default credential type used by all relay bindings. When the client or the host uses CardSpace authentication, the user is prompted to provide a card on opening either the host or the proxy. After the user provides the card, it is bundled in the connection request message to the relay service. Obviously, such an approach is best suited for interactive applications. These prompts, however, will annoy the user if they occur every time the user opens a new proxy or host. Fortunately, the credential is cached in the app domain, and the user is not prompted again as long as he or she accesses the same endpoint address.
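Because TransportClientEndpointBehavior is an endpoint behavior, you can opt a single endpoint into CardSpace while the host's other endpoints use something else. A minimal sketch (MyService and the endpoint layout are placeholders, not the article's sample code):

TransportClientEndpointBehavior cardSpaceBehavior =
  new TransportClientEndpointBehavior();
cardSpaceBehavior.CredentialType = TransportClientCredentialType.CardSpace;

ServiceHost host = new ServiceHost(typeof(MyService));

// Attach the behavior to just one endpoint; the host's other endpoints
// could carry behaviors with a different credential type.
host.Description.Endpoints[0].Behaviors.Add(cardSpaceBehavior);
host.Open();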
Figure 2 TransportClientEndpointBehavior

public class TransportClientEndpointBehavior : IEndpointBehavior
{
  public TransportClientCredentials Credentials
  {get;}
  public TransportClientCredentialType CredentialType
  {get;set;}
}
public class TransportClientCredentials
{
  public X509CertificateCredential ClientCertificate
  {get;}
  public UserNamePasswordCredential UserName
  {get;}
}

Password Authentication
Like CardSpace authentication, password authentication is best suited for an interactive application, typically in conjunction with a login dialog box. However, there is no need to prompt the user to enter a user name because that is always the solution name. After the user provides the password, you need to programmatically provide the solution name and the password to the TransportClientEndpointBehavior.

On the host side, you need to instantiate a new TransportClientEndpointBehavior object and set the CredentialType property to TransportClientCredentialType.UserNamePassword. The credentials themselves are provided to the Credentials property. You then add this behavior to every endpoint of the host that uses the relay service, as shown in Figure 3.

Figure 3 Providing the Host with the Solution Password

TransportClientEndpointBehavior behavior =
  new TransportClientEndpointBehavior();
behavior.CredentialType = TransportClientCredentialType.UserNamePassword;
behavior.Credentials.UserName.UserName = "MySolution";
behavior.Credentials.UserName.Password = "MyPassword";

ServiceHost host = new ServiceHost(typeof(MyService));

foreach(ServiceEndpoint endpoint in host.Description.Endpoints)
{
  endpoint.Behaviors.Add(behavior);
}
host.Open();

You can encapsulate and automate the steps in Figure 3 by using extension methods such as the SetServiceBusPassword methods of my ServiceBusHelper static class, shown here:

public static class ServiceBusHelper
{
  public static void SetServiceBusPassword(this ServiceHost host,
    string password);
  public static void SetServiceBusPassword(this ServiceHost host,
    string solution,string password);
}

Using these extensions, Figure 3 is condensed to the following:

ServiceHost host = new ServiceHost(typeof(MyService));
host.SetServiceBusPassword("MyPassword");
host.Open();

You can see an implementation of the SetServiceBusPassword methods without error handling in the sample code that accompanies this article.

ServiceBusHelper defines the helper private method SetBehavior, which accepts a collection of endpoints and assigns a provided TransportClientEndpointBehavior object to all endpoints in the collection. The private SetServiceBusPassword helper methods accept a collection of endpoints, the solution password, and optionally the solution name. If the solution name is not specified, SetServiceBusPassword extracts it from the address of the first endpoint. SetServiceBusPassword then creates a TransportClientEndpointBehavior, configures it to use the password, and calls SetBehavior. The public SetServiceBusPassword methods simply call the private ones with the collection of endpoints of the host.

The client needs to follow similar steps, except there is only one endpoint to configure—the one the proxy is using, as shown in Figure 4.

Figure 4 Setting the Solution Password on the Proxy

TransportClientEndpointBehavior behavior =
  new TransportClientEndpointBehavior();
behavior.CredentialType = TransportClientCredentialType.UserNamePassword;
behavior.Credentials.UserName.UserName = "MySolution";
behavior.Credentials.UserName.Password = "MyPassword";

MyContractClient proxy = new MyContractClient();
proxy.Endpoint.Behaviors.Add(behavior);

proxy.MyMethod();
proxy.Close();

Again, you should encapsulate this repetitive code with extension methods and offer similar support for working with class factories. Using these extensions, Figure 4 is condensed to this code:

MyContractClient proxy = new MyContractClient();
proxy.SetServiceBusPassword("MyPassword");
proxy.MyMethod();
proxy.Close();

Figure 5 shows the implementation of two of the client-side SetServiceBusPassword<T> extensions. Note the use of the SetServiceBusPassword helper methods by wrapping the single endpoint the proxy has with a collection of endpoints.

Figure 5 Implementing SetServiceBusPassword<T>

public static class ServiceBusHelper
{
  public static void SetServiceBusPassword<T>(this ClientBase<T> proxy,
    string password) where T : class
  {
    if(proxy.State == CommunicationState.Opened)
    {
      throw new InvalidOperationException("Proxy is already opened");
    }
    proxy.ChannelFactory.SetServiceBusPassword(password);
  }
  public static void SetServiceBusPassword<T>(this ChannelFactory<T> factory,
    string password) where T : class
  {
    if(factory.State == CommunicationState.Opened)
    {
      throw new InvalidOperationException("Factory is already opened");
    }
    Collection<ServiceEndpoint> endpoints =
      new Collection<ServiceEndpoint>();
    endpoints.Add(factory.Endpoint);
    SetServiceBusPassword(endpoints,password);
  }
  //More members
}
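Because Figure 5 also defines the extension for ChannelFactory<T>, the same one-liner works when you create proxies from a class factory. A short usage sketch (the endpoint name "MyEndpoint" is a placeholder):

ChannelFactory<IMyContract> factory =
  new ChannelFactory<IMyContract>("MyEndpoint");
factory.SetServiceBusPassword("MyPassword");

IMyContract proxy = factory.CreateChannel();
proxy.MyMethod();

// Close the channel and the factory when done
((ICommunicationObject)proxy).Close();
factory.Close();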



Because TransportClientEndpointBehavior is just another endpoint behavior, you can also configure it in the config file. Storing the password in a text config file, however, is highly inadvisable.

Certificate Authentication
Using a certificate to authenticate against the .NET Service Bus is the best option for noninteractive applications, and you can set the certificate both programmatically and in the config file.

The main hurdle in using certificates is that the certificate must contain a private key, which entails an elaborate setup sequence by the solution administrator. However, once the solution is configured to use the certificate, the rest of the work is straightforward.

Both the client and the service host use identical configuration entries, as shown in Figure 6.

To programmatically provide the certificate on the host side, you need to follow steps similar to those shown earlier in Figure 3. Besides setting the credentials type to TransportClientCredentialType.X509Certificate, you need to use the SetCertificate method of the ClientCertificate property on Credentials. Here again, you can streamline it using host extension methods, as shown here:

public static class ServiceBusHelper
{
  public static void SetServiceBusCertificate(this ServiceHost host);
  public static void SetServiceBusCertificate(this ServiceHost host,
    string subjectName);
  public static void SetServiceBusCertificate(this ServiceHost host,
    object findValue,StoreLocation location,
    StoreName storeName,X509FindType findType);
  //More members
}

For example:

ServiceHost host = new ServiceHost(typeof(MyService));
host.SetServiceBusCertificate("MyRelayCert");
host.Open();

SetServiceBusCertificate defaults the store location to LocalMachine and the store name to My, and it looks up the certificate by its subject name. SetServiceBusCertificate also extracts the solution name from the host endpoints.

The client can also programmatically provide the certificate to use to the proxy by following steps similar to Figure 4, and you can automate this by using these extension methods, which follow the same default values discussed for the host:

public static class ServiceBusHelper
{
  public static void SetServiceBusCertificate<T>(this ClientBase<T> proxy)
    where T : class;
  public static void SetServiceBusCertificate<T>(this ClientBase<T> proxy,
    string subjectName) where T : class;
  public static void SetServiceBusCertificate<T>(this ClientBase<T> proxy,
    object findValue,StoreLocation location,StoreName storeName,
    X509FindType findType) where T : class;
  //Similar methods for a channel factory
}

Using the extensions on the client side yields this code:

MyContractClient proxy = new MyContractClient();
proxy.SetServiceBusCertificate("MyRelayCert");
proxy.MyMethod();
proxy.Close();

No Authentication
Although the service must always authenticate against the service bus, you might decide to exempt the client and allow it unauthenticated access to the relay service. In that case, the client must set TransportClientEndpointBehavior to TransportClientCredentialType.Unauthenticated. When the clients are not authenticated by the relay service, it is now up to the relayed service to authenticate the clients. The downside is that the service is now less shielded than when the relay service authenticated clients. In addition, you must use message security to transfer the client credentials (as discussed later). To enable unauthenticated access by the client, both the service and the client must explicitly allow it by configuring the relay binding to not authenticate, using the enum RelayClientAuthenticationType, shown here:

public enum RelayClientAuthenticationType
{
  RelayAccessToken, //Default
  None
}

You assign that enum via the Security property.
the host endpoints.
Figure 6 Solution Certificate in Config

<endpoint behaviorConfiguration = "RelayCert"
  ...
/>
...
<behaviors>
  <endpointBehaviors>
    <behavior name = "RelayCert">
      <transportClientEndpointBehavior credentialType = "X509Certificate">
        <clientCredentials>
          <clientCertificate
            findValue = "MyRelayCert"
            storeLocation = "LocalMachine"
            storeName = "My"
            x509FindType = "FindBySubjectName"
          />
        </clientCredentials>
      </transportClientEndpointBehavior>
    </behavior>
  </endpointBehaviors>
</behaviors>

Transfer Security
The next crucial aspect of security is how to securely transfer the message through the relay to the service. In addition to message transfer security, another important design decision is which client credentials (if any) the message should contain. Transfer security is independent of how the client or the service authenticates itself against the .NET Service Bus.

The .NET Services Bus offers four options for transfer security, represented by the enum EndToEndSecurityMode:

public enum EndToEndSecurityMode
{
  None,
  Transport,
  Message,
  TransportWithMessageCredential //Mixed
}

The four options are None, Transport, Message, and Mixed. None means just that—the message is not secured at all. Transport uses SSL or HTTPS to secure the message transfer. Message security encrypts the body of the message so that it can be sent over nonsecured transports. Mixed uses message security to contain the client's credentials but transfers the message over a secured transport.
Figure 7 Binding and Transfer Security

Binding               None          Transport      Message        Mixed
TCP (Relayed)          +            + (default)       +             +
TCP (Direct/Hybrid)    +                -          + (default)       -
WS                     +                +          + (default)       +
One-Way            + (default)          +             +              -

Figure 7 shows the way the relay bindings support the various transfer security modes and their default values; the default mode for each binding is marked in the table.

You configure transfer security in the binding. Although the relay bindings use different default values, all the relay bindings offer at least one constructor that takes EndToEndSecurityMode as a construction parameter. You can also configure transfer security after construction by accessing the Security property and its Mode property.

Transport Security
When it comes to transfer security, transport security is the simplest to set up and configure. When using transport security, all client calls are anonymous—the client messages do not contain any client credentials. While transport security is the easiest to use, it does not provide end-to-end security. It secures only the transfer of the message to the relay service and from the relay service. The journey inside the relay service is not secured.

This means that in theory, the relay service can eavesdrop on the communication between the client and the service and even tamper with the messages. However, I believe that in practice this is impractical given the volume of traffic to the .NET Services Bus. Simply put, this kind of subversion cannot be performed as an aside and requires dedicated resources, planning, staff, and technology. In addition, Microsoft has proven over the years that it has the highest integrity and respect for its customers' privacy, and it has many other areas it could have abused if it were indeed malicious.

Message Security
Message security encrypts the body of the message using a service-provided certificate. Because the message itself is protected rather than the transport, the journey inside the relay is protected as well. The downside to message security is that it requires additional setup steps.

While I think that in practice transport security is enough, it is vital to assure customers and users of the presence of end-to-end privacy and integrity and to guard against even theoretical compromises. I therefore recommend always relying on message security for all relayed communication, which will also provide additional benefits, such as direct connection and the availability of security call context to the service.

Unlike in transport security, in message security the message might contain the client's credentials. The primary use of client credentials by the service is for local authorization of the call to establish some role-based security policy. Whenever the message contains credentials, the service must also authenticate them (even if all it wants is to authorize the client). Note that such authentication is on top of the authentication the relay service has already performed. If the relay service has already authenticated the client, authenticating the call again by the service does not add much in the way of security, yet it burdens the service with managing the client's credentials. If the relay service is not authenticating the client, the service will be subjected to all the unwanted traffic of unauthenticated clients, which could have severe IT operations implications.

For these reasons, I find that the best practice is for the relay service to authenticate the client and to avoid having the service do it again. You should also design your service so that it has no need for the client's credentials. Such a design is aligned with the chain-of-trust design pattern that works well in a layered architecture. That said, there are cases when the service needs the client credentials for a local use other than authorization, such as personalization, auditing, or proprietary integration with legacy systems.

TCP Relay Binding and Transfer Security
The TCP relay binding defaults to transport security, and no special configuration steps are required. It simply uses SSL. When using transport security, however, you can only use the TCP relay binding connection mode of TcpRelayConnectionMode.Relayed.

Because the call is anonymous, on the service side Windows Communication Foundation (WCF) attaches a generic principal with a blank identity to the thread executing the call, and the ServiceSecurityContext is null.

To protect the transfer of the message with message security, you must configure the service host with a certificate. The client will by default negotiate the certificate (obtain its public key), so there is no need to explicitly list the certificate in the client's config file. However, the client still needs to validate the negotiated certificate. As with regular WCF and message security, the best practice is to validate the certificate using peer-trust, which means installing the certificate beforehand in the client's Trusted People folder. Besides providing true end-to-end transfer security over the relay, using message security also enables the use of the direct and hybrid connection modes.

As discussed previously, the message might or might not contain the client's credentials. If you avoid sending the credentials in the message, WCF will attach to the thread executing the call a Windows principal with a blank identity, which does not make much sense. When using message security without credentials, you should also set the host PrincipalPermissionMode to None to get the same principal as with transport security. To configure the binding for message security with anonymous calls, use MessageCredentialType.None and assign that value to the ClientCredentialType property of MessageSecurityOverRelayConnection, available in the Message property of NetTcpRelaySecurity. Figure 8 shows code that demonstrates this.

Figure 9 shows the required host-side config file. On the client side, you must include the service certificate name in the address identity of the endpoint because that name does not match the relay service domain. Figure 10 shows the required config file.
Figure 8 Configuring a Binding for Message Security with
Anonymous Calls
public sealed class NetTcpRelaySecurity
{
public EndToEndSecurityMode Mode
{get;set;}
public MessageSecurityOverRelayConnection Message
{get;}
//More members
}
public sealed class MessageSecurityOverRelayConnection
{
public MessageCredentialType ClientCredentialType
{get;set;}
//More members
}
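As printed, Figure 8 lists the relevant class definitions; the actual assignments would look roughly like this (a sketch based on those members, not code from the sample):

NetTcpRelayBinding binding = new NetTcpRelayBinding();

// Message transfer security with anonymous (credential-less) clients
binding.Security.Mode = EndToEndSecurityMode.Message;
binding.Security.Message.ClientCredentialType = MessageCredentialType.None;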

If you want to include the client credentials in the message, the service must also authenticate those credentials, using the same setting as with regular TCP calls. In that case, the service principal and primary identity will both have an identity matching those credentials. The credential can be a user name and password, a certificate, or an issued token. You must indicate to both the host and the client in the binding which credential types you expect. For example, for user name credentials, use the following:
<bindings>
<netTcpRelayBinding>
<binding name = "MessageSecurity">
<security mode = "Message">
<message clientCredentialType = "UserName"/>
</security>
</binding>
</netTcpRelayBinding>
</bindings>
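The programmatic equivalent of that binding section, sketched with the types shown in Figure 8, would be:

NetTcpRelayBinding binding = new NetTcpRelayBinding();
binding.Security.Mode = EndToEndSecurityMode.Message;
binding.Security.Message.ClientCredentialType =
  MessageCredentialType.UserName;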

Figure 9 Configuring the Host for Message Security


<service name = "..." behaviorConfiguration = "MessageSecurity">
<endpoint
...
binding = "netTcpRelayBinding"
bindingConfiguration = "MessageSecurity"
/>
</service>
...
<serviceBehaviors>
<behavior name = "MessageSecurity">
<serviceCredentials>
<serviceCertificate
findValue = "MyServiceCert"
storeLocation = "LocalMachine"
storeName = "My"
x509FindType = "FindBySubjectName"
/>
</serviceCredentials>
<serviceAuthorization principalPermissionMode ="None"/>
</behavior>
</serviceBehaviors>
<bindings>
<netTcpRelayBinding>
<binding name = "MessageSecurity">
<security mode = "Message">
<message clientCredentialType = "None"/>
</security>
</binding>
</netTcpRelayBinding>
</bindings>



Figure 10 Configuring the Client for Message Security

<client>
  <endpoint behaviorConfiguration = "ServiceCertificate"
    binding = "netTcpRelayBinding"
    bindingConfiguration = "MessageSecurity">
    <identity>
      <dns value = "MyServiceCert"/>
    </identity>
    ...
  </endpoint>
</client>
<bindings>
  <netTcpRelayBinding>
    <binding name = "MessageSecurity">
      <security mode = "Message">
        <message clientCredentialType = "None"/>
      </security>
    </binding>
  </netTcpRelayBinding>
</bindings>
<behaviors>
  <endpointBehaviors>
    <behavior name = "ServiceCertificate">
      <clientCredentials>
        <serviceCertificate>
          <authentication certificateValidationMode = "PeerTrust"/>
        </serviceCertificate>
      </clientCredentials>
    </behavior>
  </endpointBehaviors>
</behaviors>

On the host side, if the credentials are user name and password, you must also configure, using behaviors, how to authenticate and authorize the credentials. The default will be Windows credentials, but the more common choice would be using some credentials store such as the ASP.NET providers:

<service name = "..." behaviorConfiguration = "CustomCreds">
  ...
</service>
...
<serviceBehaviors>
  <behavior name = "CustomCreds">
    <serviceCredentials>
      <userNameAuthentication
        userNamePasswordValidationMode = "MembershipProvider"
      />
    </serviceCredentials>
    <serviceAuthorization principalPermissionMode = "UseAspNetRoles"/>
  </behavior>
</serviceBehaviors>

The client has to populate the proxy with the credentials. When using a username and password, the client code would look like this:

MyContractClient proxy = new MyContractClient();
proxy.ClientCredentials.UserName.UserName = "MyUserName";
proxy.ClientCredentials.UserName.Password = "MyPassword";

proxy.MyMethod();
proxy.Close();

The client has no way of knowing if the credentials it provides are authenticated on the service side as Windows or custom credentials.

Mixed transfer security is the only way to avoid anonymous calls over transport security. Since transport security cannot pass credentials, you pass the credentials using message security, hence the term mixed. When using mixed transfer security over the TCP relay binding you are restricted to using only relayed connections. Figure 11 shows how to configure either the service or the client for mixed security.

Figure 11 Configuring for Mixed Security

<endpoint
  binding = "netTcpRelayBinding"
  bindingConfiguration = "MixedSecurity"
  ...
/>
...
<bindings>
  <netTcpRelayBinding>
    <binding name = "MixedSecurity">
      <security mode = "TransportWithMessageCredential"/>
    </binding>
  </netTcpRelayBinding>
</bindings>

After the messages are received by the service, the host must authenticate the calls as with regular TCP. Once authenticated, the service call will have a principal object matching the credentials provided and a security call context.

WS Relay Binding and Transfer Security
Combining the WS binding with transport security is as easy as changing the address schema from HTTP to HTTPS and setting the binding to use transport security, as shown in Figure 12.

Figure 12 WS Relay Binding with Transport Security

<endpoint
  address = "https://MySolution.servicebus.windows.net/..."
  binding = "wsHttpRelayBinding"
  bindingConfiguration = "TransportSecurity"
  ...
/>

<bindings>
  <wsHttpRelayBinding>
    <binding name = "TransportSecurity">
      <security mode = "Transport"/>
    </binding>
  </wsHttpRelayBinding>
</bindings>

Note that the WS relay binding defaults to using message security for transfer security. Since message security requires additional configuration steps, the WS relay binding does not work as-is out of the box, and all calls will fail. However, configuring the WS relay binding to use message (or mixed) security is identical to configuring the TCP relay binding.

One-Way Relay Binding and Transfer Security
The one-way relay binding (and its subclasses) is the only binding that defaults to having no transfer security at all. In addition, it does not support mixed transfer security. Configuring it to use transport security is the same as with the TCP and WS relay bindings. Configuring it to use message security is similar but with one important difference—the one-way relay binding cannot negotiate the service certificate because there may not even be a service, and no direct interaction with the service takes place. When using message security on the client, you must explicitly specify the service certificate, as shown in Figure 13.
Figure 13 One-Way Relay Binding with Message Security

<client>
  <endpoint behaviorConfiguration = "ServiceCertificate"
    ...
  </endpoint>
</client>

<behaviors>
  <endpointBehaviors>
    <behavior name = "ServiceCertificate">
      <clientCredentials>
        <serviceCertificate>
          <scopedCertificates>
            <add targetUri = "sb://MySolution.servicebus..."
              findValue = "MyServiceCert"
              storeLocation = "LocalMachine"
              storeName = "My"
              x509FindType = "FindBySubjectName"
            />
          </scopedCertificates>
        </serviceCertificate>
      </clientCredentials>
    </behavior>
  </endpointBehaviors>
</behaviors>

Another important distinction between the one-way relay binding and the other relay bindings is that if the call is anonymous, with either transport or message security, the call has a security call context whose primary identity is the service bus certificate, CN = servicebus.windows.net.

Streamlining Transfer Security
While transfer security offers a slew of details and intricate options, you can and should streamline and automate most of these security configuration decisions. To encapsulate it on the host side, use my ServiceBusHost class, defined as follows:

public class ServiceBusHost : ServiceHost
{
  public ServiceBusHost(object singletonInstance,
    params Uri[] baseAddresses);
  public ServiceBusHost(Type serviceType,params Uri[] baseAddresses);

  public void ConfigureAnonymousMessageSecurity(string serviceCert);
  public void ConfigureAnonymousMessageSecurity(string serviceCert,
    StoreLocation location,StoreName storeName);
  public void ConfigureAnonymousMessageSecurity(StoreLocation location,
    StoreName storeName,X509FindType findType,object findValue);

  //More members
}

When using ServiceBusHost, no other setting in config or in code is required. Per my recommendation, you can use the ConfigureAnonymousMessageSecurity method to enable anonymous calls over message security. All you need to provide it is the certificate name to use:

ServiceBusHost host = new ServiceBusHost(typeof(MyService));
host.ConfigureAnonymousMessageSecurity("MyServiceCert");
host.Open();

ConfigureAnonymousMessageSecurity will default the certificate location to the local machine and the certificate store to My, and it will look up the certificate by its common name.
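Under the covers, ConfigureAnonymousMessageSecurity has to install the service certificate and switch every relay endpoint to anonymous message security. The following is only a rough sketch of that work (the actual implementation ships with the sample code); host is the ServiceBusHost instance from the snippet above:

// Sketch: roughly what ConfigureAnonymousMessageSecurity must do
host.Credentials.ServiceCertificate.SetCertificate(
  StoreLocation.LocalMachine, StoreName.My,
  X509FindType.FindBySubjectName, "MyServiceCert");
host.Authorization.PrincipalPermissionMode = PrincipalPermissionMode.None;

foreach(ServiceEndpoint endpoint in host.Description.Endpoints)
{
  NetTcpRelayBinding tcpBinding = endpoint.Binding as NetTcpRelayBinding;
  if(tcpBinding != null)
  {
    tcpBinding.Security.Mode = EndToEndSecurityMode.Message;
    tcpBinding.Security.Message.ClientCredentialType =
      MessageCredentialType.None;
  }
}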

not call ConfigureAnonymousMessageSecurity, ServiceBusHost {
public MyContractClient()
will default to using anonymous message security with the solution {}
name for the certificate name: public void MyMethod()
{
ServiceBusHost host = new ServiceBusHost(typeof(MyService));
Channel.MyMethod();
host.Open();
}
You can also use the overloaded versions that let you explicitly }
ServiceBusHost makes use of the ConfigureBinding method of ServiceBusHelper. ConfigureBinding defaults to anonymous calls. If the calls are to have credentials, ConfigureBinding always uses username credentials. With the TCP relay binding, ConfigureBinding uses the hybrid connection mode. ConfigureBinding also always enables reliable messages.

ServiceBusHost also supports message security with credentials via the ConfigureMessageSecurity methods:

public class ServiceBusHost : ServiceHost
{
   public void ConfigureMessageSecurity();
   public void ConfigureMessageSecurity(string serviceCert);
   public void ConfigureMessageSecurity(string serviceCert,
                                        string applicationName);
   public void ConfigureMessageSecurity(string serviceCert,
                                        bool useProviders,
                                        string applicationName);
   //More members
}

ConfigureMessageSecurity defaults to using the ASP.NET membership providers, but it can be instructed to use Windows accounts as well. The implementation of ConfigureMessageSecurity is similar to that of ConfigureAnonymousMessageSecurity.
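As a quick usage sketch mirroring the anonymous case above (assuming callers will supply membership-provider credentials):

ServiceBusHost host = new ServiceBusHost(typeof(MyService));

// Message security with username credentials, validated against the ASP.NET
// membership providers by default, as described above.
host.ConfigureMessageSecurity("MyServiceCert");
host.Open();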
You can provide clients with an easy way of configuring message security with my ServiceBusClientBase<T>, defined as:

public abstract class ServiceBusClientBase<T> : ClientBase<T>
   where T : class
{
   public ServiceBusClientBase();
   public ServiceBusClientBase(string endpointName);
   public ServiceBusClientBase(Binding binding,
                               EndpointAddress remoteAddress);
   public ServiceBusClientBase(string username,string password);

   public ServiceBusClientBase(string endpointName,
                               string username,string password);
   public ServiceBusClientBase(Binding binding,EndpointAddress address,
                               string username,string password);

   protected virtual void ConfigureForServiceBus();
   protected virtual void ConfigureForServiceBus(string username,
                                                 string password);
}

ServiceBusClientBase<T> offers two sets of constructors. The constructors that merely take the endpoint parameters all default to using message security with anonymous calls. You can also use the constructors that accept the username and password credentials. If no endpoint address identity is provided, ServiceBusClientBase<T> defaults it to the solution name. You use ServiceBusClientBase<T> like the WCF-provided ClientBase<T>:

[ServiceContract]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}

class MyContractClient : ServiceBusClientBase<IMyContract>,IMyContract
{
   public MyContractClient()
   {}
   public void MyMethod()
   {
      Channel.MyMethod();
   }
}
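To call with credentials rather than anonymously, the proxy can forward a username and password to the base class. The extra constructor below is my own addition for illustration; it simply delegates to the credentials-taking constructor listed above, and the credential values are placeholders:

class MyContractClient : ServiceBusClientBase<IMyContract>,IMyContract
{
   // Hypothetical constructor added for this example; it forwards the
   // credentials to ServiceBusClientBase<T>.
   public MyContractClient(string username,string password) :
      base(username,password)
   {}
   public void MyMethod()
   {
      Channel.MyMethod();
   }
}

MyContractClient proxy = new MyContractClient("MyUsername","MyPassword");
proxy.MyMethod();
proxy.Close();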
The sample code download includes the full implementation of ServiceBusClientBase<T>. ServiceBusClientBase<T> uses peer trust to validate the service certificate. The bulk of the work is done by passing the endpoint binding to ServiceBusHelper.ConfigureBinding.
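For reference, peer-trust validation of a service certificate in plain WCF is typically enabled on the proxy's credentials before the proxy is opened. The snippet below is a generic sketch of that setting on a hypothetical ClientBase<T>-derived proxy, not the article's actual implementation:

// Sketch: require the service certificate to be present in the client's
// Trusted People store (peer trust). Set before opening the proxy.
// Assumes: using System.ServiceModel.Security;
proxy.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode =
   X509CertificateValidationMode.PeerTrust;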
The one remaining sore point is the one-way relay binding, with its lack of certificate negotiation. To alleviate that, I wrote OneWayClientBase<T>:

public abstract class OneWayClientBase<T> : ServiceBusClientBase<T>
   where T : class
{
   //Same constructors as ServiceBusClientBase<T>

   public void SetServiceCertificate(string serviceCert);
   public void SetServiceCertificate(string serviceCert,
                                     StoreLocation location,
                                     StoreName storeName);
   public void SetServiceCertificate(object findValue,
                                     StoreLocation location,StoreName storeName,X509FindType findType);
}

OneWayClientBase<T> derives from ServiceBusClientBase<T> and adds the SetServiceCertificate methods. If you never call SetServiceCertificate, OneWayClientBase<T> simply looks up the service certificate from config. SetServiceCertificate offers a simple programmatic way of avoiding the config altogether. It even sets the identity tag of the endpoint address (see the sketch after the example below). SetServiceCertificate uses the same defaults as ServiceBusHost, including using the solution name for the certificate name if no certificate is provided. Here's how you use OneWayClientBase<T>:

class MyContractClient : OneWayClientBase<IMyContract>,IMyContract
{
   public MyContractClient()
   {}
   public void MyMethod()
   {
      Channel.MyMethod();
   }
}

MyContractClient proxy = new MyContractClient();
proxy.SetServiceCertificate("MyServiceCert");
proxy.MyMethod();
proxy.Close();

As you can see, using OneWayClientBase<T> is straightforward.
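Regarding the identity tag mentioned above: in standard WCF, a certificate-based endpoint identity can be expressed as a DNS identity whose value matches the certificate's common name. The sketch below shows that manual alternative; the address and certificate names are purely illustrative assumptions, and SetServiceCertificate presumably automates something along these lines:

// Sketch: supplying the endpoint identity manually.
// Assumes: using System.ServiceModel;
EndpointAddress address = new EndpointAddress(
   new Uri("sb://MySolution.servicebus.windows.net/MyService/"),
   EndpointIdentity.CreateDnsIdentity("MyServiceCert"));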
JUVAL LOWY is a software architect with IDesign providing WCF training and architecture consulting. His recent book is Programming WCF Services, 2nd Edition (O'Reilly, 2008). He is also the Microsoft Regional Director for the Silicon Valley. Contact Juval at www.idesign.net.
