Friday, June 19, 2009

Exception-Driven Development: What are you doing with your Exceptions?

Fantastic article by Jeff Atwood, of Stack Overflow [1], on Exception Driven Development – some highlighted excerpts:

“If you're waiting around for users to tell you about problems with your website or application, you're only seeing a tiny fraction of all the problems that are actually occurring. The proverbial tip of the iceberg.


The first thing any responsibly run software project should build is an exception and error reporting facility

Our exception logs are a de-facto to do list for our team

… Broad-based trend analysis of error reporting data shows that 80% of customer issues can be solved by fixing 20% of the top-reported bugs. Even addressing 1% of the top bugs would address 50% of the customer issues. The same analysis results are generally true on a company-by-company basis too.

Although I remain a fan of test driven development, the speculative nature of the time investment is one problem I've always had with it. If you fix a bug that no actual user will ever encounter, what have you actually fixed? While there are many other valid reasons to practice TDD, as a pure bug fixing mechanism it's always seemed far too much like premature optimization for my tastes. I'd much rather spend my time fixing bugs that are problems in practice rather than theory.

You can certainly do both. But given a limited pool of developer time, I'd prefer to allocate it toward fixing problems real users are having with my software based on cold, hard data. That's what I call Exception-Driven Development. Ship your software, get as many users in front of it as possible, and intently study the error logs they generate. Use those exception logs to hone in on and focus on the problem areas of your code. Rearchitect and refactor your code so the top 3 errors can't happen any more. Iterate rapidly, deploy, and repeat the process. This data-driven feedback loop is so powerful you'll have (at least from the users' perspective) a rock stable app in a handful of iterations.”

Side-stepping the implementation details (I personally haven’t seen strong justification for using anything beyond Enterprise Library + MSMQ + a custom database), the value is really in what you collect and how you store it rather than the particulars of your approach (whether it be log4net, EL or ELMAH).

At a minimum, the following information should be available in indexed columns for querying:

  • IP
  • Web Server
  • DB Server
  • Message
  • Time
  • Severity
  • SessionId (ASP.NET’s – crucial for correlating activity)
  • File Name (Class)
  • Method Name
  • Line Number (IL Offset)
  • Url

Next, the exception should have attached to it, in some form, a set of Extended Properties (typically stored in a different table under a CLOB):

  • Full Stack Trace (customized, in some instances)
  • Original Request Headers (all, especially cookie)
  • Request Total Bytes
  • Http Method
  • Url Referrer
  • ASP.NET Request Cookies (differ from header cookie, if changed)
  • Form Data
  • Session Data (particularly IsNewSession)

For session and form data, you have to exercise some caution and put domain-specific rules in place to avoid capturing unnecessary data (ViewState, for example, or DataSets stored in Session). It’s prudent to plan on keeping the indexed data for a minimum of 6 months, if possible, and CLOB values for half of that period.
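As a rough sketch of that kind of filtering (the class and key names below are assumptions for illustration, not part of any particular logging library):

```csharp
using System;
using System.Collections.Specialized;
using System.Text;

static class ExceptionContext
{
    // Keys that bloat the CLOB without adding diagnostic value.
    static readonly string[] SkipKeys = { "__VIEWSTATE", "__EVENTVALIDATION" };

    // Serialize form data into the extended-properties payload,
    // skipping the noisy framework fields before persisting.
    public static string SerializeForm(NameValueCollection form)
    {
        var sb = new StringBuilder();
        foreach (string key in form)
        {
            if (Array.IndexOf(SkipKeys, key) >= 0) continue;
            sb.AppendFormat("{0}={1};", key, form[key]);
        }
        return sb.ToString();
    }
}
```

The same gate can be applied to Session entries (e.g. skipping anything that is a DataSet) before writing the extended properties.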

Being able to look back on trends is crucial; if you have time to implement a more elaborate warehousing strategy, all the better. But if you can’t answer basic questions like:

  1. What are the top trending exceptions in the last hour, day, month, or week?
  2. Is the error isolated to one web server or widespread? What about database servers?
  3. Are there trends by IP?
  4. Did the user see a Yellow Page of Death or was it a behind-the-scenes exception?
  5. What was the sequence of exceptions for a particular session (series of requests leading up to it)?
  6. What cookies did a request start with; what cookies were assigned, prior to the exception?
  7. What were the form, header values that generated a particular exception?

… then you really should stop development and reconsider your bearings.
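As a sketch, question 1 against the indexed columns above might look like this (the ExceptionLog table name is an assumption):

```sql
-- Top trending exceptions over the last hour, by raw count and by
-- how many distinct sessions were affected.
SELECT TOP 20
    Message,
    COUNT(*) AS Occurrences,
    COUNT(DISTINCT SessionId) AS AffectedSessions
FROM ExceptionLog
WHERE Time >= DATEADD(HOUR, -1, GETDATE())
GROUP BY Message
ORDER BY COUNT(*) DESC;
```

Widening the WHERE clause (day, week, month) and swapping the GROUP BY to WebServer or IP answers questions 2 and 3 with the same shape.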

[1] – Jeff Atwood: Exception Driven Development

[2] – More on ELMAH

Thursday, June 18, 2009

Google Sites vs. SharePoint Online: The Battle for Enterprise Collaboration (Oct. 2009)

Microsoft’s first-step in realizing its Software + Services [1] vision is the launch of its traditional productivity suite in SaaS form under the banner of Microsoft Online Services. Designed to thwart any penetration that Google Apps sees in the Enterprise market, Microsoft’s Online Services includes the all-inclusive Business Productivity Online Suite (BPOS) bundle or SharePoint Online [2], Exchange Online, Office Communication Online and Office Live Meeting each as individual offerings.

High-availability, comprehensive security and simplified IT management are the hallmarks of both Google Apps and Microsoft Online Services. For this particular post, we focus specifically on the collaboration aspect of each suite and offer a side-by-side comparison of SharePoint Online vs. Google Sites:


Feature                              SharePoint Online                        Google Sites
Internet Scenarios
Custom Domain
Code Extensibility                   No (Custom WebParts Not Permitted)       Yes (via Gadgets Extensibility)
API for List Data                                                             Yes – via new Sites API
Page-Level Meta Tags
Basic Workflow
Annual Fee/User
Storage per User                                                              10 GB + 500 MB/user per domain [6]
Sites per Domain
Quota per Site                       50 GB                                    None – tracked per domain
Max File Size                        250 MB [7]                               50 MB
Purchase additional storage?         Yes (per-TB increments)
E-commerce Integration                                                        Yes (via Google Checkout and 3rd-party services like PayPal)

In the SharePoint Online trial, we note that the Publishing Site template typically used for public-facing deployments is absent from the list of available templates; the choices are: Basic Meeting Workspace, Blank Site, Blog, Document Workspace, Team Site, and Wiki Site.

Aside from the handicap around public-facing scenarios (more on this below) the bulk of the standard WSS collaboration facilities continue to be available in their natural form, including: extendible lists, user-defined views, WebParts, search, alerts, and so on. The atomic “list” structure that the SharePoint franchise thrives on is indeed a powerful concept that provides tremendous value to self-serve audiences; that it’s preserved in its entirety is certainly promising for the SaaS offering.

Meanwhile, when creating pages in Google Sites, your template choices are:

The all-powerful list concept that SharePoint excels in has rudimentary support here:

Users are limited to a handful of column types, with no flexibility around validation, user-defined views, folders, or lookup capabilities.

Contrasting this with the 12 column types available in WSS, each with unique validation and configuration options, the edge goes to SharePoint; but both offer the basic structures needed for self-serve collection needs.

And while Google Sites does offer rudimentary lists, it currently has no support for accessing this data via APIs as documented here earlier; SharePoint, on the other hand, continues to offer a broad range of externally accessible services for retrieving its data [5]. With Google Sites, your only immediate options are to host the data in spreadsheet format. (Update: the October, 2009 announcement of the import/export API for Sites changes this analysis.)

It’s clear that Microsoft intentionally abandoned public-facing scenarios in this first version – those looking for a mixture of internal and external sites need to look elsewhere. And while Google hasn’t done so explicitly, it hasn’t demonstrated much interest in servicing this market either, particularly given the omission of features like page-level meta tags [4].

(Update: the July 25, 2009 announcement [8] of custom favicon support for Google Sites is certainly a small step towards the deeper customizations needed for internet-facing scenarios.)

Microsoft, for its part, acknowledges the limitations in the first version and plans to address them (courtesy of David Gorbert [3]):

“Thanks for your suggestion about using SharePoint online for anonymous Internet use. Right now SharePoint Online is targeted to intranet-type applications like internal portals and collaboration and requires authentication. On-premises SharePoint is used for many applications that go beyond portals and collaboration both in the intranet and on the Internet. In fact, hosting Internet-facing sites with SharePoint is one of the fastest-growing on-premises uses of the product. For SharePoint Online we are looking at all the scenarios where SharePoint is used and will be adding functionality to the service to enable many more of these scenarios over time.

Regarding using your own domain name for non-anonymously-accessible sites, this is complicated by the fact that we use SSL (to protect your data) and would therefore need a server certificate to host your domain name. Current Internet standards make this difficult for large-scale hosters to do, but there is a (relatively) new extension to the standards called SNI that could help with this. This is a common request, and so we will be looking at ways to enable this as well.

Regarding connecting online and on-premises SharePoint deployments, it is possible to do this via SharePoint Online’s Web Services. Several of Microsoft's partners offer tools to help with this (e.g. Metalogix Site Migration Manager), or if you have a special scenario you can write custom integration solutions specific to your need. Troy Hopwood on my team demonstrated this recently at PDC. If you have specific online/on-premises integration scenarios, we’d love to hear them. For SharePoint Online the best way to continue the conversation is to post your questions and ideas in the SharePoint Online forum at SharePoint Online folks monitor this forum daily and we try to get back to you as quickly as possible.”

With the release of SharePoint 2010 keeping the team busy, the “when” is certainly an important question. Having to choose between these offerings for immediate external-facing needs today, the edge would have to go to Google despite the feature deficit.

Update #1: On a related note, a list of the top 10 applications running on Google’s App Engine:

Update #2: A big thanks to SM Rana for bringing the HyperOffice comparison to my attention; I mostly agree with the analysis, with just a couple of minor exceptions:

#1. Shared Documents: Not only is the offline access for Google Apps severely handicapped, but you can’t really compare document capabilities in GA with Office – the edge goes to Microsoft here by a landslide.

#2. On the sites/intranet front: Mixed feelings here. When SharePoint Online does allow code deployment, it will gain a significant advantage over Google Sites. On the other hand, if GA offers customized lists and storage, and you start to see a flourishing ecosystem of gadgets, both commercial and community-driven, then the playing field will be levelled somewhat. Again, the debate here is largely based on the same principles as the App Engine vs. Azure contest: developer appeal, familiarity, leverage of assets, and existing channel/partners will be the differentiators.


[1] – Software + Services

[2] – SharePoint Online

[3] – Anonymous Access, Custom URLs and Online/On-Premise Connectivity

[4] – Is it possible to add metadata, such as keywords and a description, to a page?

[5] – SharePoint Online Web Services

[6] – How much storage do I have in Google Sites?

[7] – Increase for SharePoint Online File Uploads

[8] – Google Sites Updates: July 15, 2009

Wednesday, June 17, 2009

Disable TCP Chimney to Address Sporadic ViewState Exceptions (2009)

What might seem like an exotic issue has turned up twice for me over the last few years, in completely different environments – and both times, heavy troubleshooting finally demonstrated inconsistencies that had no other explanation.

If you’ve stumbled across this post, it’s likely too late.

If only we had the benefit of forewarnings [1] on disabling TCP Off-loading Engine (TOE) in Windows 2003 as a precursor to troubleshooting connectivity issues:

“I've become quick to consider disabling the TCP off-loading engine features (or “TOE”) when trouble-shooting a problem involving IIS 6.0 and detecting the slightest hint of a "networking problem" or "communication problem." With the right combination of NIC, NIC driver level, and OS level, TOE is a good thing which significantly improves the performance of TCP processing. Windows can offload the TCP processing of some network streams to the network controller. But under some circumstances (especially with outdated NIC drivers) strange networking problems can result.”

What’s less documented (try a search for “TCP Chimney” + “ViewState corruption”, or variations thereof) is that a host of seemingly ViewState-related exceptions can surface as a result of TOE interference, including:

  • Invalid ViewState: Missing field: __VIEWSTATE? (post ViewState chunking)
  • Invalid character in a Base-64 string
  • Invalid length for a Base-64 char array
  • The serialized data is invalid
  • Invalid ViewState
  • The state information is invalid for this page and might be corrupted

In some cases, obvious network/firewall restrictions might be at play [3] – the key to this breed is that symptoms are sporadic.

Now, before experimenting with ViewState chunking, enabling compression, questioning downstream devices, laboring to reduce ViewState, or tinkering with load-balancers, consider disabling TOE as a starting point to see if symptoms abate. (You will always have a small trickle of these exceptions on high-volume sites that feature large ViewStates, but the rate shouldn’t be considerably higher than an average of roughly 1 per 13,000 page views.)
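On Windows Server 2003 SP2, the offload features can be turned off from the command line or via the Scalable Networking Pack registry switches; a hedged sketch of the documented settings (reboot and NIC-driver specifics vary by environment):

```shell
# Disable TCP Chimney offload (Windows Server 2003 SP2)
netsh int ip set chimney DISABLED

# Or via the SNP registry switches (0 = disabled); reboot afterwards
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableRSS /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 0 /f
```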

Other symptoms – if you’re:

  • Experiencing sporadic ViewState corruption that can’t be reproduced consistently, and;
  • Are not seeing Application Pool restarts that may be to blame, and;
  • Can confirm that the issue is distributed randomly across site visitors (you do have this data, right? Site scraping is generally a large source of ViewState exceptions);

…then you may want to consider the remedy above.

In the most recent case we encountered, Broadcom Gigabit Ethernet drivers were at play as well [4] (they’re notorious for this). That it might be tied to the number of connections the NIC is servicing is quite possible as well, but we never did see a clear correlation between volume and instances of the exceptions.

Not a ViewState Issue

ViewState is simply a common symptom of partial-offload failures in ASP.NET. In practice, reports of TOE failures interfering with SQL Server and Exchange are rampant as well [5]. This is a web-server network interface issue: offloading ViewState to a device like StrangeLoop or ScaleOut State Server wouldn’t alleviate it; keeping ViewState in-process, though, while chock-full of its own ramifications, would cause symptoms to abate.

How does it impact you?

I’m very curious to hear from others on this issue. If you found this post useful or want to help, please drop a line in the comments and tell us about your experience.


[1] – Disabling the TCP Chimney During IIS 6.0 Troubleshooting:

[2] – Additional Readings

An update to turn off default SNP features is available:

The Microsoft Windows Server 2003 Scalable Networking Pack release (KB 912222):

Some problems occur after installing Windows Server 2003 SP2:

Error message when an application connects to SQL Server on a server that is running Windows Server 2003: "General Network error," "Communication link failure," or "A transport-level error":

Scalable Networking: Network Protocol Offload - Introducing TCP Chimney:

Having Network Problems on Win2003 SP2?

Windows 2003 Scalable Networking pack and its possible effects on Exchange (part 1):

Windows 2003 Scalable Networking pack and its possible effects on Exchange - Part 2:

Broadcom Demonstrates Industry's First Fully Integrated TCP/IP Offload Engine (TOE) that Supports Microsoft's TCP Chimney Architecture:

Broadcom to Deliver TCP/IP Offload Technology:

SP2 Scalable Networking Pack & Connectivity Issues:

[3] – ViewState Chunking

[4] – Dell PowerEdge & Broadcom Issues

[5] – ServerFault:

Tuesday, June 16, 2009

Updatable ASP.NET ResX Resource Provider – yes, it’s possible!

Update (November 4, 2009): I’ve been getting a lot of emails over this one; if you’re running into any issues implementing this, however slight, please post here and I will answer the questions for everyone. Also, note that, with a few minor changes, you can use CompilationMode to introduce content/layout changes directly on *.aspx/*.ascx/*.master surfaces at run-time as well.

First and foremost, I want to thank Rick Strahl for his efforts; his work served as the initial inspiration for us to even consider this approach [1]. Next, I want to highlight that our motivations for deviating from the standard (resource) providers differ somewhat: in our case, we had a very high-volume application that was already built out with the default *.resx localization scheme, and we were more or less content with it, except for one drawback. Our motivation was solely driven by the need to push updates to production without: a) restarting the application and jeopardizing in-process session and cache data; and b) introducing any sort of performance degradation, however slight.

Editing Resources at Runtime

Think of this as a make-shift CMS, if you will, that enables you to update localized content in your ASP.NET applications without full-blown code promotions (and the attendant scheduling considerations).

Beyond the ability to update values in production, I’m not convinced that a custom authoring UI justifies some of the risks introduced in the DB-based solutions; certainly, some of the value-added features are nice-to-haves, but the focus is on remaining feature-compatible with the built-in provider (e.g. support for Visual Studio’s 'Generate Local Resources', Explicit and Implicit Resource Expressions, etc.) while providing run-time updatability.

That’s really all that we’re after, and the simplicity (footprint) of the solution should reflect it.

In this vein, we set out to deliver a scheme that allows you to preserve your *.resx files with the added ability to push updates to production app pools; in doing so, we needed an understanding of:

  1. Why are web resources compiled from *.resx to *.resources in the first place?
    • Why would a web application store binary content in these files – images, audio, text files, etc.?
  2. How do we bypass this to apply localization directly from cached *.resx files (with the necessary disk-based cache dependencies)?
  3. How do we stop the automatic assembly generation that eventually kills the app pool?
  4. How do we stop the FCNs for .NET special directories while continuing to maintain FCNs for CacheDependencies?

We begin by first registering our custom provider in web.config:

<globalization resourceProviderFactoryType="Sample.UpdatableResXResourceProviderFactory, Sample" />

This, along with the reference assembly, is all that’s required to introduce this to an existing application.

[Sections below assume familiarity with File Change Notifications (FCNs) in ASP.NET.]

Next, we spent some time looking at how one would disable FCNs to allow *.resx files to be pushed to their default paths (for both local and global resources) without triggering assembly generation. A few queries proved that this is indeed possible with some crude reflection; the following hack, for instance, allows you to kill all FCNs across the application (once placed in Application_Start):


PropertyInfo p = typeof(System.Web.HttpRuntime).GetProperty("FileChangesMonitor",
    BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Static);
object o = p.GetValue(null, null);
FieldInfo f = o.GetType().GetField("_FCNMode",
    BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.IgnoreCase);
f.SetValue(o, 1); // 1 = disabled: kills all file change notifications

Unfortunately, this also has the nasty side-effect of killing the very FCNs that your Cache Dependencies rely on to reload file contents. There is a slight variation that’s successful in preserving the Cache Dependency FCNs but disabling virtually everything else:

PropertyInfo p = typeof(System.Web.HttpRuntime).GetProperty("FileChangesMonitor",
    BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Static);
object o = p.GetValue(null, null);
FieldInfo f = o.GetType().GetField("_dirMonSpecialDirs",
    BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.IgnoreCase);
f.SetValue(o, null); // disables monitoring of the special directories only

More on the need to unload the AppDomain [2], [3].

We were ultimately unhappy with this solution for several reasons and sought more simplicity. (We also had issues updating global resource files in their native paths on precompiled builds.) Recognizing that App_Data is one of a few folders under the web root that’s shielded from FCNs, we decided to move our *.resx output (preserving their relative paths) to App_Data\Resources. Under this new location, we don’t need reflection to interfere with the FileChangeMonitor and can continue to push intact *.resx files to the web root without generating FCNs or triggering assembly builds.

Ultimately, it’s really no different than pushing out any *.xml updates to App_Data except in this case the intact *.resx files that you’ve already developed serve as the payload.

Again, a key principle here was to minimize footprint. The gist of it really boils down to a few lines of code, with the necessary dressing to plug it in. The abstract UpdatableResXResourceProvider contains all of the logic for both providers – the GlobalResXResourceProvider and the LocalResXResourceProvider simply provide path details for their respective locations:
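As a hedged sketch of that shape (the factory class name comes from the web.config registration above; everything else is an assumption about how the pieces plug into ASP.NET’s ResourceProviderFactory):

```csharp
using System.Web.Compilation;

// Sketch: the factory registered in web.config simply hands back the
// two providers; the real logic lives in their shared base class.
public class UpdatableResXResourceProviderFactory : ResourceProviderFactory
{
    public override IResourceProvider CreateGlobalResourceProvider(string classKey)
    {
        return new GlobalResXResourceProvider(classKey);
    }

    public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
    {
        return new LocalResXResourceProvider(virtualPath);
    }
}
```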


The implementation boils down to the GetResourceCache method, which leverages the framework’s own ResXResourceReader to read the original *.resx, storing the result in the runtime cache with a new CacheDependency:
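A minimal sketch of that method, assuming the DefaultStore constant described below and a hypothetical GetResXFileName helper that resolves the culture-specific file name:

```csharp
// Sketch only: cache the parsed *.resx under a culture-specific key,
// with a CacheDependency so a pushed file invalidates the entry.
private IDictionary GetResourceCache(string cultureName)
{
    string cacheKey = _virtualPath + ":" + (cultureName ?? "default");
    var resources = HttpRuntime.Cache[cacheKey] as IDictionary;
    if (resources == null)
    {
        string filePath = HostingEnvironment.MapPath(
            DefaultStore + GetResXFileName(cultureName)); // assumed helper

        resources = new Hashtable(StringComparer.OrdinalIgnoreCase);
        using (var reader = new ResXResourceReader(filePath))
        {
            foreach (DictionaryEntry entry in reader)
                resources[(string)entry.Key] = entry.Value;
        }

        HttpRuntime.Cache.Insert(cacheKey, resources, new CacheDependency(filePath));
    }
    return resources;
}
```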



Special Instructions:

  • Update the DefaultStore = @"App_Data/Resources/" parameter as a first step; to use the default location, you need a post-build process that propagates *.resx files there (preserving their relative depth). To test with resources in their current location, you can use DefaultStore = “”;
  • Note that a reference to System.Windows.Forms is required for referencing ResXResourceReader.


[1] – Updated Westwind.Globalization Data Driven Resource Provider for ASP.NET

[2] – Exactly which files and directories are monitored

[3] – On the need to throttle updates, even if done by a scheduled task after hours:

“This is a limitation of handling file change notifications from the operating system. If too many files change, the internal change buffers overflow and ASP.NET has to assume that the application needs to be restarted. Unfortunately, there is no workaround other than to throttle the rate at which changed files are propagated to a live web application.”

[4] – Stop Monitoring on Designated Folders

[5] – Error Debugging Project (KB 810886)

[6] – ASP.NET 2.0: Custom Resource Provider Using Sql Database (Support for Oracle)

Monday, June 15, 2009

Community Server 2009: Final Thoughts on Role, Pricing and Adoption

I’ve spent quite a bit of time recently in conversation with the helpful folks at Telligent on the current state of Community Server (CS); admittedly, it’s been a few years since I last looked at the platform and much has changed since. In this post we look at: where CS fits in the grand scheme, CPU-based licensing and the platform’s adoption.

CS as WCM?

Not quite, when you consider:

  • CS was born out of blogging, forum, and gallery software and is a natural union of these tools - it has no WCM roots;
  • CS Publisher gives it primitive WCM-esque capabilities;

What constitutes traditional WCM? At a minimum: flexible content-types, templates, versioning, workflow, content-sharing/content-query facilities. I would also add a criterion for “Rich Templating,” measuring the ability to minimize reliance on physical (ASPX/ASCX) layouts and maximize reliance on configurable CMS components that are injected at runtime into physical layouts to render a particular request. (SiteCore excels here whereas MOSS is more limited along the lines of traditional ASP.NET with its Master Page, Page Layout and Page Instance facilities.) In short, “Rich Templating” measures the variety and diversity of renderings that can be produced from a physical “base.”

Community Server Licensing (updated June, 2009) – each tier is licensed per CPU, as either an annual subscription or a one-time (perpetual) fee, under its own product code:

  • 10 Blogs, 10 Forums, 10 Media Galleries, 5 Groups and unlimited wikis
  • 25 Blogs, 25 Forums, 25 Media Galleries, 10 Groups and unlimited wikis
  • 50 Blogs, 50 Forums, 50 Media Galleries, 15 Groups and unlimited wikis
  • UNLIMITED Blogs, Forums, Media Galleries, Groups and wikis, along with the ENT Mail Gateway, ENT RSS Feed Syndication and Job Server

All annual licenses include updates/upgrades and technical support for the year. The perpetual (one-time) licensing includes first-year support and maintenance; on-going support and maintenance is priced at the standard 20% per annum. Their à-la-carte pricing offers much-needed flexibility in fine-tuning the licensing to match your immediate needs, relieving customers of the burden of paying for features that will remain dormant for the foreseeable future. Promotional pricing is occasionally available as well; for instance, a 4-CPU enterprise license was offered at a one-time price of 216K (a 25% discount) if purchased before the end of June (‘09).

CPU-based licensing (…Software Licensing Gets Complicated):

1 physical CPU 1 core = 1 CPU license

1 physical CPU 2 core = 1 CPU license

1 physical CPU 4 core = 2 CPU license

CPU-based licensing can impose a significant penalty on high-volume sites. For instance, deploying the Enterprise edition on a 6-server farm that I’ve previously worked on (5 servers with two dual-core CPUs each, and 1 with two quad-core CPUs) would cost ($72K × 2 licenses × 5 servers) + ($72K × 4 licenses) = $1,008,000, plus ~$200K/year in maintenance. In this vein, it’s worth highlighting that hardware can be configured to keep a CPU dormant for licensing purposes until such time that volume necessitates it [3].

Release Schedules:

  • Typically only 2 service packs per version;
  • One major and one minor release roughly every 6 months;
  • Weekly bug fixes


Unfortunately, some of the exciting references here are covered under NDA, but suffice it to say that adoption is *the* key driver of CS’s continued success; Telligent counts some impressive names as loyal, happy customers. CS positions itself not as a build/buy decision but as a hybrid: a platform designed to be extended by internal teams and worldwide partners alike; a layer in your solution stack to address social community features. Not surprisingly, the niche they’ve carved out for themselves is curiously absent in both SiteCore and SharePoint. Whether convergence along these lines ultimately sees CS absorbed into an established .NET-based WCM leader, or whether the leaders evolve their own social layer, remains to be seen; my bet’s on the former.


FourRoads, perhaps the premier partner in Telligent’s network, provides a number of add-ons to the platform: CS Publisher for WCM; Commerce for e-commerce; and Nexus, a connector for Facebook.


Interesting background on Lawrence Liu and Marc Smith [1]:

“On numerous occasions during my time at Microsoft as both the Community Lead and Social Computing Technical Product Manager for SharePoint, I relied on Marc's expertise in refining and validating my ideas and concepts around social media and online communities. Although many people have come to know me as a "community guru," I would gladly admit that it is Marc, who was the "man behind the curtain" and provided me with the sociological research and hard data that backed up much of my hypotheses for why humans have an innate sense of sharing and belonging and how technographic personas such as Asker, Answerer, and Connector can be derived and quantified from specific types of social interactions and metrics. I look forward to working closely with Marc in the coming months to better align and more tightly integrate social analytics into Telligent's overall platform strategy.”

[1] – CPU-based Licensing

[2] – Four Roads Product Catalogue:

[3] – Disabling CPUs:

/NUMPROC specifies the number of CPUs that can be used on a multiprocessor system. Example: /NUMPROC=2 on a four-way system will prevent Windows from using two of the four processors.

[4] - Telligent announces release of social analytics tool and hiring of Chief Social Scientist

Friday, June 12, 2009

A day at the Office


The new digs of IMC Trading in the Netherlands, courtesy of Arbitrage Ali. The mahogany and the crown mouldings are quite inspiring. I had the pleasure of meeting one of the IMC originals, Tibor Bejczy, this time last year and continue to be fascinated (at arm’s length, of course) by IMC and their arbitrage business.

Tuesday, June 9, 2009

Google Apps Needs Structured Data – Gadgets 101

Update (Sept 24, 2010): This hasn’t really changed yet, but see Ryan’s post on using Google Apps as an Online Database…a nice abstraction in the meantime.

To test-drive the extensibility of the Google Sites (Google Apps) offering, I set out on a simple mission to create a page that contains a number of addresses plotted on a Google Map, with the goal of having the addresses maintained as “structured data” and not “content”. Unless I’ve totally overlooked a section of the docs, which is admittedly quite possible, it doesn’t look like API support around surfacing structured content (announcements, custom-lists) residing in Google Sites exists just yet; instead, it seems that one is relegated to using the spreadsheet as the backing data store for the time being.

I’m confident that this will change very soon, but it’s a reality we must work with for now.

Before posting the actual end-result, I wanted to summarize some of the stumbling blocks for first-timers:

1. Google Gadget Editor (GGE) only works in Firefox (do not try it in Chrome or IE) [3].

2. Despite excerpts in the Gadget Specification requiring nocache parameters to be respected, many hosts appear to ignore the directive; iGoogle and Google Sites are two such hosts.

3. In order to truly have un-cached access to your Gadget, you need to install the Developer Gadget first [1].

4. Sizing: to give your gadget the ability to resize itself, you need to add the following to your spec:

  • A <Require feature="dynamic-height"/> tag (under <ModulePrefs>) to tell the gadget to load the dynamic-height library.
  • A call to the JavaScript function _IG_AdjustIFrameHeight() whenever there is a change in content, or another event occurs that requires the gadget to resize itself.
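Putting those two pieces together, a minimal gadget spec might look like the following (the title and markup are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Address Map">
    <Require feature="dynamic-height"/>
  </ModulePrefs>
  <Content type="html"><![CDATA[
    <div id="content">Gadget content goes here</div>
    <script type="text/javascript">
      // Call after any content change so the host resizes the IFRAME.
      _IG_AdjustIFrameHeight();
    </script>
  ]]></Content>
</Module>
```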

[1] – A Must Read

[2] – Sizing Issues

[3] – GGE

Monday, June 8, 2009

Why does the C# compiler emit Activator.CreateInstance when instantiating a generic type with a new() constraint?

It's a great question whose answer was unfortunately learned the hard way (through WinDbg rather than Reflector or ILDasm). A colleague of mine (Herman Chan) was kind enough to pass on his findings – in summary, be very careful about the number of instantiations you have per request of generic types constrained with new().

We were fully aware of the costs associated with late-bound instantiation [3] when building our ORM solution using reflection, and measures were taken to compensate for it; but we didn't have a good sense that generic types required the same level of attention!

The 2nd link provides a nice summary [2] – note the distinction between CreateInstance() and CreateInstance<T>():

Running 10000000 iterations of creation test.  
Direct Call 00:00:00.5320932
Delegate Wrapper 00:00:00.8127212
Generic New 00:00:20.2164442
Activator.CreateInstance 00:00:43.3707797
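If that cost shows up in your own profiles, one common workaround (not from the original post, just a sketch) is to cache a compiled `() => new T()` delegate per closed generic type, which approaches the "Delegate Wrapper" timing above:

```csharp
using System;
using System.Linq.Expressions;

static class FastFactory<T> where T : new()
{
    // Compiled once per closed generic type; subsequent calls are a
    // plain delegate invocation rather than Activator.CreateInstance.
    public static readonly Func<T> Create =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

class Widget
{
    public int Id;
}

class Program
{
    static void Main()
    {
        var w = FastFactory<Widget>.Create();
        Console.WriteLine(w.GetType().Name); // Widget
    }
}
```

This requires .NET 3.5 (expression trees) and a public parameterless constructor on T.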


[1] –
[2] –
[3] -,guid,0d573ba5-1228-419f-bd69-065f53fc64a8.aspx

Saturday, June 6, 2009

Can your corporate *internet* site run on Google Sites for $50/year?

That's it: $50/year is essentially the total cost of ownership (assuming a 1-user scenario). And so far, I think the answer is a resounding yes – even in its infancy, the OOTB features and Gadget extensibility cover most scenarios. The search, forms, and light-weight WCM capabilities offered by Sites are sufficient for most corporate internet sites.

Surprisingly, Google isn't prepared to push this tangent just yet [1]:

Does Google host websites too? Google Sites is designed to make it easy for employees to create and collaborate on internal sites for their projects, teams and departments. You can also make a public website with Google Sites, but most businesses prefer to go with a traditional web hosting solution for their public sites.

If you're looking for more dynamic or advanced web solutions for your public web site, you may want to run Google Apps in addition to a web host. Any web host that provides the technology or platform you need to run your services should work with Google Apps. Two of our web host partners that register domains are Enom and Go Daddy.

And note that the $50/user fee is for the Premier Edition – you can register up to 50 accounts on the free edition as well. Upgrading to Premier brings a few benefits, including video, Postini and a variety of service-specific features. Another distinction is the number of mailboxes available under each plan: with Premier, you can still have multiple email addresses without adding user accounts, by attaching alias addresses to your existing account.

I would suspect that it’s a matter of when and not if the Sites offering matures to the level of specifically targeting internet-site scenarios. Take the Google Maps usage guidelines, for instance:

There is no limit on the number of page views you may generate per day using the Maps API. However, if you expect more than 500,000 page views per day, please contact us in advance so we can provision additional capacity to handle your traffic.

Hosting internet sites requires a level of capacity planning that intranet-usage, mainly by small-medium Google Apps adopters, doesn’t; but once the necessary infrastructure is in place, I would expect this stance to change.

I will be sure to chronicle our journey down this path; in the meantime, here is a link to Brian Johnson of KC Cloud Solutions who is one of the forerunners, helping newcomers sort out some of the quirky limitations in the early version of Google Sites:

[1] -