Wednesday, July 20, 2011

What’s Wrong with ASP.NET Web Forms?

What a great description of all the deficiencies of ASP.NET Web Forms! The following is an excerpt from Pro ASP.NET MVC 3 Framework by Adam Freeman and Steven Sanderson (Apress).

What’s Wrong with ASP.NET Web Forms?
Traditional ASP.NET Web Forms development was a great idea, but reality proved more complicated.
Over time, the use of Web Forms in real-world projects highlighted some shortcomings:
•  View State weight: The actual mechanism for maintaining state across requests
(known as View State) results in large blocks of data being transferred between the
client and server. This data can reach hundreds of kilobytes in even modest web
applications, and it goes back and forth with every request, frustrating site visitors
with slower response times and increasing the bandwidth demands of the server.
•  Page life cycle: The mechanism for connecting client-side events with server-side
event handler code, part of the page life cycle, can be extraordinarily complicated
and delicate. Few developers have success manipulating the control hierarchy at
runtime without getting View State errors or finding that some event handlers
mysteriously fail to execute.
•  False sense of separation of concerns: ASP.NET’s code-behind model provides a
means to take application code out of its HTML markup and into a separate code-
behind class. This has been widely applauded for separating logic and
presentation, but in reality, developers are encouraged to mix presentation code
(for example, manipulating the server-side control tree) with their application
logic (for example, manipulating database data) in these same monstrous code-
behind classes. The end result can be fragile and unintelligible.
•  Limited control over HTML: Server controls render themselves as HTML, but not
necessarily the HTML you want. Prior to ASP.NET 4, the HTML output usually
failed to comply with web standards or make good use of Cascading Style Sheets
(CSS), and server controls generated unpredictable and complex ID values that are
hard to access using JavaScript. These problems are reduced in ASP.NET 4, but it
can still be tricky to get the HTML you expect.
•  Leaky abstraction: Web Forms tries to hide away HTML and HTTP wherever
possible. As you try to implement custom behaviors, you frequently fall out of the
abstraction, which forces you to reverse-engineer the postback event mechanism
or perform obtuse acts to make it generate the desired HTML. Plus, all this
abstraction can act as a frustrating barrier for competent web developers. 
•  Low testability: The designers of ASP.NET could not have anticipated that
automated testing would become an essential component of software
development. Not surprisingly, the tightly coupled architecture they designed is
unsuitable for unit testing. Integration testing can be a challenge, too.
ASP.NET has kept moving. Version 2.0 added a set of standard application components that can
reduce the amount of code you need to write yourself. The AJAX release in 2007 was Microsoft’s
response to the Web 2.0/AJAX frenzy of the day, supporting rich client-side interactivity while keeping
developers’ lives simple. The most recent release, ASP.NET 4, produces more predictable and standards-
compliant HTML markup, but many of the intrinsic limitations remain.

Tuesday, September 21, 2010

Old unresolved IE (pre 9) DOM bugs hinder its ability to work with HTML5


I've just started reading the book "Introducing HTML5" http://goo.gl/DDa9 (my first HTML5 tutorial), and it's simply wonderful. I enjoy its concise but lively language a lot.


On page 11 http://goo.gl/IAi3 Bruce Lawson says, "The way to cajole IE into applying CSS to HTML5 is to use JavaScript. Why? This is an inscrutable secret, and if we told you we'd have to kill you. (Actually, we don't know.) If you add
the following JavaScript into the head of the page

<script>
document.createElement('header');
document.createElement('nav');
document.createElement('article');
document.createElement('footer');
</script>

IE will magically apply styles to those elements, provided that there is a <body> element in the markup"

I think I know why.
Two years ago I found an article on Rick Strahl's blog (famous among .NET developers) http://goo.gl/a4Cy about a very unfortunate Internet Explorer feature: it automatically creates matching global JavaScript objects for all DOM elements on the page based on their IDs. This clutters the global namespace and often leads to clashes with user-defined JavaScript objects, which in turn produce "Object doesn't support this property or method" errors.

I bet these things are related: for IE to operate normally, it needs all the DOM elements on the page to be in the global namespace. The JavaScript snippet does just that: it puts the HTML5 'header', 'nav', etc. elements into the global namespace!

I already blogged http://goo.gl/ducY about this and another IE issue and tried desperately to notify the IE staff... well, they don't like to be notified at all; they don't like bug reports.


An update of 10/19/2010: I got a nice message from the Microsoft Connect Team saying that the bug was resolved in IE 9. They said, "This issue was resolved in Internet Explorer 9 Platform Preview Build 3 released on 6/23/2010… The fix prevents the error message. Note, IE still allows the DOM element to exist as a global javascript object." (Bold-italic mine, V.K.)

Well, I'm not sure that keeping DOM elements in the global JavaScript namespace is a good idea (other browsers don't do it), but at least they found a workaround. It will be interesting to see how correctly IE 9 now supports HTML5 and whether the document.createElement() trick described above is still necessary.

Friday, July 02, 2010

About Internet Explorer DOM bugs

Almost four years ago I wrote a post http://goo.gl/VKTc in my blog describing the incorrect behavior of the window.onblur event in IE and a workaround. At that time I tried hard to submit a bug report to Microsoft but couldn't find a way to do it. My blog isn't really that popular, but I received numerous thanks from web developers for posting the workaround.

Two years ago I found an article on Rick Strahl's blog (famous among .NET developers) http://goo.gl/a4Cy about another very unfortunate Internet Explorer feature: it automatically creates matching global JavaScript objects for all DOM elements on the page based on their id. This clutters the global namespace and often leads to clashes with user-defined JavaScript objects, which in turn produce "Object doesn't support this property or method" errors.

Recently, thanks to Dimitri Glazkov's buzz about Enhanced Scripting in IE9 http://goo.gl/CO1v, I asked the same questions on the MSDN IE blog http://goo.gl/Un5g and got a suggestion to submit a bug report to Microsoft Connect.

Then I found that someone had already submitted the "Incorrect behavior of window.onblur event" bug to the IE blog http://goo.gl/6HlF. Unfortunately, it was marked as "Won't fix" by the IE 9 team. So, are we looking at another four years before this bug is fixed? I doubt Internet Explorer will survive that long.

At least I went ahead and submitted the "IE automatically creates matching global JavaScript objects for all DOM elements on the page based on their id" bug http://goo.gl/Eugd on Rick's behalf. Let's see whether the MS IE team is willing to fix it.

Want to buzz about it?

An update of 10/19/2010: I got a nice message from Microsoft Connect Team saying the following:

“Greetings from Microsoft Connect!
This notification was generated for feedback item: IE automatically creates matching global JavaScript objects for all DOM elements on the page based on their id. which you submitted at the Microsoft Connect site.

Thank you for your feedback.
This issue was resolved in Internet Explorer 9 Platform Preview Build 3 released on 6/23/2010. Please verify the change and file a new feedback (or reactivate the existing one) if the problem persists.

The fix prevents the error message. Note, IE still allows the DOM element to exist as a global javascript object. (Bold-italic mine, V.K.)

Best regards,
The Internet Explorer Team
Thank you for using Microsoft Connect!”

Well, I'm not sure that keeping DOM elements in the global JavaScript namespace is a good idea (other browsers don't do it), but at least they found a workaround.
It was the first time in my life that Microsoft communicated with me directly; that's nice.

Friday, May 07, 2010

What Facebook’s recent bug tells us

I was listening to the latest TWiG when I heard that Facebook recently had a security breach. They explained that Facebook has a feature that lets you see your profile through the eyes of a given friend ("With Preview My Profile, users can view how their profile appears to any given Facebook friend"). And it turned out that the mimicked friend's login was almost real, allowing you to see the live chats and friend requests of the friend in question.

OK, security breaches happen. They are discovered and then fixed. But what does this particular case tell us about the Facebook platform? I suspect it tells us a lot of negative and alarming things. Let me explain.

We have a similar feature in our Corporate Intranet, written in a mix of classic ASP and ASP.NET. We call it Super User (SU) login. SU allows selected administrators to log in to the Intranet as a different employee in order to debug issues. In fact, an SU login matches a regular login by 99%. The one percent of difference is a special set of private user data which is visible only to the user himself, or to the user and a strictly defined group of people, for example HR. If an admin who uses SU is not in the HR group, the employee's private data is not visible through the SU login.

Several years ago, security breaches like the one that happened to Facebook were pretty common for us too. But by now we have virtually eliminated the possibility.
We use a regular ASP feature, a Session variable, to keep the current user's identity in memory. When an SU login happens, the user's Session variable is actually reset to the identity of the user being SUed, which fully mimics the other user's experience. During an SU login, a second Session variable is set, keeping the original user's identity and indicating that we are in SU mode. When it comes to showing restricted private user data, the code checks whether the user has permission to see it and, if we are in SU mode, whether the actual (original) user has the right to see it as well.
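A minimal sketch of this scheme (a plain dictionary stands in for the ASP Session object; the key names and the hasRight callback are hypothetical illustrations, not our actual Intranet code):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the SU (Super User) login scheme described above.
// An IDictionary stands in for the ASP Session; key names are hypothetical.
public static class SuLogin
{
    // The admin assumes the target user's identity; the original identity
    // is kept in a second "Session variable" to mark SU mode.
    public static void Enter(IDictionary<string, object> session, string targetUserId)
    {
        session["OriginalUserId"] = session["CurrentUserId"];
        session["CurrentUserId"] = targetUserId;
    }

    public static bool IsSuMode(IDictionary<string, object> session)
    {
        return session.ContainsKey("OriginalUserId");
    }

    // Private data is shown only if the effective user may see it AND,
    // in SU mode, the original user may see it as well.
    public static bool CanSeePrivateData(IDictionary<string, object> session,
                                         Func<string, bool> hasRight)
    {
        if (!hasRight((string)session["CurrentUserId"])) return false;
        return !IsSuMode(session) || hasRight((string)session["OriginalUserId"]);
    }
}
```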

The real question is how the system determines the permissions (rights) of a particular user. Normally, Rights are attached either to the user himself or to the Roles that the user shares. Initially, our Intranet used a bunch of 'If' statements in ASP code. If a user is in HR, this is allowed on this page. If a user is a DBA, he can do this and that. If you have development experience, it should be absolutely clear to you that such a system is fragile and inconsistent. To break it, it's enough to add a new page and forget some 'If' statement, or to modify the Rights of a particular Role and forget to modify the 'Ifs' on one page. It's even easier to add a new page and forget that you need to check not only the rights of the current user but also, in the case of an SU login, the rights of the original user.
So a system like the one described above is amateurish, fragile, hard to maintain, and unprofessional.

Now you probably understand what I think about Facebook. As a web developer with 15 years of experience, I have a strong feeling that Facebook, which has a bunch of developers of different levels and grew out of a small system written in PHP for college students, suffers from the same inconsistent code and uses hard-coded 'If' statements to determine users' rights and to mimic another user's experience in "Preview My Profile". This presumably amateurish, inconsistent system, in combination with Facebook's Über Alles syndrome, looks especially dangerous and incapable of protecting users' privacy.

Finally, a system managing user permissions is relational in its essence. Basically, there should be a tRight table, a tRole table, and a tRightOfRole link table (a many-to-many relationship). It's a bit more complex if Rights are assignable not only to Roles but to individual Users as well. It gets some additional complexity if Role-Right combinations differ between pages or sections of your system (web site). We introduced the notion of a Scope: both Roles and Rights are defined either globally or for a particular Scope within the system, and a Role-Right combination is assigned on a Scope too. The resulting relational system is extremely powerful and flexible, much more flexible than any system based on Active Directory groups, like MS SharePoint (we tried to use MOSS 2007 and found it not flexible enough to accommodate our business rules).
And yes, we actually wrote such a relational system, called Roles Rights Management (RRM). Besides the relational structure I described above, it uses some clever techniques and HttpModules which allowed us to automate its usage. It is no longer the responsibility of an individual programmer to check a user's permissions before allowing a page to be viewed or to filter data. In most cases, Security Trimming happens automatically.
That's what I think Facebook's programmers failed to implement.
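The relational check itself is simple. Here is a rough sketch; in-memory lists stand in for the tRole/tRight/tRightOfRole tables, a null Scope represents a global assignment, and all the names are illustrative rather than our actual RRM code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Rough sketch of the RRM check. In-memory lists stand in for the
// tRole/tRight/tRightOfRole tables; a null Scope means "global".
public class RightOfRole          // the many-to-many link table
{
    public int RoleId;
    public int RightId;
    public string Scope;          // null = assigned globally
}

public class Rrm
{
    public List<RightOfRole> RightOfRoleTable = new List<RightOfRole>();

    // Does any of the user's roles grant the right, either globally
    // or in the given scope?
    public bool HasRight(IEnumerable<int> userRoleIds, int rightId, string scope)
    {
        return RightOfRoleTable.Any(rr =>
            rr.RightId == rightId &&
            userRoleIds.Contains(rr.RoleId) &&
            (rr.Scope == null || rr.Scope == scope));
    }
}
```

Because the check is one query over link tables, an HttpModule can run it for every request; no per-page 'If' statements are needed.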

Thursday, March 18, 2010

More Details on Microsoft's plans to ruin jQuery

Visual Studio Magazine: "Microsoft is working in a number of directions, including databinding, the script loader and contributing to development of templating functionality as part of the jQuery core."
Oh no, Microsoft! Please, please don't ruin jQuery! Don't make it as insanely overcomplicated as your own AJAX.NET. Don't mix client-side JavaScript code together with server-side "loaders". Don't you understand that a RESTless Web client is not the same as a .NET server or a Win desktop?!

Tuesday, February 23, 2010

Squeryl — Introduction

Squeryl — Introduction

"Squeryl is a strongly typed DSL (domain specific language) for SQL databases in which table rows are manipulated as Scala objects via an SQL like language"

I hate the idea. Aren't we already fed up with MS datasets, which break the n-Tier application structure by bringing excessive, over-complicated, database-only properties into the domain layer?
To me it is clear that there are only three legitimate ways of creating decoupled n-Tier applications:
1) Use hand-made Data Access Layer objects;
2) Use ORM tools like [N]Hibernate;
3) Use object-oriented databases like db4o.

Wednesday, January 27, 2010

‘The WebForms Rant’ by Karl Seguin


"ASP.NET WebForms is an ugly and messy framework that complicates an otherwise simple thing. ViewState, codebehind, postback, page lifecycle and databinding are things that you have to constantly program against."
"A framework that accepts that HTTP is stateless will always be simpler, cleaner and more powerful than a framework that doesn't."

Wow! Well done :)
I agree completely. Karl is probably the first developer whom I highly respect (as a virtual teacher too) and who states the matter straight, without the usual politeness.

One more important argument against WebForms is that it leads to the hard-to-avoid "Web page was expired" problem, due to the inability to implement the Post-Redirect-Get (PRG) pattern.
See http://pro-thoughts.blogspot.com/2009/06/classic-aspnet-improper-abstractions.html

I also really like the classic I Spose I’ll Just Say It: You Should Learn MVC article by Rob Conery.
Finally, the recent MVC or WebForms: It's more about client side vs server side article by Ian Cooper is good too.

Don’t forget to read comments under all those articles!

Thursday, December 17, 2009

My new Android blog

I decided to start a separate blog fully dedicated to Android usage and development. You're very welcome to visit Wonderful Android: Using and Developing.

Tuesday, November 03, 2009

How to install missing LaserJet 4MPlus driver on Windows 7

 

I found the following advice on an HP forum:

The Laserjet 4M drivers (and many others)  are actually available through the Windows Update process.  Try the following:  go to the Printers folder, select Add a Printer, select the appropriate port, then when the "Chose a Printer Model" dialog comes up select "Windows Update". 

Unfortunately, there was one strange Windows feature, or probably a bug... as always happens with Windows. It was the first bug I discovered in Windows 7, which is otherwise much better than its predecessors.

When I tried to install my LaserJet 4MPlus network printer on Win 7 64-bit by clicking Add Printer, as described above, there was NO Windows Update button visible. Finally, I selected an incorrect printer model and installed that driver. After doing so, I right-clicked the printer, chose Printer Properties => Advanced => New Driver and... what a miracle: the Windows Update button suddenly appeared!

If you ask me how I found this workaround, I would answer: long practice working with buggy Windows software.

Tuesday, June 02, 2009

Classic ASP.NET: improper abstractions and ruined REST

http://herdingcode.com/?p=183

1. No one ever, anywhere, has answered the simple question: how do you avoid POST operations in classic ASP.NET when GET operations should be used instead, according to common sense and REST?
POST is for changing data; GET is for reading. No one has found any decent way to avoid the "Web page was expired" messages resulting from this improper use of POST. There is no way to do PRG.

To me, this fact alone makes “classic” ASP.NET’s Web Forms approach inappropriate.
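For contrast, PRG is natural in ASP.NET MVC: the POST action changes data and answers with a redirect, so refreshing the resulting page re-issues a harmless GET. A sketch follows; to keep it self-contained, plain strings model the HTTP responses instead of real ActionResults, and all the names are hypothetical:

```csharp
using System.Collections.Generic;

// Post-Redirect-Get sketch. Plain strings model the HTTP responses; in real
// ASP.NET MVC the Create action would return RedirectToAction(...).
public class OrdersController
{
    private readonly Dictionary<int, string> _orders = new Dictionary<int, string>();
    private int _nextId = 1;

    // POST: changes data, then answers with a redirect
    public string Create(string orderText)
    {
        int id = _nextId++;
        _orders[id] = orderText;
        return "Redirect: /orders/details/" + id;
    }

    // GET: only reads data, so the browser can refresh it safely
    public string Details(int id)
    {
        return "View: " + _orders[id];
    }
}
```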

2. Yes, you can find a workaround for anything, but most of the time you end up battling the framework rather than using it. And maybe even worse: you're battling MS tutorials and books which teach you that ASP.NET-in-24-hours, dataset-driven, no-custom-code, monolithic style of development.

3. It's pretty obvious to me that the event-driven model, moved from Windows desktop development to the Web, does not correspond to the request/response nature of the Web and makes programming much more complicated than even plain old ASP. I'd be pretty happy to have plain ASP + C# 3.5 + IntelliSense, and believe me, my code would be both more user-friendly and better structured than the monstrous code-behind of those ugly master/detail ASP.NET pages.

4. I have nothing against HttpModules, Handlers, etc. They are powerful tools.
But I personally hate Web Forms so much that I think all the talk about Web Forms being better in some situations and MVC in others is just caution: people understand that MVC is better but are afraid to say so.

Finally, everything is already said in this article and all the comments below.

Monday, June 01, 2009

“Mega” menu? Is it like the almost unusable Office 2007 menu?

Usability guru Jakob Nielsen thinks that “mega” menus are good. But is it something similar to Office 2007’s “ribbon” menu?

The Office 2007 menu is awful. It's a major step back from the standard and simple Office 2003 menu. It's really hard to find what you need; you're never sure which main menu bar link to click. My daughter told me to uninstall Office 2007 from her machine because she couldn't find anything. If a "mega" menu is something like that, I strongly disagree with Jakob Nielsen. The fewer pictures and other unnecessary graphical elements in a menu, the better.

Wednesday, May 06, 2009

Rob Conery: I Suppose I’ll Just Say It: You Should Learn MVC

http://blog.wekeroad.com/blog/i-spose-ill-just-say-it-you-should-learn-mvc/

The comments under this article are as interesting as the article itself.
I only hope I’ll be allowed to write my next ASP.NET project in ASP.NET MVC. I already bought Rob’s book.

Wednesday, February 11, 2009

Write deep Clone(). Forget about ICloneable.

I was chatting with my co-worker about implementing a Clone() method for our hierarchy of classes. It's not easy to implement interfaces (think of ICloneable) for class hierarchies. Some tips can be found in the Implementing Interfaces article at C# Online.NET, in the Implementing Interfaces: ICloneable and IComparable article, and in the Advantages of ICloneable? discussion.
As Zach said, it's easier to just implement a virtual Clone() method in a base class and override it in derived classes as necessary.

I started to look for information about ICloneable and found the following quite unequivocal guidelines:

Caution. Avoid implementing ICloneable. As alarming as that sounds, Microsoft is actually making this recommendation. The problem stems from the fact that the contract doesn’t specify whether the copy should be deep or shallow. In fact, as noted in Krzysztof Cwalina and Brad Abrams’ Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (Boston, MA: Addison-Wesley Professional, 2005), Cwalina searched the entire code base of the .NET Framework and couldn’t find any code that uses ICloneable. Had the Framework designers and developers been using this interface, they probably would have stumbled across the omission in the ICloneable specification and fixed it.
However, this recommendation is not to say that you shouldn’t implement a Clone method if you need one. If your class needs a clone method, you can still implement one on the public contract of the class without actually implementing ICloneable.

Accelerated C# 2008

Because the contract of ICloneable does not specify the type of clone implementation required to satisfy the contract, different classes have different implementations of the interface. Consumers cannot rely on ICloneable to let them know whether an object is deep-copied or not. Therefore, we recommend that ICloneable not be implemented.
The moral of the story is that you should never ship an interface if you don't have both implementations and consumers of the interface. In the case of ICloneable, we did not have consumers when we shipped it. I searched the Framework sources and could not find even one place where we take ICloneable as a parameter.
x DO NOT implement ICloneable.
x DO NOT use ICloneable in public APIs.
x CONSIDER defining the Clone method on types that need a cloning mechanism. Ensure that the documentation clearly states whether it is a deep- or shallow-copy.

Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries
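Following these guidelines, here is a minimal sketch of the virtual Clone() approach Zach suggested: no ICloneable, and the method is explicitly a deep copy. The class names are hypothetical:

```csharp
using System.Collections.Generic;

// A virtual Clone() in a base class, overridden in a derived class, without
// implementing ICloneable. The copy is explicitly deep: children are cloned too.
public class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();

    public virtual Node Clone()
    {
        var copy = new Node { Name = Name };
        foreach (var child in Children)
            copy.Children.Add(child.Clone());   // deep copy
        return copy;
    }
}

public class TaggedNode : Node
{
    public string Tag;

    public override Node Clone()
    {
        var copy = new TaggedNode { Name = Name, Tag = Tag };
        foreach (var child in Children)
            copy.Children.Add(child.Clone());
        return copy;
    }
}
```

Because Clone() is on the public contract of the class rather than behind ICloneable, callers know exactly what kind of copy they get, and the documentation can say "deep" in so many words.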

Tuesday, December 30, 2008

Call a method only if a supplied argument is not Null

Suppose you have a C# method which only works correctly if its parameter value is not null; otherwise it either throws an exception or returns a wrong value. Sometimes you cannot guarantee that a parameter value is not null, because it was passed from outside, was read from the DB, etc. Instead of writing again and again custom code which checks a parameter value and only calls a method if it is not null, it may be more convenient to reuse a common helper method. The overloaded CallIfArgumentNotNull() methods shown below are such helpers. I was partly inspired by reading this article by Jon Skeet (lazy evaluation).

    // If a first argument of type int? is not Null, a supplied method (second argument)
    // is called with a first argument passed in and its return value is returned.
    // If a first argument is Null, a default(T) is returned
    // (null for reference types, 0 for int, etc.)
    public static T CallIfArgumentNotNull<T>(int? arg, Func<int, T> func) {
      return (arg != null) ? func((int)arg) : default(T);
    }
 
    // If a first argument of type int? is not Null, a supplied method (second argument)
    // is called with a first argument passed in and its return value is returned.
    // If a first argument is Null, a third argument value is returned.
    public static T CallIfArgumentNotNull<T>(int? arg, Func<int, T> func, T defResult) {
      return (arg != null) ? func((int)arg) : defResult;
    }
 
    // If a first argument of type A? is not Null, a supplied method (second argument)
    // is called with a first argument passed in and its return value is returned.
    // If a first argument is null, a third argument value is returned.
    public static T CallIfArgumentNotNull<A, T>(Nullable<A> arg, Func<A, T> func, T defResult)
      where A : struct {
      return (arg != null) ? func((A)(arg)) : defResult;
    }
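For example, using the helpers above (the lookup lambda is a hypothetical stand-in for, say, a database call):

```csharp
int? idFromDb = 42;     // e.g. read from the DB
int? missingId = null;

Func<int, string> lookup = id => "User#" + id;   // hypothetical lookup

string found    = CallIfArgumentNotNull(idFromDb, lookup);          // "User#42"
string notFound = CallIfArgumentNotNull(missingId, lookup);         // null (default(string))
string fallback = CallIfArgumentNotNull(missingId, lookup, "n/a");  // "n/a"
```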

Thursday, November 20, 2008

An easy way to override Chrome bookmarks

It is quite easy to manually override Chrome's "Other Bookmarks" with your Firefox bookmarks:
1) In Chrome, click "Other Bookmarks" and, if you have any, right-click each bookmark and delete it.
2) Use "Import Bookmarks and Settings", switch to importing from Firefox, clear all checkboxes except the 'Bookmarks' one, and hit 'Import'.
3) This will keep all imported bookmarks in one 'folder', which is actually good, because next time you decide to override your Chrome bookmarks, you will only need to delete that one "Other Bookmarks" 'folder' by right-clicking on it.

(On a WinXP machine, Chrome keeps bookmarks in the C:\Documents and Settings\-your-name-\Local Settings\Application Data\Google\Chrome\User Data\Default\Bookmarks file. This is a text file in JSON format.)

Thursday, November 13, 2008

Configuring MS DTC on WinXP, Server 2003, and Server 2008 machines

Our N-Tier ASP.NET (C# 2008 / SQL Server) application uses MS DTC transactions for database integrity. This allows a transaction to span several BL or DL method calls without sharing the same database connection.
Usage is simple:

using System.Transactions; 

using (TransactionScope myTransactionScope = new TransactionScope()) {
  // Do any database-related work you need: call methods, open and close connections, update multiple tables.
  // Database integrity is preserved even if one of the methods that updates the database fails.

  myTransactionScope.Complete();
}
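One detail worth noting about the snippet above: the transaction commits only if Complete() is reached. If an exception is thrown first, Complete() is skipped, and disposing the scope rolls everything back. A sketch (UpdateOrderHeader/UpdateOrderDetails are hypothetical DL methods, each of which would open its own connection and auto-enlist in the ambient transaction):

```csharp
using System;
using System.Transactions;

// If UpdateOrderDetails() throws, Complete() is never called, and the scope's
// Dispose() rolls back the header update as well; the database stays consistent.
public static class TransferExample
{
    public static void SaveOrder()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            UpdateOrderHeader();    // hypothetical DL call, own connection
            UpdateOrderDetails();   // hypothetical DL call, own connection
            scope.Complete();       // commit point
        }
    }

    static void UpdateOrderHeader()  { /* stub for a real data-layer update */ }
    static void UpdateOrderDetails() { /* stub for a real data-layer update */ }
}
```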

To support MS DTC, all servers which run our application (including developers' machines) and the servers which host the databases had to be properly configured. (Configuring developers' machines allows a local copy of the application to work properly with a remote database.)

1) To configure DTC on a Windows XP or Windows Server 2003 machine, follow these instructions to open the Component Services window and configure DTC.
Make sure that the "Use local coordinator" check box is checked on the MSDTC tab of the computer properties window.
Make sure that "Network DTC access" is checked in the MSDTC => Security Configuration window, that both "Allow Inbound" and "Allow Outbound" are checked, and that the DTC Logon Account is set to "NT AUTHORITY\NetworkService".
2) On a Windows Server 2008 machine it's pretty much the same, except that the Component Services window has a My Computer -> Distributed Transaction Coordinator -> Local DTC node, which you'll need to right-click and choose Properties.
3) After configuring DTC, restart your machine.
4) Open Control Panel -> Administrative Tools -> Services and make sure that the DTC service is set to start automatically and is started.




Reference on using DTC with N-Tier application: http://imar.spaanjaars.com/QuickDocId.aspx?quickdoc=419