Wednesday, March 26, 2008

How to fill drop-down box with Enum values

A code snippet from http://imar.spaanjaars.com/QuickDocId.aspx?quickdoc=420
(see this code as well)

Suppose you have an enum PersonType and want to populate the lstPersonType drop-down box with its values.
It is very easy with reflection:

public enum PersonType
{
    Friend = 0,
    Family = 1,
    Colleague = 2,
    NotSet = -1
}

private void BindTypeDropDown()
{
    // Requires: using System.Reflection;
    FieldInfo[] myEnumFields = typeof(PersonType).GetFields();
    foreach (FieldInfo myField in myEnumFields)
    {
        // Skip the compiler-generated value__ field and the NotSet sentinel.
        if (!myField.IsSpecialName && myField.Name.ToLower() != "notset")
        {
            // The argument to GetValue is ignored for static fields,
            // so pass null rather than an arbitrary 0.
            int myValue = (int)myField.GetValue(null);
            lstPersonType.Items.Add(new ListItem(myField.Name, myValue.ToString()));
        }
    }
}
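Since .NET 2.0, the same binding can also be done without touching FieldInfo at all: Enum.GetValues enumerates the defined values directly. Here is a sketch of that alternative; I extract the text/value pairs into a helper class (PersonTypeBinding is my own name, not from the original snippet) so it does not depend on a page, and each pair would become a ListItem(text, value) exactly as above:

```csharp
using System;
using System.Collections.Generic;

public enum PersonType { Friend = 0, Family = 1, Colleague = 2, NotSet = -1 }

public static class PersonTypeBinding
{
    // Builds the (text, value) pairs for the drop-down, skipping the
    // NotSet sentinel. Enum.GetValues sorts by unsigned magnitude, so
    // the negative NotSet value comes last and is filtered out here.
    public static List<KeyValuePair<string, string>> GetListItems()
    {
        List<KeyValuePair<string, string>> items =
            new List<KeyValuePair<string, string>>();
        foreach (PersonType pt in Enum.GetValues(typeof(PersonType)))
        {
            if (pt != PersonType.NotSet)
                items.Add(new KeyValuePair<string, string>(
                    pt.ToString(), ((int)pt).ToString()));
        }
        return items;
    }
}
```

In the page you would then loop over GetListItems() and add a ListItem per pair, which keeps the reflection details out of the code-behind.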

Why I don't want to use LINQ to SQL

Here's a reason not to use LINQ to SQL for data mapping. I'm pretty sure the best way to start development is to use the manual data-mapping approach shown in Imar's article. You can always switch to some O/RM tool later, but if you start with LINQ to SQL you'll be stuck with it. To me, using LINQ to SQL is pretty much the same as using DataSets: you lose the ability to design your business-layer objects so that they reflect the business logic, rather than being dictated by the database structure. LINQ to Objects is quite a different story: it can be a very convenient tool for querying your in-memory collections.

Dan Miser - Things I Don't Like About LINQ to SQL

Monday, March 24, 2008

More about Building Layered Web Applications using Imar Spaanjaars' approach

"Building Layered Web Applications with Microsoft ASP.NET 2.0" article published by Imar Spaanjaars differs from other articles on this topic.


  • It follows good Domain-Driven Design instead of relying on DataSets (see my previous notes)

  • It presents clean, ready-to-use code


Imar's article is intended to teach people by a good example. Obviously, he could not put everything necessary for a big application into one article. There are very interesting discussions beneath each of the three parts of his article. I commented there as well and posted too much material there; my fault. So let me tell you here what I think about Imar's article and what I am going to add in my implementation of his approach.

1. How to handle transactions in the Business Layer (Bll) and let DAL methods share database connections without coupling the Bll to features of the database and OS. (See part 2 of Imar's article.)


1) I downloaded Imar's code and ran it on a Windows Server 2008 / SQL Server 2008 machine. It works fine. I could not find the suggested settings for the Microsoft Distributed Transaction Coordinator on the Windows Server 2008 machine, though. Does anyone know how to detect whether it is running? Does the fact that Imar's application, which uses the TransactionScope object, runs without errors mean that MSDTC is running? Am I supposed to get errors if MSDTC is not running, or would it fail silently?


2) As Imar mentioned in his answer to Math Random's comment, in case MSDTC is not available we would need to use SqlTransaction, and therefore all DAL methods involved in saving related data would need to share the same Connection object.

We could pass a connection object between DAL method calls inside the Bll's ContactPersonManager.Save() method. What I don't like here is that we make the internal structure of the Bll method ContactPersonManager.Save() dependent on external circumstances (whether MSDTC is available and whether we need to pass a SQL Server connection around). Ideally, a Bll method shouldn't care about such things unrelated to business logic; it should only care about the integrity of its objects and about those objects being able to save/retrieve themselves consistently. It would be nice to decouple the Bll method from such external concerns.


I guess we need to add a level of indirection here. I would add one more [static?] DAL class called ConnectionManager. This class would be responsible for providing connections to individual DAL objects (their methods) and for managing transactions. If necessary, Bll methods would call the ConnectionManager.BeginTransaction() method to start a transaction. Internally, ConnectionManager would either use MSDTC via TransactionScope if it is available, or otherwise open a SQL Server connection and start a SqlTransaction on it. (Bll methods wouldn't care about those details.)

Then, Bll methods would call individual DAL methods the same way ContactPersonManager.Save() calls AddressDB.Save(), EmailAddressDB.Save(), and PhoneNumberDB.Save() right now. There would be no need to pass a connection object around: each DAL method would obtain a connection from ConnectionManager using the ConnectionManager.GetConnection() method. It could be the same shared connection, or it could be a new connection every time. For example, if we are using SqlTransaction, ConnectionManager would provide DAL methods with the same connection that was opened during the ConnectionManager.BeginTransaction() call.


DAL methods would not call myConnection.Close() directly, as they do in Imar's code. Instead, they would call ConnectionManager.CloseConnection(myConnection). The ConnectionManager would then either close the connection or keep it open, depending on whether it is still needed (for a pending SqlTransaction).
We could also wrap SqlConnection in our own CustomConnection class, overriding the Close() and Dispose() methods in such a way that they ask the ConnectionManager whether the connection should really be closed. That would allow us to keep working with "using" blocks.
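To make the idea concrete, here is a rough sketch of what I have in mind. All names here (ConnectionManager, GetConnection, CloseConnection) are my own invention, not part of Imar's code; the TransactionScope/MSDTC branch and error handling are left out for brevity:

```csharp
using System.Data.SqlClient;

public static class ConnectionManager
{
    // One shared connection/transaction per thread while a transaction is open.
    [ThreadStatic] private static SqlConnection _shared;
    [ThreadStatic] private static SqlTransaction _transaction;

    // DAL methods attach this to their SqlCommand when it is not null.
    public static SqlTransaction CurrentTransaction
    {
        get { return _transaction; }
    }

    // Called by Bll methods such as ContactPersonManager.Save().
    public static void BeginTransaction(string connectionString)
    {
        _shared = new SqlConnection(connectionString);
        _shared.Open();
        _transaction = _shared.BeginTransaction();
    }

    // Called by DAL methods instead of "new SqlConnection(...)": during a
    // transaction every DAL method receives the same shared connection.
    public static SqlConnection GetConnection(string connectionString)
    {
        if (_shared != null)
            return _shared;

        SqlConnection conn = new SqlConnection(connectionString);
        conn.Open();
        return conn;
    }

    // Called by DAL methods instead of conn.Close(): the shared connection
    // must stay open until the pending transaction completes.
    public static void CloseConnection(SqlConnection conn)
    {
        if (conn != _shared)
            conn.Close();
    }

    public static void Commit()
    {
        _transaction.Commit();
        CleanUp();
    }

    public static void Rollback()
    {
        _transaction.Rollback();
        CleanUp();
    }

    private static void CleanUp()
    {
        _shared.Close();
        _shared = null;
        _transaction = null;
    }
}
```

A DAL method would then begin with ConnectionManager.GetConnection(...) and end with ConnectionManager.CloseConnection(...), never noticing whether it is participating in a transaction or not.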


It looks like such a design would allow us to decouple the Bll layer from the specifics of a particular server (such as the availability of MSDTC transactions, or the use of SQL Server or Oracle transactions instead).
What do you think? Am I reinventing the wheel here? I have a feeling that such a solution already exists, but I don't know of it because of my ignorance.

Wednesday, March 12, 2008

Preventing Duplicate Record Insertion on Page Refresh

A colleague of mine just pointed me to this article. Personally, I never POSTed to the same page in my classic ASP and PHP applications. Never! I always POSTed to a separate page containing no HTML, only ASP/DB processing code, and then redirected back or to an appropriate ASP/HTML page. In my view, a Web application should clearly separate client-side processing from server-side processing. I never liked Microsoft's attempts to mimic desktop-style programming with ASP.NET pages and Web Controls that pretend to be both server- and client-side. Obviously, it is not only my belief: even Anders Hejlsberg said something similar in his conversation with Bruce Eckel. Thus, the .NET team has started looking at the MVC model too.

However, there are thousands of loyal ASP.NET developers who have used the classic ASP.NET approach for years. So I bet there is a common solution approved by Microsoft as the standard one; you simply cannot develop in ASP.NET without resolving this issue. Could some of you experienced ASP.NET developers point me (a newbie in the .NET world) to a standard solution?
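For what it's worth, the answer I keep running into is the Post/Redirect/Get pattern: do the database work in the postback handler, then redirect, so the last request the browser remembers is a harmless GET. A minimal Web Forms sketch (the control name, the InsertRecord() helper, and the Confirmation.aspx page are all made up for illustration):

```csharp
// Code-behind sketch: after the insert succeeds we redirect, so pressing
// F5 afterwards re-issues a GET instead of re-posting the form.
protected void btnSave_Click(object sender, EventArgs e)
{
    InsertRecord();  // hypothetical method that writes the new row

    // Response.Redirect ends this response with a 302; refreshing
    // Confirmation.aspx cannot re-run the insert.
    Response.Redirect("Confirmation.aspx");
}
```

This is essentially the POST-to-another-page discipline from classic ASP, expressed inside the ASP.NET postback model.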

Saturday, March 08, 2008

Foundations of Programming and The Code Wiki Book by Karl Seguin

Karl Seguin allowed me to copy and display PDF copies of his articles on my site. I find this more convenient than downloading the original ZIP versions (the zipped PDF of Foundations of Programming was created by Tim Barcz).
According to Karl, you are allowed to copy, distribute, and display the articles, provided that you always attribute them to him, do not use them for commercial purposes, and do not alter them in any way.
Unzipped PDF documents can be read online. These articles are very interesting. Look at Karl's The Code Wiki site and read his blog.

For a better understanding of what N-Tier and Domain-Driven Design are, I would recommend reading two chapters of The Code Wiki Book first, then the Foundations of Programming article, and finally the Building Layered Web Applications with Microsoft ASP.NET 2.0 article by Imar Spaanjaars as a good, simple, practical example of the N-Tier DDD approach.

Friday, March 07, 2008

Identity starts from 0 instead of 1. SQL Server bug?

We are experiencing a strange problem. The identity field of a table is set to (1,1) by the generating script. Every time we drop the whole database, re-create it by running the script in SQL Server 2008 Management Studio, and then insert a record into that table programmatically (from C# code), SQL Server sets the identity value of that first record to 0 instead of 1!
Then we also programmatically delete all records from that table and execute
DBCC CHECKIDENT('" + targetTableName + "', RESEED, 0)

All subsequent runs of the same C# function insert records correctly, with the identity field starting from 1, as expected.

So, again, the identity field does not behave correctly if the database was just created by the script. After a record has been inserted and deleted, everything works fine.

Is this a SQL Server 2008 bug? Any ideas about a possible workaround?

Thursday, March 06, 2008

Scala and F#

I heard what Bruce Eckel, Ian Cooper, and David Pollak said about Scala and decided to give it a try. I downloaded Scala and bought a pre-print PDF edition of Programming in Scala.
It is cool! It is extremely interesting. Martin Odersky's book is aimed at beginners and is outstanding. The only thing that bothers me is the lack of a Scala IDE.

Since Scala is a functional language related to the ML family and running on the Java VM, and F# is a functional language related to OCaml and running on the .NET platform, I decided to try learning both languages simultaneously. I installed the F# addition to Visual Studio and bought Don Syme's book. As with everything born at Microsoft, F# already has Visual Studio IDE support. On the other hand, "Expert F#" is harder to read, and F#'s syntax looks stranger to an OOP programmer than Scala's.
But it's interesting too! What is important, C# has also gained some functional features, so it is all inter-related.

Named constants and Enums: why and how to reconcile them against database tables?

Enums are very convenient because they make code developer-friendly. If a method takes an enum parameter, IntelliSense helps the developer choose a correct parameter value (and the compiler catches an incorrect integer being passed). On the other hand, it is often a good idea to put the corresponding values into database lookup tables: this allows primary/foreign keys to ensure data integrity (suppose your application is not the only way to read or modify the data, so you want to be sure the data is correct). The question is: how do you synchronize enums with the database?

Because VBScript does not have named enums, I used named string constants instead. One technique I employed in my Roles Rights Management (RRM) system was to auto-generate VBScript constants by reading the lookup table and using the eval() function during Application startup. It is much more developer-friendly to allow calls like

RRManager.HasRight (cnstCanSeePage, ControlNumber)

than to force a developer to use numbers like

RRManager.HasRight (1, ControlNumber)

Again, in C# we would use enums instead of string constants. The question is:
* Is it better to auto-generate those VBScript string constants (or to auto-generate C# enums; I hope that is possible) from database values, or is it better to reconcile hard-coded string constants / enums against database values on Application startup? What are the ramifications of each approach? *
If I remember correctly, Imar Spaanjaars also touched upon this issue in a discussion beneath his article.

Auto-generation of enums might be dangerous. Suppose someone deletes a row from the database, and as a result enumValueOne is no longer generated. Then, if another programmer's code calls AnObject.DoSomething(enumValueOne), the application would probably crash.
On the other hand, if you do not auto-generate but rather reconcile enums against the database on Application startup, and someone deletes a database record, your application will immediately tell you about the problem and simply will not start. It is safer. But this approach is less convenient if the list of available enum values changes frequently: you now keep essentially the same data in two places, so you have to modify code every time you add or delete a record in the database. Not good!

I think the answer is:
- If the list of possible values is fixed and not going to change frequently, do not auto-generate enumerations. Instead, reconcile them against database values on Application startup.
- If the list of possible enum values changes frequently, use auto-generation from the database.
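Here is a minimal sketch of the reconcile-on-startup option. The ContactRole enum, the (Id, Name) lookup-table layout, and all names below are hypothetical, invented for illustration: load the lookup rows into a dictionary and fail fast if they disagree with the enum.

```csharp
using System;
using System.Collections.Generic;

// A hypothetical enum mirrored by a lookup table with (Id, Name) columns.
public enum ContactRole { Owner = 1, Editor = 2, Reader = 3 }

public static class EnumReconciler
{
    // Compares the hard-coded enum against rows loaded from the lookup
    // table and throws if they disagree, so a missing or renamed database
    // row stops the application at startup instead of corrupting data.
    public static void Reconcile<TEnum>(IDictionary<int, string> lookupRows)
        where TEnum : struct
    {
        Array values = Enum.GetValues(typeof(TEnum));
        foreach (TEnum value in values)
        {
            int id = Convert.ToInt32(value);
            string name;
            if (!lookupRows.TryGetValue(id, out name) || name != value.ToString())
                throw new InvalidOperationException(
                    "Lookup table does not match enum " + typeof(TEnum).Name
                    + " for value " + value);
        }
        // Extra database rows mean the enum is missing a member.
        if (lookupRows.Count != values.Length)
            throw new InvalidOperationException(
                "Lookup table contains rows missing from enum "
                + typeof(TEnum).Name);
    }
}
```

On Application startup you would call EnumReconciler.Reconcile&lt;ContactRole&gt;(rowsLoadedFromDatabase) and let any exception abort the start, which is exactly the "safer but less convenient" behavior described above.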

What do you think?