Architecture Dilemmas: O/R Mapping Why/When (Part III)
Posted 2006. 8. 25. 20:02

This is the last part of a three-part series on O/R mapping. Part I provided some background on the subject. Part II described the benefits of O/R mapping and analyzed some of the costs of using it. Part III (this post) provides my opinion and some guidance.
O/R mapping is not a panacea, as the previous post showed. Using O/R mapping incurs several costs, some of which are hidden at first glance. Nevertheless, O/R mapping strikes a good balance between the need to bridge the gap between an object-oriented model and a relational one and the time and effort needed to build that bridge. O/R mapping is especially useful if you are also following Domain-Driven Design principles, which call for a rich and meaningful (domain) object model.
Several years ago (good) O/R mappers were hard to find. The first real O/R mapper I used was TopLink (now Oracle TopLink)--indeed, the Java world seems to be leading the adoption of O/R mapping into mainstream programming. Now there's Hibernate (which is rather popular), JDO (which I believe was the first O/R mapping standard), and the Java Persistence API as part of EJB 3.0.
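To make the mapping idea concrete, here is a minimal sketch of what a mapped class looks like with JPA annotations (the class, table, and column names are hypothetical):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// A plain domain object; the annotations tell the O/R mapper
// how to bridge it to a relational table.
@Entity
@Table(name = "CUSTOMERS")
public class Customer {

    @Id
    @Column(name = "CUSTOMER_ID")
    private Long id;

    @Column(name = "FULL_NAME")
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

The framework takes care of generating the SQL needed to load and save Customer instances, so the class itself stays focused on the domain.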
On the .NET side there's a wide range of options, starting with NHibernate (the .NET port of Hibernate) and including many commercial and open-source solutions. Microsoft is following this trend and will be introducing two new O/R mapping frameworks: LINQ to SQL (formerly known as "DLinq") and LINQ to Entities (built on top of the ADO.NET Entity Framework).
When a solution consists mostly of data-entry screens or simple CRUD operations, a viable option is to use the ActiveRecord pattern. ActiveRecord is basically a simplistic O/R mapping (actually it can be considered R/O mapping, since it is the table row that is mapped to an object). While ActiveRecord yields an anemic domain model, it can still be useful in the scenario mentioned above. An added benefit of ActiveRecord mapping is that it is simple to generate automatically (e.g. what Rails or MonoRail do); a hand-rolled sketch of the pattern follows.
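Here is a minimal, hand-rolled ActiveRecord sketch in Java (the table, column names, and connection handling are hypothetical; frameworks like Rails generate this kind of class for you):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// ActiveRecord: the object wraps a single table row and
// carries its own persistence logic.
public class Product {
    private final Long id;
    private String name;
    private final Connection connection; // obtained elsewhere

    public Product(Connection connection, Long id, String name) {
        this.connection = connection;
        this.id = id;
        this.name = name;
    }

    public void setName(String name) { this.name = name; }

    // Persist this row; one object == one row in PRODUCTS.
    public void save() throws SQLException {
        String sql = "UPDATE PRODUCTS SET NAME = ? WHERE PRODUCT_ID = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, name);
            stmt.setLong(2, id);
            stmt.executeUpdate();
        }
    }
}
```

Note how thin the "domain model" is: the class is little more than column values plus save/update methods, which is exactly why it suits CRUD-heavy applications.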
A DAL (Data Access Layer) or direct data access is a good option when the solution is very data-centric and/or database-intensive. A classic example would be a reporting application. iBATIS is a variant on the DAL theme (I consider it an XML-based DAL). The good news is that the SQL is externalized from the code and can be tweaked and updated independently. The downsides include a lack of documentation and a proliferation of mapping files (a mapping per query/SQL statement).
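As a rough illustration of what that externalization looks like, here is a sketch in the style of the iBATIS 2.x SqlMapClient API (the statement id, config file name, and Customer class are hypothetical):

```java
import java.io.Reader;

import com.ibatis.common.resources.Resources;
import com.ibatis.sqlmap.client.SqlMapClient;
import com.ibatis.sqlmap.client.SqlMapClientBuilder;

public class CustomerDao {
    public Customer findById(long customerId) throws Exception {
        // The SQL itself lives in an external XML mapping file
        // referenced from "sqlmap-config.xml" -- it can be tweaked
        // without touching (or recompiling) this Java code.
        Reader reader = Resources.getResourceAsReader("sqlmap-config.xml");
        SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);
        return (Customer) sqlMap.queryForObject("getCustomerById", customerId);
    }
}
```

The flip side is visible here too: every named statement ("getCustomerById" and friends) needs its own entry in a mapping file, and those files multiply quickly.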
Lastly, on smaller projects which are not very data-intensive, you can also consider using an object-oriented database (such as Versant FastObjects, Objectivity, or db4o). While I wouldn't use one to build the next version of the NYSE data center, they can make life very easy where the data requirements are modest.
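For a sense of how little ceremony is involved, here is a minimal sketch using db4o's (circa-2006) Java API with its query-by-example style; the file name and Pilot class are hypothetical:

```java
import com.db4o.Db4o;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

public class Db4oExample {
    // A plain object -- no mapping metadata needed at all.
    static class Pilot {
        String name;
        int points;
        Pilot(String name, int points) { this.name = name; this.points = points; }
    }

    public static void main(String[] args) {
        ObjectContainer db = Db4o.openFile("pilots.db4o");
        try {
            db.set(new Pilot("Michael Schumacher", 100)); // store the object as-is
            // Query by example: zero/null fields are treated as "don't care".
            ObjectSet results = db.get(new Pilot("Michael Schumacher", 0));
            while (results.hasNext()) {
                System.out.println(((Pilot) results.next()).points);
            }
        } finally {
            db.close();
        }
    }
}
```

There is no schema and no mapping layer at all, which is precisely the appeal when the data requirements are modest.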
There are many ways to get to the data from the object model. Each approach has its place, and sometimes it is worthwhile to use more than one in a project. Whatever approach you take, I believe it is good practice to consider applying the Hexagonal Architecture principle and keep the objects clean of data-access code (POJO/POCO).
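A minimal sketch of that principle in Java (all names hypothetical): the domain object stays a POJO, persistence is expressed as a port interface, and an adapter implements the port.

```java
// Domain object: a POJO with no data-access code at all.
public class Order {
    private final String id;
    private final double total;

    public Order(String id, double total) {
        this.id = id;
        this.total = total;
    }

    public String getId() { return id; }
    public double getTotal() { return total; }
}

// Port: the domain's view of persistence, free of any
// JDBC or O/R-mapper details.
interface OrderRepository {
    Order findById(String id);
    void save(Order order);
}

// Adapter: one possible implementation of the port; it could
// just as well wrap Hibernate, iBATIS, or plain JDBC -- the
// domain neither knows nor cares.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, Order> store =
            new java.util.HashMap<String, Order>();

    public Order findById(String id) { return store.get(id); }
    public void save(Order order) { store.put(order.getId(), order); }
}
```

Because Order knows nothing about how it is stored, you can switch the persistence approach (O/R mapper, DAL, OODB) without touching the domain model.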
- Filed under: S/W Architecture