http://www.codetoad.com/java_Hibernate.asp
http://www.codeguru.com/cpp/data/mfc_database/sqlserver/article.php/c10079/
Introducing Hibernate ORM using a Simple Java Application
Saritha S.V
July 12, 2005
Introduction
As you all know, in today's enterprise environments working with
object-oriented software and a relational database can be cumbersome and
time consuming. Typically, in an enterprise application, if you are passing
objects around and sometimes reach the point where you want to persist them,
you will open a JDBC connection, create an SQL statement and copy all your
property values over to the PreparedStatement or into the SQL string you are
building. This may be easy for a small value object, but for an object with
many properties you may face many difficulties. As I have experienced myself,
and as most Java programmers have seen, the object-relational gap quickly
becomes very wide if you have large object models. Thus, the activities
involved in persisting data are tedious and error-prone. So we turn to
object-relational mapping (O/R mapping), which is a common requirement of
many software development projects.
Why ORM?
The term object/relational mapping (ORM) refers to the technique of mapping
a data representation from an object model to a relational data model with a
SQL-based schema. So, what can an ORM do for you? An ORM basically intends to
take most of that burden off your shoulders. With a good ORM, you have to
define the way you map your classes to tables only once: which property maps
to which column, which class to which table, and so forth.
With a good ORM, you can take the plain Java objects you use in the
application and tell the ORM to persist them. This will automatically
generate all the SQL needed to store the object. An ORM allows you to load
your objects just as easily: A good ORM will feature a query language
too. The main features include:
-
Less error-prone code
-
Optimized performance all the time
-
Solves portability issues
-
Reduce development time
Hibernate
Hibernate is in my opinion the most popular and most complete open source
object/relational mapping solution for Java environments. Hibernate not only
takes care of the mapping from Java classes to database tables (and from
Java data types to SQL data types), but also provides data query and
retrieval facilities, and can significantly reduce development time
otherwise spent on manual data handling in SQL and JDBC. It manages the
database and the mapping between the database and the objects.
Hibernate's goal is to relieve the developer from 95 percent of common data
persistence related programming tasks. Hibernate adapts to your development
process, no matter if you start with a design from scratch or work with a
legacy database.
Hibernate generates SQL for you, relieves you from manual result set
handling and object conversion, and keeps your application portable to all
SQL databases. Hibernate allows you to store, fetch, update, and delete any
kind of objects. Hibernate lets you develop persistent classes following
common Java idiom — including association, inheritance, polymorphism,
composition, and the Java collections framework.
The Hibernate Query Language, designed as a minimal object-oriented
extension to SQL, provides an elegant bridge between the object and
relational worlds.
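To give a flavour of that bridge, here is a sketch of an HQL query; the
entity and property names are taken from the Login class shown later in this
article, and the generated SQL shown is only indicative:

```sql
-- HQL names classes and properties, not tables and columns:
from Login l where l.userName = :name

-- which Hibernate translates into SQL along the lines of
-- (assuming the illustrative table/column names used in this article):
--   select * from LOGIN where USER_NAME = ?
```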
Key features include:
-
Integrates elegantly with all popular J2EE application servers and Web containers, and works in standalone applications. Hibernate is typically used in Java Swing applications, Java Servlet-based applications, or J2EE applications using EJB session beans.
-
Free/open source. Hibernate is licensed under the LGPL (GNU Lesser General Public License). It is a critical component of the JBoss Enterprise Middleware System (JEMS) suite of products.
-
Natural programming model. Hibernate supports natural OO idiom; inheritance, polymorphism, composition, and the Java collections framework.
-
Extreme scalability. Hibernate is extremely performant, has a dual-layer cache architecture, and may be used in a cluster.
-
The query language. Hibernate addresses both sides of the problem; not only how to get objects into the database, but also how to get them out again.
-
EJB 3.0. Implements the persistence API and query language defined by EJB 3.0 persistence.
What Is a Persistent Class?
Hibernate provides transparent persistence; the only requirement for a
persistent class is a no-argument constructor. In a persistent class, no
interfaces have to be implemented and no persistent superclass has to be
extended. A persistent class can also be used outside the persistence
context. Persistent classes are classes in an application that implement the
entities of the business problem. Let me illustrate this with a simple
example: a login entity. The persistent class can look like this:
public class Login {

    /** persistent fields */
    private String userName;
    private String userPassword;

    /** default constructor */
    public Login() {
    }

    public String getUserName() {
        return this.userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getUserPassword() {
        return this.userPassword;
    }

    public void setUserPassword(String userPassword) {
        this.userPassword = userPassword;
    }
}
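A Hibernate mapping document for this class might look like the following
sketch; the file name (Login.hbm.xml), the table and column names, and the
choice of userName as the identifier are all assumptions for illustration:

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
  <!-- Table and column names here are illustrative, not prescribed. -->
  <class name="Login" table="LOGIN">
    <id name="userName" column="USER_NAME" type="string"/>
    <property name="userPassword" column="USER_PASSWORD" type="string"/>
  </class>
</hibernate-mapping>
```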
Persistent classes have, as the name implies, both transient instances and
persistent instances stored in the database. Hibernate makes use of
persistent objects, commonly called POJOs (Plain Old Java Objects), along
with XML mapping documents, for persisting objects to the database layer.
POJO refers to a normal Java object that does not serve any other special
role or implement any special interfaces required by Java frameworks such as
EJB.
The Ultimate Goal
Take advantage of the things that relational databases do well, without
leaving the object/class world of the Java language. I would say the ultimate
aim is: do less work, happy DBA.
Hibernate Architecture
I believe Hibernate provides persistence as a service, rather than as a
framework. I will show two common architectures incorporating Hibernate as a
persistence layer. As I have already explained, for persisting objects
Hibernate makes use of persistent objects commonly called as POJO, along
with XML mapping documents.
In a Web (two-tiered) architecture, Hibernate may be used to persist
JavaBeans used by servlets/JSPs in a Model/View/Controller architecture.
In an Enterprise (three-tiered) architecture, Hibernate may be used by a
Session EJB that manipulates persistent objects.
Typical Hibernate code
sessionFactory = new Configuration().configure().buildSessionFactory();
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // Data access code
    session.save(newCustomer);
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();   // undo the partial work on failure
    throw e;
} finally {
    session.close();
}
The first step in a Hibernate application is to retrieve a Hibernate
Session. The Hibernate Session is the main runtime interface between a Java
application and Hibernate. The SessionFactory allows applications to create a
Hibernate Session by reading the Hibernate configuration file
hibernate.cfg.xml. After specifying transaction boundaries, the application
can make use of persistent Java objects and use the Session to persist them
to the database.
Getting Started with Hibernate
-
Download and install Hibernate. Hibernate is available for download at http://www.hibernate.org/.
-
Include the hibernate.jar file in the working directory.
-
Place your JDBC driver jar file in the lib directory.
-
Edit the Hibernate configuration files, specifying values for your database. (Hibernate will create a schema automatically.)
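As a sketch of that last step, a minimal hibernate.cfg.xml might look like
the following; the driver, URL, credentials, and dialect shown here assume an
in-memory HSQLDB database and are placeholders for your own values, and the
mapping file name echoes the Login example used in this article:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <!-- Placeholder connection settings: swap in your own database here. -->
    <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
    <property name="connection.url">jdbc:hsqldb:mem:testdb</property>
    <property name="connection.username">sa</property>
    <property name="connection.password"></property>
    <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
    <!-- Ask Hibernate to create/update the schema automatically. -->
    <property name="hbm2ddl.auto">update</property>
    <mapping resource="Login.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
```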
A Simple Java Application
Here is a sample application that I developed; it uses Hibernate.
[... check org for the rest ...]
documented on: 2008-02-12
http://www.kuro5hin.org/story/2006/3/11/1001/81803
By mirleid in Op-Ed
Mon Mar 13, 2006
ORM stands for Object Relational Mapping. At its most basic level, it is a
technique geared towards providing an application with an object-based view
of the data that it manipulates.
I have been using ORM in the scope of my professional activities for the
better part of three years. I can't say that it has been a smooth ride, and
I think other people might benefit from an account of my experiences.
Hence this story.
ORM: a primer
The basic building block of any application written in an OO language such
as Java is the object. As such, the application is basically a more or less
large collection of interacting objects. This paradigm works relatively well
up to a point. It is when such an application is required to deal with
something with a completely different worldview, such as a database, that
the brown matter definitely hits the revolving propeller-shaped
implement. The term "object-relational impedance mismatch" was coined to
represent this difference in worldviews.
The basic purpose of ORM is to allow an application written in an object
oriented language to deal with the information it manipulates in terms of
objects, rather than in terms of database-specific concepts such as rows,
columns and tables. In the Java world, ORM's first appearance was under the
form of entity beans.
There are some problems with entity beans:
-
They are J2EE constructs: as such, they cannot be used in a J2SE application
-
The fact that they require the implementation of specific interfaces (and life-cycle methods) pollutes the domain model that you are trying to build
-
They had serious shortcomings in terms of what could be achieved with them (the whole Fast Lane Reader (anti-)pattern issue and others like it)
The first problem does not kill you, but it also does not make you
stronger. In fact, the dependency on a container implies that proper unit
testing of entity beans is convoluted and difficult. The second problem is
where the real pain lies: the programming model and the sheer number of
moving parts will make sure that building a moderately complex, working
domain model expressed as entity beans becomes a frustrating and tortuous
exercise.
Enter transparent persistence: this is an approach to object persistence
that asserts that designers and developers should never have to use anything
other than POJOs (Plain Old Java Objects), freeing you from the obligation
to implement life-cycle methods. The most common frameworks that claim to
provide transparent persistence for Java objects today are JDO, Hibernate
and TopLink. At this point, I'd like to clarify that I am not about to
discuss the great JDO vs EJBernate 3.0 religious wars, so, don't even think
about it.
Hibernate and TopLink are reflection-based frameworks, which basically means
that they use reflection to create objects and to access their
attributes. JDO on the other hand is a bytecode instrumentation-based
framework. While this difference might not seem to be immediately relevant
to you, please bear with me: its significance will become apparent in due
course.
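The reflection-based approach can be sketched in plain Java. The helper
names below are invented for illustration, but the java.lang.reflect calls
are the mechanism such frameworks build on: reading and writing private
fields of a POJO without the class implementing any special interface.

```java
import java.lang.reflect.Field;

public class ReflectionDemo {
    // A stand-in POJO: no special interfaces, no life-cycle methods.
    static class Login {
        private String userName;
    }

    // Set a private field by name, the way a reflection-based ORM does
    // when materializing an object from a database row.
    static void setField(Object target, String name, Object value) {
        try {
            Field f = target.getClass().getDeclaredField(name);
            f.setAccessible(true);   // bypass private access
            f.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Read a private field by name, the way an ORM does when it needs
    // the current property values to generate SQL.
    static Object getField(Object target, String name) {
        try {
            Field f = target.getClass().getDeclaredField(name);
            f.setAccessible(true);
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Login login = new Login();
        setField(login, "userName", "saritha");
        System.out.println(getField(login, "userName")); // prints "saritha"
    }
}
```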
ORM: the 50000 feet view
At a high level, you need to perform the following tasks when using an ORM
framework:
-
Design and code your domain model (as in, the POJO, java bean-like classes that represent the data that your application requires)
-
Derive your database schema from the domain model (I can hear the protests: again, please bear with me)
-
Create the metadata describing how the objects map to the database and what their relationships are
Assuming that you have sufficiently detailed requirements and use cases, the
first step is a well-understood problem with widely accepted techniques
available for its solution. As such, we'll consider the first step as a
given and not dwell on it.
The second step is more controversial. The easy way to do it is to create a
database schema that mimics the domain model: each class maps to its own
table, each class attribute maps to a column in the given table,
relationships are represented as foreign keys. The problem with this is that
the database's performance is highly dependent on how "good" the schema is,
and this "straight" way of creating one generates, shall we say, sub-optimal
solutions. If you add to that the fact that you will be constrained (by the
very nature of the ORM framework that you are using) in terms of the
database optimisation techniques that you can use, and that the
one-class-one-table approach will tend to generate a disproportionately
large number of tables, you realise pretty soon that the schema that you
have is, by DBA standards, a nightmare.
The only way to solve this conundrum is to compromise. From both ends of the
spectrum. Therefore, using an ORM tool does not really gel with waterfall
development, for you'll need to continually revisit your domain model and
your database schema. If you're doing it right, changes at the database
schema level will only imply changes at the metadata level (more on this
later). Obviously, and by the same token, changes at the domain model level
should only imply changes in the metadata and application code, but not
in the database (at least not significant changes).
Creating the metadata for mapping your domain model to the database is where
it gets interesting. At a high level, the basic construct available to you
is something called a mapping. Depending on which framework you use, you
might have different types available to you doing all kinds of interesting
stuff, but there is a set that is commonly available:
-
Direct to field
-
Relationship
A direct to field mapping is the basic type of mapping that you use when you
want to map a class attribute of some basic type such as string directly
onto a VARCHAR column. A relationship mapping is the one that you use when
you have an attribute of a class that holds a reference to an instance of
some other class in your domain model. The most common types of relationship
mappings are "one to one", "one to many" or "many to many".
At this juncture, we need an example to illustrate the use of these
mappings. Let us consider the domain model for accounts in a bank; you'll
need:
-
A Bank class
-
An Account class
-
A Person class
-
A Transaction class
The relationships between them are as follows:
-
Bank has a "one to many" relationship with Account, meaning that a bank holds a lot of accounts, but that the accounts can only belong to one bank. This translates into the Bank class having an attribute of type List holding references to its accounts and that the Account class has an attribute of type Bank holding a reference to the owning Bank instance (this reference is commonly called the "back link" in ORM-speak, because it is used to generate the SQL that will populate the list on Bank: something like SELECT * FROM ACCOUNT WHERE BACK_LINK_TO_BANK_INSTANCE = BANK_INSTANCE_PRIMARY_KEY)
-
Account has a "many to many" relationship with Person, meaning that an account belongs to one or more persons and that a person may have one or more accounts. In code terms this translates into Account having an attribute of type List holding references to instances of Person and Person having an attribute of type List holding references to instances of Account. It should be noted that this relationship has database schema side effects: it normally requires the creation of a relation table that holds the primary keys of Account and Person objects.
-
Account has a "one to many" relationship with Transaction (see the relationship between Bank and Account)
-
Transaction has a "one to one" relationship with Account, meaning that a transaction is to be executed against a single account (this is a simplification for the sake of this example). In code terms, this means that Transaction has an attribute of type Account that holds the reference to the Account instance it is to be performed against.
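The four classes and their relationships can be sketched in plain Java as
follows. All names and helper methods are invented for illustration; a real
ORM would add identifiers and mapping metadata on top of fields like these.

```java
import java.util.ArrayList;
import java.util.List;

public class BankModel {
    static class Bank {
        final List<Account> accounts = new ArrayList<>();  // one-to-many side
        void addAccount(Account a) {
            accounts.add(a);
            a.bank = this;   // maintain the back link in one place
        }
    }

    static class Person {
        final List<Account> accounts = new ArrayList<>();  // many-to-many side
    }

    static class Account {
        Bank bank;                              // back link to the owning Bank
        final List<Person> owners = new ArrayList<>();
        final List<Transaction> transactions = new ArrayList<>();
        void addOwner(Person p) {               // keep both sides in step
            owners.add(p);
            p.accounts.add(this);
        }
    }

    static class Transaction {
        final Account account;   // one-to-one: the account it runs against
        Transaction(Account account) { this.account = account; }
    }

    public static void main(String[] args) {
        Bank bank = new Bank();
        Account acct = new Account();
        bank.addAccount(acct);
        Person alice = new Person();
        acct.addOwner(alice);
        System.out.println(acct.bank == bank);             // true
        System.out.println(alice.accounts.contains(acct)); // true
    }
}
```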
Please note that the terminology that I am about to use is somewhat
TopLink-biased, but you should be able to find out the appropriate Hibernate
or JDO equivalent without too much trouble. Anyway, once you have figured
out the relationships between classes in your domain model, you need to
create the metadata to represent them. Prior to Tiger (JDK 5.0), this was
typically done via a text file containing something vaguely XML-like
describing the mappings (and a lot of other stuff). If you are lucky, you'll
have access to a piece of software that facilitates the creation of the
metadata file (TopLink provides you with something called the Mapping
Workbench, and I understand that there are Eclipse plug-ins for Hibernate).
With TopLink and Hibernate, once you have the metadata file you are away. If
you are using JDO, there is an extra step required, which is to instrument
your class files (remember that JDO uses bytecode instrumentation), but this
is relatively painless, since most JDO implementations provide you with Ant
tasks that automate it for you.
IRL ORM
What follows is an account of my experience (and exploits) with
TopLink. Some of the issues encountered will be, as such, somewhat specific,
but I think that most of them are generic enough to hold for most ORM
frameworks.
The first problem that you face is documentation. It is not very good,
ambiguous, and only covers the basics of the framework and its
use. Obviously, this problem can be solved by getting an expert from Oracle
(they own TopLink): I guess that that sort of explains why the documentation
isn't very good.
The second problem that you face is that if you are doing something real (as
in, not just playing around with the tool, but actually building a system
with it), you typically have more than one person creating mappings. You
would have thought that Oracle would have considered that when creating the
Mapping Workbench. They did not. It is designed to be used by one person at
a time, and there's no chance that you can use it in a collaborative
development environment. Additionally, it represents the mapping project
(the Mapping Workbench name for the internal representation of your mapping
data) in such a huge collection of files that storing them in a VCS is an
exercise in futility. So, mapping your domain model becomes a project
bottleneck: only one person at a time can edit the project, after all. As
such, the turnaround time for model changes and updates impinges quite a lot
on the development teams, since they can play around with the domain model
in memory, but they can't actually test their functionality by trying to
store stuff to the database.
When you finally get a metadata file that holds what you need to move your
development forward, and you run your code, you start receiving angry
e-mails from the project DBA, reading something like "What in the name of
all that is holy do you think you are doing to my database server?" [… the
rest of the TopLink-specific problems omitted …]
EJB 3.0 helps here
by ttfkam on Mon Mar 13, 2006
New versions of Hibernate implement the EJB 3.0 EntityManager interface. So
instead of separate XML schema definition and mapping files, you simply
annotate the POJOs and go.
The downside is that persistence info is in your Java source. The upside is,
well, that persistence info is in your Java source.
And using EJB 3.0 means that you can swap between Hibernate in standard J2SE
apps, JBoss, Glassfish, and the others simply.
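As a sketch of the annotated-POJO style this comment describes (it assumes
the EJB 3.0 persistence annotations on the classpath, and reuses the Login
class from the first article; the defaults derive table and column names
from the class and field names):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

// The mapping metadata lives on the class itself instead of in a
// separate XML mapping document.
@Entity
public class Login {
    @Id
    private String userName;
    private String userPassword;
    // getters and setters omitted for brevity
}
```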
and Spring makes it even better
by Lars Rosenquist on Mon Mar 20, 2006
By using the TransactionProxyFactoryBean and the HibernateTransactionManager
to manage your transactions. Use the OpenSessionInViewInterceptor to support
lazy initializing outside of a Hibernate Session scope (e.g. web context).
Spring Framework data access classes…
by claes on Mon Mar 13, 2006
seem to help a lot.
It seems as if every time I start a new project I go around and around with
exactly the same issues — how "close to the database" or "how close to the
object model" to put the interface.
Currently I think if you know something about databases you're better off
starting with a schema that really, truly, reflects the actual "business
objects" of your application. Then wrap this in a couple of DAO (buzzword,
blech) classes (The Spring Framework classes help here), and deal with it.
Once you get the schema right, it tends to stay put. In our current major
project I end up actually doing something at the SQL> prompt about once a
month or so, mostly for debugging. That tells me that the model is good —
matching both the underlying logic, as well as being easy to get at from the
Java classes.
To go up a level, there are a couple of things like ORM (HTML page
generation is another) where there are constantly new frameworks, new
buzzwords, and "better" ways to do things. My feeling is that when this goes
on for a while, it just means that the problem is just plain hard, and if
magic frameworks haven't fixed it in the past, they aren't going to fix it
in the future.
Thanks for the write up.
With Hibernate
by skyknight on Sun Mar 12, 2006
You open a session, which means getting back a session object from a
factory, with which you will associate new objects and from which you will
load existing objects. Such object manipulations are surrounded by the
initiation and termination of transactions, for which you can specify the
isolation level. I don't know about other frameworks, but Hibernate does
take transactions seriously.
ORM is definitely not a tool that people should use if they don't have a
solid understanding of relational database technology and issues, or perhaps
more generally an understanding of computer architecture. Rather, it should
be used by people who have substantial experience writing database
applications and have, after much hard-won experience, grown tired of the
grinding tedium of manually persisting and loading objects to and from
relational databases. You need the understanding of relational databases so
that you can get good performance from an ORM, and without it you'll have
the horrible performance that the original piece characterizes in its
anecdotes.
I've been dealing with the ORM problem for 5+ years, with a brief escape for
grad school. I've written raw SQL. I've used home grown ORM frameworks
written by other people. I've written my own substantial ORM frameworks in
each of Perl, Python and Java. I've actually done it twice in Perl, with my
latest instantiation being pretty good, and yet still being dwarfed in
capability by Java's Hibernate. As such, I've recently started learning
Hibernate. Hibernate is extremely complicated, and most certainly not for
the weak of heart or for a junior programmer with no relational database
experience, but it is also extremely powerful. In learning Hibernate I've
been very appreciative of many of the hard problems that it solves, problems
with which I have struggled for years, in many cases unsuccessfully.
Mind you, even with Hibernate, ORM is still ugly. The fact that you need to
persist your objects to a database is largely an artifact, an accidental
component of your development process stemming from limitations in today's
technology, not an intrinsic facet of the thing that you're trying to
accomplish. Also, ORM is inherently duplicative, in that you end up defining
your data model twice, as well as a mapping between the two instantiations
of it. Such is life… It would be nice if we had "object servers", as well
as cheap and performant non-volatile RAM, but we don't, and we aren't going
to have such things for well over a decade at least, not in reliable
versions anyway.
As someone who has slogged through implementing his own ORM on a few
occasions, I can say that it is a great learning experience, but if your
goal is a production quality system, then you should probably use something
like Hibernate. The existence of Hibernate alone is probably a strong
argument for using Java when writing an application that requires complex
ORM. I don't know that C# has solved the problem, but I haven't looked,
honestly.
Mileage varies
by Scrymarch on Sat Mar 11, 2006
I've used Hibernate on a few projects now and been pretty happy with it.
I've found it a definite productivity increase on raw JDBC - there's simply
less boilerplate, and hence fewer stupid typo errors. The overwhelmingly
most common class -> table relationship is 1:1, so you cut out a lot of code
of the
account.setAccountTitle( rs.getString(DataDictionary.ACCOUNT_TITLE) );
account.setAccountBalance( rs.getInteger(DataDictionary.ACCOUNT_BALANCE) );
collection.add(account);
variety.
It does irritate me that you end up with HQL strings everywhere, but you
ended up with SQL strings everywhere before, so shrug. Really the syntax
should be checked at compile time, instead of implicitly by unit tests. Such
a tool shouldn't even be that hard to write, but I guess I'm lazy. I'd be
uneasy letting devs near HQL without a decent knowledge of SQL. For
mapping, we used xdoclet or hand editing the result of schema-generated xml
files. Usually the same developer would be adding tables or fields, the
relevant domain objects, and required mappings.
Now I think about it though, every time I've used ORM I've laid out the data
model first, or had legacy tables I had to deal with. Inheritance in the
domain model tended to be the interface driven variety rather than involving
a lot of implementation inheritance. Relational databases have a pretty good
track record on persistence; maybe you could let them have a little more
say.
We did still get burnt a bit by lazy loading. We were working with DAOs
which had been written with each method opening and closing a session. So
sometimes objects would make it out to a higher tier without having the
right dependant detail-style attributes loaded, which throws a lazy loading
exception. We got around this by moving the session control up into the
business layer over time. This is really where it should have been in the
first place, not being able to go:
session.open(); // or txn.start or whatever
data
data
think
data
think
session.close()
is kind of crazy.
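The failure mode and the fix can be sketched in plain Java. Session, Order,
and the exception message below are invented stand-ins for the Hibernate
equivalents (where the error surfaces as a LazyInitializationException):

```java
import java.util.Arrays;
import java.util.List;

public class LazyDemo {
    static class Session {
        boolean open = true;
        void close() { open = false; }
    }

    static class Order {
        private final Session session;
        private List<String> lines;   // detail attribute, loaded on first access
        Order(Session session) { this.session = session; }

        List<String> getLines() {
            if (lines == null) {      // not loaded yet: we need a live session
                if (!session.open)
                    throw new IllegalStateException("lazy load after session closed");
                lines = Arrays.asList("item-1", "item-2"); // stand-in for a SQL query
            }
            return lines;
        }
    }

    public static void main(String[] args) {
        // DAO-per-method style: session closed before the object is used.
        Session s1 = new Session();
        Order escaped = new Order(s1);
        s1.close();
        try {
            escaped.getLines();
        } catch (IllegalStateException e) {
            System.out.println("failed: " + e.getMessage());
        }

        // Session owned by the business layer: access works.
        Session s2 = new Session();
        Order ok = new Order(s2);
        System.out.println(ok.getLines().size()); // prints 2
        s2.close();
    }
}
```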
These projects were with small teams on the scale of half a dozen
developers. Sounds like you were on a bigger project, had higher
interpersonal communication overheads, etc. Just to put all my bias cards on
the table, I gave a little yelp of pain when you said "waterfall".
Hibernate
by bugmaster on Mon Mar 13, 2006
I've had some very happy experiences with Hibernate. It lets you specify
lazy-loading strategies, not-null constraints, caching strategies, subclass
mapping (joined-subclass, union-subclass, table-per-class-hierarchy),
associations, etc… All in the mapping file. And it has the Hibernate Query
Language that looks, feels and acts just like SQL, but is about 100x
shorter. Hibernate rules.
Once you know ORMs, most of these problems go away
by Wolf Keeper on Tue Jul 10, 2007
I'm familiar with Hibernate. I can't speak for the others. The learning
curve is steep, but once you've got it you can build applications very fast.
-
'Derive your database schema from the domain model.'
Hibernate is flexible enough to work with an existing schema. You can write your Hibernate objects to work with your database, and not vice versa. You lose the ability to use polymorphism in some of your database objects, but ORM remains very handy.
-
'The first problem that you face is documentation.'
The Hibernate website is a font of information, and the Hibernate In Action book remains the only IT tome I have read cover to cover. It is outstanding.
-
'you typically have more than one person creating mappings.'
Hibernate handles that fine. The XML files governing Hibernate behavior are human-readable, and can be team developed just like any other Java code.
-
'you realise that that happens because the ORM framework will not, by default, lazy load relationships'
Lazy loading of relationships is well documented in Hibernate and has been the default behavior for a few years now.
-
'This time, though, in order to make a call on whether a relationship should be lazily loaded, you need to trawl through all the use cases, involve a bunch of people, and come up with the most likely usage and access scenarios for each of the classes in your domain model.'
You have to do this whether you use an ORM or just JDBC. Your JDBC code can just as easily hit the database too often. Either way, the research should be done before the application is built and not after.
-
' The problem is that reflection-based ORM frameworks figure out what needs to be flushed to the database (as in, what you created or updated, and what SQL needs to be generated) by comparing a reference copy that they keep against the instance that you modified. As such, and at the best of times, you are looking at having twice as many instances of a class in memory as you think you should have.'
I believe Hibernate simply tracks whether an object has been changed and does not keep a reference copy. Regardless, there are well-documented guidelines for evicting objects you don't need from memory to cap memory use. And RAM is cheap.
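The snapshot-comparison mechanism the quoted passage describes can be
sketched in plain Java; all names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class DirtyCheckDemo {
    // A toy unit of work: remembers each property value as it was loaded,
    // and at flush time compares it with the current value to decide
    // whether an UPDATE needs to be generated.
    static class UnitOfWork {
        private final Map<String, Object> snapshot = new HashMap<>();

        // Record the state of a property as loaded from the database.
        void register(String property, Object value) {
            snapshot.put(property, value);
        }

        // Has the current value drifted from the loaded snapshot?
        boolean isDirty(String property, Object current) {
            Object loaded = snapshot.get(property);
            return loaded == null ? current != null : !loaded.equals(current);
        }
    }

    public static void main(String[] args) {
        UnitOfWork uow = new UnitOfWork();
        uow.register("userName", "saritha");
        System.out.println(uow.isDirty("userName", "saritha"));  // false: no UPDATE
        System.out.println(uow.isDirty("userName", "saritha2")); // true: needs one
    }
}
```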
-
'At a high level, the only way that you can get around this is to actually figure out which classes are read-only from the application standpoint.'
Again, whether you use ORM or SQL and JDBC, identifying read-only classes is part of application development. Setting up read-only caches of objects that don't change is easy.
-
'Surrogate keys'
I have to disagree that surrogate keys are a drawback. Put a unique constraint on the columns you would have preferred as a primary key (i.e. what would have been the "intelligent keys"). Then you can do joins, updates, deletes, etc… using the intuitive primary key column and the application can chug right along with surrogate keys.
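That suggestion can be sketched in SQL; the table, column, and constraint
names are illustrative:

```sql
-- Surrogate primary key for the ORM, plus a unique constraint on the
-- natural key the comment calls the "intelligent" key.
CREATE TABLE ACCOUNT (
    ID          BIGINT      NOT NULL PRIMARY KEY,  -- surrogate key the ORM uses
    ACCOUNT_NO  VARCHAR(34) NOT NULL,              -- the natural key
    BALANCE     INTEGER     NOT NULL,
    CONSTRAINT UQ_ACCOUNT_NO UNIQUE (ACCOUNT_NO)
);

-- People and ad-hoc queries can still work via the natural key:
--   UPDATE ACCOUNT SET BALANCE = 0 WHERE ACCOUNT_NO = '...';
```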
It's also worth mentioning that Hibernate also has easy tools for using
straight SQL and for using prepared statements and languages like Transact
SQL, PL/SQL, or PL/pgSQL.
I can't say ORM is the best solution for mapping between object oriented
languages and databases. But for big applications, it's much easier than
rolling your own JDBC code for all of the database work. Someone skilled on
Hibernate with input in your project planning could have made life
*tremendously* easier.
Comments from a successful ORM-using architect
by Ufx on Mon Mar 13, 2006
I've had the pleasure of using an ORM for multiple projects, and my
experiences have compelled me to reply to your article. Our platform is .Net,
which offers some more flexibility in the reflection space due to generic
type information being preserved. This is particularly useful when figuring
out what type your lists contain.
First, let me state the requirements of one of my projects. There is
a client application that must support the following:
-
Occasional disconnections from the master database, usually caused by an internet service interruption. While disconnected, all changes must be stored, and as much work as possible must be allowed to continue. Upon reconnection, changes must be merged with the master database and the user notified of any conflicts.
-
Extremely granular security of virtually every user-updateable field.
-
General safety against malicious messages. Par for the course here - nobody must hit our webservices unless expressly authorized to do so.
-
Excellent performance. The application will be in a setting where work must be done very quickly.
As an architect and developer working on this project, I added my own required
features above those required by the application:
-
Minimal configuration. The point is to reduce work, not generate more of a different kind.
-
SOA data transaction batches. We have code that should function either on the client side or in our own internal network. Transactions must be handled in both cases transparently.
-
Simple optimizations for lazy and eager loading. I don't want to create custom DTOs just because we need a little more of the object graph than usual.
-
Transparent security. Outside of the UI layer, security breaches are an exceptional condition and shouldn't need boilerplate nastiness to verify that a transaction is authorized.
-
Transparent disconnected functioning. Except in very specific circumstances, data access code should not care whether or not it is being executed in the disconnected environment.
-
Transparent concurrency control. Again, code should generally not care about handling concurrency errors unless there is a specific exception, and these should be handled in a generic fashion.
-
Ability to execute user-defined functions or stored procedures when necessary.
-
Transparent cache control. Accessing an object ought to have one interface, regardless of whether or not the object is cached.
We currently use an ORM that meets all of the above requirements. Allow me
to share my thoughts on some very big issues we had to solve regarding these
requirements, and mix in some responses to your issues.
As far as configuration is concerned, the system uses a configurable
conventions class that allows programmatic configuration of defaults, and it
uses attribute decorators for all other configuration. I know that this
ties our domain model to a specific data model, but in our situation that
tradeoff wasn't so bad. Furthermore, my experience is that data schema and
object schema are usually versioned together anyway. Contention for
configuration is exactly the same as contention for the domain objects
themselves, so there are rarely problems. I'm surprised that your system
did not allow you to split the configuration files by some logical
boundaries that would've reduced the contention issues you had.
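To make the conventions idea concrete, here is a minimal sketch in Java (the subject of this page; my platform is .Net, but the idea is identical). All class and method names are invented for illustration; a real conventions class would also cover column types, associations and per-property overrides.

```java
import java.lang.reflect.Field;
import java.util.StringJoiner;

// Hypothetical sketch of convention-based mapping: defaults are derived
// programmatically, so only deviations need explicit configuration.
public class ConventionMapper {

    // Convention: the table name is the class's simple name, lower-cased.
    static String tableName(Class<?> type) {
        return type.getSimpleName().toLowerCase();
    }

    // Convention: every declared field maps to a column of the same name.
    static String selectSql(Class<?> type) {
        StringJoiner columns = new StringJoiner(", ");
        for (Field field : type.getDeclaredFields()) {
            columns.add(field.getName());
        }
        return "SELECT " + columns + " FROM " + tableName(type);
    }

    static class Customer {
        long id;
        String name;
    }

    public static void main(String[] args) {
        // Nothing was configured for Customer, yet a query can be derived.
        System.out.println(selectSql(Customer.class));
    }
}
```

Nothing about Customer lives in a configuration file; contention for the mapping is contention for the class itself.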
The key factor to easy mapping is consistency. Most mapping issues arise
out of essentially idiomatic models being twisted for no good reason. Put
your foot down: Everything must follow the idioms unless there is a *very*
good reason not to. Usually that reason is performance, and when the need
arises you most likely have to introduce an extra step in your mapping in
the form of a DTO. While this reduces the transparency of the system, in my
experience the need for these is rare enough not to have to worry about it
as the vast majority of the system should be plenty performant by default.
If it isn't, you're using the wrong technology!
Developer productivity is the most important resource. The more time you
save with an excellent model, the more time you have to work on
optimizations when the need arises. Normally you should not be coding with
consistency-breaking optimizations in mind. Typically when confronted with
a performance problem, we need to either eagerly load something, or we need
to cache something, or we need to create an index. Most performance issues
can be resolved in a matter of minutes.
Your statement about reflection-based ORM frameworks needing to keep copies
of the object in memory isn't entirely accurate. Our system does not do
this, and instead relies on user code to tell the data layer what to send
back to the database. I find this works rather well, because most of the
time you definitely know what has changed.
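The explicit approach can be sketched like this (a hypothetical ChangeSet class, not our actual API): user code marks which properties changed, and only those columns are written back. A real implementation would bind values through a PreparedStatement rather than concatenating them into the SQL string.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of explicit change tracking: the caller tells the
// data layer what to send back, instead of the ORM diffing an in-memory
// copy of the object.
public class ChangeSet {

    private final Map<String, Object> dirty = new LinkedHashMap<>();

    // User code records that a property changed.
    public void mark(String property, Object newValue) {
        dirty.put(property, newValue);
    }

    // Build an UPDATE statement touching only the changed columns.
    // (Illustrative only: real code would use PreparedStatement parameters.)
    public String toUpdateSql(String table, String keyColumn, Object keyValue) {
        StringBuilder sql = new StringBuilder("UPDATE " + table + " SET ");
        boolean first = true;
        for (Map.Entry<String, Object> entry : dirty.entrySet()) {
            if (!first) {
                sql.append(", ");
            }
            sql.append(entry.getKey()).append(" = '").append(entry.getValue()).append('\'');
            first = false;
        }
        return sql.append(" WHERE ").append(keyColumn).append(" = ").append(keyValue).toString();
    }

    public static void main(String[] args) {
        ChangeSet changes = new ChangeSet();
        changes.mark("name", "Bob");
        // Only the marked column appears in the statement:
        System.out.println(changes.toUpdateSql("customer", "id", 7));
        // prints: UPDATE customer SET name = 'Bob' WHERE id = 7
    }
}
```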
Security-wise, the rich metadata that an ORM provides was a godsend when
writing our security system. Because there is exactly one gateway in which
all transactions flow, and this gateway had all of the information necessary
to perform a verification, even our extremely granular security was easy to
implement in a generic manner.
Our disconnected architecture was also aided by the metadata and design of
the ORM. When we went into disconnected mode, queries were simply re-routed
to a SQLite driver instead of the default webservices driver. Also, the
single point of entry for all returned data allowed for easy caching of that
data.
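A minimal sketch of that re-routing (the driver and gateway names here are hypothetical, and the string-based query interface is a stand-in): the point is the single gateway through which all data flows, so callers never know which driver is active.

```java
// Hypothetical sketch of query re-routing for disconnected operation.
public class DataGateway {

    interface QueryDriver {
        String execute(String query);
    }

    // Normally queries travel to the master database over webservices...
    static class RemoteDriver implements QueryDriver {
        public String execute(String query) {
            return "remote:" + query;
        }
    }

    // ...but while disconnected they are served from a local SQLite store.
    static class LocalDriver implements QueryDriver {
        public String execute(String query) {
            return "local:" + query;
        }
    }

    private QueryDriver driver = new RemoteDriver();

    // Single point of entry for all data access.
    public String run(String query) {
        return driver.execute(query);
    }

    public void goOffline() {
        driver = new LocalDriver();
    }

    public void goOnline() {
        driver = new RemoteDriver();
    }

    public static void main(String[] args) {
        DataGateway gateway = new DataGateway();
        System.out.println(gateway.run("select * from jobs")); // prints "remote:select * from jobs"
        gateway.goOffline();
        System.out.println(gateway.run("select * from jobs")); // prints "local:select * from jobs"
    }
}
```

The single entry point is also what makes transparent caching of returned data straightforward.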
Most good ORM systems can use natural or surrogate key structures. My
preference is for surrogate keys, because let's face it: Natural keys change
and developers are not always experts in their application's domain. Not
every developer can make the call as to what is a natural key 100% of the
time and what is a natural key 95% of the time. It's far easier to drop a
unique constraint than it is to change your key structure when the
inevitable happens.
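A surrogate-key entity might look like the following sketch (hypothetical names; in a real ORM the id would be assigned by the database or the mapping layer rather than by an in-memory counter):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a surrogate-key scheme. The id is a generated
// number with no business meaning; the "natural" candidate (an account
// code here) carries only a unique constraint in the database and may
// change freely without disturbing identity or foreign keys.
public class Account {

    private static final AtomicLong SEQUENCE = new AtomicLong();

    private final long id = SEQUENCE.incrementAndGet(); // surrogate key
    private String accountCode;                         // unique, but mutable

    public Account(String accountCode) {
        this.accountCode = accountCode;
    }

    public long getId() {
        return id;
    }

    public String getAccountCode() {
        return accountCode;
    }

    // When the inevitable happens and the business code must change,
    // the key structure is untouched.
    public void setAccountCode(String accountCode) {
        this.accountCode = accountCode;
    }

    public static void main(String[] args) {
        Account account = new Account("ACME");
        long before = account.getId();
        account.setAccountCode("ACME-2024");
        System.out.println(account.getId() == before); // prints "true"
    }
}
```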
I understand that our requirements are quite different from those that exist
in the web-based world. The world will be a happier place for us developers
when we can dump the web as an application platform and replace it with an
application delivery platform.
documented on: 2008-02-12
http://java-source.net/open-source/persistence
Hibernate is a powerful, ultra-high performance object/relational
persistence and query service for Java. Hibernate lets you develop
persistent objects following common Java idiom - including
association, inheritance, polymorphism, composition and the Java
collections framework. Extremely fine-grained, richly typed object
models are possible. The Hibernate Query Language, designed as a
"minimal" object-oriented extension to SQL, provides an elegant
bridge between the object and relational worlds. Hibernate is now
the most popular ORM solution for Java.
The SQL Maps framework will help to significantly reduce the amount
of Java code that is normally needed to access a relational
database. This framework maps JavaBeans to SQL statements using a
very simple XML descriptor. Simplicity is the biggest advantage of
SQL Maps over other frameworks and object relational mapping tools.
To use SQL Maps you need only be familiar with JavaBeans, XML and
SQL. There is very little else to learn. There is no complex scheme
required to join tables or execute complex queries. Using SQL Maps
you have the full power of real SQL at your fingertips. The SQL Maps
framework can map nearly any database to any object model and is
very tolerant of legacy designs, or even bad designs. This is all
achieved without special database tables, peer objects or code
generation.
ObJectRelationalBridge (OJB) is an Object/Relational mapping tool
that allows transparent persistence for Java Objects against
relational databases.
Torque is a persistence layer. Torque includes a generator to
generate all the database resources required by your application and
includes a runtime environment to run the generated classes.
Castor is an open source data binding framework for Java™. It's
basically the shortest path between Java objects, XML documents and
SQL tables. Castor provides Java to XML binding, Java to SQL
persistence, and then some more.
Cayenne is a powerful, full-featured Java Object Relational Mapping
framework. It is open source and completely free. One of the main
Cayenne distinctions is that it comes with cross-platform modeling
GUI tools. This places Cayenne in the league of its own, making it a
very attractive choice over both closed source commercial products
and traditional "edit your own XML" open source solutions.
TriActive JDO (TJDO) is an open source implementation of Sun's JDO
specification (JSR 12), designed to support transparent persistence
using any JDBC-compliant database. TJDO has been deployed and
running successfully in many commercial installations since 2001.
JDBM is a transactional persistence engine for Java. It aims to be
for Java what GDBM is for other languages (C/C++, Python, Perl,
etc.): a fast, simple persistence engine. You can use it to store a
mix of objects and BLOBs, and all updates are done in a
transactionally safe manner. JDBM also provides scalable data
structures, such as HTree and B+Tree, to support persistence of
large object collections.
Prevayler is the free-software Prevalence layer for Java.
Ridiculously simple, Prevalence is by far the fastest and most
transparent persistence, fault-tolerance and load-balancing
architecture for Plain Old Java Objects (POJOs).
JPOX is a fully compliant implementation of the Java Data Objects (JDO)
API. The JDO API is a standard interface-based Java model
abstraction of persistence. JPOX is free and released under an
OpenSource license, and so the source code is available for download
along with the JDO implementation.
Speedo is an open source implementation of the JDO™ specification.
Jaxor is a code-generating OR mapping tool which takes information
defined in XML relating to the relational entities to be mapped and
generates classes, interfaces and finder objects which can be used
from any Java application (including JFC/Swing, J2EE and
command-line tools). The actual code generation is handled by
Velocity templates rather than by fixed mechanisms within the tool.
This flexibility allows easy tailoring and modification to the
formatting or even the code that gets generated.
pBeans is a Java based persistence framework and an
Object/Relational (O/R) database mapping layer. It is designed to be
simple to use and completely automated.
SimpleORM is an open source Java Object Relational Mapping project
(Apache-style licence). It provides a simple but effective
implementation of object/relational mapping on top of JDBC at low
cost and low overhead. Not even an XML file to configure!
A simple data storage API to help developers focus on their
applications instead of writing JDBC code.
XORM is an extensible object-relational mapping layer for Java
applications
Space4J (or S4J, for short) is a free prevalence implementation in
Java. Prevalence is a concept started by Klaus Wuestefeld on how to
store data in a real object oriented way, using only memory
snapshots, transaction logs and serialization. In addition to the
basic functionality it offers transparent support for clustering,
passivation and indexing.
O/R Broker is a convenience framework for applications that use
JDBC. It allows you to externalize your SQL statements into
individual files, for readability and easy manipulation, and it
allows declarative mapping from tables to objects. Not just
JavaBeans objects, but any arbitrary POJO. This is the strength of
O/R Broker compared to other similar persistence tools. It allows
the developer to design proper object-oriented classes for
persistence, without having to be forced to use JavaBeans only, and
it has full support for inheritance hierarchies and circular
references. One of the design goals for O/R Broker was simplicity,
which means that there are only 4 public classes, and an XML Schema
file for validation of the very simple XML configuration file. In
short, O/R Broker is a JDBC framework that does not dictate domain
class design, whether that be having to conform to the JavaBeans
spec, having to extend a superclass or implement an interface, but
allows persistence independence and proper object-oriented class
design.
Mr. Persister is an object persistence API that makes it possible
to read and write objects of any size to and from relational
databases. The implemented/planned features include easier JDBC
operations via JDBC templates (Spring style), automatic connection /
transaction handling, object relational mapping, dynamic reports,
connection pooling, caching, replication, debugging and more.
A small (less than 50KB) persistence framework.
JDBCPersistence is yet another OR mapping layer. It differs from its
peers in both implementation and API. Using bytecode generation at
its core, the framework generates classes that implement the JDBC
logic specific to a table/bean pair. JDBCPersistence generates
persistor classes at runtime on demand, incurring no noticeable
overhead on the development process. The entire framework
configuration is done via API, which significantly improves start-up
time and reduces library size.
A lightweight persistence framework for Java (JDK5.0 or later).
Extremely simple to use, works with annotations, does not require
any configuration/mapping file, runs with any JDBC-compliant
database (no SQL dialects to configure) and supports multi-device
persistence.
Super CSV is a package for processing CSV files. Super CSV is
designed around solid Object-oriented principles, and thus aims to
leverage your Object-oriented code, making it easier to write and
maintain. Super CSV offers the following features:
-
The ability to read/write POJO beans, Maps and String lists.
-
The ability to easily convert input/output: parsing integers and
dates, trimming strings, etc.
-
The ability to easily verify data conforms to some specification,
such as number ranges, string sizes, uniqueness and even optional
columns.
-
The ability to read/write data from anywhere in any encoding, as
long as you provide a Reader or a Writer.
-
Support for Windows, Mac and Linux line breaks.
-
Configurable separation character, space character and end of
line character (for writing files)
-
Correct handling of characters such as " and ,
-
Operates on streams rather than filenames, enabling you to
read/write CSV files e.g. over a network connection
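As a rough illustration of the quoting feature, this is the kind of logic (RFC 4180 style) such a library takes care of. The code below is a hand-rolled sketch, not Super CSV's API: any field containing the separator, a quote or a line break must be quoted, and embedded quotes are doubled.

```java
// Minimal sketch of CSV field quoting, illustrating what a CSV library
// handles for you. Not Super CSV's actual implementation.
public class CsvQuote {

    static String quote(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            // Double any embedded quotes, then wrap the field in quotes.
            return '"' + field.replace("\"", "\"\"") + '"';
        }
        return field;
    }

    public static void main(String[] args) {
        System.out.println(quote("plain"));      // prints: plain
        System.out.println(quote("a,b"));        // prints: "a,b"
        System.out.println(quote("say \"hi\"")); // prints: "say ""hi"""
    }
}
```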
Velosurf is a database mapping layer library for the Apache Velocity
template engine. It provides automatic database mapping of tables
and relationships without any code generation. It is a lightweight
alternative to heavy persistence systems.
Ebean is Object Relational Mapping Persistence Layer. Ebean is
designed to be easy to learn and use. It follows the mapping
specification of JPA with annotations such as @Entity, @OneToOne,
@OneToMany etc. Ebean also has a relational api when you want to
bypass ORM in favour of using your own SQL for fetching, updating
and calling stored procedures.
PAT stands for "Persistent Applications Toolkit". Like much other
software, it simplifies the development of persistence layers for
business applications. It does this by providing an object-oriented
environment for persistence of your objects (POJOs), and it really
does deliver: PAT provides an almost transparent data layer for a
business application. It employs state-of-the-art techniques to
achieve this: OO, AOP (JBossAOP), Java, Prevayler, Ant, JUnit,
Log4j, @@annotations and others. It cooperates well with web
applications: Struts (and possibly other web frameworks), Tomcat,
JBoss AS. (AOP term: persistence aspect)
Daozero reduces DAO code based on Spring & iBatis. The old way is
to write code that invokes the iBatis API explicitly; daozero
instead implements DAO interfaces at runtime and invokes the iBatis
API for developers automatically. Replace old DAOs with it directly.
Persist is a minimalist Java ORM/DAO library designed for high
performance and ease of use. Unlike most ORM tools, Persist does not
aim to completely isolate application code from relational
databases. Instead, it intends to minimize the effort to handle data
stored in a RDBMS through JDBC, without making compromises on
performance and flexibility.
QLOR (Q-LOGIC Object Relational Mapping Framework) is a performant
Object/Relational Mapping and persistence/query framework for Java.
It's easy to use and deploy with other technologies. It is highly
structured and carefully designed. Features:
-
O2O, O2M and M2M database mappings.
-
Multiple primary keys without modifying the class diagram.
-
Cascading primary keys on associations.
-
Easy inheritance and multi inheritance mapping.
-
Programmatic mappings.
-
Multi-files project mapping.
-
Declarative database encryption function.
-
Multi-database support.
-
Clean and easy to read persistence layer log and more…
SeQuaLite is a data access framework over JDBC. Features include
CRUD operations, Lazy-Load, Cascading, Paging, on-the-fly SQL
generation. It helps reduce development time effectively.
ODAL is database persistence framework with emphasis on
maintainability and performance. Features include query API, ORM,
data validation/conversion, stored procedure support, code
generation. Minimal dependencies. Short startup time.
jPersist is an extremely powerful object-relational database
persistence API that manages to avoid the need for configuration and
annotation; mapping is automatic. It uses JDBC, and can work with
any relational database and any type of connection resource. It uses
information obtained from the database to handle mapping between the
database and Java objects, so mapping configuration is not needed,
and annotation is not needed. In fact, there is no configuration
needed at all.
NetMind BeanKeeper is an O/R (object/relation) mapping library. Its
task is to map java objects to a relational database, and to offer a
powerful query service to retrieve them.
OpenJPA is a feature-rich implementation of the persistence part of
Enterprise Java Beans 3.0, also known as the Java Persistence API
(JPA), and is available under the terms of the Apache Software
License. OpenJPA can be used as a stand-alone POJO persistence
layer, or it can be integrated into any EJB3.0 compliant container
and many lightweight frameworks.