SSWUGtv – Technology Bit
With Stephen Wynkoop
How does development change as people use tablets, and how can you match your standards to their expectations? Find out more in this week's Technology Bit from SSWUG.
Watch the Show
EAV – Yes or No
David Pool writes about his antipathy for Entity Attribute Value (EAV) data models at http://www.sqlservercentral.com/articles/data+model/95918/. For those of you not familiar with EAV, it is a data structure that allows users to define data attributes at run time, without creating bogus placeholders in your normalized data model for things you did not know would be needed. A minimal sketch of what such a table might look like follows; the table and column names are my own, hypothetical illustration, not taken from David's article.
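-- Hypothetical EAV structure: attributes are rows, not columns, so a new
-- attribute can be captured at run time without a schema change.
CREATE TABLE dbo.EntityAttributeValue
(
    EntityId       int           NOT NULL,  -- the thing being described
    AttributeName  nvarchar(100) NOT NULL,  -- defined by the user at run time
    AttributeValue nvarchar(max) NULL,      -- everything stored as text
    CONSTRAINT PK_EntityAttributeValue PRIMARY KEY (EntityId, AttributeName)
);

-- Two attributes for entity 42 that were never modeled as real columns.
INSERT dbo.EntityAttributeValue (EntityId, AttributeName, AttributeValue)
VALUES (42, N'Color', N'Red'),
       (42, N'WarrantyMonths', N'36');

Note that the typing and validation difficulties David describes follow directly from this shape: every value, whatever its real data type, lands in one loosely typed column.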
Personally, I like the EAV data model; as David correctly notes, when used appropriately it can provide value. I read David's article in full, agreeing with his description of what this data model is, where it can be of value, where it is often abused, and the difficulties that arise from implementing it. He concludes with the question, "I would like to leave you with a final thought. The generic nature of an EAV model is such that there can only be a limited number of ways to produce one that is optimally designed. This being the case where are the reference patterns, books and white papers on EAV?"
Now I am in a quandary: I like his presentation, yet I don't like the conclusion. Without doing a thorough search, I would guess he is right that there are no reference patterns, and yet EAV is a common structure for which a pattern could readily be defined. Why is that the case?
I wonder: if there really are no reference patterns, does that mean EAV should not exist as a data modeling pattern? If so, are there other data modeling patterns that should be thrown out because they have not been codified in some reference treatise?
I read very little on type/subtype as a pattern. In fact, I recently experienced the wrath of an ORM developer because Entity Framework doesn't like type/subtype data modeling; the model was considered "over-engineered." My answer is, "No, the model is not at fault; it is the optimal solution for the data layer. It is just not easily consumed by your auto-code-generating tool when the schema varies from your objects." For readers who haven't worked with the pattern, a rough, hypothetical sketch of a type/subtype schema (my own example names, not from any of the articles above) looks like this:
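-- Hypothetical type/subtype model: a Party supertype with Person and
-- Organization subtypes that share the supertype's key.
CREATE TABLE dbo.Party
(
    PartyId   int     NOT NULL PRIMARY KEY,
    PartyType char(1) NOT NULL   -- 'P' = Person, 'O' = Organization
);

CREATE TABLE dbo.Person
(
    PartyId   int          NOT NULL PRIMARY KEY
        REFERENCES dbo.Party (PartyId),
    FirstName nvarchar(50) NOT NULL,
    LastName  nvarchar(50) NOT NULL
);

CREATE TABLE dbo.Organization
(
    PartyId   int           NOT NULL PRIMARY KEY
        REFERENCES dbo.Party (PartyId),
    LegalName nvarchar(200) NOT NULL
);

This is perfectly ordinary relational design, but a code generator expecting a one-to-one mapping between classes and tables sees three tables where the object model sees an inheritance hierarchy, which is exactly the friction described above.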
So, do I de-normalize my type/subtype model to make code generation easier? The answer is a bit complex, and fodder for a future editorial.
Let me leave the discussion there for today. I hope I have piqued your interest. I hope, even more, that you are interested enough to respond to btaylor@sswug with your reasoning and/or experience.
Cheers,
Ben
Featured Article(s)
Partitioning Relational Data (Part 2)
In this second part of the partitioning relational data article series, I'll show you how to convert an existing table to a partitioned table; how to add, modify, remove, or switch partitions; and how to view data in a specific partition.
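As a rough, hypothetical illustration of the kind of objects the article covers (the function, scheme, and table names here are my own, not taken from the article), a partitioned table and a query against a single partition might look like this:

-- Partition function and scheme: RANGE RIGHT on an order date,
-- all partitions mapped to the PRIMARY filegroup for simplicity.
CREATE PARTITION FUNCTION pfOrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- The partitioning column must be part of the clustered key.
CREATE TABLE dbo.Orders
(
    OrderId   int  NOT NULL,
    OrderDate date NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON psOrderDate (OrderDate);

-- View only the rows that fall in partition 2.
SELECT OrderId, OrderDate
FROM   dbo.Orders
WHERE  $PARTITION.pfOrderDate(OrderDate) = 2;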
Featured White Paper(s)
Encryption & Key Management for Microsoft SQL Server 2008/2012
Simplify encryption and key management on your SQL Server. Data thieves are targeting SMB companies because of their inadequa… (read more)
Featured Script
ExportToWord
This stored procedure can be used to export the text string into Microsoft Word. You can pass the text string and the file … (read more)