Editorials

Hoarding Data

I recently read yet another article on the need to meet massive-scale storage requirements, and on how current technologies cannot scale with reasonable performance to terabyte, petabyte, or even larger volumes.

Who needs this kind of storage? How big does a company have to be in order to require a terabyte of storage? Have we always needed this kind of storage, or is this a new phenomenon?

One trend I am finding is that we are keeping data around for longer periods of time. Mining data has become a real asset to many companies. Rather than moving data to archival schemes, we are now keeping much more data online for much longer. By keeping this data live, we have the ability to mine it for meaningful trends.

This brings up a number of new challenges. This older data is used very differently than current, operational data. We tend to try to place it into large warehouses, structured in one form or another. But if we don't know how we intend to mine the data, we need to retain it in as much detail as possible.
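As a small, hypothetical illustration (the table and column names below are invented, not taken from any particular system): if the order-level detail rows stay online, a question nobody anticipated can still be answered with a simple aggregate, whereas a pre-summarized archive locks you into the questions asked when it was built.

    -- Hypothetical: roll up order-level detail rows that were kept online
    SELECT region,
           product_id,
           YEAR(order_date) AS order_year,
           SUM(quantity)    AS units_sold,
           SUM(line_total)  AS revenue
    FROM   order_detail
    GROUP  BY region, product_id, YEAR(order_date);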

What trends are you finding at your company? Are people hesitant to purge or archive data? Are you finding a need for more and more storage capacity? Are your real-time systems degrading in performance due to the volume of older, stale data?

Tell me what you think by writing to btaylor@sswug.org.

Cheers,

Ben


Featured Article(s)
DB2 Buffer Pool Essentials (Part 1)
If you use DB2 for z/OS as your database management system, you know that you need to keep a watchful eye on your memory. DB2 just loves memory. Some might say that the more memory you throw at DB2, the better. But simply adding more and more memory to your DB2 buffer pools is not always the best approach. The issue is more about how to practically manage the memory at DB2's disposal.
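For a concrete flavor of what managing (rather than simply growing) that memory looks like, here is a rough sketch of the sort of DB2 for z/OS console commands involved; the pool name and the values shown are placeholders, not recommendations from the article:

    -DISPLAY BUFFERPOOL(BP1) DETAIL

Review the getpage, synchronous read, and prefetch activity first, and only then adjust the pool:

    -ALTER BUFFERPOOL(BP1) VPSIZE(40000) VPSEQT(80) DWQT(30)

VPSIZE sets the pool size, while VPSEQT and DWQT adjust the sequential steal and deferred write thresholds, which can matter more than raw size.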

Featured White Paper(s)
All-At-Once Operations
Written by Itzik Ben-Gan with SolidQ

SQL supports a concept called all-at-onc… (read more)