Editorials

Using extra computing capacity

Older computer systems required optimized code and carefully planned hardware in order to produce the quickest results. Those familiar with the CRAY supercomputers know what this is about: the CRAY was physically designed with the shortest possible wires to the different components in order to get optimum performance from every subsystem.

Today we have multi-gigabit network speeds between computing platforms at reasonable distances. We have commodity-based pricing on memory. We have multi-core CPUs as the standard. Disk storage costs have continued to drop, and we even have solid-state storage at competitive prices. All of these incredible achievements allow us to write software in ways we would never have considered years ago.

To me, the biggest shift in software development resulting from the increased capacity of computing and network systems has been distributed software. For example, in the old days a report would read directly from data storage, with intimate knowledge of the structure of the data, using tools like PL1, RPG, or COBOL.

Later we extended this to two tiers, using tools like Tuxedo, Crystal Reports, or even Access or Paradox for presentation while mining remote data stores. This worked well as long as network distances were short and speeds were adequate.

Today a mature, enterprise-level software application has three or more layers, each performing a more limited set of tasks. Were it not for the hardware advances, these layers and the network traffic between them would result in lower performance than the software of old. So why do we do it? Because, as the sketch after the list illustrates, software developed in multiple layers is:

  • more easily changed as requirements change
  • able to have a single layer replaced or modified without changing the entire application
  • more scalable, because load can be distributed across multiple computing devices
  • more secure, when access to the inner layers is controlled
  • more likely to yield code that can be reused across systems
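
To make the separation concrete, here is a minimal sketch in Python. The Customer record, CustomerRepository, BillingService, and render_invoice names are hypothetical stand-ins rather than anything from a particular product; the point is only that the presentation code never touches the storage details, so any one layer can be replaced on its own.

    from dataclasses import dataclass

    # --- Data layer: the only code that knows how customers are stored. ---
    @dataclass
    class Customer:
        customer_id: int
        name: str
        balance: float

    class CustomerRepository:
        """Hypothetical data access layer; a real one would talk to a database."""
        def __init__(self):
            self._rows = {1: Customer(1, "Acme Corp", 1250.00)}

        def get(self, customer_id: int) -> Customer:
            return self._rows[customer_id]

    # --- Business layer: rules only, no storage or display details. ---
    class BillingService:
        def __init__(self, repository: CustomerRepository):
            self._repository = repository

        def amount_due(self, customer_id: int) -> float:
            customer = self._repository.get(customer_id)
            # A stand-in business rule: waive balances under one dollar.
            return customer.balance if customer.balance >= 1.00 else 0.0

    # --- Presentation layer: formatting only, no rules or storage. ---
    def render_invoice(service: BillingService, customer_id: int) -> str:
        return f"Amount due for customer {customer_id}: ${service.amount_due(customer_id):,.2f}"

    if __name__ == "__main__":
        print(render_invoice(BillingService(CustomerRepository()), 1))

Because each layer sees only the one below it through a narrow interface, the in-memory repository could be swapped for a networked data service without touching the billing rule or the report formatting, which is exactly the replace-one-layer and scalability benefit described above.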

All of these benefits apply primarily to systems intended to grow over time as needs change. If we value software with a longer life expectancy because it can change along with the business, then this is one of the best uses of the increased computing capacity of modern computer and network systems.

Would you like to share other ways we can use the increased capacity of our newer computer systems? If so, please share by dropping me a note at btaylor@sswug.org.

Cheers,

Ben

SSWUGtv – New Employee Risk
With Stephen Wynkoop
Should you take on a promising employee who you know will require substantial training time, but who could pay off? Laura Rose shares her experience with us on this edition of SSWUGtv. Watch the Show


Featured Article(s)
System Health Session Dashboard — sp_server_diagnostics and more
In this post, we will look at how to get information from the three other components that I did not cover in my last post: the IO Subsystem, Query Processing, and System. The IO Subsystem component will help you track IO latch timeouts, the number of long IOs reported by the SQL Server database engine, and the longest pending IO reported, along with the file name. This data is quite helpful in determining issues related to I/O performance for the queries being executed against the SQL Server instance.
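
As a small companion sketch, and assuming the pyodbc driver and a reachable SQL Server 2012-or-later instance, the Python snippet below runs sp_server_diagnostics once and prints each component row; the connection string and the dump_server_diagnostics name are hypothetical placeholders, and the featured article covers what the individual fields mean.

    import pyodbc  # assumed third-party driver for SQL Server access

    # Hypothetical connection string; point it at your own instance.
    CONNECTION_STRING = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;Trusted_Connection=yes;"
    )

    def dump_server_diagnostics():
        """Run sp_server_diagnostics one time and print every component row."""
        connection = pyodbc.connect(CONNECTION_STRING)
        try:
            cursor = connection.cursor()
            # With the default repeat interval of 0, the procedure runs a single
            # pass and returns one row per component (system, resource,
            # query_processing, io_subsystem, events).
            cursor.execute("EXEC sp_server_diagnostics;")
            columns = [column[0] for column in cursor.description]
            for row in cursor.fetchall():
                print(dict(zip(columns, row)))
        finally:
            connection.close()

    if __name__ == "__main__":
        dump_server_diagnostics()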

Featured White Paper(s)
How to Use SQL Server’s Extended Events and Notifications to Proactively Resolve Performance Issues
(read more)

Featured Script
Extract and compare date "YYMM"
Posted May 4, 2004. Here’s a script I use in a DTS package. I need to extract data from a table based on current year and pr… (read more)