Grid Computing – Definitions, Advantages, Reviews & Comparison with Cloud Computing 2020
Most industries today have become so dynamic that organisations have to consistently seek and adapt to change in order to survive and prosper. Factors like more diversified customer preferences, technological advances, increased competitive threats and an intensified global economy are among the forces inducing change. Organisations need to become more adaptable, embracing Charles Darwin’s view that “it is not the strongest of the species that survives, nor the most intelligent, but the one that is the most adaptable to change”.
A survey conducted by PricewaterhouseCoopers in March 2004 shows that 47% of the CEOs of the US’s fastest growing companies believe that their most critical success factor is having flexible strategies to respond to accelerating business changes. However, many recently implemented Information Systems still tend to ignore this need for flexibility and are at times hard to scale and customize, thereby limiting the ability of an enterprise to react quickly to its evolving business needs.
In the last two decades we have constantly experienced dramatic change in the way we store and process digital information. Every few years there has been an industry break point: an important new computing concept that radically changed the way computers are used and Information Systems are implemented. Examples include graphical and more user-friendly interfaces, the client-server concept and the Internet. Such developments have helped position computers as a necessary commodity.
Additionally, with the constant drop in the cost of hardware, and better and cheaper network bandwidth, computers have become even more ubiquitous. The Internet has evolved tremendously and is today considered probably the most effective communication medium. Whilst technology tends to evolve in a non-linear fashion, Moore’s Law has ensured that processing power has been increasing exponentially.
Though this is contributing to easier hoarding and dissemination of information, ICT professionals today still face tough challenges. ICT budgets grew rapidly in the late 1990s in anticipation of the Y2K problem. In recent years, many ICT departments have even been asked to cut their budgets while being expected to continue providing an appropriate information infrastructure, so as to enable their organisations to augment their products and possibly gain a competitive edge. Hardware replacement cycles are perceived to have lengthened. Generally speaking, ICT budgets have not grown in recent years in line with the computational needs of organisations: whilst workloads are still increasing, the capacities to handle them are not.
In some cases, increasing a firm’s computational capacity results in a great deal of processing power that is not appropriately utilized. Why? Consider, for example, the utilization of a server machine. Most of the time its real processing capacity is hardly used at all. Yet occasionally, because a large, long-running process is executed or the number of connected users temporarily spikes, the same server can experience a processing overload.
It has been estimated that on average a desktop computer uses only about 5% to 8% of its processing power (EuropeanCeo, 2005). Whilst load balancing can aid in the distribution of processing and communication activity, as Hendry (2004) reports, servers sized for their occasional spikes in processor usage are barely used for the rest of the day and end up with a large amount of unused computing capacity.
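A back-of-the-envelope calculation makes the scale of this waste concrete. The 5–8% utilization range comes from the figures above; the fleet size and per-machine capacity below are hypothetical, chosen only for illustration:

```python
# Estimate the aggregate unused capacity of a fleet of desktop PCs.
# Assumptions (hypothetical): 500 desktops, each a 2.0 GHz-equivalent machine.
desktops = 500
ghz_per_desktop = 2.0
utilization = 0.08  # upper end of the 5-8% range cited above

total_capacity = desktops * ghz_per_desktop         # 1000.0 GHz-equivalents
idle_capacity = total_capacity * (1 - utilization)  # capacity a grid could reclaim

print(f"Total: {total_capacity:.0f} GHz-equivalents, "
      f"idle: {idle_capacity:.0f} ({1 - utilization:.0%} of the fleet)")
```

Even under these modest assumptions, over nine-tenths of the fleet’s capacity sits idle, which is precisely the resource pool that grid computing sets out to harvest.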
So the inevitable questions are: is it really feasible to increase and upgrade a firm’s single source of computational power if, most of the time, the existing processing power is not being used? And how can we ensure that a firm’s computational resources are well balanced and allocated, so as to minimize wastage and eventually justify any further investment in the ICT infrastructure?
The basic concept that gives insight into the answer to these questions extends back to the 1970s, when the notion of distributed computing was born. Today, we are seeing increasing interest among business communities in what is termed Grid Computing.
Grid Computing Definition
World-renowned organisations are promoting the Grid in a big way and several definitions can be found. It has become a fashionable term. Dr. Ian Foster, a professor at the University of Chicago and director of the Distributed Systems Lab at Argonne National Laboratory, a pioneer in Grid Computing, provided his definition for the layman as being the “technology to enable the sharing of computing resources across institutional boundaries”.
Research firm Gartner, Inc. defines grid computing as a way to solve computing tasks using resources that are shared by more than one owner and coordinated to solve more than one problem.
The concept of Grid Computing was initially popular among academic, research and scientific communities. It was used for functions that required a substantial amount of computing power. In recent years, however, an increasing number of organisations have become early adopters, trying to reap the benefits of this technology.
When the processing power that many such networked PCs provide is tallied up, it is like having one big supercomputer. Grid technologies also played a major role in identifying the world’s largest known prime number. This was part of the Mersenne project, where scientists identified the 43rd Mersenne prime, 2^30,402,457 − 1 – a figure that contains 9,152,052 digits.
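That digit count can be verified directly: the number of decimal digits of 2^p − 1 (for the exponents used here, where 2^p is not itself a power of ten) is ⌊p · log₁₀ 2⌋ + 1, so there is no need to expand the nine-million-digit number itself:

```python
import math

# Exponent of the 43rd Mersenne prime, M43 = 2**p - 1.
p = 30_402_457

# Digits of 2**p - 1: floor(p * log10(2)) + 1.
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 9152052, matching the figure quoted above
```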
Within business communities, the Grid concept is far more popular among large corporations. Baum, the publishing editor for Oracle Corporation, states that these corporations are initially attracted by the savings that the technology can provide. Mainstay Partners conducted an ROI study to evaluate the enterprise grid technology platforms in use at seven participating companies. It concluded that the adoption of grid technology yielded an average of 43 percent savings in hardware cost.
Much of the savings were credited to the shift from a large symmetric multiprocessor server to a number of lower-cost servers. With the use of Grid technology, the latter setup delivered similar or at times even greater computational power than the larger system, but at lower cost. Baum’s report adds that the grids within these companies were being used for a variety of applications, including enterprise resource planning (ERP), decision support, customer relationship management (CRM), and supply chain management (SCM).
Still, companies that operate in financial services, drug discovery and weather modelling are initially more prone to benefit from Grid technologies, as they are involved in complex scientific and mathematical calculations and therefore require an added amount of computational power. So are companies that tend to process large amounts of data for their business intelligence activities. However, organisations are increasingly being enticed to adopt Grid technologies even for their transaction-based systems, given that Grids may further ease storage space issues.
Challenges faced by Grid Computing
IDC, the market intelligence and advisory services firm, refers to Grid computing as the fifth generation of computing, after client-server and multi-tier.
Yet, according to IDC, the technology still needs to be ‘normalized’ and has to overcome various challenges. IDC believes that these concerns, in some cases, are more perception than reality, and as organisations gain more experience with this distributed approach, their concerns will be laid to rest.
Additionally, research conducted by The 451 Group shows that software licensing, security and bandwidth issues are among the factors that can disrupt grid rollouts.
Grid Computing Applications
- Distributed Super-computing
- High Throughput Computing
- On Demand Computing
- Data Intensive Computing
- Collaborative Computing
- Solve Complex Problems in No Time
- Integration with Other Organizations
Types of Grid Computing
- Computational Grid
- Data Grid
- Collaborative Grid
- Manuscript Grid
- Scavenging Grid
- Modular Grid
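Of the types above, the scavenging grid is the easiest to sketch: a small agent on each desktop watches for idle CPU and only then pulls work from a shared queue, so owners’ interactive use is never disturbed. The sketch below is illustrative only; the `fetch_job` helper, the job queue, and the 20% idle threshold are hypothetical, and a real agent would read live CPU load (e.g. via an OS probe) rather than a list of sample readings:

```python
IDLE_THRESHOLD = 20.0  # percent CPU below which the desktop counts as idle (hypothetical)

def fetch_job(queue):
    # Hypothetical work queue: pop the next pending task, or None if empty.
    return queue.pop(0) if queue else None

def scavenge(queue, samples):
    """Run queued jobs only when the host looks idle, mimicking a scavenging-grid agent."""
    completed = []
    for load in samples:               # one CPU-load reading per scheduling tick
        if load < IDLE_THRESHOLD:
            job = fetch_job(queue)
            if job is not None:
                completed.append(job)  # a real agent would execute the job here
    return completed

# Three idle ticks out of five, so all three queued jobs get picked up.
done = scavenge(["job-a", "job-b", "job-c"], samples=[5.0, 85.0, 12.0, 90.0, 3.0])
print(done)
```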
Components of Grid Computing
- Security: single sign-on, authentication, authorization, and secure data transfer
- Resource Management: remote job submission and management
- Data Management: secure and robust data movement
- Information Services: directory services of available resources and their status
- Application Programming Interfaces (APIs) to the above facilities
- C bindings (header files) needed to build and compile programs
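How these components fit together can be sketched as a single job-submission flow: sign on once, submit a job against the acquired credential, then stage the results out. Everything below is illustrative; the `GridSession` class and its methods are hypothetical stand-ins for the security, resource-management, and data-management layers listed above, not the API of any real toolkit:

```python
class GridSession:
    """Hypothetical facade over the grid middleware layers listed above."""

    def __init__(self, user):
        self.user = user
        self.token = None

    def sign_on(self):
        # Security layer: single sign-on yields one token reused by every later call.
        self.token = f"token-for-{self.user}"
        return self.token

    def submit(self, executable, inputs):
        # Resource management layer: remote job submission under the acquired token.
        assert self.token, "sign_on() must be called first"
        return {"job": executable, "inputs": inputs, "status": "queued"}

    def stage_out(self, job, destination):
        # Data management layer: secure, robust movement of the job's results.
        job["status"] = "done"
        return f"{destination}/{job['job']}.out"

session = GridSession("alice")
session.sign_on()                              # authenticate once for the whole workflow
job = session.submit("render", ["scene.dat"])  # dispatched to whichever node is free
result = session.stage_out(job, "/results")
print(result)
```

The point of the single sign-on step is that the user authenticates once, and the middleware then acts on their behalf across every institution contributing resources.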
Grid Computing Examples
- NASA Information Grid
- High Energy Physics Data Grid
Grid Computing Vs. Cloud Computing
| GRID COMPUTING | CLOUD COMPUTING |
| --- | --- |
| Grid computing is application-oriented. | Cloud computing is service-oriented. |
| In grid computing, resources are shared among multiple computing units to process a single task. | In cloud computing, all resources are managed centrally and placed on clusters of servers. |
| Grid computing is a collection of interconnected computers and networks that can be called upon for large-scale processing tasks. | In cloud computing, more than one computer coordinates to resolve a problem together. |
| Grid computing is typically operated within a corporate network. | Cloud computing is accessed via the Internet. |
| Grids are mainly owned and managed by an organisation within its own premises. | Cloud servers are owned by infrastructure providers and placed in physically diverse locations. |
| Grid computing offers a shared pool of computing resources based on need. | Cloud computing deals with a common problem using a varying number of computing resources. |
Whilst Grid computing still needs to find broad acceptance in the commercial space, market analysts state that the technology is here to stay. As Tom Hawk, the general manager of Grid computing for IBM, says: “The Web is about sharing information. The grid is about sharing resources”.