Andrea Attanasio


Andrea Attanasio is associated with the Center of Excellence on High-Performance Computing and Dipartimento di Elettronica, Informatica e Sistemistica, Universita degli Studi della Calabria, Rende, Italy.



Gianpaolo Ghiani


Gianpaolo Ghiani is a professor at the Faculty of Engineering of the University of Salento (Lecce, Italy). During his PhD and post-doctoral years he worked on optimization problems in logistics and manufacturing planning and control (vehicle routing, plant location, facility layout, job sequencing and similar topics). In recent years his focus has been on sequential (multi-stage) decision processes, as well as on vehicle routing problems, including the fast computation of point-to-point quickest paths on very large time-dependent road networks. He has published over 50 papers in international journals on these topics.



Lucio Grandinetti


Lucio Grandinetti is a full professor in the Faculty of Engineering at the University of Calabria in Rende, Italy, where he serves as Vice Rector and Coordinator of the Ph.D. Degree Program in Management Science and Engineering. He holds degrees in Electronic Engineering and Systems Science from the University of Pisa, Italy and the University of California at Berkeley, and performed post-doctoral research at the University of Southern California, Los Angeles and the University of Dundee, Scotland. He has contributed to over 110 publications on high-performance computing, has been a member of the IEEE Committee on Parallel Processing, and has served as editor of an MIT Press series of books on Advanced Computational Methods and Engineering. Currently, Grandinetti is the Director of the University of Calabria's Centre of Excellence on HPC and the co-managing director of a Supercomputing Centre jointly established by the University of Calabria and the NEC Corporation.



Emanuela Guerriero


Emanuela Guerriero graduated in Computer Engineering from the Universita degli Studi di Lecce in 1999 and received a PhD in Operations Research from the University of Calabria in 2003. In January 2004 she joined the University of Lecce as a Researcher in Operations Research (Dipartimento di Ingegneria dell'Innovazione).

Her main research interests concern combinatorial optimization models for planning and scheduling problems, with applications in manufacturing and logistics. She has published in international journals including the Journal of Optimization Theory and Applications, International Transactions in Operational Research, European Journal of Operational Research, Parallel Computing, International Journal of Production Research, and International Journal of Production Economics.



Francesca Guerriero

Francesca Guerriero is associated with the Center of Excellence on High-Performance Computing and Dipartimento di Elettronica, Informatica e Sistemistica, Universita degli Studi della Calabria, Rende, Italy.




Grid Computing:
Operations Research Methods for Resource Management and Scheduling

Published: October 25, 2013 • Service Technology Magazine Issue LXXVII

Abstract: The grid computing paradigm is replacing localized computer clusters and the traditional "supercomputer" model for research and industry enterprises requiring increasingly large amounts of processing power and storage. Computational grids are distributed systems integrating geographically separate resources into a single, massively parallel network well-suited to complex processing tasks. This article introduces grid computing, lays out a taxonomy for comparing and analyzing existing grid resource management systems, discusses the optimization and coordination of computational grids, and presents quantitative methods for managing grid resources. Local scheduling, external scheduling, and coordination of grid endpoints are highlighted as methods of resource management. Techniques and ongoing research projects are also discussed for improving the economy of resource bundles in flat, hierarchical, and cell structure grids.


A "computational grid" is a collection of geographically distributed, loosely coupled, heterogeneous, non-dedicated computing resources, belonging to several organizations, which provide computing services without users knowing the location and features of the involved resources [Ref-1] (Figure 1). The resources shared and aggregated in order to provide the required services may include processors; nodes with arithmetic, vector, graphic, or signal processing facilities; storage systems; data sources; etc. Grids are inspired by electric power production, transmission and trading [Ref-2]. However, unlike power grids:

  • computing services may have a relatively complex structure (e.g., a job may be constituted by several precedence constrained tasks, and may be characterized by a deadline and a budget);
  • it is unlikely that individual resources (e.g., individual CPU, memory, databases, etc.) are useful on their own; therefore, resource allocation on a grid is mainly concerned with suitable combinations (bundles) of resources;
  • resource proximity may be relevant in the resource selection process because of the finite bandwidth and latency.

In addition, it is worth noting that, while in a power grid both fixed and variable costs are relevant, in a computational grid variable costs are often negligible.


Figure 1

In the long run it is expected that all grids will become part of a unique worldwide grid, a sort of generalization of the World Wide Web. Just as a user reads a web page containing pieces of information located on multiple computers around the world (without knowing where these data come from), the worldwide grid will allow the use of more advanced computing services without awareness of which resources are utilized. A grid user submits a task request by means of a Graphical User Interface (GUI), giving some high-level specifications (e.g., a set of precedence-constrained tasks, the applications to be used, input data requirements, a deadline, an available budget, etc.). The Grid Resource Management System (GRMS) plays the role of finding and allocating feasible resources (CPU cycles, storage, bandwidth, etc.) in order to satisfy the user request. Then, the GRMS monitors the correct task processing and notifies the user when the results are available. The GRMS must be able to utilize the residual resources of each organization that is part of the grid (i.e., the resources left by internal users). As a consequence, an organization may act as a "grid resource provider" or as a "grid resource user" depending on whether its own "capacity" is greater than or less than its internal demand (Figure 2).

From an economic point of view, grids are motivated by the following observation: if user demands are uncorrelated, the aggregated capacity required by a grid system is significantly smaller than the sum of the capacities that the organizations would have in a traditional computing system. This phenomenon (known in the supply chain literature as risk pooling) can be explained qualitatively as follows: in a grid, if the demand from an organization is higher than the average, then there will probably be another organization whose demand is below average. Hence, capacity originally allocated to one organization can be reallocated to the other and, as a result, a lower overall capacity is required.
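The capacity saving from risk pooling can be illustrated with a small simulation (a sketch with made-up demand distributions, not data from any real grid): ten organizations with uncorrelated demands need noticeably less capacity when provisioned jointly for the same service level than when each provisions alone.

```python
import random

def capacity_needed(demands, quantile=0.95, trials=10_000, pooled=True):
    """Capacity covering `quantile` of demand, estimated by simulation.
    demands: one zero-argument sampler per organization (hypothetical)."""
    if pooled:
        # one shared pool: provision for the quantile of aggregate demand
        samples = sorted(sum(d() for d in demands) for _ in range(trials))
        return samples[int(quantile * trials)]
    # no pooling: each organization provisions for its own quantile
    total = 0.0
    for d in demands:
        samples = sorted(d() for _ in range(trials))
        total += samples[int(quantile * trials)]
    return total

random.seed(42)
# ten organizations with uncorrelated (illustrative) demands
orgs = [lambda: random.gauss(100, 30) for _ in range(10)]
pooled = capacity_needed(orgs, pooled=True)
unpooled = capacity_needed(orgs, pooled=False)
print(pooled < unpooled)  # True: the grid needs less aggregate capacity
```

The gap widens as the number of organizations grows, since the standard deviation of the aggregate demand grows only with the square root of their number.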


Figure 2 - Available resources minus internal demand for a sample organization.

In a grid environment, a key issue is the optimization and coordination of computing resource usage, which is the topic of this paper. Indeed, allocation and scheduling approaches developed for massively parallel processor (MPP) schedulers rely on a number of hypotheses which are not satisfied in a grid environment:

  • all resources lie within a single administrative domain;
  • an MPP scheduler is in control of all resources;
  • the resource pool is invariant;
  • the impact caused by contention from other applications in the system on application execution performance is minimal;
  • all computation resources and all communication resources exhibit similar performance characteristics.

The GRMS provides three fundamental services: resource dissemination, resource discovery, and job scheduling [Ref-3]. This is accomplished in conjunction with other grid components such as applications, grid toolkits, native operating systems, billing and accounting systems, and security components (Figure 3). Grid toolkits (made up of compilers, development environments and run-time systems) allow applications to describe their resource requirements. The GRMS uses the services of the native operating systems (OSs) to execute applications. In a grid environment, "local" users may submit jobs to nodes which are managed by the nodes' OSs (not by the GRMS). In addition, the GRMS interacts with a security component (which implements the authorization and authentication functions of the grid) and with the billing and accounting systems.


Figure 3 - Resource Management System Context.

To meet the requirements of a grid environment, computer scientists in the mid-1990s began exploring the design and development of tailored methods for specifying application requirements, translating these requirements into computational resources and network-level quality-of-service parameters, and arbitrating between conflicting demands. As shown in the subsequent sections, the current tools for resource management are relatively simple and do not fully exploit the potential of computational grids. Operations Research methods are expected to become necessary in order to support advanced features such as hard Quality of Service, a grid economy paradigm, etc.

A Taxonomy of Grid Resource Management Systems

Several projects, such as Globus [Ref-4], Legion [Ref-5], AppLeS [Ref-6], Condor [Ref-7] and Nimrod/G [Ref-8], are currently underway with the aim of designing and implementing a grid resource management system. This section summarizes the main features of these GRMSs, while the remaining sections focus on methodological issues which need to be further explored in the near future.

Our GRMS classification is based on the taxonomy described in Krauter et al. [Ref-3] and divides GRMSs according to the following features: application target, machine organization, resource model characterization and scheduling peculiarities. We first summarize the taxonomy.

Application Target

Depending on the focus and application target, grids are classified into computational grids, data grids and service grids. A computational grid aggregates a large computational capacity in order to solve highly demanding applications (such as the ones utilized for solving grand challenge problems). Data grids provide an infrastructure for synthesizing new information from data repositories (e.g., digital libraries and data warehouses) distributed in a wide area network. Finally, service grids (including on demand, collaborative and multimedia grid systems) provide services that cannot be provided by any single machine.

Machine Organization

Another meaningful classification is based on machine organization, which affects the communication patterns and the scalability of the system. In this respect, grids may have a flat, hierarchical or cell structure. In a flat organization all machines can communicate directly with each other, while in a hierarchical organization machines are subdivided into a number of levels and each machine can communicate with the machines directly above it, below it, or peer to it in the hierarchy. Finally, in a cell organization machines are clustered into cells, and the machines within a cell communicate among themselves using a flat organization; a designated machine in each cell acts as a gateway.

Quality of Service Support

The level of support for extensive Quality of Service (QoS) will gain increasing relevance as grid applications become more sophisticated and demanding. At present, jobs are sometimes queued for hours before actually being processed, leading to degraded QoS. QoS is not only concerned with network bandwidth but also extends to the processing and storage capabilities of the grid nodes. In this respect, a GRMS may provide hard QoS support, soft QoS support, or no QoS support. A GRMS that provides the ability to specify QoS at job submission time, and can reserve resources in advance accordingly, is said to provide hard QoS support. At present most grid systems provide soft QoS since they cannot enforce non-network QoS guarantees. This is mainly due to non-real-time operating systems which do not allow the specification of service levels for running jobs.
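Hard QoS support hinges on advance reservation. As a minimal sketch (a toy calendar, not the mechanism of any particular GRMS), a resource can accept a reservation for a time window only if its committed load stays within capacity at every instant of the window:

```python
class Resource:
    """Toy advance-reservation calendar for a single resource (illustrative)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reservations = []  # (start, end, amount) triples

    def load_at(self, t):
        # total amount committed at instant t
        return sum(a for s, e, a in self.reservations if s <= t < e)

    def can_reserve(self, start, end, amount):
        # load only changes at reservation start points, so checking those
        # instants inside [start, end) is sufficient
        points = {start} | {s for s, e, a in self.reservations if start <= s < end}
        return all(self.load_at(t) + amount <= self.capacity for t in points)

    def reserve(self, start, end, amount):
        if not self.can_reserve(start, end, amount):
            return False
        self.reservations.append((start, end, amount))
        return True

cpu = Resource(capacity=8)
print(cpu.reserve(0, 10, 6))   # True
print(cpu.reserve(5, 15, 4))   # False: would exceed capacity in [5, 10)
print(cpu.reserve(10, 15, 4))  # True: the earlier reservation has ended
```

A real GRMS would additionally have to enforce the reservation at run time, which is exactly what non-real-time operating systems make difficult.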

Scheduler Organization

Grid scheduling systems can be classified into centralized, hierarchical and decentralized. In a centralized controller scheme, it is assumed that a single machine controls, or has sight of, the scheduling policies for all the nodes. In a hierarchical system, the resource controllers are organized as a hierarchy in which higher level resource controllers manage bigger units of resources while lower level controllers schedule smaller units of resources. Finally, in a fully decentralized approach there are no dedicated resource controller machines and the resource allocation and scheduling is done, e.g., on the basis of some sort of market trading. In a grid system each administrative domain is expected to have its own resource utilization policy. Broadly speaking, policies can be system-centric (if they try to maximize system throughput) or application-centric (if they try to minimize an application completion time or lateness). Because of the uncertainty affecting machine speeds, job processing efforts as well as request arrival times, schedules need to be re-examined and the job executions reordered from time to time. Rescheduling can be periodic or event driven (i.e., triggered by certain system events such as the arrival of a new job).
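The periodic versus event-driven distinction can be sketched as follows (the event kinds and the policy driver are illustrative assumptions, not part of any specific GRMS):

```python
def replanning_times(events, reschedule_period=None):
    """Return the times at which the scheduler recomputes its plan.
    events: (time, kind) pairs; the kinds used here are made up."""
    triggers = {"job_arrival", "node_failure"}
    horizon = max(t for t, _ in events)
    if reschedule_period is not None:
        # periodic policy: replan on a fixed clock, events or not
        return list(range(0, horizon + 1, reschedule_period))
    # event-driven policy: replan only when a triggering event occurs
    return [t for t, kind in sorted(events) if kind in triggers]

events = [(3, "job_arrival"), (7, "heartbeat"), (12, "node_failure")]
print(replanning_times(events, reschedule_period=5))  # [0, 5, 10]
print(replanning_times(events))                       # [3, 12]
```

The trade-off is visible even in this toy: the periodic policy replans on a schedule regardless of need, while the event-driven policy reacts immediately but only to the events it recognizes.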

Table 1 reports a summary of the main features of the most popular systems. For the sake of brevity, we do not describe these systems in detail. Instead, we note that grid resource management still seems to be in its infancy. In particular, most systems still use a centralized architecture.

While a centralized approach is suitable for cluster computing systems, it does not fit large grid environments with machines distributed across multiple organizations and administrative domains. Indeed, the main reasons for rejecting a centralized approach are:

  • a centralized scheduler would be a bottleneck (the allocation and scheduling problems to be solved are NP-hard and large-scale, so huge computing resources would be necessary just to generate reasonably good feasible solutions);
  • some organizations and administrative domains might not be willing to share information on their capacity and/or their computing demand and/or their available budget.

On the basis of the previous analysis, the architectural design of a grid scheduling system needs to be based on a decentralized or hierarchical approach. Indeed, a GRMS should comprise both local schedulers (LSs) and external schedulers (ESs), also called superschedulers or metaschedulers. An LS (e.g., Condor [Ref-7] and PBS [Ref-9]) manages a single site (e.g., a cluster or a supercomputer) in a centralized way, while an ES allocates jobs to sites according to a distributed mechanism. It is worth noting that at present LSs often employ very simple (and quite inefficient) scheduling policies based on a first-come-first-served, prioritized queue or round-robin policy, or on a greedy heuristic. Another drawback of most current LSs is that they consider a single conventional objective (e.g., the maximization of system throughput, the optimization of resource utilization, etc.) and do not address QoS issues in a satisfactory way. The main effort in this direction has been made at the local scheduler level by the economy-based scheduler Nimrod/G [Ref-8], which is part of the GRACE project [Ref-10]. Originally developed for parameter-sweep applications, Nimrod/G makes use of two main scheduling strategies (depending on the user preferences):

  • minimize job completion time within a given budget constraint (so-called time optimization);
  • minimize cost within a given deadline (so-called cost optimization).
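A greedy sketch in the spirit of the time-optimization strategy (the machine data and the exact greedy rule are assumptions for illustration; Nimrod/G's actual algorithm differs in detail) assigns each independent task to the affordable machine that would complete it earliest:

```python
def time_optimize(n_tasks, machines, budget):
    """Greedy "time optimization" sketch: assign independent tasks so the
    last one finishes as early as possible while total spend stays within
    budget. machines: (name, hours_per_task, cost_per_task) triples, all
    hypothetical values."""
    finish = {name: 0.0 for name, _, _ in machines}
    spent, allocation = 0.0, []
    for _ in range(n_tasks):
        # among machines we can still afford, pick the one whose next
        # task would complete earliest
        affordable = [(finish[n] + h, n, c) for n, h, c in machines
                      if spent + c <= budget]
        if not affordable:
            return None  # budget exhausted before all tasks were assigned
        done_at, name, cost = min(affordable)
        finish[name] = done_at
        spent += cost
        allocation.append(name)
    return max(finish.values()), allocation

machines = [("fast", 1.0, 4.0), ("slow", 2.0, 1.0)]
result = time_optimize(4, machines, budget=10.0)
print(result)  # (4.0, ['fast', 'fast', 'slow', 'slow'])
```

The cost-optimization strategy would invert the roles: minimize spend subject to every task finishing before the deadline.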

In order to manage QoS at a superscheduling level, an economy-driven resource management system needs to be set up. For an overview of economic models (e.g., commodity market model, posted price model, auctions, etc.) the reader is referred to R. Buyya's thesis [Ref-11]. In an economy-driven GRMS:

  • resource providers pursue the best possible return on their investment (or the maximization of their resource utilization);
  • resource users try to have their problems solved within a specified timeframe and budget.

Resource prices vary over time (based on supply and demand), allowing important jobs to be scheduled during "peak" hours and less important jobs during "off-peak" hours. Studies on grid economy are at a very preliminary stage. From a theoretical perspective [Ref-12], a quite general framework is given by general equilibrium theory (GET), which aims at determining resource prices such that supply meets demand for each commodity and the different players optimize their use of resources at the current price levels. GET describes conditions under which the existence and uniqueness of such a general Pareto-optimal solution hold.
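The GET price-adjustment idea can be sketched with a classic tâtonnement loop for a single commodity, say CPU hours (the linear demand and supply curves are illustrative placeholders): the price rises while demand exceeds supply and falls otherwise, settling where the two meet.

```python
def tatonnement(demand, supply, price=1.0, step=0.05, tol=1e-6, max_iter=10_000):
    """Walrasian price-adjustment sketch for one commodity: move the price
    proportionally to excess demand until the market (nearly) clears."""
    for _ in range(max_iter):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += step * excess
        price = max(price, 1e-9)  # prices stay positive
    return price

# toy linear market: demand falls and supply rises with price
p_star = tatonnement(lambda p: 10 - 2 * p, lambda p: 1 + p)
print(round(p_star, 3))  # 3.0, where 10 - 2p = 1 + p
```

With many interdependent commodities (CPU, storage, bandwidth) convergence is far less straightforward, which is one reason grid economy research is still preliminary.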

Finally, most GRMSs have been implemented following a monolithic architecture and are hard to extend to different applications.

Grid Economy and Resource Management

Current grid computing systems operate on a zero settlement basis (in which users give their resources for free in order to contribute to relevant research projects) or on the basis of an award process. However, it is expected that, as soon as commercial grids become a reality, resources will be traded on an economic basis capable of supporting hard QoS.

Zero Settlement Basis Grid

Examples of such projects are the SETI@home project [Ref-13], which aims at finding evidence of extraterrestrial life, the XPulsar@home project [Ref-14], which studies a mathematical model of a pulsar, the Intel CURE project [Ref-15], which aims at evaluating the curing potential of hundreds of millions of molecules in order to design new drugs, the FightAIDS@Home project [Ref-16], the RSA Factoring by Web project [Ref-17], etc. In these projects, the overall job is subdivided into independent tasks. For instance, in the SETI@home project, a telescope located in Puerto Rico repeatedly scans about 25% of the sky and records a range of radio frequencies centered at the hydrogen line. These data are delivered to a server at the University of California, Berkeley, where they are decomposed into tasks of about 300 KB each. Each task must be processed with a Fast Fourier Transform to detect spikes of frequencies significantly above the noise level, and each task can be performed independently. The processing of one task takes about 10 hours on a typical personal computer. While tasks are being processed, new signals are received, so new tasks continue to be created.
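The per-task spike-detection step can be sketched as follows (a naive DFT on a synthetic signal; the median-based threshold rule is an illustrative stand-in for SETI@home's actual analysis):

```python
import cmath, math, random

def dft_power(samples):
    """Naive DFT power spectrum (fine for the tiny task sizes sketched here)."""
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(samples))) ** 2
            for k in range(n // 2)]

def find_spikes(samples, threshold=10.0):
    """Flag frequency bins whose power exceeds `threshold` times the median
    power, the median serving as a crude noise-level estimate."""
    power = dft_power(samples)
    baseline = sorted(power)[len(power) // 2]
    return [k for k, p in enumerate(power) if k > 0 and p > threshold * baseline]

random.seed(7)
n = 128
# synthetic "recorded signal": Gaussian noise plus a tone in frequency bin 17
signal = [random.gauss(0, 1) + 3 * math.sin(2 * math.pi * 17 * i / n)
          for i in range(n)]
spikes = find_spikes(signal)
print(spikes)
```

A production pipeline would use an FFT (O(n log n) rather than this O(n^2) loop) and far more careful statistics, but the structure of the task is the same.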

System | Target | Organization | QoS and scheduling
2K | On-demand service grid | Flat (hierarchical in the future) | Soft network QoS; one-level hierarchical scheduler for network resources; decentralized scheduler for other resources
AppLeS | Computational grid | Resource model provided by the underlying Globus, Legion, or Netsolve middleware services | Centralized scheduler; predictive heuristic state estimation; online rescheduling
Bond | On-demand service grid | Flat | Hard QoS; decentralized scheduler; predictive pricing models; online rescheduling
- | Data grid / computational grid | Hierarchical | No QoS; hierarchical schedulers; extensible scheduling policy
Condor | Computational grid | Flat | No QoS; centralized scheduler
Darwin | Network-oriented service grid | Hierarchical | Hard QoS; hierarchical scheduler; non-predictive state estimation; online rescheduling
Globus | Grid toolkit (for developing computational, data, and service grids) | - | Soft QoS; lower-level services for resource allocation or co-allocation, including resource reservation; higher-level tools (such as Nimrod/G) perform scheduling
- | Multimedia service grid | Flat | Hard QoS; decentralized scheduler
MOL | Computational grid | Hierarchical cells | No QoS; decentralized scheduler
MSHN | Computational and service grids | Flat | Hard QoS; centralized scheduler; predictive heuristics for state estimation; event-driven rescheduling

Table 1 - Existing GRMS main features

A client-server architecture is utilized in which the server can register any volunteer and send it a client program that can process any task, send a task to a client program that contacts the server, and receive the result of processing a task from a client program. Any client computer processes at most one task at any time. The server stores a database of about 2 million volunteers, half a million of whom actively participate in the processing, and keeps track of which volunteer processed which task. A task is sometimes processed more than once (by different volunteers) to verify the results using an independent volunteer.
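The bookkeeping described above can be sketched as a small server object (class and method names are hypothetical): each task is handed to two independent volunteers, and a result is accepted only when both copies agree.

```python
class VolunteerServer:
    """Toy redundant-verification bookkeeping for a volunteer-computing server."""
    def __init__(self, tasks, copies=2):
        self.copies = copies
        self.assigned = {t: set() for t in tasks}  # task -> volunteers
        self.pending = {t: [] for t in tasks}      # task -> results received
        self.verified = {}                         # task -> accepted result

    def request_task(self, volunteer):
        # hand out the first unverified task this volunteer hasn't seen
        for t, vols in self.assigned.items():
            if t not in self.verified and volunteer not in vols and len(vols) < self.copies:
                vols.add(volunteer)
                return t
        return None  # nothing left for this volunteer

    def submit(self, task, result):
        self.pending[task].append(result)
        results = self.pending[task]
        # accept only when all redundant copies agree
        if len(results) == self.copies and len(set(results)) == 1:
            self.verified[task] = results[0]

server = VolunteerServer(["task-1"])
a = server.request_task("alice")
b = server.request_task("bob")
print(a, b)            # both volunteers receive task-1
server.submit(a, 42)
server.submit(b, 42)   # matching results -> the task is verified
print(server.verified)
```

A real server would also handle timeouts, disagreeing results (by issuing a third copy), and persistence, but the core invariant is the same: no result enters the database on the word of a single volunteer.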

Award-based Grids

An example of such a project is the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) Program which manages a number of computing resources at several US research institutions (including the Distributed Terascale Facility). There are several ways of obtaining access to PACI resources [Ref-18]. A scientist, engineer or educator who has a joint appointment with a US university or a US Federal agency can obtain access to PACI resources depending on his/her eligibility and requirements. Smaller allocations, including startup allocations, are handled by individual sites while larger applications are considered by the National Resource Allocations Committee (NRAC) twice annually. Requests should be clearly linked to the scientific goals and should be supported by providing performance and parallel-scaling information for the application(s) to be used. Currently, the PACI Program does not allocate resources other than CPU time. It is anticipated that the allocation of non-CPU resources (such as data storage or dedicated networking resources) may be required in the future. CPU resources are requested in the form of Service Units (SUs). In general, an SU is approximately equivalent to either one CPU hour or one wall clock hour on one CPU of the system of interest. The exact definition of an SU on each platform is defined in the technical documentation associated with each resource available through the listing of PACI resources [Ref-19].
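The SU arithmetic is straightforward; a small helper (the per-platform rate is a placeholder to be read from each resource's technical documentation) makes the charging rule explicit:

```python
def service_units(cpus, wall_clock_hours, su_per_cpu_hour=1.0):
    """Approximate SU charge: CPUs held, times wall-clock hours, scaled by
    a per-platform rate (illustrative default of 1 SU per CPU-hour)."""
    return cpus * wall_clock_hours * su_per_cpu_hour

# a hypothetical 64-CPU job running for 3 wall-clock hours
print(service_units(64, 3))  # 192.0 SUs
```

Note that charging by wall-clock hours on held CPUs (rather than consumed CPU time) bills a job for the capacity it reserves, whether or not it uses it.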

Economy-based Grids

Economy-based GRMSs provide a distributed mechanism for allocating resources to jobs at a superscheduler level while enforcing QoS constraints. Market-based GRMSs force users to compete, so that the different levels of importance they give to their jobs become explicit and resources can be allocated accordingly. Furthermore, market-based schedulers provide a feedback signal that prevents users from submitting unbounded amounts of work. Negotiation may be done on a monetary basis or with a dummy currency in order to share the benefit of the cooperation among organizations. According to economic theory, auctions are among the most suitable means for achieving this result. (For material explaining auction design in a grid setting, please refer to the complete text, Grid Computing: The New Frontier of High Performance Computing, edited by Lucio Grandinetti.)

Architecture of a Grid Resource Management System

Grid users submit their jobs from any one of a number of entry points (EPs). Some privileged users have direct access to the resources of some administrative domain while others are managed by an ES. An ES acts as a broker, i.e., it is in charge of dividing the job into a number of tasks and allocating each task to a site (such that QoS constraints are satisfied). For each site (an administrative domain or a part of it), an LS is responsible for determining job sequencing, local resource allocation and data transfer scheduling. In general, on receipt of a job request, the ES interrogates a number of controlled LSs to ascertain whether the task can be executed on the available resources and meet the user-specified deadline (feasibility check). If this is not the case, the ES attempts to locate an LS, managed by another ES, that can meet the task requirements.

If an LS cannot be located within a preset number of search steps, the task request is either rejected or passed to a scheduler that can minimize the deadline failure. When a suitable site is located, the task request is passed from the ES to the selected LS. The choice of specific algorithms for each component defines a particular GRMS.
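The feasibility-check loop can be sketched as follows (the `can_meet` interface, the site data, and the simple linear probing order are all hypothetical):

```python
def broker(task, local_schedulers, max_steps=3):
    """External-scheduler sketch: probe LSs in order, up to a preset number
    of search steps, and return the first site that passes the feasibility
    check (or None, meaning reject or hand off to a fallback scheduler)."""
    for step, ls in enumerate(local_schedulers):
        if step >= max_steps:
            break
        if ls.can_meet(task):
            return ls
    return None

class LS:
    """Toy local scheduler exposing only a feasibility check."""
    def __init__(self, name, free_cpus):
        self.name, self.free_cpus = name, free_cpus
    def can_meet(self, task):
        return self.free_cpus >= task["cpus"]

schedulers = [LS("site-a", 4), LS("site-b", 32), LS("site-c", 64)]
chosen = broker({"cpus": 16}, schedulers)
print(chosen.name)  # site-a lacks capacity; site-b is the first feasible site
```

A real feasibility check would consider deadlines, data-transfer times and prices rather than a single CPU count, but the bounded-search structure is the one described above.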

In practice, the above described architecture can be expanded in order to improve its flexibility and/or to implement grid economy features. In particular, the following variants are expected to be considered in the near future:

  • the ES might split a user job into a number of simpler tasks to be submitted to different LSs;
  • the feasibility check performed by LSs might accept a job only if its price is greater than the value of the required resources;
  • as new important and urgent local jobs are submitted, an LS might try to re-allocate pending low-priority jobs to other LSs through an ES.

Figure 4 - Grid scheduling architecture.

A Blueprint for Grid Resource Management

As pointed out in previous sections, the literature on grid resource management is still in its infancy. As a result, there are several research lines that deserve to be followed. In the remainder of this paper we report and discuss the most notable issues.

Local Scheduling

The research activity should be focused on devising enhanced semi on-line heuristics for the multiprocessor task scheduling problem [Ref-20, 21, 22]. Unlike traditional sequential and parallel computing environments (where only "soft" QoS is guaranteed, and scheduling must be performed very quickly), in a grid environment it is worth spending greater effort in order to obtain better schedules (i.e., schedules with a better trade-off between economic and QoS objectives). In order to improve the algorithms proposed in Caramia et al. [Ref-21] and Festa et al. [Ref-22], our basic idea is to allow a modification of the current schedule any time a new user request arrives. In order to implement this strategy, new tailored metaheuristics [Ref-23] should be devised to provide much better solutions in a short amount of time. Metaheuristic algorithms can play a very important role in solving hard combinatorial problems; they include multistart methods, genetic algorithms, simulated annealing, tabu search, variable neighborhood descent, variable neighborhood search, and GRASP. It is worth noting that, in order to provide good results, metaheuristics need to be tailored to the problem at hand.
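As a concrete example of the multistart idea applied to a much simplified, independent-task version of multiprocessor scheduling (the instance data are made up), the sketch below list-schedules jobs in random orders on identical machines and keeps the best makespan found:

```python
import random

def makespan(order, durations, n_machines):
    """List-schedule jobs in the given order on identical machines:
    each job goes to the currently least-loaded machine."""
    loads = [0.0] * n_machines
    for j in order:
        loads[loads.index(min(loads))] += durations[j]
    return max(loads)

def multistart(durations, n_machines, restarts=200, seed=0):
    """Multistart heuristic sketch: repeatedly shuffle the job order,
    list-schedule it, and keep the best makespan seen."""
    rng = random.Random(seed)
    jobs = list(range(len(durations)))
    best = makespan(jobs, durations, n_machines)
    for _ in range(restarts):
        rng.shuffle(jobs)
        best = min(best, makespan(jobs, durations, n_machines))
    return best

durations = [7, 7, 6, 6, 5, 5, 4, 4, 4, 4]  # 52 units of work in total
print(multistart(durations, n_machines=2))  # 26, the best two-machine split
```

A grid-ready metaheuristic would add the features listed below (precedences, processor classes, revenue objectives) on top of this skeleton, typically via tabu search or GRASP rather than blind restarts.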

Secondly, the research activity should address a number of meaningful generalizations of the on-line problems tackled in Caramia et al. [Ref-21] and Festa et al. [Ref-22]. The main features of these problems are listed below:

  • precedence constrained tasks;
  • variable profile tasks (as detailed in the complete text);
  • different processor classes (e.g., processors with/without vector facilities, etc);
  • data transfer constraints;
  • maximization of the overall net revenue (job price minus resource value).

Thirdly, the high degree of dynamism of computational grids should be taken into account explicitly in the resource management system. This requires that variations of resource availability (e.g., processor speed and network bandwidth) and service requests (e.g., job arrival times) are suitably characterized. Then such (probabilistic) characterizations need to be taken into account in computationally efficient rescheduling algorithms.

External Scheduling

As explained in the previous sections, economy-based GRMSs provide a distributed mechanism for allocating resources to jobs at a superscheduler level while enforcing QoS constraints. Because of the peculiar nature of grid resources and services, combinatorial auctions are the most suitable mechanism. In this regard, three main research issues deserve attention. Firstly, a grid bidding language should be devised such that applications' agents are able to express alternative resource needs. Secondly, efficient grid auction mechanisms need to be developed. A mechanism is defined as the specification of all possible bidding strategies available to participants, and of an outcome function that maps these strategies to an allocation of resources (who gets what?) and corresponding payments that participants need to make or receive. Finally, efficient heuristics should be devised for the winner determination problems associated with the selected auction mechanisms.
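For a tiny instance, the winner determination problem can even be solved by brute force (the bids are made up; real instances are NP-hard and need the heuristics mentioned above): pick the revenue-maximizing set of bids whose resource bundles do not overlap.

```python
from itertools import combinations

def winner_determination(bids):
    """Exact brute-force winner determination for a tiny combinatorial
    auction. bids: (bundle_of_resources, price) pairs. Exponential in the
    number of bids, so a sketch only."""
    best_value, best_set = 0.0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [set(b) for b, _ in combo]
            # bids are compatible only if their bundles are pairwise disjoint
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):
                value = sum(p for _, p in combo)
                if value > best_value:
                    best_value, best_set = value, list(combo)
    return best_value, best_set

bids = [({"cpu", "disk"}, 8.0), ({"cpu"}, 5.0), ({"disk"}, 4.0), ({"net"}, 2.0)]
value, winners = winner_determination(bids)
print(value)  # 11.0: the three single-resource bids beat the bundle bid
```

Practical winner determination relies on branch-and-bound or tailored heuristics; the brute-force version above merely makes the optimization problem concrete.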


Although grid resource management is still a relatively new field, there are already numerous methods for improving the economy and efficiency of computational grids. Local and external scheduling are the two primary research channels for developing grid management blueprints. Computing resources contributed on a zero-settlement or award basis are becoming a tremendous asset for developing better computational grids. Research on machine and scheduler organization is also underway to make grid resource management more efficient. As the forecasting, scheduling, and coordination of computational resources continue to improve, systems and applications will have access to faster and more streamlined computational grids. And as these grids advance, so do the services they perform and the applications and systems that rely on them.

This article is based on material found in "Operations Research Methods for Resource Management and Scheduling in a Computational Grid: a Survey," written by Andrea Attanasio, Gianpaolo Ghiani, Lucio Grandinetti, Emanuela Guerriero, and Francesca Guerriero. The chapter is included in the book Grid Computing: The New Frontier of High Performance Computing, edited by Lucio Grandinetti.

Visit the publisher's page to learn about purchasing a copy of this book:


[Ref-1] I. Foster and C. Kesselman (editors), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers, San Francisco, USA, 1999.

[Ref-2] M. Chetty and R. Buyya. Weaving computational Grids: How analogous are they with electrical Grids? Journal of Computing in Science and Engineering, 4, 61-71, 2001.

[Ref-3] K. Krauter, R. Buyya and M. Maheswaran. A taxonomy and survey of grid resource management systems for distributed computing, Software - Practice & Experience, 32, 135-164, 2002.

[Ref-4] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputer Applications, 11, 115-128, 1997.

[Ref-5] S. Chapin, J. Karpovich and A. Grimshaw. The Legion Resource Management System, Proceedings of the 5th Workshop on Job Scheduling Strategies for Parallel Processing, Birmingham, UK, 1999.

[Ref-6] F. Berman and R. Wolski. The AppLeS Project: A Status Report, Proceedings of the 8th NEC Research Symposium, Berlin, Germany, 1997.

[Ref-7] D. Thain, T. Tannenbaum and M. Livny. Distributed computing in practice: the Condor experience. Concurrency and Computation: Practice and Experience, 17, 323-356, 2005.

[Ref-8] R. Buyya, D. Abramson and J. Giddy. Nimrod/G: An Architecture for a Resource Management and Scheduling System in a Global Computational Grid, International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2000), Beijing, China. IEEE Computer Society Press, USA, 2000.

[Ref-9] B. Bode, D. Halstead, R. Kendall, and D. Jackson. PBS: The Portable Batch Scheduler and the Maui Scheduler on Linux Clusters. Proceedings of the 4th Linux Showcase and Conference, Atlanta, GA, USENIX Press, Berkeley, CA, 2000.

[Ref-10] GRACE (Grid Architecture for Computational Economy) project.

[Ref-11] R. Buyya. Economic-based Distributed Resource Management and Scheduling for Grid Computing, Ph.D. Thesis, Monash University, Melbourne, Australia, 2002.

[Ref-12] T. Sandholm. Distributed Rational Decision Making. In: G. Weiss (Editor), Multiagent Systems, The MIT Press, Cambridge, MA, 201-258, 1999.

[Ref-13] E. Korpela, D. Werthimer, D. Anderson, J. Cobb and M. Lebofsky. SETI@home: massively distributed computing for SETI. Computing in Science & Engineering, 3, 78-83, 2001.

[Ref-14] C. Weth, U. Kraus, J. Freuer, M. Ruder, R. Dannecker, R. Schneider, M. Konold and H. Ruder. XPulsar@home - Schools help Scientists. Proceedings of CCGrid, Brisbane, Australia, 588-594, 2001.

[Ref-15] The Intel(r) Philanthropic Peer-to-Peer Program.

[Ref-16] The Olson Laboratory FightAIDS@Home project.

[Ref-17] The RSA Factoring By Web project.

[Ref-18] PACI and DTF Resource Allocations Policies, V1.0.a, modified 09 March 2004.


[Ref-20] M. Drozdowski. Scheduling multiprocessor tasks: an overview, European Journal of Operational Research, 94, 215-230, 1996.

[Ref-21] M. Caramia, S. Giordani, A. Iovanella. Grid scheduling by on-line rectangle packing, Networks 44, 106-119, 2004.

[Ref-22] P. Festa, G. Ghiani, L. Grandinetti and E. Guerriero. Semi on-line multiprocessor task scheduling, Technical Report, Center of Excellence for High Performance Computing, Universita della Calabria, 2005.

[Ref-23] F. W. Glover, G. A. Kochenberger (Eds). Handbook of Metaheuristics (International Series in Operations Research & Management Science), Springer, Berlin, Germany, 2003.


No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to:

      Copyright Clearance Center
      222 Rosewood Drive, Danvers, MA 01923
      978-750-8400, fax 978-750-4744

Portions of this work are from Grid Computing: The New Frontier of High Performance Computing, edited by Lucio Grandinetti and published by Elsevier, Copyright 2005 Elsevier B.V. All rights reserved.