Accepted Papers - December 1st, 2009
Power-aware Provisioning of Cloud Resources for Real-time Services
Kyong Hoon Kim, Anton Beloglazov and Rajkumar Buyya
Abstract: Reducing energy consumption has become essential for Cloud resources and datacenters, not only to lower operational cost but also to improve system reliability. As Cloud computing embraces the Anything as a Service (XaaS) paradigm, modern real-time services are also delivered through the Cloud. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request; and (ii) to provision virtual machines in datacenters using DVFS (Dynamic Voltage and Frequency Scaling) schemes. We propose several schemes to reduce power consumption and demonstrate their performance through simulation results.
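The core idea — meeting a real-time deadline at the lowest CPU frequency to save power — can be illustrated with a minimal sketch. The function names and the cubic power model (dynamic power roughly proportional to f^3, a common DVFS approximation) are assumptions for illustration, not the schemes proposed in the paper.

```python
# Hedged sketch: pick the lowest DVFS frequency level that still lets a
# real-time VM request finish its work before its deadline.

def pick_frequency(cycles, deadline, freq_levels):
    """Return the lowest frequency (cycles/sec) in freq_levels that can
    finish `cycles` of work within `deadline` seconds, or None if no
    level can meet the deadline."""
    for f in sorted(freq_levels):
        if cycles / f <= deadline:
            return f
    return None

def relative_power(freq, f_max):
    """Dynamic power relative to running at f_max, using the common
    cubic approximation P proportional to f^3."""
    return (freq / f_max) ** 3
```

For example, a request needing 2e9 cycles within 4 seconds can run at 0.8 GHz instead of 2.4 GHz, cutting dynamic power to roughly (1/3)^3 of the maximum under this model.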
Performance and Deployment Evaluation of a Parallel Application in an on-premises Cloud Environment
Giacomo Mc Evoy, Bruno Schulze and Eduardo LM Garcia
Abstract: In this paper we present a case study of a parallel simulation-optimization application deployed in an on-premises Cloud. The compute-intensive application uses a Master/Worker model, supporting communication between nodes over both Java RMI and Globus Grid Services. The Master deploys Workers over an Eucalyptus Cloud, using the Nimbus Context Broker for just-in-time configuration and runtime Worker discovery. We present the computational performance of the Workers under different communication mechanisms and deployment scenarios in order to evaluate Virtual Machines in a Cloud as a tool for application scaling. The deployment of this particular application was crafted to support on-the-fly addition of worker nodes. The case study suggests a deployment pattern that shapes some requirements and considerations for a scalable Globus-driven Platform as a Service Cloud.
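The Master/Worker pattern with on-the-fly worker addition can be sketched in miniature. The paper's system uses Java RMI/Globus services over Eucalyptus VMs; this local-threads version is an illustrative assumption only, showing the structural point that workers can join a running computation at any time.

```python
# Minimal Master/Worker sketch: the Master holds a shared task queue and
# workers (threads here, VMs in the paper's setting) can be added at any
# point while tasks remain.
import queue
import threading

class Master:
    def __init__(self, tasks):
        self.tasks = queue.Queue()
        for t in tasks:
            self.tasks.put(t)
        self.results = queue.Queue()

    def add_worker(self):
        # Workers may join on the fly, even after computation has started.
        threading.Thread(target=self._work, daemon=True).start()

    def _work(self):
        while True:
            try:
                task = self.tasks.get_nowait()
            except queue.Empty:
                return
            self.results.put(task * task)  # stand-in for a simulation step
            self.tasks.task_done()

    def run(self, n_workers):
        for _ in range(n_workers):
            self.add_worker()
        self.tasks.join()                  # wait until all tasks are done
        return sorted(self.results.queue)
```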
Towards a Middleware for Configuring Large-scale Storage Infrastructures
David Eyers, Ramani Routray, Rui Zhang, Peter Pietzuch and Douglas Willcocks
Abstract: The rapid proliferation of cloud and service-oriented computing infrastructure is creating an ever-increasing thirst for storage within data centers. Ideally, management applications in cloud deployments should operate in terms of high-level goals, and not present specific implementation details to administrators. Cloud providers often employ Storage Area Networks (SANs) to gain storage scalability. SAN configurations have a vast parameter space, which makes them one of the most difficult components to configure and manage in a cloud storage offering. As a step towards a powerful cloud storage configuration platform, this paper introduces a SAN configuration middleware that aids management applications in their task of updating and troubleshooting heterogeneous SAN deployments. The middleware acts as a proxy between management applications and a central repository of SAN configurations. The central repository is designed to facilitate efficient querying of the best-practice knowledge base. This allows the validation of SAN configurations against a knowledge base of best-practice rules across cloud deployments. Management applications contribute local SAN configurations to the repository and also subscribe to proactive notifications about configurations no longer considered safe.
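The two roles described — validating contributed configurations against best-practice rules, and proactively notifying subscribers when a configuration is deemed unsafe — can be sketched as follows. The rule shape, field names, and callback interface are assumptions for illustration, not the middleware's actual API.

```python
# Sketch of a central repository that validates SAN configurations against
# best-practice rules and pushes notifications to subscribed management
# applications when a contributed configuration violates a rule.

class ConfigRepository:
    def __init__(self, rules):
        self.rules = rules        # rule name -> predicate over a config dict
        self.configs = {}         # site -> last contributed config
        self.subscribers = []     # callbacks taking (site, violations)

    def validate(self, config):
        """Return the names of all best-practice rules the config violates."""
        return [name for name, ok in self.rules.items() if not ok(config)]

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def contribute(self, site, config):
        """Store a local SAN configuration and notify subscribers of any
        violations (the proactive-notification path in the abstract)."""
        self.configs[site] = config
        violations = self.validate(config)
        if violations:
            for cb in self.subscribers:
                cb(site, violations)
        return violations
```

A hypothetical rule such as "every host needs at least two redundant paths" becomes a predicate like `lambda c: c.get("paths", 0) >= 2`; contributing a one-path configuration then triggers a notification to all subscribers.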
Semantic Middleware for E-science Knowledge Spaces
Joe Futrelle, Jeff Gaynor, Joel Plutchak, James Myers, Robert McGrath, Peter Bajcsy, Jason Kastner, Kailash Kotwani, Jong Sung Lee, Luigi Marini, Rob Kooper, Terry McLaren and Yong Liu
Abstract: The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to locate, use, link, annotate, and discuss data and metadata as they work with existing applications in distributed environments. Tupelo is built using a combination of commonly-used Semantic Web technologies for metadata management, content management technologies for data management, and workflow technologies for management of computation, and can interoperate with other tools using a variety of standard interfaces and a client and desktop API. Tupelo's primary function is to facilitate interoperability, providing a Knowledge Space "view" of distributed, heterogeneous resources such as institutional repositories, relational databases, and semantic web stores. Knowledge Spaces have driven recent work creating e-Science cyberenvironments to serve distributed, active scientific communities. Tupelo-based components deployed in desktop applications, on portals, and in AJAX applications interoperate to allow researchers to develop, coordinate and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce a complete and coherent research record suitable for distribution and archiving.
Enhancing the Efficiency of Resource Usage on Opportunistic Grids
Raphael de A. Gomes, Fábio Moreira Costa and Fouad Joseph Georges
Abstract: The main concern of opportunistic grid computing middleware platforms is to guarantee that the performance of the machines donating resources to the grid is not affected. This concern, combined with the extremely dynamic nature of the environment, leads to best-effort treatment of grid applications. This means that efficient application management schemes are usually not employed, resulting in suboptimal performance, as grid applications often need to be restarted due to (often temporary) resource claims by local user applications. This paper presents a method to improve the performance of grid applications that takes into account resource usage profiles of local applications, identifying when such resource claims are temporary and avoiding actions such as the migration of grid tasks. The proposed approach was implemented as part of the InteGrade middleware, and its evaluation shows promising results for the efficient management of grid applications.
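The central decision — consulting a usage profile to guess whether a local resource claim is temporary, and migrating the grid task only when it probably is not — can be sketched minimally. The profile shape and thresholds below are assumptions for illustration, not the InteGrade mechanism itself.

```python
# Sketch: decide whether to migrate a grid task away from a donated machine
# based on how long past resource claims by local users have lasted.

def should_migrate(claim_history, horizon=5.0, threshold=0.5):
    """claim_history: durations (in minutes) of past local resource claims
    on this host. Migrate only if claims typically outlast `horizon`
    minutes; otherwise wait the claim out and avoid a costly migration."""
    if not claim_history:
        return False                  # no evidence: assume the claim is temporary
    long_claims = sum(1 for d in claim_history if d > horizon)
    return long_claims / len(claim_history) > threshold
```

On a host whose local users only grab resources for a minute or two at a time, the grid task simply waits; on a host with a history of long claims, migration is triggered instead.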
Towards an adaptive middleware for opportunistic environment: a mobile agent approach
Vinicius Pinheiro, Alfredo Goldman and Fabio Kon
Abstract: The mobile agent paradigm has emerged as a promising alternative for overcoming the construction challenges of opportunistic grid environments. This model can be used to implement mechanisms that enable application execution to progress even in the presence of failures, such as those provided by the MAG middleware (Mobile Agents for Grids). MAG includes retrying, replication, and checkpointing as fault-tolerance techniques; they operate independently of each other and are not capable of detecting changes in resource availability. In this paper, we describe a MAG extension that migrates agents when nodes fail, optimizes application progress by keeping only the most advanced checkpoint, and migrates slow replicas.
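Two of the extension's mechanisms — keeping only the most advanced checkpoint and flagging slow replicas for migration — can be sketched as below. The data shapes and the lag threshold are illustrative assumptions, not MAG's actual implementation.

```python
# Sketch: among replicated agents, keep only the furthest-along checkpoint
# and identify replicas lagging far enough behind to warrant migration.

def most_advanced(checkpoints):
    """checkpoints: {replica_id: progress}. Return the (id, progress) pair
    of the most advanced checkpoint, the only one worth retaining."""
    return max(checkpoints.items(), key=lambda kv: kv[1])

def slow_replicas(checkpoints, lag_ratio=0.5):
    """Replicas whose progress trails the leader by more than `lag_ratio`
    of the leader's progress are candidates for migration to faster nodes."""
    _, best = most_advanced(checkpoints)
    return sorted(r for r, p in checkpoints.items()
                  if p < best * (1 - lag_ratio))
```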