
Publications of year 2009
Thesis
  1. Travis Desell. Asynchronous Global Optimization for Massive-Scale Computing. PhD thesis, Rensselaer Polytechnic Institute, 2009. Keyword(s): distributed computing, scientific computing, middleware, grid computing.
    Abstract:
    As the rates of data acquisition and cost of model evaluation in scientific computing are far surpassing improvements in processor speed, the size of the computing environments required to effectively perform scientific research is increasing dramatically. As these computing environments increase in size, traditional global optimization methods, which are sequential in nature, fail to adequately address the challenges of scalability, fault tolerance and heterogeneity that using these computing systems entails. This thesis introduces asynchronous optimization strategies which, while similar to their traditional synchronous counterparts, do not have explicit iterations or dependencies. This allows them to scale to hundreds of thousands of hosts while not being degraded by faults or heterogeneity. A framework for generic distributed optimization (FGDO) is presented, which separates the concerns of scientific model development, distributed computing and developing efficient optimization strategies, allowing researchers to develop these independently and utilize them interoperably through simple interfaces. FGDO has been used to run these asynchronous optimization methods using an astroinformatics problem which calculates models of the Milky Way galaxy on thousands of processors in RPI’s BlueGene/L supercomputer and to run the MilkyWay@Home volunteer computing project, which currently consists of over 25,000 active computing hosts. A simulation environment was also implemented in FGDO, which allowed asynchronous optimization to be examined in a controlled setting with benchmark optimization problems. Results using the simulated environment show that the asynchronous optimization methods used scale to hundreds of thousands of computing hosts, while the traditional methods do not improve or even degrade as more computing hosts are added. Additionally, the asynchronous optimization methods are shown to be largely unaffected by increasing heterogeneity in the computing environment and also scale similarly in a computing environment modeled after MilkyWay@Home. This thesis presents strong evidence of the need for novel optimization methods for massive-scale computing systems and provides effective initial work towards this goal.

    @PhdThesis{desell-phd-2009,
    author = {Travis Desell},
    title = {Asynchronous Global Optimization for Massive-Scale Computing},
    school = {Rensselaer Polytechnic Institute},
    year = 2009,
    keywords = {distributed computing, scientific computing, middleware, grid computing},
    pdf = {http://wcl.cs.rpi.edu/theses/desell-phd-thesis.pdf},
    abstract = {As the rates of data acquisition and cost of model evaluation in scientific computing are far surpassing improvements in processor speed, the size of the computing environments required to effectively perform scientific research is increasing dramatically. As these computing environments increase in size, traditional global optimization methods, which are sequential in nature, fail to adequately address the challenges of scalability, fault tolerance and heterogeneity that using these computing systems entails. This thesis introduces asynchronous optimization strategies which, while similar to their traditional synchronous counterparts, do not have explicit iterations or dependencies. This allows them to scale to hundreds of thousands of hosts while not being degraded by faults or heterogeneity. A framework for generic distributed optimization (FGDO) is presented, which separates the concerns of scientific model development, distributed computing and developing efficient optimization strategies, allowing researchers to develop these independently and utilize them interoperably through simple interfaces. FGDO has been used to run these asynchronous optimization methods using an astroinformatics problem which calculates models of the Milky Way galaxy on thousands of processors in RPI’s BlueGene/L supercomputer and to run the MilkyWay@Home volunteer computing project, which currently consists of over 25,000 active computing hosts. A simulation environment was also implemented in FGDO, which allowed asynchronous optimization to be examined in a controlled setting with benchmark optimization problems. Results using the simulated environment show that the asynchronous optimization methods used scale to hundreds of thousands of computing hosts, while the traditional methods do not improve or even degrade as more computing hosts are added. Additionally, the asynchronous optimization methods are shown to be largely unaffected by increasing heterogeneity in the computing environment and also scale similarly in a computing environment modeled after MilkyWay@Home. This thesis presents strong evidence of the need for novel optimization methods for massive-scale computing systems and provides effective initial work towards this goal.}
    }
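    The asynchronous search described in this abstract replaces generation barriers with two server-side operations: generate a candidate on each work request, and fold each result into the population whenever it arrives. The sketch below illustrates that idea only; the objective, operators, and class names are illustrative assumptions, not the thesis's actual algorithms or FGDO's interfaces.

    ```python
    import random

    # Sketch of asynchronous genetic search: the server never waits for a
    # "generation". Hosts request work and report results in any order, and
    # the population is updated as each result arrives.
    class AsyncGeneticSearch:
        def __init__(self, pop_size, dim, bounds=(-10.0, 10.0)):
            self.pop_size, self.dim, self.bounds = pop_size, dim, bounds
            self.pop = []  # list of (fitness, individual), kept sorted ascending

        def request_work(self):
            """Generate a candidate from the current population snapshot."""
            lo, hi = self.bounds
            if len(self.pop) < 2:
                return [random.uniform(lo, hi) for _ in range(self.dim)]
            (_, a), (_, b) = random.sample(self.pop, 2)
            # Blend crossover between two parents, plus a small mutation.
            child = [random.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]
            child[random.randrange(self.dim)] += random.gauss(0.0, 0.1)
            return child

        def report_result(self, individual, fitness):
            """Insert a result whenever it arrives; drop the current worst."""
            self.pop.append((fitness, individual))
            self.pop.sort(key=lambda t: t[0])
            del self.pop[self.pop_size:]

    # Illustrative stand-in for an expensive scientific model evaluation.
    def objective(x):
        return sum(v * v for v in x)

    search = AsyncGeneticSearch(pop_size=20, dim=3)
    for _ in range(3000):
        candidate = search.request_work()
        search.report_result(candidate, objective(candidate))  # order is irrelevant
    best_fitness, best_individual = search.pop[0]
    ```

    Because neither operation depends on any other outstanding result, slow or failed hosts simply never report back, which is why this structure tolerates faults and heterogeneity.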
    


Articles in journal, book chapters
  1. Kaoutar El Maghraoui, Travis Desell, Boleslaw K. Szymanski, and Carlos A. Varela. Malleable Iterative MPI Applications. Concurrency and Computation: Practice and Experience, 21(3):393-413, March 2009. Keyword(s): distributed computing, concurrent programming, middleware, grid computing, scientific computing.
    Abstract:
    Malleability enables a parallel application's execution system to split or merge processes, modifying granularity. While process migration is widely used to adapt applications to dynamic execution environments, it is limited by the granularity of the application's processes. Malleability empowers process migration by allowing the application's processes to expand or shrink following the availability of resources. We have implemented malleability as an extension to the process checkpointing and migration (PCM) library, a user-level library for iterative message passing interface (MPI) applications. PCM is integrated with the Internet Operating System, a framework for middleware-driven dynamic application reconfiguration. Our approach requires minimal code modifications and enables transparent middleware-triggered reconfiguration. Experimental results using a two-dimensional data parallel program that has a regular communication structure demonstrate the usefulness of malleability.

    @article{elmaghraoui-malleability-ccpe-2008,
    author = {Kaoutar El Maghraoui and Travis Desell and Boleslaw K. Szymanski and Carlos A. Varela},
    title = {Malleable Iterative MPI Applications},
    journal = {Concurrency and Computation: Practice and Experience},
    year = 2009,
    volume = 21,
    number = 3,
    pages = {393-413},
    month = {March},
    pdf = {http://wcl.cs.rpi.edu/papers/elmaghraoui-malleability-ccpe-2008.pdf},
    keywords = {distributed computing, concurrent programming, middleware, grid computing, scientific computing},
    abstract = {Malleability enables a parallel application's execution system to split or merge processes, modifying granularity. While process migration is widely used to adapt applications to dynamic execution environments, it is limited by the granularity of the application's processes. Malleability empowers process migration by allowing the application's processes to expand or shrink following the availability of resources. We have implemented malleability as an extension to the process checkpointing and migration (PCM) library, a user-level library for iterative message passing interface (MPI) applications. PCM is integrated with the Internet Operating System, a framework for middleware-driven dynamic application reconfiguration. Our approach requires minimal code modifications and enables transparent middleware-triggered reconfiguration. Experimental results using a two-dimensional data parallel program that has a regular communication structure demonstrate the usefulness of malleability.}
    }
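    Malleability, as summarized above, changes the number of an application's processes at run time, which forces the data decomposition to be recomputed. The following sketch shows only the core split/merge bookkeeping for a contiguous 1-D block decomposition; the function names are hypothetical, and this is not the PCM library's actual API.

    ```python
    def block_ranges(n_items, n_procs):
        """Contiguous block decomposition; the first n_items % n_procs
        blocks receive one extra item each."""
        base, extra = divmod(n_items, n_procs)
        ranges, start = [], 0
        for p in range(n_procs):
            size = base + (1 if p < extra else 0)
            ranges.append((start, start + size))
            start += size
        return ranges

    def redistribute(data, new_procs):
        """Recompute block ownership after the process count changes:
        shrinking merges neighboring blocks, growing splits them."""
        return [data[lo:hi] for lo, hi in block_ranges(len(data), new_procs)]

    data = list(range(10))
    blocks = redistribute(data, 3)  # shrink to 3 processes: block sizes 4, 3, 3
    ```

    In a real MPI application the middleware would additionally checkpoint each process and ship the reassigned blocks between hosts; this sketch covers only the ownership computation.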
    


Conference articles
  1. Nathan Cole, Heidi Jo Newberg, Malik Magdon-Ismail, Travis Desell, Carlos Varela, and Boleslaw Szymanski. A Study of the Sagittarius Tidal Stream Using Maximum Likelihood. In Astronomical Data Analysis Software and Systems XVIII, volume 411, Quebec, pages 221-225, November 2009. Keyword(s): distributed computing, scientific computing.
    Abstract:
    Modern surveys are producing enormous amounts of data that can only be navigated using the ever-increasing computational resources available. For example, the SDSS has taken a large amount of photometric data that can be used to discover and study substructure in the Galactic spheroid. A maximum likelihood method was developed and applied to color-selected F turnoff stars from two stripes of SDSS data, to determine the spatial characteristics of the Sagittarius dwarf tidal debris that exists within these stripes. The Sagittarius tidal debris in stripes 79 and 86 were detected at the positions (l,b,R) = (163.311°, -48.400°, 30.23 kpc) and (l,b,R) = (34.775°, -72.342°, 26.08 kpc) and were found to have a FWHM of 6.53±0.54 kpc and 5.71±0.26 kpc and also to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces are axially aligned with the directions (^X,^Y,^Z) = (0.758 kpc, 0.254 kpc, -0.600 kpc) and (^X,^Y,^Z) = (0.982 kpc, 0.084 kpc, 0.167 kpc), respectively. The results of probabilistically separating the tidal debris from the stellar spheroid are also presented.

    @inproceedings{cole-adass2008,
    title = {A Study of the Sagittarius Tidal Stream Using Maximum Likelihood},
    author = {Nathan Cole and Heidi Jo Newberg and Malik Magdon-Ismail and Travis Desell and Carlos Varela and Boleslaw Szymanski},
    booktitle = {Astronomical Data Analysis Software and Systems XVIII},
    address = {Quebec},
    volume = 411,
    month = {November},
    year = 2009,
    pages = {221-225},
    keywords = {distributed computing, scientific computing},
    abstract = {Modern surveys are producing enormous amounts of data that can only be navigated using the ever-increasing computational resources available. For example, the SDSS has taken a large amount of photometric data that can be used to discover and study substructure in the Galactic spheroid. A maximum likelihood method was developed and applied to color-selected F turnoff stars from two stripes of SDSS data, to determine the spatial characteristics of the Sagittarius dwarf tidal debris that exists within these stripes. The Sagittarius tidal debris in stripes 79 and 86 were detected at the positions (l,b,R) = (163.311°, -48.400°, 30.23 kpc) and (l,b,R) = (34.775°, -72.342°, 26.08 kpc) and were found to have a FWHM of 6.53±0.54 kpc and 5.71±0.26 kpc and also to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces are axially aligned with the directions (^X,^Y,^Z) = (0.758 kpc, 0.254 kpc, -0.600 kpc) and (^X,^Y,^Z) = (0.982 kpc, 0.084 kpc, 0.167 kpc), respectively. The results of probabilistically separating the tidal debris from the stellar spheroid are also presented.}
    }
    


  2. Travis Desell, Malik Magdon-Ismail, Boleslaw Szymanski, Carlos A. Varela, Heidi Newberg, and Nathan Cole. Robust Asynchronous Optimization for Volunteer Computing Grids. In Proceedings of the 5th IEEE International Conference on e-Science (eScience2009), Oxford, UK, pages 263-270, December 2009. Keyword(s): distributed computing, scientific computing, middleware, grid computing.
    Abstract:
    General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available, as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large-scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17-times speedup was achieved for double-precision calculations on an Nvidia GeForce GTX 285 card, and a 109-times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm) II X4 940. The use of single-precision calculations was also evaluated, which further increased performance 6.2 times on the ATI card and 7.8 times on the Nvidia card, with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.

    @InProceedings{desell-asyncopt-escience-2009,
    author = {Travis Desell and Malik Magdon-Ismail and Boleslaw Szymanski and Carlos A. Varela and Heidi Newberg and Nathan Cole},
    title = {Robust Asynchronous Optimization for Volunteer Computing Grids},
    booktitle = {Proceedings of the 5th IEEE International Conference on e-Science (eScience2009)},
    pages = {263-270},
    year = 2009,
    address = {Oxford, UK},
    month = {December},
    keywords = {distributed computing, scientific computing, middleware, grid computing},
    pdf = {http://wcl.cs.rpi.edu/papers/escience2009.pdf},
    abstract = {General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available, as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large-scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17-times speedup was achieved for double-precision calculations on an Nvidia GeForce GTX 285 card, and a 109-times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm) II X4 940. The use of single-precision calculations was also evaluated, which further increased performance 6.2 times on the ATI card and 7.8 times on the Nvidia card, with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.}
    }
    


  3. Travis Desell, Anthony Waters, Malik Magdon-Ismail, Boleslaw Szymanski, Carlos A. Varela, Matthew Newby, Heidi Newberg, Andreas Przystawik, and Dave Anderson. Accelerating the MilkyWay@Home volunteer computing project with GPUs. In Proceedings of the 8th International Conference on Parallel Processing and Applied Mathematics (PPAM 2009), Wroclaw, Poland, 13 pp., September 2009. Keyword(s): distributed computing, scientific computing, middleware, grid computing.
    Abstract:
    General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available, as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large-scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17-times speedup was achieved for double-precision calculations on an Nvidia GeForce GTX 285 card, and a 109-times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm) II X4 940. The use of single-precision calculations was also evaluated, which further increased performance 6.2 times on the ATI card and 7.8 times on the Nvidia card, with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.

    @InProceedings{desell-gpu-ppam-2009,
    author = {Travis Desell and Anthony Waters and Malik Magdon-Ismail and Boleslaw Szymanski and Carlos A. Varela and Matthew Newby and Heidi Newberg and Andreas Przystawik and Dave Anderson},
    title = {Accelerating the MilkyWay@Home volunteer computing project with GPUs},
    booktitle = {Proceedings of the 8th International Conference on Parallel Processing and Applied Mathematics (PPAM 2009)},
    pages = {13 pp.},
    year = 2009,
    address = {Wroclaw, Poland},
    month = {September},
    keywords = {distributed computing, scientific computing, middleware, grid computing},
    pdf = {http://wcl.cs.rpi.edu/papers/ppam2009.pdf},
    abstract = {General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available, as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large-scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17-times speedup was achieved for double-precision calculations on an Nvidia GeForce GTX 285 card, and a 109-times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm) II X4 940. The use of single-precision calculations was also evaluated, which further increased performance 6.2 times on the ATI card and 7.8 times on the Nvidia card, with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.}
    }
    


  4. Alexandre di Costanzo, Chao Jin, Carlos A. Varela, and Rajkumar Buyya. Enabling Computational Steering with an Asynchronous-Iterative Computation Framework. In Proceedings of the 5th IEEE International Conference on e-Science (eScience2009), Oxford, UK, 8 pp., December 2009. Keyword(s): distributed computing, scientific computing, middleware, grid computing.
    Abstract:
    In this paper, we present a framework that enables scientists to steer computations executing over large-scale grid computing environments. By using computational steering, users can dynamically control their simulations or computations to reach expected results more efficiently. The framework supports steerable applications by introducing an asynchronous iterative MapReduce programming model that is deployed using Hadoop over a set of virtual machines executing on a multi-cluster grid. To tolerate the heterogeneity between different sites, results are collected asynchronously and users can dynamically interact with their computations to adjust the area of interest. According to users' dynamic interaction, the framework can redistribute the computational overload between the heterogeneous sites and explore the user's interest area by using more powerful sites when possible. With our framework, the bottleneck induced by synchronisation between different sites is considerably avoided, and therefore the response to users' interaction is satisfied more efficiently. We illustrate and evaluate this framework with a scientific application that aims to fit models of the Milky Way galaxy structure to stars observed by the Sloan Digital Sky Survey.

    @InProceedings{costanzo-steering-escience-2009,
    author = {Alexandre di Costanzo and Chao Jin and Carlos A. Varela and Rajkumar Buyya},
    title = {Enabling Computational Steering with an Asynchronous-Iterative Computation Framework},
    booktitle = {Proceedings of the 5th IEEE International Conference on e-Science (eScience2009)},
    pages = {8 pp.},
    year = 2009,
    address = {Oxford, UK},
    month = {December},
    keywords = {distributed computing, scientific computing, middleware, grid computing},
    pdf = {http://wcl.cs.rpi.edu/papers/adc-escience-2009.pdf},
    abstract = {In this paper, we present a framework that enables scientists to steer computations executing over large-scale grid computing environments. By using computational steering, users can dynamically control their simulations or computations to reach expected results more efficiently. The framework supports steerable applications by introducing an asynchronous iterative MapReduce programming model that is deployed using Hadoop over a set of virtual machines executing on a multi-cluster grid. To tolerate the heterogeneity between different sites, results are collected asynchronously and users can dynamically interact with their computations to adjust the area of interest. According to users' dynamic interaction, the framework can redistribute the computational overload between the heterogeneous sites and explore the user's interest area by using more powerful sites when possible. With our framework, the bottleneck induced by synchronisation between different sites is considerably avoided, and therefore the response to users' interaction is satisfied more efficiently. We illustrate and evaluate this framework with a scientific application that aims to fit models of the Milky Way galaxy structure to stars observed by the Sloan Digital Sky Survey.} 
    }
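    The framework summarized above avoids cross-site synchronization by collecting partial results asynchronously, so a slow site never blocks user interaction. The sketch below illustrates that pattern with plain threads and a queue; the names and structure are illustrative assumptions, not the authors' Hadoop-based implementation.

    ```python
    import queue
    import threading
    import time

    # Each "site" pushes partial results onto a shared queue as it finishes
    # them; the steering loop consumes whatever has arrived instead of
    # waiting for every site, so the slowest site never blocks interaction.
    results = queue.Queue()

    def site_worker(site_id, delay, n_results):
        for i in range(n_results):
            time.sleep(delay)          # heterogeneous sites run at different speeds
            results.put((site_id, i))  # report a partial result asynchronously

    threads = [threading.Thread(target=site_worker, args=(name, delay, 5))
               for name, delay in [("fast_site", 0.001), ("slow_site", 0.01)]]
    for t in threads:
        t.start()

    collected = []
    deadline = time.time() + 1.0
    while time.time() < deadline and len(collected) < 10:
        try:
            collected.append(results.get(timeout=0.05))
        except queue.Empty:
            pass  # an interactive loop would adjust the area of interest here

    for t in threads:
        t.join()
    ```

    The short `get` timeout is the key design point: the loop regains control frequently enough to apply user steering decisions between arrivals rather than after a global barrier.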
    


Miscellaneous
  1. Travis Desell. Robust Asynchronous Optimization using Volunteer Computing Grids. The 5th Pan-Galactic BOINC Workshop, October 2009. Keyword(s): distributed computing, grid computing, middleware, scientific computing.
    Abstract:
    Volunteer computing grids offer significant computing power at relatively low cost to researchers, while at the same time generating public interest in different scientific projects. However, in order to be used effectively, their heterogeneity, volatility and restrictive computing models must be overcome. As these computing grids are open, incorrect or malicious results must also be handled. This paper examines extending the BOINC volunteer computing framework to allow for asynchronous global optimization as applied to scientific computing problems. The asynchronous optimization method used is resilient to faults and the heterogeneous nature of volunteer computing grids, while allowing scalability to tens of thousands of hosts. A work verification strategy that does not require the validation of every result is presented. This strategy is shown to effectively reduce the verification performed to less than 30% of the reported results, without degrading the performance of the asynchronous search methods. An asynchronous version of particle swarm optimization (APSO) is presented and compared to the previously used asynchronous genetic search (AGS) using the MilkyWay@Home BOINC computing project. Both search methods are shown to scale to MilkyWay@Home's current user base, over 75,000 heterogeneous and volatile hosts, something not possible for traditional optimization methods. APSO is shown to provide faster convergence to optimal results while being less sensitive to its search parameters. The verification strategy presented is shown to be effective for both AGS and APSO.

    @Misc{desell-asyncopt-boinc-2009,
    author = {Travis Desell},
    title = {Robust Asynchronous Optimization using Volunteer Computing Grids},
    howpublished = {The 5th Pan-Galactic BOINC Workshop},
    address = {Barcelona, Spain},
    month = {October},
    year = 2009,
    url = {http://milkyway.cs.rpi.edu/milkyway/download/boinc2009.ppt},
    keywords = {distributed computing, grid computing, middleware, scientific computing},
    abstract = {Volunteer computing grids offer significant computing power at relatively low cost to researchers, while at the same time generating public interest in different scientific projects. However, in order to be used effectively, their heterogeneity, volatility and restrictive computing models must be overcome. As these computing grids are open, incorrect or malicious results must also be handled. This paper examines extending the BOINC volunteer computing framework to allow for asynchronous global optimization as applied to scientific computing problems. The asynchronous optimization method used is resilient to faults and the heterogeneous nature of volunteer computing grids, while allowing scalability to tens of thousands of hosts. A work verification strategy that does not require the validation of every result is presented. This strategy is shown to effectively reduce the verification performed to less than 30% of the reported results, without degrading the performance of the asynchronous search methods. An asynchronous version of particle swarm optimization (APSO) is presented and compared to the previously used asynchronous genetic search (AGS) using the MilkyWay@Home BOINC computing project. Both search methods are shown to scale to MilkyWay@Home's current user base, over 75,000 heterogeneous and volatile hosts, something not possible for traditional optimization methods. APSO is shown to provide faster convergence to optimal results while being less sensitive to its search parameters. The verification strategy presented is shown to be effective for both AGS and APSO.}
    }
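    The asynchronous particle swarm optimization mentioned in this abstract updates the swarm state as each fitness result is reported, rather than waiting for a full iteration of the swarm. The sketch below simulates that by returning results in shuffled order; the objective, coefficients, and class are illustrative assumptions, not MilkyWay@Home's actual server code.

    ```python
    import random

    # Sketch of asynchronous particle swarm optimization: the swarm state
    # is updated as each fitness result arrives, with no iteration barrier.
    def sphere(x):  # illustrative benchmark objective, minimum at the origin
        return sum(v * v for v in x)

    class AsyncPSO:
        def __init__(self, n_particles, dim, bounds=(-5.0, 5.0),
                     w=0.72, c1=1.49, c2=1.49):
            self.w, self.c1, self.c2 = w, c1, c2
            lo, hi = bounds
            self.pos = [[random.uniform(lo, hi) for _ in range(dim)]
                        for _ in range(n_particles)]
            self.vel = [[0.0] * dim for _ in range(n_particles)]
            self.best_pos = [p[:] for p in self.pos]
            self.best_fit = [float("inf")] * n_particles
            self.gbest = self.pos[0][:]
            self.gbest_fit = float("inf")

        def request_work(self, i):
            """Move particle i using the *current* local and global bests."""
            for d in range(len(self.pos[i])):
                r1, r2 = random.random(), random.random()
                self.vel[i][d] = (self.w * self.vel[i][d]
                                  + self.c1 * r1 * (self.best_pos[i][d] - self.pos[i][d])
                                  + self.c2 * r2 * (self.gbest[d] - self.pos[i][d]))
                self.pos[i][d] += self.vel[i][d]
            return self.pos[i][:]

        def report_result(self, i, candidate, fitness):
            """Fold in one result as soon as it arrives, possibly out of order."""
            if fitness < self.best_fit[i]:
                self.best_fit[i], self.best_pos[i] = fitness, candidate[:]
            if fitness < self.gbest_fit:
                self.gbest_fit, self.gbest = fitness, candidate[:]

    # Simulated volunteer grid: hosts finish work in unpredictable order.
    pso = AsyncPSO(n_particles=10, dim=4)
    pending = [(i, pso.request_work(i)) for i in range(10)]
    for _ in range(2000):
        random.shuffle(pending)
        i, candidate = pending.pop()
        pso.report_result(i, candidate, sphere(candidate))
        pending.append((i, pso.request_work(i)))
    ```

    Because no update waits on outstanding work, a host that never reports back costs nothing but its own result, which is what makes the method tolerant of the volatility described above.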
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents in these directories are made available by the contributing authors to ensure the timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by the authors and by the copyright holders, notwithstanding that they present their works here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.




Last modified: Wed Apr 3 16:12:48 2024
Author: led2.


This document was translated from BibTeX by bibtex2html