-
Nathan Cole,
Heidi Jo Newberg,
Malik Magdon-Ismail,
Travis Desell,
Carlos Varela,
and Boleslaw Szymanski.
A Study of the Sagittarius Tidal Stream Using Maximum Likelihood.
In Astronomical Data Analysis Software and Systems XVIII,
volume 411,
Quebec,
pages 221-225,
November 2009.
Keyword(s): distributed computing,
scientific computing.
Abstract:
Modern surveys are producing enormous amounts of data that can only be navigated via the use of the ever increasing computational resources available. For example, the SDSS has taken a large amount of photometric data that can be used to discover and study substructure in the Galactic spheroid. A maximum likelihood method was developed and applied to color-selected F turnoff stars from two stripes of SDSS data, to determine the spatial characteristics of the Sagittarius dwarf tidal debris that exists within these stripes. The Sagittarius tidal debris in stripes 79 and 86 were detected at the positions (l,b,R) = (163.311°, -48.400°, 30.23 kpc) and (l,b,R) = (34.775°, -72.342°, 26.08 kpc) and were found to have a FWHM of 6.53±0.54 kpc and 5.71±0.26 kpc and also to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces are axially aligned with the directions (X̂, Ŷ, Ẑ) = (0.758 kpc, 0.254 kpc, -0.600 kpc) and (X̂, Ŷ, Ẑ) = (0.982 kpc, 0.084 kpc, 0.167 kpc), respectively. The results of probabilistically separating the tidal debris from the stellar spheroid are also presented.
@inproceedings{cole-adass2008,
title = "A Study of the Sagittarius Tidal Stream Using Maximum Likelihood",
author = "Nathan Cole and Heidi Jo Newberg and Malik Magdon-Ismail and Travis Desell and Carlos Varela and Boleslaw Szymanski",
booktitle = "ASTRONOMICAL DATA ANALYSIS SOFTWARE AND SYSTEMS XVIII",
address = "Quebec",
volume = 411,
month = nov,
year = 2009,
pages = {221--225},
keywords = {distributed computing, scientific computing},
abstract = {Modern surveys are producing enormous amounts of data that can only be navigated via the use of the ever increasing computational resources available. For example, the SDSS has taken a large amount of photometric data that can be used to discover and study substructure in the Galactic spheroid. A maximum likelihood method was developed and applied to color-selected F turnoff stars from two stripes of SDSS data, to determine the spatial characteristics of the Sagittarius dwarf tidal debris that exists within these stripes. The Sagittarius tidal debris in stripes 79 and 86 were detected at the positions (l,b,R) = (163.311°, -48.400°, 30.23 kpc) and (l,b,R) = (34.775°, -72.342°, 26.08 kpc) and were found to have a FWHM of 6.53±0.54 kpc and 5.71±0.26 kpc and also to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces are axially aligned with the directions (X̂, Ŷ, Ẑ) = (0.758 kpc, 0.254 kpc, -0.600 kpc) and (X̂, Ŷ, Ẑ) = (0.982 kpc, 0.084 kpc, 0.167 kpc), respectively. The results of probabilistically separating the tidal debris from the stellar spheroid are also presented.}
}
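The abstract above describes a maximum likelihood fit that separates Sagittarius tidal debris from the smooth stellar spheroid and reports a probabilistic separation of the two components. As a rough, self-contained sketch of that general idea only (not the paper's actual three-dimensional model over (l, b, R), and with all numbers and distributions invented), a two-component mixture can be fit to star distances and each star assigned a stream membership probability:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy data: a uniform "spheroid" background between 10 and 50 kpc plus a
# Gaussian "stream" centered near 30 kpc (all values invented).
rng = np.random.default_rng(0)
distances = np.concatenate([rng.uniform(10, 50, 5000), rng.normal(30.0, 2.8, 1500)])
lo, hi = 10.0, 50.0

def neg_log_likelihood(params):
    frac, mu, sigma = params                                # stream fraction, center, width
    stream = norm.pdf(distances, mu, sigma)                  # stream density at each star
    background = 1.0 / (hi - lo)                             # uniform background density
    return -np.sum(np.log(frac * stream + (1.0 - frac) * background))

result = minimize(neg_log_likelihood, x0=[0.2, 25.0, 5.0],
                  bounds=[(0.01, 0.99), (lo, hi), (0.1, 20.0)])
frac, mu, sigma = result.x
fwhm = 2.355 * sigma                                         # FWHM of a Gaussian

# Probabilistic separation: posterior probability that each star is in the stream.
stream_density = frac * norm.pdf(distances, mu, sigma)
p_stream = stream_density / (stream_density + (1.0 - frac) / (hi - lo))
print(f"center = {mu:.2f} kpc, FWHM = {fwhm:.2f} kpc, "
      f"estimated stream stars = {p_stream.sum():.0f}")

Maximizing the mixture likelihood yields the stream center, width, and fraction; the per-star posterior probabilities correspond to the probabilistic separation mentioned in the abstract.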
-
Travis Desell,
Malik Magdon-Ismail,
Boleslaw Szymanski,
Carlos A. Varela,
Heidi Newberg,
and Nathan Cole.
Robust Asynchronous Optimization for Volunteer Computing Grids.
In Proceedings of the 5th IEEE International Conference on e-Science (eScience2009),
Oxford, UK,
pages 263-270,
December 2009.
Keyword(s): distributed computing,
scientific computing,
middleware,
grid computing.
@InProceedings{desell-asyncopt-escience-2009,
author = {Travis Desell and Malik Magdon-Ismail and Boleslaw Szymanski and Carlos A. Varela and Heidi Newberg and Nathan Cole},
title = {Robust Asynchronous Optimization for Volunteer Computing Grids},
booktitle = {Proceedings of the 5th IEEE International Conference on e-Science (eScience2009)},
pages = {263-270},
year = 2009,
address = {Oxford, UK},
month = {December},
keywords = {distributed computing, scientific computing, middleware, grid computing},
pdf = {http://wcl.cs.rpi.edu/papers/escience2009.pdf}
}
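The title and keywords suggest an optimization loop in which volunteer hosts return objective evaluations asynchronously and out of order, so slow or lost results must not stall progress. The sketch below is hypothetical and only illustrates that general pattern; the toy objective, recombination rule, and population size are invented and are not the paper's algorithm:

import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def objective(x):                               # toy stand-in for an expensive evaluation
    return sum((xi - 1.0) ** 2 for xi in x)

def propose(population):                        # recombine two members plus Gaussian noise
    (_, a), (_, b) = random.sample(population, 2)
    return [(ai + bi) / 2.0 + random.gauss(0.0, 0.1) for ai, bi in zip(a, b)]

# Population of (fitness, parameters) pairs, seeded with random guesses.
population = []
for _ in range(20):
    p = [random.uniform(-5.0, 5.0) for _ in range(4)]
    population.append((objective(p), p))

with ThreadPoolExecutor(max_workers=8) as pool:  # workers stand in for volunteer hosts
    pending = {}
    for _ in range(8):
        c = propose(population)
        pending[pool.submit(objective, c)] = c
    for _ in range(500):
        # Incorporate whichever result arrives first; stragglers never block progress.
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        fut = done.pop()
        candidate = pending.pop(fut)
        population.append((fut.result(), candidate))
        population = sorted(population)[:20]     # keep the 20 best members
        c = propose(population)
        pending[pool.submit(objective, c)] = c

print("best fitness:", population[0][0], "at", population[0][1])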
-
Travis Desell,
Anthony Waters,
Malik Magdon-Ismail,
Boleslaw Szymanski,
Carlos A. Varela,
Matthew Newby,
Heidi Newberg,
Andreas Przystawik,
and Dave Anderson.
Accelerating the MilkyWay@Home volunteer computing project with GPUs.
In Proceedings of the 8th International Conference on Parallel Processing and Applied Mathematics (PPAM 2009),
Wroclaw, Poland,
13 pp.,
September 2009.
Keyword(s): distributed computing,
scientific computing,
middleware,
grid computing.
Abstract:
General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17 times speedup was achieved for double-precision calculations on a Nvidia GeForce GTX 285 card, and a 109 times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm)II X4 940. Using single-precision calculations was also evaluated which further increased performance 6.2 times for ATI card, and 7.8 times on the Nvidia card but with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.
@InProceedings{desell-gpu-ppam-2009,
author = {Travis Desell and Anthony Waters and Malik Magdon-Ismail and Boleslaw Szymanski and Carlos A. Varela and Matthew Newby and Heidi Newberg and Andreas Przystawik and Dave Anderson},
title = {Accelerating the MilkyWay@Home volunteer computing project with GPUs},
booktitle = {Proceedings of the 8th International Conference on Parallel Processing and Applied Mathematics (PPAM 2009)},
pages = {13 pp.},
year = 2009,
address = {Wroclaw, Poland},
month = {September},
keywords = {distributed computing, scientific computing, middleware, grid computing},
pdf = {http://wcl.cs.rpi.edu/papers/ppam2009.pdf},
abstract = {General-Purpose computing on Graphics Processing Units (GPGPU) is an emerging field of research which allows software developers to utilize the significant amount of computing resources GPUs provide for a wider range of applications. While traditional high performance computing environments such as clusters, grids and supercomputers require significant architectural modifications to incorporate GPUs, volunteer computing grids already have these resources available as most personal computers have GPUs available for recreational use. Additionally, volunteer computing grids are gradually upgraded by the volunteers as they upgrade their hardware, whereas clusters, grids and supercomputers are typically upgraded only when replaced by newer hardware. As such, MilkyWay@Home’s volunteer computing system is an excellent testbed for measuring the potential of large scale distributed GPGPU computing across a large number of heterogeneous GPUs. This work discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17 times speedup was achieved for double-precision calculations on a Nvidia GeForce GTX 285 card, and a 109 times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0 GHz AMD Phenom(tm)II X4 940. Using single-precision calculations was also evaluated which further increased performance 6.2 times for ATI card, and 7.8 times on the Nvidia card but with some loss of accuracy. Modifications to the BOINC infrastructure which enable GPU discovery and utilization are also discussed. The resulting software enabled MilkyWay@Home to use GPU applications for a significant increase in computing power, at the time of this publication approximately 216 teraflops, which would place the combined power of these GPUs between the 11th and 12th fastest supercomputers in the world.}
}
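The speedups reported in the abstract compose multiplicatively if the single-precision factors are taken relative to each card's double-precision result; the short calculation below makes that reading explicit (it is an interpretation of the reported numbers, not an additional measurement):

# Speedups over one CPU core, as reported in the abstract (double precision),
# plus the additional single-precision factors. Assumes "further increased"
# multiplies the double-precision speedup.
double_speedup = {"Nvidia GeForce GTX 285": 17.0, "ATI HD5870": 109.0}
single_factor = {"Nvidia GeForce GTX 285": 7.8, "ATI HD5870": 6.2}

for card, dbl in double_speedup.items():
    single = dbl * single_factor[card]
    print(f"{card}: {dbl:.0f}x double precision, ~{single:.0f}x single precision vs. one CPU core")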
-
Alexandre di Costanzo,
Chao Jin,
Carlos A. Varela,
and Rajkumar Buyya.
Enabling Computational Steering with an Asynchronous-Iterative Computation Framework.
In Proceedings of the 5th IEEE International Conference on e-Science (eScience2009),
Oxford, UK,
8 pp.,
December 2009.
Keyword(s): distributed computing,
scientific computing,
middleware,
grid computing.
Abstract:
In this paper, we present a framework that enables scientists to steer computations executing over large-scale grid computing environments. By using computational steering, users can dynamically control their simulations or computations to reach expected results more efficiently. The framework supports steerable applications by introducing an asynchronous iterative MapReduce programming model that is deployed using Hadoop over a set of virtual machines executing on a multi-cluster grid. To tolerate the heterogeneity between different sites, results are collected asynchronously and users can dynamically interact with their computations to adjust the area of interest. According to users' dynamic interaction, the framework can redistribute the computational overload between the heterogeneous sites and explore the user's interest area by using more powerful sites when possible. With our framework, the bottleneck induced by synchronisation between different sites is considerably avoided, and therefore the response to users' interaction is satisfied more efficiently. We illustrate and evaluate this framework with a scientific application that aims to fit models of the Milky Way galaxy structure to stars observed by the Sloan Digital Sky Survey.
@InProceedings{costanzo-steering-escience-2009,
author = {Alexandre di Costanzo and Chao Jin and Carlos A. Varela and Rajkumar Buyya},
title = {Enabling Computational Steering with an Asynchronous-Iterative Computation Framework},
booktitle = {Proceedings of the 5th IEEE International Conference on e-Science (eScience2009)},
pages = {8 pp.},
year = 2009,
address = {Oxford, UK},
month = {December},
keywords = {distributed computing, scientific computing, middleware, grid computing},
pdf = {http://wcl.cs.rpi.edu/papers/adc-escience-2009.pdf},
abstract = {In this paper, we present a framework that enables scientists to steer computations executing over large-scale grid computing environments. By using computational steering, users can dynamically control their simulations or computations to reach expected results more efficiently. The framework supports steerable applications by introducing an asynchronous iterative MapReduce programming model that is deployed using Hadoop over a set of virtual machines executing on a multi-cluster grid. To tolerate the heterogeneity between different sites, results are collected asynchronously and users can dynamically interact with their computations to adjust the area of interest. According to users' dynamic interaction, the framework can redistribute the computational overload between the heterogeneous sites and explore the user's interest area by using more powerful sites when possible. With our framework, the bottleneck induced by synchronisation between different sites is considerably avoided, and therefore the response to users' interaction is satisfied more efficiently. We illustrate and evaluate this framework with a scientific application that aims to fit models of the Milky Way galaxy structure to stars observed by the Sloan Digital Sky Survey.}
}
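The abstract describes an asynchronous iterative MapReduce model in which partial results are reduced as they arrive and user steering narrows the region of interest. The sketch below only illustrates that control flow with invented names and a toy objective; it is not the paper's Hadoop-based framework:

import numpy as np
from concurrent.futures import ThreadPoolExecutor, as_completed

def score(point):                                # toy stand-in for a model-fit evaluation
    x, y = point
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2), point

def grid(x_range, y_range, n=8):                 # "map" inputs: points in the current region
    return [(float(x), float(y))
            for x in np.linspace(*x_range, n)
            for y in np.linspace(*y_range, n)]

region = ((0.0, 1.0), (0.0, 1.0))                # initial region of interest
best = (float("-inf"), None)

for iteration in range(3):                       # each pass refines the region (steering)
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(score, p) for p in grid(*region)]
        for fut in as_completed(futures):        # reduce partial results as they arrive
            value, point = fut.result()
            if value > best[0]:
                best = (value, point)
    # Steer: shrink the region of interest around the best point found so far.
    bx, by = best[1]
    half = 0.25 / (iteration + 1)
    region = ((bx - half, bx + half), (by - half, by + half))
    print(f"iteration {iteration}: best point {best[1]}, score {best[0]:.4f}")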