
 Publications of year 2008
 Articles in journal, book chapters
1. Nathan Cole, Heidi Newberg, Malik Magdon-Ismail, Travis Desell, Kristopher Dawsey, Warren Hayashi, Jonathan Purnell, Boleslaw Szymanski, Carlos A. Varela, Benjamin Willett, and James Wisniewski. Maximum Likelihood Fitting of Tidal Streams with Application to the Sagittarius Dwarf Tidal Tails. Astrophysical Journal, 683:750-766, 2008. Keyword(s): distributed computing, astroinformatics, grid computing, scientific computing.
Abstract:
 We present a maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid. With this method we characterize Sagittarius debris using stars with the colors of blue F turnoff stars in SDSS stripe 82. The debris is located at (α, δ, R) = (31.37° ± 0.26°, 0.0°, 29.22 ± 0.20 kpc), with a (spatial) direction given by the unit vector (−0.991 ± 0.007, 0.042 ± 0.033, 0.127 ± 0.046) in galactocentric Cartesian coordinates, and with FWHM = 6.74 ± 0.06 kpc. This 2.5° wide stripe contains 0.9% as many F turnoff stars as the current Sagittarius dwarf galaxy. Over small spatial extents, the debris is modeled as a cylinder with a density that falls off as a Gaussian with distance from the axis, while the smooth component of the spheroid is modeled with a Hernquist profile. We assume that the absolute magnitude of F turnoff stars is distributed as a Gaussian, which is an improvement over previous methods, which fixed the absolute magnitude at M̄_g0 = 4.2. The effectiveness and correctness of the algorithm are demonstrated on a simulated set of F turnoff stars created to mimic SDSS stripe 82 data, which shows that we achieve much greater accuracy than previous studies. Our algorithm can be applied to divide the stellar data into two catalogs: one which fits the stream density profile and one with the characteristics of the spheroid. This allows us to effectively separate tidal debris from the spheroid population, both facilitating the study of tidal stream dynamics and providing a test of whether a smooth spheroidal population exists.
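
The density model described above can be sketched directly: the stream contributes a Gaussian falloff in perpendicular distance from a cylinder axis, the smooth spheroid a Hernquist profile, and a mixing fraction combines them into the density whose likelihood is maximized. The following Python is a minimal illustration only; the scale radius, normalization, and function names are placeholders, not the paper's actual parameterization:

```python
import math

def hernquist_density(r, r0=12.0):
    # Smooth spheroid: Hernquist profile, rho ~ 1 / (r * (r + r0)^3).
    # r0 (scale radius, kpc) is an illustrative value, not from the paper.
    return 1.0 / (r * (r + r0) ** 3)

def stream_density(dist_from_axis, sigma):
    # Tidal debris: cylinder whose density falls off as a Gaussian
    # with perpendicular distance from the stream axis.
    return math.exp(-dist_from_axis ** 2 / (2.0 * sigma ** 2))

def fwhm_to_sigma(fwhm):
    # FWHM = 2 * sqrt(2 ln 2) * sigma for a Gaussian profile.
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def mixture_density(r, dist_from_axis, eps, fwhm):
    # eps is the stream fraction; a full implementation would normalize
    # both components over the survey volume before fitting.
    sigma = fwhm_to_sigma(fwhm)
    return eps * stream_density(dist_from_axis, sigma) \
        + (1.0 - eps) * hernquist_density(r)
```

Maximizing the sum of the log of this mixture over all observed stars yields the stream parameters; the fitted FWHM of 6.74 kpc corresponds to a Gaussian sigma of about 2.86 kpc.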

@article{cole-apj-2008,
author = "Nathan Cole and Heidi Newberg and Malik Magdon-Ismail and Travis Desell and Kristopher Dawsey and Warren Hayashi and Jonathan Purnell and Boleslaw Szymanski and Carlos A. Varela and Benjamin Willett and James Wisniewski",
title = "Maximum Likelihood Fitting of Tidal Streams with Application to the Sagittarius Dwarf Tidal Tails",
journal = "Astrophysical Journal",
year = "2008",
volume = 683,
pages = "750--766",
pdf = {http://wcl.cs.rpi.edu/papers/cole-apj-2008.pdf},
keywords = {distributed computing, astroinformatics, grid computing, scientific computing},
abstract = {We present a maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid. With this method we characterize Sagittarius debris using stars with the colors of blue F turnoff stars in SDSS stripe 82. The debris is located at ( \alpha ,\delta ,R) = ( 31.37^{\circ }\pm 0.26^{\circ },0.0^{\circ },29.22\pm 0.20\ \mathrm{kpc}\,) , with a (spatial) direction given by the unit vector ( -0.991\pm 0.007,0.042\pm 0.033,0.127\pm 0.046) , in galactocentric Cartesian coordinates, and with \mathrm{FWHM}\, = 6.74\pm 0.06 kpc. This 2.5° wide stripe contains 0.9% as many F turnoff stars as the current Sagittarius dwarf galaxy. Over small spatial extent, the debris is modeled as a cylinder with a density that falls off as a Gaussian with distance from the axis, while the smooth component of the spheroid is modeled with a Hernquist profile. We assume that the absolute magnitude of F turnoff stars is distributed as a Gaussian, which is an improvement over previous methods which fixed the absolute magnitude at \overline{M}_{g_{0}} = 4.2 . The effectiveness and correctness of the algorithm are demonstrated on a simulated set of F turnoff stars created to mimic SDSS stripe 82 data, which shows that we have a much greater accuracy than previous studies. Our algorithm can be applied to divide the stellar data into two catalogs: one which fits the stream density profile and one with the characteristics of the spheroid. This allows us to effectively separate tidal debris from the spheroid population, both facilitating the study of the tidal stream dynamics and providing a test of whether a smooth spheroidal population exists.}
}


 Conference articles
1. Nathan Cole, Heidi Jo Newberg, Malik Magdon-Ismail, Travis Desell, Boleslaw Szymanski, and Carlos Varela. Tracing the Sagittarius Tidal Stream with Maximum Likelihood. In Classification and Discovery in Large Astronomical Surveys, volume 1082, pages 216-220, October 2008. Keyword(s): distributed computing, scientific computing.
Abstract:
 Large-scale surveys are providing vast amounts of data that can help us understand and study tidal debris more easily and accurately. A maximum likelihood method for determining the spatial properties of this tidal debris and the stellar Galactic spheroid has been developed to take advantage of these huge datasets. We present the results of studying the Sagittarius dwarf tidal stream in two SDSS stripes taken in the southern Galactic Cap using this method. This study was done using stars with the colors of blue F turnoff stars in SDSS. We detected Sagittarius debris at the positions (l,b,R) = (163.311°, −48.400°, 30.23 kpc) and (l,b,R) = (34.775°, −72.342°, 26.08 kpc). These debris pieces were found to have a FWHM of 6.53 ± 0.54 kpc and 5.71 ± 0.26 kpc and to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces were also found to have (spatial) directions of (X̂,Ŷ,Ẑ) = (0.758, 0.254, −0.600) and (X̂,Ŷ,Ẑ) = (0.982, 0.945, 0.167), respectively. Using the results of the algorithm, we have also probabilistically separated the tidal debris from the stellar spheroid and present those results as well.

@inproceedings{cole-aip2008,
title = "Tracing the Sagittarius Tidal Stream with Maximum Likelihood",
author = "Nathan Cole and Heidi Jo Newberg and Malik Magdon-Ismail and Travis Desell and Boleslaw Szymanski and Carlos Varela",
booktitle = "Classification and Discovery in Large Astronomical Surveys",
volume = 1082,
month = oct,
year = 2008,
pages = {216-220},
keywords = {distributed computing, scientific computing},
abstract = {Large-scale surveys are providing vast amounts of data that can help us understand and study tidal debris more easily and accurately. A maximum likelihood method for determining the spatial properties of this tidal debris and the stellar Galactic spheroid has been developed to take advantage of these huge datasets. We present the results of studying the Sagittarius dwarf tidal stream in two SDSS stripes taken in the southern Galactic Cap using this method. This study was done using stars with the colors of blue F turnoff stars in SDSS. We detected Sagittarius debris at the positions (l,b,R) = (163.311°, −48.400°, 30.23 kpc) and (l,b,R) = (34.775°, −72.342°, 26.08 kpc). These debris pieces were found to have a FWHM of 6.53 ± 0.54 kpc and 5.71 ± 0.26 kpc and to contain ≈9,500 and ≈16,700 F turnoff stars, respectively. The debris pieces were also found to have (spatial) directions of (X̂,Ŷ,Ẑ) = (0.758, 0.254, −0.600) and (X̂,Ŷ,Ẑ) = (0.982, 0.945, 0.167), respectively. Using the results of the algorithm, we have also probabilistically separated the tidal debris from the stellar spheroid and present those results as well.}
}


2. Travis Desell, Boleslaw Szymanski, and Carlos A. Varela. An Asynchronous Hybrid Genetic-Simplex Search for Modeling the Milky Way Galaxy using Volunteer Computing. In Genetic and Evolutionary Computation Conference (GECCO 2008), Atlanta, Georgia, pages 921-928, July 2008. Keyword(s): distributed computing, astroinformatics, grid computing, scientific computing.
Abstract:
 This paper examines the use of a probabilistic simplex operator for asynchronous genetic search on the BOINC volunteer computing framework. This algorithm is used to optimize a computationally intensive function with a continuous parameter space: finding the optimal fit of an astronomical model of the Milky Way galaxy to observed stars. The asynchronous search using a BOINC community of over 1,000 users is shown to be comparable to a synchronous continuously updated genetic search on a 1,024 processor partition of an IBM BlueGene/L supercomputer. The probabilistic simplex operator is also shown to be highly effective, and the results demonstrate that increasing the number of parents used to generate offspring improves the convergence rate of the search. Additionally, it is shown that there is potential for improvement by refining the range of the probabilistic operator, adding more parents, and generating offspring differently for volunteered computers based on their typical speed in reporting results. The results provide a compelling argument for the use of asynchronous genetic search and volunteer computing environments, such as BOINC, for computationally intensive optimization problems and, therefore, this work opens up interesting areas of future research into asynchronous optimization methods.
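
The probabilistic simplex operator described above can be sketched as follows: from a set of randomly chosen parents, the worst is reflected through the centroid of the rest, at a randomly chosen distance along that line. This is an illustrative reconstruction, not the paper's exact operator; the reflection range [lo, hi] and the lower-is-better fitness convention are assumptions:

```python
import random

def simplex_offspring(parents, fitness, lo=-1.0, hi=2.0, rng=random):
    # parents: list of parameter vectors; fitness: parallel list of scores
    # (lower is better here, so the worst parent has the largest score).
    # The worst parent is reflected through the centroid of the remaining
    # parents at a random distance t in [lo, hi]; t = 1 would be the
    # classic Nelder-Mead simplex reflection.
    worst_i = max(range(len(parents)), key=lambda i: fitness[i])
    worst = parents[worst_i]
    rest = [p for i, p in enumerate(parents) if i != worst_i]
    centroid = [sum(xs) / len(rest) for xs in zip(*rest)]
    t = rng.uniform(lo, hi)
    return [c + t * (c - w) for c, w in zip(centroid, worst)]
```

With more parents the centroid averages over more of the population, which is one way to read the paper's observation that additional parents improve convergence.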

@inproceedings{desell-ags-gecco-2008,
author = "Travis Desell and Boleslaw Szymanski and Carlos A. Varela",
title = "An Asynchronous Hybrid Genetic-Simplex Search for Modeling the Milky Way Galaxy using Volunteer Computing",
booktitle = "Genetic and Evolutionary Computation Conference (GECCO 2008)",
year = "2008",
address = "Atlanta, Georgia",
month = "July",
pages = "921--928",
pdf = {http://wcl.cs.rpi.edu/papers/desell-gecco-2008.pdf},
keywords = {distributed computing, astroinformatics, grid computing, scientific computing},
abstract = {This paper examines the use of a probabilistic simplex operator for asynchronous genetic search on the BOINC volunteer computing framework. This algorithm is used to optimize a computationally intensive function with a continuous parameter space: finding the optimal fit of an astronomical model of the Milky Way galaxy to observed stars. The asynchronous search using a BOINC community of over 1,000 users is shown to be comparable to a synchronous continuously updated genetic search on a 1,024 processor partition of an IBM BlueGene/L supercomputer. The probabilistic simplex operator is also shown to be highly effective, and the results demonstrate that increasing the number of parents used to generate offspring improves the convergence rate of the search. Additionally, it is shown that there is potential for improvement by refining the range of the probabilistic operator, adding more parents, and generating offspring differently for volunteered computers based on their typical speed in reporting results. The results provide a compelling argument for the use of asynchronous genetic search and volunteer computing environments, such as BOINC, for computationally intensive optimization problems and, therefore, this work opens up interesting areas of future research into asynchronous optimization methods.}
}


3. Travis Desell, Boleslaw Szymanski, and Carlos A. Varela. Asynchronous Genetic Search for Scientific Modeling on Large-Scale Heterogeneous Environments. In Proceedings of the 17th International Heterogeneity in Computing Workshop (HCW/IPDPS'08), Miami, FL, pages 12 pp., April 2008. IEEE. Keyword(s): distributed computing, scientific computing, middleware, grid computing.
Abstract:
 Use of large-scale heterogeneous computing environments such as computational grids and the Internet has become of high interest to scientific researchers. This is because the increasing complexity of their scientific models and data sets is drastically outpacing the increases in processor speed while the cost of supercomputing environments remains relatively high. However, the heterogeneity and unreliability of these environments, especially the Internet, make scalable and fault-tolerant search methods indispensable to effective scientific model verification. The paper introduces two versions of asynchronous master-worker genetic search and evaluates their convergence and performance rates in comparison to traditional synchronous genetic search on both an IBM BlueGene supercomputer and the MilkyWay@Home BOINC Internet computing project. The asynchronous searches not only perform faster on heterogeneous grid environments as compared to synchronous search, but also achieve better convergence rates for the astronomy model used as the driving application, providing a strong argument for their use on grid computing environments and by the MilkyWay@Home BOINC Internet computing project.
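
The key difference from a generational search is that an asynchronous master never waits at a generation barrier: it hands out new work whenever a worker asks and inserts results whenever they arrive. A serialized, single-process sketch of that steady-state loop (the mutation operator, population size, and parameter values are illustrative, not the paper's):

```python
import random

def async_genetic_search(evaluate, dim, pop_size=20, evals=200, rng=random):
    # Asynchronous master sketch: each simulated work request immediately
    # gets a mutated copy of a random population member, and each returned
    # result is inserted if it beats the current worst individual. In the
    # real system, requests and results come from remote BOINC or grid
    # workers at arbitrary times, with no synchronization between them.
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    fit = [evaluate(p) for p in pop]
    for _ in range(evals):
        parent = pop[rng.randrange(pop_size)]
        child = [x + rng.gauss(0, 0.1) for x in parent]  # simple mutation
        f = evaluate(child)                              # done by a worker
        worst = max(range(pop_size), key=lambda i: fit[i])
        if f < fit[worst]:                               # steady-state insert
            pop[worst], fit[worst] = child, f
    return min(fit)
```

Because no result is ever waited on, slow or failed workers only delay their own contributions, which is what makes this structure tolerant of heterogeneous, unreliable hosts.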

@InProceedings{desell-ags-hcw-2008,
author = {Travis Desell and Boleslaw Szymanski and Carlos A. Varela},
title = {Asynchronous Genetic Search for Scientific Modeling on Large-Scale Heterogeneous Environments},
booktitle = {Proceedings of the 17th International Heterogeneity in Computing Workshop (HCW/IPDPS'08)},
pages = {12 pp.},
year = 2008,
address = {Miami, FL},
month = {April},
publisher = {IEEE},
pdf = {http://wcl.cs.rpi.edu/papers/desell-ags-hcw-2008.pdf},
keywords = {distributed computing, scientific computing, middleware, grid computing},
abstract = {Use of large-scale heterogeneous computing environments such as computational grids and the Internet has become of high interest to scientific researchers. This is because the increasing complexity of their scientific models and data sets is drastically outpacing the increases in processor speed while the cost of supercomputing environments remains relatively high. However, the heterogeneity and unreliability of these environments, especially the Internet, make scalable and fault-tolerant search methods indispensable to effective scientific model verification. The paper introduces two versions of asynchronous master-worker genetic search and evaluates their convergence and performance rates in comparison to traditional synchronous genetic search on both an IBM BlueGene supercomputer and the MilkyWay@Home BOINC Internet computing project. The asynchronous searches not only perform faster on heterogeneous grid environments as compared to synchronous search, but also achieve better convergence rates for the astronomy model used as the driving application, providing a strong argument for their use on grid computing environments and by the MilkyWay@Home BOINC Internet computing project.}
}


 Internal reports
1. Jason LaPorte and Carlos A. Varela. Organic and Hierarchical Concentric Layouts for Distributed System Visualization. Technical report, Rensselaer Polytechnic Institute Worldwide Computing Laboratory, 2008. Keyword(s): distributed computing, distributed systems visualization.
Abstract:
 Distributed systems, due to their inherent complexity and nondeterministic nature, are programmed using high-level abstractions, such as processes, actors, ambients, agents, or services. There is a need to provide tools which allow developers to better understand, test, and debug distributed systems. OverView is a software toolkit which allows online and offline visualization of distributed systems through the concepts of entities and containers, which preserve the abstractions used at the programming level and display important dynamic properties, such as temporal (that is, when entities are created and deleted), spatial (that is, entity location and migration events) and relational (that is, entity containment or communication patterns). In this paper, we introduce two general layout mechanisms to visualize distributed systems: a hierarchical concentric layout that places containers and entities in a ring of rings, and an organic layout that uses the dynamic properties of the system to co-locate entities. We define visualization quality metrics such as intuitiveness, scalability, and genericity, and use them to evaluate the visualization layouts for several application communication topologies including linked lists, trees, hypercubes, and topologies arising from structured overlay networks such as Chord rings.
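
The hierarchical concentric ("ring of rings") layout described above can be sketched geometrically: containers are spaced evenly on an outer ring, and each container's entities sit on a small ring around it. The sketch below is a plausible reconstruction under assumed constants; none of the names or radii come from the OverView implementation itself:

```python
import math

def concentric_layout(containers, ring_gap=10.0, entity_radius=2.0):
    # containers: dict mapping container name -> list of entity names.
    # Returns a dict mapping every container and entity name to (x, y).
    # ring_gap and entity_radius are illustrative layout constants.
    positions = {}
    n = len(containers)
    R = ring_gap * n  # outer ring grows with the number of containers
    for i, (cname, entities) in enumerate(sorted(containers.items())):
        a = 2 * math.pi * i / n
        cx, cy = R * math.cos(a), R * math.sin(a)
        positions[cname] = (cx, cy)
        m = max(len(entities), 1)
        for j, ename in enumerate(entities):
            b = 2 * math.pi * j / m
            positions[ename] = (cx + entity_radius * math.cos(b),
                                cy + entity_radius * math.sin(b))
    return positions
```

An organic layout, by contrast, would adjust these positions dynamically, pulling entities that communicate or migrate together toward each other rather than fixing them on rings.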

@TechReport{laporte-varela-overview-2008,
author = {Jason LaPorte and Carlos A. Varela},
title = {Organic and Hierarchical Concentric Layouts for Distributed System Visualization},
institution = {Rensselaer Polytechnic Institute Worldwide Computing Laboratory},
year = 2008,
pages = {8 pp.},
pdf = {http://wcl.cs.rpi.edu/papers/laporte-varela-overview-tr-2008.pdf},
keywords = {distributed computing, distributed systems visualization},
abstract = {Distributed systems, due to their inherent complexity and nondeterministic nature, are programmed using high-level abstractions, such as processes, actors, ambients, agents, or services. There is a need to provide tools which allow developers to better understand, test, and debug distributed systems. OverView is a software toolkit which allows online and offline visualization of distributed systems through the concepts of entities and containers, which preserve the abstractions used at the programming level and display important dynamic properties, such as temporal (that is, when entities are created and deleted), spatial (that is, entity location and migration events) and relational (that is, entity containment or communication patterns). In this paper, we introduce two general layout mechanisms to visualize distributed systems: a hierarchical concentric layout that places containers and entities in a ring of rings, and an organic layout that uses the dynamic properties of the system to co-locate entities. We define visualization quality metrics such as intuitiveness, scalability, and genericity, and use them to evaluate the visualization layouts for several application communication topologies including linked lists, trees, hypercubes, and topologies arising from structured overlay networks such as Chord rings.}
}


 Miscellaneous
1. Brian Boodman. Implementing and Verifying the Safety of the Transactor Model. Master's thesis, Rensselaer Polytechnic Institute, May 2008. Keyword(s): distributed computing, concurrent programming, coordination models, internet programming languages, formal verification.
Abstract:
 The transactor model is an extension of the actor model designed to tolerate failures in distributed systems. Transactors can provide guarantees about the consistency of a distributed system's state in the face of message loss and temporary failures of computing nodes. The model introduces dependency information and a two-phase checkpointing protocol. The added dependency information enables transactors to track the interdependencies caused by communication between actors. While the state of the distributed system may contain machines which are not consistent with one another, the transactor model keeps track of the interdependencies between these machines, ensuring that such machines will roll back to a previous state if necessary in order to maintain consistency. Thus, the system will move from globally consistent state to globally consistent state.
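
The rollback behavior described above amounts to a transitive closure over recorded dependencies: when one node's state is lost, every node that has directly or indirectly observed its volatile state must roll back with it. A simplified sketch of that closure, ignoring the model's two-phase checkpointing and message semantics (all names are illustrative):

```python
def rollback_closure(failed, depends_on):
    # depends_on maps each node to the set of nodes whose volatile
    # (uncheckpointed) state it has observed. If `failed` rolls back,
    # walk the inverted dependency edges transitively to find every
    # node that must roll back with it for global consistency.
    dependents = {}
    for node, deps in depends_on.items():
        for d in deps:
            dependents.setdefault(d, set()).add(node)
    to_roll, stack = {failed}, [failed]
    while stack:
        for n in dependents.get(stack.pop(), ()):
            if n not in to_roll:
                to_roll.add(n)
                stack.append(n)
    return to_roll
```

Checkpointing breaks these edges: once a transactor stabilizes and checkpoints, later observers depend on durable rather than volatile state and need not roll back.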

@MastersThesis{boodman-transactors-2008,
author = {Brian Boodman},
title = {Implementing and Verifying the Safety of the Transactor Model},
school = {Rensselaer Polytechnic Institute},
year = 2008,
month = {May},
pdf = {http://wcl.cs.rpi.edu/theses/BoodmanMSThesis.pdf},
keywords = {distributed computing, concurrent programming, coordination models, internet programming languages, formal verification},
abstract = {The transactor model is an extension of the actor model designed to tolerate failures in distributed systems. Transactors can provide guarantees about the consistency of a distributed system's state in the face of message loss and temporary failures of computing nodes. The model introduces dependency information and a two-phase checkpointing protocol. The added dependency information enables transactors to track the interdependencies caused by communication between actors. While the state of the distributed system may contain machines which are not consistent with one another, the transactor model keeps track of the interdependencies between these machines, ensuring that such machines will roll back to a previous state if necessary in order to maintain consistency. Thus, the system will move from globally consistent state to globally consistent state.}
}


2. C. Varela. Enabling Synchronous Computation on Volunteer Computing Grids. The 4th Pan-Galactic BOINC Workshop, September 2008. Note: Presentation video available. Keyword(s): distributed computing, grid computing, middleware, scientific computing.
@Misc{varela-sync-boinc-2008,
author = {C. Varela},
title = {Enabling Synchronous Computation on Volunteer Computing Grids},
howpublished = {The 4th Pan-Galactic BOINC Workshop},
address = {Grenoble, France},
month = {September},
year = 2008,
pdf = {http://wcl.cs.rpi.edu/papers/b5.pdf},
url = {http://boinc.berkeley.edu/trac/wiki/WorkShop08},
note = {Presentation video available.},
keywords = {distributed computing, grid computing, middleware, scientific computing}
}



Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents contained in these directories are made available by the contributing authors to ensure the timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by the authors and by the copyright holders, notwithstanding that they present their works here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.

Last modified: Mon Sep 27 16:45:51 2021
Author: cvarela.

This document was translated from BibTeX by bibtex2html