
Publications of year 2015
Books and proceedings
  1. Elisa Gonzalez Boix, Philipp Haller, Alessandro Ricci, and Carlos A. Varela, editors. Proceedings of the 5th International Workshop on Programming based on Actors, Agents & Decentralized Control, AGERE 2015, Pittsburgh, PA, USA, October 25-30, 2015, 2015. ACM. Keyword(s): concurrent programming, programming languages.
    Abstract:
    The 5th International Workshop on Programming based on Actors, Agents, and Decentralized Control (AGERE!), Pittsburgh, PA, USA, October 26, 2015, co-located with SPLASH 2015. This latest edition of AGERE!, an ACM SIGPLAN workshop, confirms its role as a unique venue in the research landscape bringing together researchers and practitioners interested in actors, agents and, more generally, high-level paradigms emphasizing decentralized control in thinking, modeling, developing, and reasoning about software systems. The fundamental turn of software to concurrency and distribution is not only a matter of performance, but also of design and abstraction. It calls for programming paradigms that, compared to current mainstream paradigms, allow us to more naturally think about, design, develop, execute, debug, and profile systems exhibiting different degrees of concurrency, autonomy, decentralization of control, and physical distribution. All stages of software development are considered interesting for the workshop, including requirements, modeling, formalization, prototyping, design, implementation, tooling, testing, and any other means of producing running software based on actors and agents as first-class abstractions. The scope of the workshop includes aspects that concern both the theory and the practice of programming using such paradigms, so as to bring together researchers working on models, languages and technologies, as well as practitioners using such technologies to develop real-world systems and applications.

    @proceedings{varela-agere-2015,
    editor = {Elisa Gonzalez Boix and Philipp Haller and Alessandro Ricci and Carlos A. Varela},
    title = {{Proceedings of the 5th International Workshop on Programming based on Actors, Agents \& Decentralized Control, AGERE 2015, Pittsburgh, PA, USA, October 25-30, 2015}},
    booktitle = {AGERE 2015 Workshop Proceedings},
    publisher = {ACM},
    year = {2015},
    url = {http://dl.acm.org/citation.cfm?id=2824815},
    keywords = {concurrent programming, programming languages},
    abstract = {The 5th International Workshop on Programming based on Actors, Agents, and Decentralized Control (AGERE!), Pittsburgh, PA, USA, October 26, 2015, co-located with SPLASH 2015. This latest edition of AGERE!, an ACM SIGPLAN workshop, confirms its role as a unique venue in the research landscape bringing together researchers and practitioners interested in actors, agents and, more generally, high-level paradigms emphasizing decentralized control in thinking, modeling, developing, and reasoning about software systems. The fundamental turn of software to concurrency and distribution is not only a matter of performance, but also of design and abstraction. It calls for programming paradigms that, compared to current mainstream paradigms, allow us to more naturally think about, design, develop, execute, debug, and profile systems exhibiting different degrees of concurrency, autonomy, decentralization of control, and physical distribution. All stages of software development are considered interesting for the workshop, including requirements, modeling, formalization, prototyping, design, implementation, tooling, testing, and any other means of producing running software based on actors and agents as first-class abstractions. The scope of the workshop includes aspects that concern both the theory and the practice of programming using such paradigms, so as to bring together researchers working on models, languages and technologies, as well as practitioners using such technologies to develop real-world systems and applications.} 
    }
    


Conference articles
  1. Travis Desell and Carlos A. Varela. A Performance and Scalability Analysis of Actor Message Passing and Migration in SALSA Lite. In Agere Workshop at ACM SPLASH 2015 Conference, October 2015. Keyword(s): distributed computing, concurrent programming, programming languages.
    Abstract:
    This paper presents a newly developed implementation of remote message passing, remote actor creation and actor migration in SALSA Lite. The new runtime and protocols are implemented using SALSA Lite’s lightweight actors and asynchronous message passing, and provide significant performance improvements over SALSA version 1.1.5. Actors in SALSA Lite can now be local, the default lightweight actor implementation; remote, actors which can be referenced remotely and send remote messages, but cannot migrate; or mobile, actors that can be remotely referenced, send remote messages and migrate to different locations. Remote message passing in SALSA Lite is twice as fast, actor migration is over 17 times as fast, and remote actor creation is two orders of magnitude faster. Two new benchmarks for remote message passing and migration show this implementation has strong scalability in terms of concurrent actor message passing and migration. The costs of using remote and mobile actors are also investigated. For local message passing, remote actors resulted in no overhead, and mobile actors resulted in 30% overhead. Local creation of remote and mobile actors was more expensive with 54% overhead for remote actors and 438% for mobile actors. In distributed scenarios, creating mobile actors remotely was only 6% slower than creating remote actors remotely, and passing messages between mobile actors on different theaters was only 5.55% slower than passing messages between remote actors. These results highlight the benefits of our approach in implementing the distributed runtime over a core set of efficient lightweight actors, as well as provide insights into the costs of implementing remote message passing and actor mobility.

    @InProceedings{dessell-varela-agere-2015,
    author = {Travis Desell and Carlos A. Varela},
    title = {{A Performance and Scalability Analysis of Actor Message Passing and Migration in SALSA Lite}},
    booktitle = {Agere Workshop at ACM SPLASH 2015 Conference},
    year = 2015,
    month = {October},
    pdf = {http://wcl.cs.rpi.edu/papers/agere2015.pdf},
    keywords = {distributed computing, concurrent programming, programming languages},
    abstract = {This paper presents a newly developed implementation of remote message passing, remote actor creation and actor migration in SALSA Lite. The new runtime and protocols are implemented using SALSA Lite’s lightweight actors and asynchronous message passing, and provide significant performance improvements over SALSA version 1.1.5. Actors in SALSA Lite can now be local, the default lightweight actor implementation; remote, actors which can be referenced remotely and send remote messages, but cannot migrate; or mobile, actors that can be remotely referenced, send remote messages and migrate to different locations. Remote message passing in SALSA Lite is twice as fast, actor migration is over 17 times as fast, and remote actor creation is two orders of magnitude faster. Two new benchmarks for remote message passing and migration show this implementation has strong scalability in terms of concurrent actor message passing and migration. The costs of using remote and mobile actors are also investigated. For local message passing, remote actors resulted in no overhead, and mobile actors resulted in 30% overhead. Local creation of remote and mobile actors was more expensive with 54% overhead for remote actors and 438% for mobile actors. In distributed scenarios, creating mobile actors remotely was only 6% slower than creating remote actors remotely, and passing messages between mobile actors on different theaters was only 5.55% slower than passing messages between remote actors. These results highlight the benefits of our approach in implementing the distributed runtime over a core set of efficient lightweight actors, as well as provide insights into the costs of implementing remote message passing and actor mobility.} 
    }
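
    The abstract above distinguishes local, remote, and mobile actors and benchmarks asynchronous message passing and actor migration. As a rough orientation only, the following minimal Python sketch illustrates those two operations with a simulated runtime; it is not SALSA code, and names such as Theater, Actor, and migrate are illustrative assumptions rather than the SALSA Lite API.

    # A minimal, single-process sketch (not SALSA code) of the concepts the paper
    # benchmarks: mailboxes, asynchronous sends, and migration of a "mobile" actor
    # between theaters. All names here are illustrative assumptions.
    from collections import deque

    class Theater:
        """Stands in for a SALSA 'theater' (a runtime that hosts actors)."""
        def __init__(self, name):
            self.name = name
            self.actors = {}              # actor name -> Actor

    class Actor:
        def __init__(self, name, theater):
            self.name = name
            self.mailbox = deque()
            self.theater = theater
            theater.actors[name] = self

        def send(self, message):          # asynchronous send: just enqueue
            self.mailbox.append(message)

        def process_one(self):            # handle one pending message, if any
            if self.mailbox:
                print(f"[{self.theater.name}] {self.name} got: {self.mailbox.popleft()}")

    def migrate(actor, destination):
        """Move a mobile actor; its mailbox travels with it and its name stays stable."""
        del actor.theater.actors[actor.name]
        actor.theater = destination
        destination.actors[actor.name] = actor

    t1, t2 = Theater("theater-1"), Theater("theater-2")
    worker = Actor("worker", t1)
    worker.send("hello before migration")
    worker.process_one()
    migrate(worker, t2)                   # a remote reference would keep resolving by name
    worker.send("hello after migration")
    worker.process_one()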
    


  2. Shigeru Imai, Alessandro Galli, and Carlos A. Varela. Dynamic Data-Driven Avionics Systems: Inferring Failure Modes from Data Streams. In Dynamic Data-Driven Application Systems (DDDAS 2015), Reykjavik, Iceland, June 2015. Keyword(s): programming languages, data streaming, cyber physical systems.
    Abstract:
    Dynamic Data-Driven Avionics Systems (DDDAS) embody ideas from the Dynamic Data-Driven Application Systems paradigm by creating a data-driven feedback loop that analyzes spatio-temporal data streams coming from aircraft sensors and instruments, looks for errors in the data signaling potential failure modes, and corrects for erroneous data when possible. In case of emergency, DDDAS need to provide enough information about the failure to pilots to support their decision making in real-time. We have developed the PILOTS system, which supports data-error tolerant spatio-temporal stream processing, as an initial step to realize the concept of DDDAS. In this paper, we apply the PILOTS system to actual data from the Tuninter 1153 (TU1153) flight accident in August 2005, where the installation of an incorrect fuel sensor led to a fatal accident. The underweight condition suggesting an incorrect fuel indication for TU1153 is successfully detected with 100% accuracy during cruise flight phases. Adding logical redundancy to avionics through a dynamic data-driven approach can significantly improve the safety of flight.

    @InProceedings{imai-galli-varela-pilots-dddas-2015,
    author = {Shigeru Imai and Alessandro Galli and Carlos A. Varela},
    title = {Dynamic Data-Driven Avionics Systems: Inferring Failure Modes from Data Streams},
    booktitle = {Dynamic Data-Driven Application Systems (DDDAS 2015)},
    year = 2015,
    address = {Reykjavik, Iceland},
    month = {June},
    pdf = {http://wcl.cs.rpi.edu/papers/dddas2015.pdf},
    keywords = {programming languages, data streaming, cyber physical systems},
    abstract = {Dynamic Data-Driven Avionics Systems (DDDAS) embody ideas from the Dynamic Data-Driven Application Systems paradigm by creating a data-driven feedback loop that analyzes spatio-temporal data streams coming from aircraft sensors and instruments, looks for errors in the data signaling potential failure modes, and corrects for erroneous data when possible. In case of emergency, DDDAS need to provide enough information about the failure to pilots to support their decision making in real-time. We have developed the PILOTS system, which supports data-error tolerant spatio-temporal stream processing, as an initial step to realize the concept of DDDAS. In this paper, we apply the PILOTS system to actual data from the Tuninter 1153 (TU1153) flight accident in August 2005, where the installation of an incorrect fuel sensor led to a fatal accident. The underweight condition suggesting an incorrect fuel indication for TU1153 is successfully detected with 100% accuracy during cruise flight phases. Adding logical redundancy to avionics through a dynamic data-driven approach can significantly improve the safety of flight.} 
    }
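
    The failure-mode inference described above is implemented in the PILOTS system, which uses its own stream-processing language. As a rough sketch only (not PILOTS code), the Python fragment below illustrates the underlying idea under simplifying assumptions: compare an indicated quantity against a value implied by redundant data and raise a flag when the residual persistently matches an error signature. The 5% tolerance, the window length, and the toy numbers are invented for the example.

    # Hypothetical sketch of error-signature detection on two redundant streams.
    def detect_underweight(indicated, implied, tolerance=0.05, window=5):
        """Flag an 'underweight' condition when the indicated weight sits
        persistently below the weight implied by other sensor data."""
        residuals = [(ind - imp) / imp for ind, imp in zip(indicated, implied)]
        flags = []
        for i in range(len(residuals)):
            recent = residuals[max(0, i - window + 1): i + 1]
            # error signature: residual consistently negative and beyond tolerance
            flags.append(len(recent) == window and all(r < -tolerance for r in recent))
        return flags

    # Toy streams: the indicated weight drifts low, as with an incorrect fuel sensor.
    implied   = [60000, 59950, 59900, 59850, 59800, 59750, 59700, 59650]
    indicated = [60010, 59400, 56000, 55900, 55800, 55700, 55600, 55500]
    print(detect_underweight(indicated, implied))   # turns True once the signature persists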
    


Miscellaneous
  1. Matthew Hancock. Middleware Framework for Distributed Cloud Storage. Master's thesis, Rensselaer Polytechnic Institute, May 2015. Keyword(s): distributed computing, distributed cloud storage, cloud computing.
    Abstract:
    The device people use to capture multimedia has changed over the years with the rise of smartphones. Smartphones are readily available, easy to use, and capture multimedia with high quality. While consumers capture all of this media, the storage requirements are not changing significantly. Therefore, people look towards cloud storage solutions. The typical consumer stores files within a single provider. They want a solution that is quick to access, reliable, and secure. Using multiple providers can reduce cost and improve overall performance. We present a middleware framework called Distributed Indexed Storage in the Cloud (DISC) to improve all aspects a user expects of a cloud provider. The consumer provides files to the middleware, which processes them according to user policies and stores them in the cloud. The process of uploading and downloading is essentially transparent. Upload and download happen simultaneously by distributing subsets of the file across multiple cloud providers that the middleware deems fit based on policies. Reliability is another important feature of DISC. To improve reliability, we propose a solution that replicates the same subset of the file across different providers. This is beneficial: when one provider is unresponsive, the data can be pulled from another provider with the same subset. Security has great importance when dealing with consumer’s data. We inherently gain security when improving reliability. Since the file is distributed using subsets, not one provider has the full file. In our experiment, performance improvements are observed when delivering and retrieving files compared to the standard approach. The results are promising, saving upwards of eight seconds in processing time. With the expansion of more cloud providers, the results are expected to improve.

    @MastersThesis{hancock-disc-2015,
    author = {Matthew Hancock},
    title = {Middleware Framework for Distributed Cloud Storage},
    school = {Rensselaer Polytechnic Institute},
    year = 2015,
    month = {May},
    pdf = {http://wcl.cs.rpi.edu/theses/hancock-disc-master.pdf},
    keywords = {distributed computing, distributed cloud storage, cloud computing},
    abstract = {The device people use to capture multimedia has changed over the years with the rise of smartphones. Smartphones are readily available, easy to use, and capture multimedia with high quality. While consumers capture all of this media, the storage requirements are not changing significantly. Therefore, people look towards cloud storage solutions. The typical consumer stores files within a single provider. They want a solution that is quick to access, reliable, and secure. Using multiple providers can reduce cost and improve overall performance. We present a middleware framework called Distributed Indexed Storage in the Cloud (DISC) to improve all aspects a user expects of a cloud provider. The consumer provides files to the middleware, which processes them according to user policies and stores them in the cloud. The process of uploading and downloading is essentially transparent. Upload and download happen simultaneously by distributing subsets of the file across multiple cloud providers that the middleware deems fit based on policies. Reliability is another important feature of DISC. To improve reliability, we propose a solution that replicates the same subset of the file across different providers. This is beneficial: when one provider is unresponsive, the data can be pulled from another provider with the same subset. Security has great importance when dealing with consumer’s data. We inherently gain security when improving reliability. Since the file is distributed using subsets, not one provider has the full file. In our experiment, performance improvements are observed when delivering and retrieving files compared to the standard approach. The results are promising, saving upwards of eight seconds in processing time. With the expansion of more cloud providers, the results are expected to improve.} 
    }
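
    DISC is described here only at the design level; as an illustration of the placement idea in the abstract (not the DISC implementation), the short Python sketch below splits a file into chunks and replicates each chunk on more than one provider, so no single provider holds the whole file and any single provider outage still leaves every chunk retrievable. The provider names, chunk size, and replica count are arbitrary choices for the example.

    # Illustrative striping-plus-replication placement across cloud providers.
    def place_chunks(data, providers, chunk_size=4, replicas=2):
        """Return {provider: [(chunk_index, chunk_bytes), ...]} assignments."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        placement = {p: [] for p in providers}
        for idx, chunk in enumerate(chunks):
            # send each replica of a chunk to a different provider, round-robin
            for r in range(replicas):
                placement[providers[(idx + r) % len(providers)]].append((idx, chunk))
        return placement

    def reconstruct(placement, unavailable=()):
        """Rebuild the file from whichever providers are still reachable."""
        recovered = {}
        for provider, stored in placement.items():
            if provider not in unavailable:
                for idx, chunk in stored:
                    recovered.setdefault(idx, chunk)
        return b"".join(recovered[i] for i in sorted(recovered))

    data = b"the quick brown fox jumps over the lazy dog"
    layout = place_chunks(data, ["providerA", "providerB", "providerC"])
    assert reconstruct(layout, unavailable={"providerB"}) == data
    print("file reconstructed despite one unavailable provider")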
    


  2. Matthew Hancock and Carlos A. Varela. Augmenting Performance For Distributed Cloud Storage. 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2015), May 2015. Note: Poster. Keyword(s): distributed computing, distributed cloud storage, cloud computing.
    Abstract:
    The device people use to capture multimedia has changed over the years with the rise of smart phones. Smart phones are readily available, easy to use, and capture multimedia with high quality. While consumers capture all of this media, the storage requirements are not changing significantly. Therefore, people look towards cloud storage solutions. The typical consumer stores files within a single provider. They want a solution that is quick to access, reliable, and secure. Using multiple providers can reduce cost and improve overall performance. We present a middleware framework called Distributed Indexed Storage in the Cloud (DISC) to improve all aspects a user expects of a cloud provider. The process of uploading and downloading is essentially transparent to the user. Upload and download happen simultaneously by distributing subsets of the file across multiple cloud providers that the middleware deems fit based on policies. Reliability is another important feature of DISC. To improve reliability, we propose a solution that replicates the same subset of the file across different providers. This is beneficial: when one provider is unresponsive, the data can be pulled from another provider with the same subset. Security has great importance when dealing with consumers' data. We inherently gain security when improving reliability. Since the file is distributed using subsets, not one provider has the full file. In our experiment, performance improvements are observed when delivering and retrieving files compared to the standard approach. The results are promising, saving upwards of eight seconds in processing time. With the expansion of more cloud providers, the results are expected to improve.

    @Misc{hancock-varela-ccgrid-2015,
    author = {Matthew Hancock and Carlos A. Varela},
    title = {Augmenting Performance For Distributed Cloud Storage},
    howpublished = {15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2015)},
    year = 2015,
    address = {Shenzhen, China},
    month = {May},
    pdf = {http://wcl.cs.rpi.edu/papers/ccgrid2015-disc.pdf},
    note = {Poster},
    keywords = {distributed computing, distributed cloud storage, cloud computing},
    abstract = {The device people use to capture multimedia has changed over the years with the rise of smart phones. Smart phones are readily available, easy to use, and capture multimedia with high quality. While consumers capture all of this media, the storage requirements are not changing significantly. Therefore, people look towards cloud storage solutions. The typical consumer stores files within a single provider. They want a solution that is quick to access, reliable, and secure. Using multiple providers can reduce cost and improve overall performance. We present a middleware framework called Distributed Indexed Storage in the Cloud (DISC) to improve all aspects a user expects of a cloud provider. The process of uploading and downloading is essentially transparent to the user. Upload and download happen simultaneously by distributing subsets of the file across multiple cloud providers that the middleware deems fit based on policies. Reliability is another important feature of DISC. To improve reliability, we propose a solution that replicates the same subset of the file across different providers. This is beneficial: when one provider is unresponsive, the data can be pulled from another provider with the same subset. Security has great importance when dealing with consumers' data. We inherently gain security when improving reliability. Since the file is distributed using subsets, not one provider has the full file. In our experiment, performance improvements are observed when delivering and retrieving files compared to the standard approach. The results are promising, saving upwards of eight seconds in processing time. With the expansion of more cloud providers, the results are expected to improve.} 
    }
    


  3. Shigeru Imai, Stacy Patterson, and Carlos A. Varela. Cost-Efficient High-Performance Internet-Scale Data Analytics over Multi-Cloud Environments. 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2015), May 2015. Note: Doctoral symposium paper. Keyword(s): distributed computing, cloud computing.
    Abstract:
    To analyze data distributed across the world, one can use distributed computing power to take advantage of data locality and achieve higher throughput. The multi-cloud model, a composition of multiple clouds, can provide cost-effective computing resources to process such distributed data. As multi-cloud becomes more and more accessible to cloud users, the use of MapReduce/Hadoop over multi-cloud is emerging; however, existing work has two issues in principle. First, it mainly focuses on maximizing throughput by improving data locality, but the perspective of cost optimization is missing. Second, conventional centralized optimization methods would not be able to scale well in multi-cloud environments due to their highly dynamic nature. We plan to solve the first issue by formalizing an optimization framework for MapReduce over multi-cloud including virtual machine and data transfer costs, and then the second issue by creating decentralized resource management middleware that considers multi-criteria (cost and performance) optimization. This paper reports progress we have made so far in these two directions.

    @Misc{imai-patterson-varela-ccgrid-2015,
    author = {Shigeru Imai and Stacy Patterson and Carlos A. Varela},
    title = {Cost-Efficient High-Performance Internet-Scale Data Analytics over Multi-Cloud Environments},
    howpublished = {15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2015)},
    year = 2015,
    address = {Shenzhen, China},
    month = {May},
    pdf = {http://wcl.cs.rpi.edu/papers/ccgrid2015.pdf},
    keywords = {distributed computing, cloud computing},
    note = {Doctoral symposium paper},
    abstract = {To analyze data distributed across the world, one can use distributed computing power to take advantage of data locality and achieve higher throughput. The multi-cloud model, a composition of multiple clouds, can provide cost-effective computing resources to process such distributed data. As multi-cloud becomes more and more accessible to cloud users, the use of MapReduce/Hadoop over multi-cloud is emerging; however, existing work has two issues in principle. First, it mainly focuses on maximizing throughput by improving data locality, but the perspective of cost optimization is missing. Second, conventional centralized optimization methods would not be able to scale well in multi-cloud environments due to their highly dynamic nature. We plan to solve the first issue by formalizing an optimization framework for MapReduce over multi-cloud including virtual machine and data transfer costs, and then the second issue by creating decentralized resource management middleware that considers multi-criteria (cost and performance) optimization. This paper reports progress we have made so far in these two directions.} 
    }
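
    The sketch below is only an illustration of the trade-off the abstract describes, not the paper's formalized optimization framework: for each data partition, choose the cloud where per-gigabyte processing cost plus the cost of transferring the partition there is lowest. All prices, region names, and partition sizes are invented for the example.

    # Greedy per-partition placement under a toy VM-plus-transfer cost model.
    def assign_partitions(partitions, clouds, transfer_cost_per_gb):
        """partitions: list of (name, size_gb, home_cloud)
        clouds: {cloud: processing_cost_per_gb}
        transfer_cost_per_gb: {(src, dst): cost}; staying in the home cloud is free."""
        plan, total = {}, 0.0
        for name, size_gb, home in partitions:
            def cost(cloud):
                move = 0.0 if cloud == home else transfer_cost_per_gb[(home, cloud)]
                return size_gb * (clouds[cloud] + move)
            best = min(clouds, key=cost)        # cheapest cloud for this partition
            plan[name] = best
            total += cost(best)
        return plan, total

    clouds = {"us-east": 0.05, "eu-west": 0.03}
    transfer = {("us-east", "eu-west"): 0.09, ("eu-west", "us-east"): 0.09}
    partitions = [("logs-us", 100, "us-east"), ("logs-eu", 80, "eu-west")]
    plan, total = assign_partitions(partitions, clouds, transfer)
    print(plan, round(total, 2))   # here it is cheaper to process each partition in place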
    


  4. Carlos A. Varela. Dynamic Data Driven Avionics Systems. Streaming Technology Requirements, Application and Middleware (STREAM2015), October 2015. Keyword(s): programming languages, cyber physical systems, data streaming.
    Abstract:
    Dynamic Data-Driven Avionics Systems (DDDAS) embody ideas from the Dynamic Data-Driven Application Systems paradigm by creating a data-driven feedback loop that analyzes spatiotemporal data streams coming from aircraft sensors and instruments, looks for errors in the data signaling potential failure modes, and corrects for erroneous data when possible.

    @Misc{varela-stream-2015,
    author = {Carlos A. Varela},
    title = {Dynamic Data Driven Avionics Systems},
    howpublished = {Streaming Technology Requirements, Application and Middleware (STREAM2015)},
    month = {October},
    year = 2015,
    pdf = {http://wcl.cs.rpi.edu/papers/stream2015.pdf},
    keywords = {programming languages, cyber physical systems, data streaming},
    abstract = {Dynamic Data-Driven Avionics Systems (DDDAS) embody ideas from the Dynamic Data-Driven Application Systems paradigm by creating a data-driven feedback loop that analyzes spatiotemporal data streams coming from aircraft sensors and instruments, looks for errors in the data signaling potential failure modes, and corrects for erroneous data when possible.} 
    }
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by authors or by other copyright holders, even though the works are presented here in electronic form. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Wed Nov 20 17:00:51 2024
Author: led2.


This document was translated from BibTeX by bibtex2html